If you want to experience the cutting edge in computer-generated graphics, SIGGRAPH is the event. A lot of the technology you see in games and movies today almost certainly debuted years before at a SIGGRAPH conference. This year we'll be getting the best virtual mud ever, and researchers are now transplanting portraits onto video, creating this amazing display.
Sadly, the paper explaining the tech, "Example-Based Synthesis of Stylized Facial Animations", has yet to be published; however, we do have this clip showing off the technique. I'm going to take a stab in the dark and say it involves training a neural network (machine learning, basically) on paintings and then applying it to videos of people talking.
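Since the paper isn't out, this is pure speculation, but if a neural network is involved, one plausible ingredient is the Gram-matrix style loss from classic neural style transfer (Gatys et al.). Here's a minimal, hypothetical sketch of that idea in plain Python, with nested lists standing in for convolutional feature maps:

```python
# Minimal sketch of the Gram-matrix style loss used in neural style
# transfer (Gatys et al.) -- a guess at one ingredient, since the
# actual paper has not been published yet. Feature maps are plain
# nested lists standing in for convolutional activations.

def gram_matrix(features):
    """features: list of C channels, each a flat list of H*W activations.
    Returns the C x C matrix of channel-wise inner products, which
    captures texture/style while discarding spatial layout."""
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n
             for fj in features] for fi in features]

def style_loss(style_feats, generated_feats):
    """Mean squared difference between the two Gram matrices."""
    gs = gram_matrix(style_feats)
    gg = gram_matrix(generated_feats)
    c = len(gs)
    return sum((gs[i][j] - gg[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)

# Toy "features" for a painting and a video frame: identical features
# give zero loss, while a differing texture gives a positive loss.
painting = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
frame    = [[1.0, 1.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
print(style_loss(painting, painting))  # 0.0
print(style_loss(painting, frame))     # positive
```

Minimising a loss like this over the video frames, while also penalising changes to the frame's content, is roughly how style transfer keeps the target face's structure while borrowing the painting's look.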
It looks like it does a pretty good job, maintaining the structure of the target face while seamlessly applying the style of the painting. It's both creepy and cool, especially once you notice that the mouth has to be filled in with data from the target, which sometimes breaks the illusion.
It also works on photos, with a metal bust used as the source about halfway through the demo.
If you're wondering what the next generation of Prisma-like apps will do, this is probably it.