Video: Like an athlete trying to push their body to its extreme limits, artist Damien Henry was curious what would happen if you asked a predictive algorithm to calculate the next frame of video in a sequence, again and again and again, over 100,000 times.
The result is a fascinating music video that looks like overly compressed digital footage filmed through the window of a moving car. The experiment was not unlike repeatedly making a photocopy of a photocopy, given that Henry fed the software just a single image to start with. But the machine managed to generate over 56 minutes of unique video that somehow remains mesmerising to watch all the way through.
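The core idea is an autoregressive feedback loop: a model predicts the next frame, and that prediction becomes the input for the following step. Here is a minimal sketch of that loop in Python. The predictor is a hypothetical stand-in (a shift plus blur, not Henry's actual model), chosen because repeated application visibly degrades detail, much like the photocopy-of-a-photocopy effect described above:

```python
import numpy as np

def predict_next_frame(frame: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned predictor: shift the image one pixel
    sideways and lightly blur it, mimicking motion glimpsed from a
    moving vehicle. (Illustrative only — not the artist's model.)"""
    shifted = np.roll(frame, -1, axis=1)
    # 3-tap horizontal average; each pass loses a little detail,
    # like photocopying a photocopy.
    return (np.roll(shifted, 1, axis=1) + shifted
            + np.roll(shifted, -1, axis=1)) / 3.0

def generate_video(seed_frame: np.ndarray, n_frames: int) -> list:
    """Autoregressive loop: each predicted frame becomes the next input."""
    frames = [seed_frame]
    for _ in range(n_frames - 1):
        frames.append(predict_next_frame(frames[-1]))
    return frames

rng = np.random.default_rng(0)
seed = rng.random((64, 64))        # a single starting image
video = generate_video(seed, 100)  # Henry ran the loop ~100,000 times
```

Because the model only ever sees its own output after the first step, small artefacts compound over thousands of iterations, which is what gives the finished video its dreamlike, degraded look.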
[referenced url="https://gizmodo.com.au/2015/07/the-latest-google-algorithm-creates-video-based-on-a-few-still-images/" thumb="https://i.kinja-img.com/gawker-media/image/upload/t_ku-large/1330709295844188706.gif" title="The Latest Google Algorithm Creates Video Based On A Few Still Images" excerpt="Google's engineers can do some pretty incredible things with the consumer technology it has developed — from "dreaming" neural networks based on computer vision to an algorithm that can create video from Street View images."]
Predictive algorithms like this, powered by machine learning, are useful for more than just mind-melting art projects — Google has already shown how they can generate moving video from still images. Eventually, they might allow anyone to easily build their own VR worlds, or let someone working on a laptop create the next blockbuster movie without a Hollywood-sized budget. For now, though, if you're looking for something to fall asleep to, this should do the trick.
[YouTube via prosthetic knowledge]