DeepMusic is a new Alexa skill that creates music using audio samples and a neural network. Basically, it combines notes from a variety of instruments with sampled audio, using algorithms to determine which combinations work. DeepMusic then produces tunes with “no post-production editing by a human”.
Skills are add-ons that extend Alexa’s capabilities by connecting it to new services. Once the DeepMusic skill is added to Alexa, you can invoke the AI-created tunes by saying “Alexa, ask DeepMusic to play a song”. She (I guess Alexa is female, although there’s no evidence to suggest it has any actual gender identity) then plays tunes that various reviewers say hover between robotic and “strangely familiar”.
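For readers curious about the mechanics, here’s a toy sketch of what a skill is doing under the hood: Alexa resolves the spoken request to a named intent, and the skill routes that intent to a handler. The class and function names below (DeepMusicSkill, play_song, PlaySongIntent) are invented for illustration – this is not the real Alexa Skills Kit API.

```python
# Illustrative only: a toy intent router showing the general shape of a
# voice skill. All names here are hypothetical, not Amazon's actual SDK.

def play_song(slots):
    # In a real skill, this would stream an AI-generated tune.
    return "Playing an AI-generated song."


class DeepMusicSkill:
    """Maps recognised intents to handler functions."""

    def __init__(self):
        self.handlers = {}

    def register(self, intent, handler):
        self.handlers[intent] = handler

    def handle(self, intent, slots=None):
        handler = self.handlers.get(intent)
        if handler is None:
            return "Sorry, I don't know how to do that."
        return handler(slots or {})


skill = DeepMusicSkill()
skill.register("PlaySongIntent", play_song)

# "Alexa, ask DeepMusic to play a song" -> resolved to PlaySongIntent
print(skill.handle("PlaySongIntent"))
```

The real service adds speech recognition, slot parsing and audio streaming on top, but the request-to-handler routing is the essence of how a skill “connects Alexa to a new service”.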
That gels with expectations. While music does have a mathematical underbelly – which is why algorithms can reproduce its mechanics – it’s also created through human creativity and emotion, something AI is yet to harness. It’s a bit like The Terminator: it looks and moves like a human but lacks emotion and nuance. But over time, we can expect the algorithms that DeepMusic, and other AI systems, use to evolve and do a better job of emulating human creativity.
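That “mathematical underbelly” can be shown in a few lines. The sketch below generates a melody with a first-order Markov chain over note names – the transition table is invented for illustration, and systems like DeepMusic presumably use far richer neural models, but the principle is the same: pick the next note according to rules about what tends to follow what.

```python
import random

# Hypothetical transition table: for each note, the notes that may follow.
# Invented for illustration; not taken from any real system.
TRANSITIONS = {
    "C": ["E", "G", "A"],
    "E": ["G", "C", "D"],
    "G": ["C", "E", "A"],
    "A": ["C", "G", "E"],
    "D": ["G", "C", "E"],
}


def generate_melody(start="C", length=8, seed=None):
    """Walk the transition table to produce a sequence of notes."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody


print(" ".join(generate_melody(seed=42)))
```

Mechanically plausible note sequences fall out of simple rules like these; what the rules can’t supply is the emotional intent behind which notes a human would actually choose.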
But until then, I’m happy to put some vinyl on the turntable and feel the passion and rhythm of some good, old-fashioned Aussie pub rock – something I can’t see AI creating any time soon.