The Quest To Teach AI To Write Pop Songs

David Cope didn’t set out to make anyone mad. In 1980, the composer envisioned a tool to help cure his creative block: a machine that could keep track of all the sounds and loose threads running through his mind, find similarities among them, and produce an entire piece of music inspired by them. So he built it.

Created over six years of experimentation, his songwriting computer program was dubbed EMI – pronounced Emmy – or Experiments in Musical Intelligence. Simply put, EMI worked by pattern-matching: breaking pieces of music down into smaller segments, analysing them, and figuring out which segments sound similar and where each one belongs. Cope meant to apply this level of analysis to his own body of work, to deduce what his musical style was – but he realised it worked really well with other composers, too. Feed enough of another composer’s work – say, Johann Sebastian Bach’s – into EMI, and it would identify what makes Bach sound like Bach and spit out imitation Bach so good that the average listener might not be able to tell it apart from the real thing.
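Cope’s actual system was far more elaborate, but the core trick – learn which fragments tend to follow which, then recombine them – can be sketched in a few lines of Python. (Everything below, from the note names to the toy “Bach” corpus, is invented for illustration; it’s not EMI’s code.)

```python
import random
from collections import defaultdict

def train(corpus):
    """Count which note tends to follow which across a composer's pieces."""
    transitions = defaultdict(list)
    for piece in corpus:
        for current, following in zip(piece, piece[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length=16):
    """Recombine the learned fragments into a new melody in the same style."""
    melody = [start]
    while len(melody) < length:
        options = transitions.get(melody[-1])
        if not options:          # dead end: no learned continuation
            break
        melody.append(random.choice(options))
    return melody

# Toy "Bach" corpus: two short melodies written as note names.
bach_corpus = [
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
    ["G", "A", "B", "C", "B", "A", "G", "F", "E"],
]
print(generate(train(bach_corpus), start="C"))
```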

At a lecture held at Stanford University in 1997, attendees heard the University of Oregon professor Winifred Kerner play three separate pieces of music on the piano: one by Bach, one by EMI in the style of Bach, and one by her husband, Steve Larson, another UO professor. Asked to guess which was which, people mistook Larson’s piece for the computer’s and EMI’s for the real Bach. Larson was devastated, telling the New York Times at the time, “Bach is absolutely one of my favourite composers… That people could be duped by a computer program was very disconcerting.”

He wasn’t the only one: Cope told Gizmodo that listeners didn’t like being asked to guess which was which – especially when they guessed wrong. Moreover, critics said EMI’s compositions didn’t sound like they had any “soul.”

“I have no idea what a soul is,” Cope told me over the phone from his home office near the University of California, Santa Cruz campus, where he worked as a professor of music until he retired 10 years ago. “You can look it up in the dictionary, but they all say: It’s something, we don’t know what it is, but we’ve got it, and we can tell when it’s there. That’s not very useful to me.”

Cope is considered the godfather of music-making AI, and maintains that the future is bright, that the right algorithms will help unlock new expressions in songwriting that humans wouldn’t be able to access otherwise. Until recent years, teaching AI to write songs like humans has been the work of academics, who have focused mostly on classical music. Today, researchers at tech companies like Sony and Google are asking: What if AI could write pop songs? How would we train it, and would the final product be as good as what’s on the radio? Could it be better? Their efforts lead us to wonder: Is AI the latest “soul”-crushing technology, out to edge musicians out of their craft, or is it a new kind of instrument – one that lives in your computer, that might know what you want better than you do, and will ultimately enhance musicians’ chances of creating something truly great?


The quest to remove human decision-making from the process of writing music is centuries old. In 1787, Wolfgang Mozart published a guide to a “musical dice game” in which players roll a die several times and string together pre-written bits of music that are associated with each of the die’s six faces. The end result is one complete, albeit randomly assembled, piece of music: songwriting by the numbers.
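In code, the dice game amounts to almost nothing – which is rather the point. (The six “bars” below are placeholders, not Mozart’s actual measures.)

```python
import random

# Hypothetical stand-ins for the pre-written bars tied to each face of the die.
snippets = {1: "bar A", 2: "bar B", 3: "bar C", 4: "bar D", 5: "bar E", 6: "bar F"}

def dice_game(rolls=8):
    """Roll the die several times and string the associated bars together."""
    return [snippets[random.randint(1, 6)] for _ in range(rolls)]

print(dice_game())  # one complete, albeit randomly assembled, piece
```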

In 1957, two professors at the University of Illinois at Urbana-Champaign, Lejaren Hiller and Leonard Isaacson, programmed the school’s room-sized Illiac computer to compose a musical score. They believed that music has to follow strict sets of rules in order to sound appealing to our ears – and that if a computer could learn those rules, maybe it could write music by randomly generating sequences of notes that obeyed them.

In one experiment, they programmed the Illiac to compose a melody that met certain requirements: the range could not exceed one octave, it had to start and end on the C note, and so on. The computer generated one note at a time — but if an errant, rule-breaking note was generated, the program rejected that note and tried again.
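That generate-and-reject loop is easy to imagine in miniature. (The note encoding and rules below are simplified stand-ins for the Illiac experiment, not a reconstruction of Hiller and Isaacson’s program.)

```python
import random

ALL_NOTES = ["A3", "B3", "C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "D5"]
OCTAVE = ALL_NOTES[2:10]  # C4 through C5: the permitted one-octave range

def allowed(melody, candidate, length):
    """Toy rules: stay inside the octave; the final note must be a C."""
    if candidate not in OCTAVE:
        return False
    if len(melody) + 1 == length and candidate not in ("C4", "C5"):
        return False
    return True

def compose(length=8):
    melody = ["C4"]                      # the melody must start on C
    while len(melody) < length:
        candidate = random.choice(ALL_NOTES)
        if allowed(melody, candidate, length):
            melody.append(candidate)     # keep notes that obey the rules...
        # ...and reject errant ones, generating a replacement on the next pass
    return melody

print(compose())
```

Run it enough times and it will churn out endless rule-abiding melodies – none of which the computer itself can judge.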

Their final work, The Illiac Suite, broke ground, shattering the idea that music is always the melodic expression of an intense experience or feeling. Hiller and Isaacson acknowledged that the public might be wary of this. “When the subject of our work has come up, the question has been asked: ‘What is going to happen to the composer?’” they wrote in their book Experimental Music: Composition with an Electronic Computer. But, they offered, computers don’t know when they are right or wrong; they just follow instructions. Even if a program can churn out songs quickly, a human will still ultimately need to weigh in on whether the result sounds right, or good.

Excerpt from Lejaren Hiller and Leonard Isaacson’s Illiac Suite.
“This is why some writers have spoken of a computer programmer ‘conversing’ with a computer. He feeds it certain information and tells the computer what to do with the information. The computer carries out these instructions, and then the programmer inspects the results,” they wrote.

Cope would pioneer this model – of human composers working in tandem with their computer counterparts – in the mid-’90s.

He invented a program he called Emily Howell – named after EMI and Cope’s father – that could compose music in entirely new styles, rather than simply parroting other composers. Every time Emily Howell proposes a new bit of music, Cope can tell the program whether he likes it or not. “The program changes ever so slightly,” he told Gizmodo, based on his preferences – but there’s still a bit of randomness baked into the program. In other words, it gets better at turning out something he will like, but some of the material he thumbs-downed earlier might still “creep in.”
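Cope’s description suggests a loop along these lines – small nudges from each thumbs-up or thumbs-down, with enough randomness left in that rejected material can resurface. (The phrase names and weights are invented; this is a guess at the shape of the interaction, not Emily Howell’s code.)

```python
import random

# Hypothetical musical fragments, each starting with an equal chance of being proposed.
weights = {"phrase_a": 1.0, "phrase_b": 1.0, "phrase_c": 1.0, "phrase_d": 1.0}

def propose():
    """Pick a fragment at random, biased by the weights learned so far."""
    phrases, w = zip(*weights.items())
    return random.choices(phrases, weights=w, k=1)[0]

def give_feedback(phrase, liked, step=0.1):
    """Nudge the program 'ever so slightly' toward what the composer likes."""
    weights[phrase] = max(0.1, weights[phrase] + (step if liked else -step))
    # The floor of 0.1 keeps some randomness: rejected material can still creep in.

for _ in range(50):                     # a session of proposals and reactions
    phrase = propose()
    give_feedback(phrase, liked=(phrase == "phrase_b"))  # pretend we only like B
print(weights)
```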

“Over weeks at a time, you get pretty acquainted with the program,” Cope said. “I have often felt like I was speaking with a human during the process, as strange as that probably sounds.”


In September 2017, a team of researchers at the Paris-based Sony Computer Science Lab, working with French musician Benoît Carré, released two songs written with the help of AI: “Daddy’s Car,” written in the style of the Beatles, and “The Ballad of Mr Shadow,” in the style of American songwriters like Duke Ellington and George Gershwin. To do this, the team used Flow Machines, a tool designed to guide songwriters and push them to be more creative, not do all of the work for them.

François Pachet, who led the development of Flow Machines, shows how the tool can map one musical style onto another sample melody to create a completely new song.
“My goal has always been to put some audaciousness, some boldness back into songwriting,” François Pachet, who spearheaded the development of Flow Machines at Sony CSL, told Gizmodo over a video call in January. “I have the impression that in the 1960s, ’70s, maybe ’80s, things were more interesting in terms of rhythm, harmony, melody, and so on,” he said, although he admitted that might make him a dinosaur. (“People can say I’m outdated. Maybe, I don’t know.”)

Pachet, who now leads the AI research arm at Spotify, oversaw the development of Flow Machines for years, bringing interested musicians into the studio to experiment with adding it to their songwriting process. (Flow Machines also received funding from the European Research Council.) His work laid the foundation for an album that he and Carré started (and that Carré would finish, after Pachet started at Spotify): a multi-artist album titled Hello World, featuring various pop, jazz, and electronic musicians. The songs all include some element (the melody, the harmony, or what have you) that was generated by AI and then finessed by the artists, just as Hiller and Isaacson suggested decades ago.

SKYGGE feat. Kiesza, “Hello Shadow” (Music Video) composed, in part, by AI for the album “Hello World.”
To record a song with Flow Machines, artists start by bringing in something that’s inspiring them: a guitar track, a vocal track (their voice or someone else’s), lead sheets, or MIDI files containing data about a given melody or harmony. Flow Machines analyses these along with the tens of thousands of other lead sheets in its database and “generates, on the fly, a score,” Pachet says. As with Emily Howell, if the artist doesn’t like it, they can reject it and Flow Machines will come out with something else – but if they do like it, or a part of it, they can begin to play around with the music, editing specific notes or recording their own instrumentals or lyrics over it. Or you can bring in a track – let’s say a guitar track from a musician you really admire – and ask Flow Machines to map it onto a melody that you’re working on, or map it onto a melody and mix in a Frank Ocean vocal track. The result is meant to sound like getting the three of you in a room together – a guitarist playing your melody in their own style, Frank Ocean singing along over it – though, when Pachet demonstrates this function at a TEDx Talk (video above), the result is a bit choppy.
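The “map a style onto your melody” idea can be sketched crudely: learn which pitches tend to follow which in the admired player’s lines, then re-pitch your own tune accordingly. (Every name and value below is invented for the example; Flow Machines’ actual machinery is far more sophisticated than this.)

```python
import random
from collections import defaultdict

def learn_style(corpus):
    """Build a table of which pitch tends to follow which in the admired player's lines."""
    table = defaultdict(list)
    for line in corpus:
        for a, b in zip(line, line[1:]):
            table[a].append(b)
    return table

def map_style(style, my_melody):
    """Keep my melody's rhythm, but re-pitch it using the learned style."""
    mapped = [my_melody[0]]
    for pitch, duration in my_melody[1:]:
        options = style.get(mapped[-1][0], [pitch])  # fall back to my own pitch
        mapped.append((random.choice(options), duration))
    return mapped

# Invented data: (pitch, duration) pairs for my melody; pitch lists for the guitarist.
my_melody = [("C", 1.0), ("E", 0.5), ("G", 0.5), ("E", 1.0), ("C", 2.0)]
guitar_lines = [["C", "Eb", "G", "Bb", "G", "Eb", "C"], ["G", "Bb", "C", "Eb", "C"]]

print(map_style(learn_style(guitar_lines), my_melody))
```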

This process is fairly quick, usually taking between a few hours and a few days. The idea is to make composing music as painless as it is rewarding. “You have an interface where you can [have] an interactive dialogue, where Flow Machines generates stuff, and you stop if you think it’s really good, if you think it’s great. If you don’t think it’s best, then you continue,” said Pachet.

Generation of Lead Sheets with FlowComposer, a demonstration on YouTube.
He added: “That was the goal, to bring in artists, allow them to use the machine in any possible way, with the only constraint being that, at the end, the artists should like the results. They should be able to say, ‘OK, I endorse it, I put my name on that.’ That is a very, very demanding constraint.”

It’s great if artists like it, but what about listeners? It’s not clear that audiences are huge fans of music written with the help of AI yet – although there’s nothing, necessarily, stopping them from becoming fans, either. One of the most famous names on the album is Kiesza, who sings the title track; as of writing, her song has amassed over 1.8 million plays on Spotify. (When it was released on December 1, it appeared on Spotify’s New Music Friday, per a Reddit thread cataloguing the playlist’s additions from that week.) For (an extreme) comparison, Cardi B’s “Bodak Yellow” has over 10 million plays on Spotify – but still, getting over a million streams is encouraging.

When trying to predict the future of music written with AI, it may help to look to non-U.S. markets. In February, the London-based company Jukedeck – whose AI-powered online tool creates short, original music aimed primarily at video-makers and video game designers – collaborated with the Korean music company Enterarts to put on a concert in Seoul. The music was performed by K-pop stars – like Bohyoung Kim from the group SPICA and the girl group High Teen – but the basis for the songs came from music composed by Jukedeck’s AI system. According to Jukedeck’s founder, the concert was attended by 200 to 300 people, almost all of them members of the media. The company is planning on releasing three more “mini-albums” this year. If they do, they have their work cut out for them: The first mini-album has fewer than 1,000 streams on Spotify.


In an interview with the Guardian eight years ago, David Cope said that AI-generated music would be “a mainstay of our lives” in our lifetime. That hasn’t happened quite yet; the aforementioned songs aren’t landing on the Top 40 so much as they are generating a lot of buzz and fear-mongering headlines.

SKYGGE, “Magic Man” (Lyrics Video) composed, in part, by AI for the album “Hello World.”
When I ask Pachet whether he thinks young people will care about whether a computer helped write a song, he agrees with Cope. “Millennials don’t listen to music the same way we did 20 or 30 years ago. It’s definitely not easy to characterise, but things have changed and you can see it by looking at how people listen to music,” he says. Pachet goes on: “There is so much more music available now than before. People tend to skip a lot, they listen for 10 seconds and then very quickly [decide if they like it or not]. That’s a new behaviour that did not exist before the internet.”

If young people are listening to music in a kind of ruthless, speed-dating way – trying to separate the songs that aren’t bops from the songs that are as quickly as possible, as if to better curate and maximise their own listening experience – then maybe songs written with the help of AI can sneak right on in there.

One way to smooth the emergence of AI as a songwriting companion into the market is to frame it as just another musical instrument, like the piano or the synthesiser. It’s a handy bit of rhetoric for any bullish AI enthusiast: No one is arguing that drummers have been put out of business by the widespread use of drum machines in popular music. Casting AI in a similar light may help reduce anxiety about job-snatching robo-songwriters. Some are already arguing this; when I ask Pachet how AI might appear in a song’s credits, he tells me that “Flow Machines was just a tool. You never credit the tool, right? Otherwise, many songs would be credited with guitar or vocoder or trumpet or piano or something. So you really need to see this as a tool.”

For what it’s worth, tech companies like Google do seem to be focusing on the “tool” part; earlier this month, the Google Magenta team (which researches how AI can help increase human creativity) showed off something they’re calling NSynth Super, a touchpad that generates completely new sounds based on two different ones. (Imagine hearing something that sounds halfway between a trumpet and a harmonica.) When I spoke to Jesse Engels, a research engineer at Magenta, he too compared what AI can do for songwriters to what instruments have historically done. He talked about how guitar amps were originally just meant to amplify the sound of guitars, and how using them to add distortion to guitar-playing was the happy result of people messing around with them. One of the current goals of Magenta, he said, is “to have models rapidly adapt to the whims of creative mis-users.”
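NSynth itself is built on a neural network that learns compact representations of sounds, but the basic blending idea – encode two sounds, mix the encodings, decode the mixture – can be sketched abstractly. (The encode and decode functions here are hypothetical placeholders, not Magenta’s API.)

```python
import numpy as np

def encode(sound: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned encoder mapping audio to a compact embedding."""
    return sound[:16]  # pretend the first 16 samples are the 'embedding'

def decode(embedding: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned decoder mapping an embedding back to audio."""
    return np.repeat(embedding, 100)

def blend(sound_a, sound_b, mix=0.5):
    """Slide between two instruments by mixing their embeddings, not their waveforms."""
    z = (1 - mix) * encode(sound_a) + mix * encode(sound_b)
    return decode(z)

trumpet = np.sin(np.linspace(0, 200, 1600))              # toy 'trumpet' waveform
harmonica = np.sign(np.sin(np.linspace(0, 200, 1600)))   # toy 'harmonica' waveform
halfway = blend(trumpet, harmonica, mix=0.5)             # something in between the two
print(halfway[:5])
```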

If they can get their tools into the hands of enough people, they might succeed.

