Alex Garland's new movie Ex Machina is a dark and sometimes disturbing look at robots, artificial intelligence, and what it means to be human. I talked with Garland about his childhood expectations for the future, why people don't seem to care about the Snowden leaks, and whether Ray Kurzweil is full of shit. Ex Machina is in theatres everywhere this weekend.
Photo: Director Alex Garland with Alicia Vikander, who plays the robot Ava in the film
When you were growing up, what did you imagine the 21st century would look like?
Well, define the age that you're asking about. Like 10 or 15?
Say, the first two decades of your life. As you're sort of getting ideas about what the year 2000 would look like. Was that a benchmark that you looked toward?
I guess it probably was, inasmuch as I was born in 1970 and it was kind of drilled into us to expect a lot from 2000 or the years just after. So yeah, there was that sense of moving toward the millennium and that being very relevant. What I think is that I was expecting a lot. And there were areas where, if you'd stopped me aged 20 and asked what was likely to be around, I'd have been very wrong.
I think that broadly speaking, the expectations I would have had back then were to do with stuff like medicine. Like all sorts of diseases would be cured, and we'd be living way way way longer, and we'd have more machine parts that were part of us. I think it would have been all about longevity and health in some weird kind of way. And instead it ended up being more like Dick Tracy, with tiny computers that you can carry around and wear around your wrist.
Weirdly, sci-fi ended up being closer to reality with some of this stuff. It was more gadget related, I guess. And less seismic in some ways. Like people say the internet is seismic. And of course it is, it's incredibly important, and it's changed a lot of paradigms and it's a big deal. But it's not as big a deal as living to 200 or cancer being cured. So it turned out to be more gadget related and less fundamental than I think I'd anticipated.
So would you say that you're disappointed by the future or was it simply an adjustment of expectations as you grew older?
It's an adjustment of expectations, definitely. And it's the way these things go, isn't it? You're unlikely to be able to accurately predict this stuff. When I was a kid, there was this show on the BBC called Tomorrow's World. And Tomorrow's World was kind of a weekly show that would highlight the incredible things that were just on the horizon. You know, in five years, we're all going to have these robots that clean our cars, or whatever the hell it was. And one of the things about Tomorrow's World was that it was just relentlessly wrong about everything. [laughs] So none of its predictions ever seemed to come true. So I guess my strike rate was roughly the same as Tomorrow's World and I shouldn't be surprised by that because I was given weekly evidence of how these things play out.
Well, speaking of Tomorrow's World — which I'm very familiar with because I write about past visions of the future and the history of technology, so I love that show, and it's fantastically wrong, as you say [laughs] — what kinds of futurism did you grow up on? Were there any robot or AI stories that stick out, that may have inspired any of your work?
I was born in 1970, so I grew up with the kind of paranoid sci-fi. Like everything was kind of defined by Watergate and Vietnam in some weird kind of way. The enemies and narratives when I was growing up were governments and corporations typically. In my parents' generation, the enemies were other states, you know, other countries. By the time I was growing up it was your own country that was the enemy, or it was your allies, it was sort of capitalist corporations, that kind of thing. So Soylent Green and Westworld and Planet of the Apes, Logan's Run — those things were paranoid films and they told us that we were our own worst enemy. And I guess I'm a product of that in a way.
I've noticed that it's flipped again. Typically the enemies these days are other states again, they're other countries. They're other intelligence services or they're terrorists and stuff like that. And people, to my mind as a sort of child of the 70s, feel slightly too relaxed about governments and corporations. It slightly bothers me that the Snowden story didn't get as much traction as it should have. It should have been another Watergate, but it wasn't. And I think it's because people feel less alarmed by corporations and governments these days than they used to.
Why do you think that might be? Where does that come from? Why would anyone be less alarmed these days?
I don't know. I just don't know. I think I'm too old to reasonably tap into that stuff. I mean, I can speculate a little bit. But it is a sort of bullshit speculation, because I don't really know what I'm talking about. But I have noticed — like I play on PS4 and Xbox Live and stuff like that, and I've noticed, like I'm in my mid-40s but I might be playing with someone who's like 20 — and those people are way less sort of instinctively protective of their identities than I would be. And I wonder if it's because they have grown up with social media, and they have grown up with giving up a lot of privacy in terms of Instagram or Twitter or whatever the fuck it is. And they have never seen any malign consequence of that.
It's all pretty straightforward and it doesn't really impact their life in an adverse way, so what's to be paranoid about? I guess. Maybe something like that. And sometimes I think maybe I am just out of touch and there really is nothing to be alarmed about with these things. I find it perfectly reasonable that I might be out of touch, maybe even likely, because I am middle-aged and that's what happens. You get out of touch, I guess.
There's something that's been bothering me ever since I saw Ex Machina. In the hallway with the faces of robots, we see one that's second from the right as you're facing them. And that robot looks so familiar to me, like it's some old robot from the early 20th century that I just can't place. Was that face based on any robot in particular?
Are you talking about the one directly to the left of the Ava mask?
Yes. [Note: It's the one pictured above.]
I'm gonna have a stab at what that might be, and I really hope I'm right because I know exactly the kind of thing you're describing and it's really annoying [laughs] when that sort of thing happens. So I'd love to be able to nail it if I can. What it might be is the mask in Spirited Away.
Oh wow. I think that might be it? Was that the inspiration for that mask?
No, but it did remind me of that. And I had maybe 20 masks I could choose from and I pulled that one out partly because it reminded me of that figure, because I've got a little model of it at home and Spirited Away is one of my favourite movies. I absolutely love that film. And there was something in the sort of slightly blank expression, and actually just the structure of the face that strongly reminded me of it. So maybe it's that.
The actual function of those masks was — there's an image you used to see in school textbooks, like of evolution — which would be a left-to-right illustration of an ape getting progressively upright until it ends up as Homo sapiens. I was sort of roughly trying to echo something like that, but with faces, going from very primitive folk art to eventually Ava's face. So that's what I was thinking of, but the reason I pulled that particular mask was the Spirited Away film, I just love that film.
Do you think you'll see strong AI achieved within your lifetime?
I think that if I was going to make a bet and it was for a lot of money, so I had to take it seriously, I would say no. I think that the Kurzweil prediction is too optimistic, if optimistic is the right word. Some people would say pessimistic. I think that if it happens it's more likely to be in my children's lifetime. There are some truly immense complications to machines being self-aware in the way that we appear to be self-aware. And I also think it feels like there is a very fundamental big discovery that is yet to happen, about the way that thinking happens and about what thinking is. So I'd guess slightly further afield.
But listen, what the fuck do I know? I am just some guy who got interested in AI and consciousness and wrote a movie. That qualifies me for nothing. [laughs] If you do this as a Q&A, I think that's a pretty important caveat.
Of course, duly noted. And that's refreshing for people who have such an influence on the culture when it comes to futurism. Obviously as someone who studies past visions of the future, I have a rather sceptical eye towards all futurism, but it's still interesting. And it certainly sort of sheds light on the creators of media and their outlook on the future...
To me a lot of light gets thrown on the motivation of the people who are doing this stuff for real. Like if you talk to those people, the motivations are fascinating... there's a full range, you know... anyway, I probably shouldn't say too much...
No, I'd love to hear who you spoke with. If you spoke with any technologists or philosophers or people who may have...
I spoke with a lot of people. The thing about this movie is that it's an ideas movie. And if the ideas are badly represented or just dumb, the movie falls down as far as I'm concerned — it doesn't justify itself. And because I'm a layman what I did was I wrote the script and then I submitted it to people who I knew were working in this field or who had written books that I had drawn from in writing the script.
And then I also got to meet other people as a consequence of having made the film. So after the script, but in post-production or in screenings and stuff like that as interest grew. So I got to talk to quite a wide range of people. And people who are very involved in this area at a high level. And the conversations have been fascinating. I've encountered motivations where I think if you boiled it down would come to a bid for immortality — the ability maybe to download oneself and that's for real, people do think in those terms.
One of the people I've spoken to at great length is involved in this stuff in a funny kind of way, it's almost helpless the way they're involved in it. Because they have such huge misgivings about what they're doing. And even said that they think a lot about telling their children maybe they shouldn't have children themselves because of some of the implications of what an AI-shaped future looks like. And that that future is quite real. Now, that person may or may not be right, but I find it fascinating that someone would continue to work on something they felt jeopardized their grandchildren in that way. But they do. And it speaks of the complex, mixed motivations that are at play with this stuff.
Can I ask what field that person works in?
Robotics. But robotics with a particular eye on cognitive — basically on strong AI, but strong AI as something which is then embodied and functioning in the same environment that we function in. So not a kind of abstract cloud-like AI but a kind of in-the-real-world, interactive kind of AI.
If you were confronted with the dilemma of saving a person's life or a strong AI's existence, and the person is shitty and mean and the AI seems to have a strong moral compass, which do you think you'd choose? Do you think you'd ever...
I'd choose the strong AI. I don't find that a very complicated thought experiment. Because what you're saying is choose between the existence of two sentient creatures, one of which is malevolent and the other one is benign. Well, I'll choose the benign one. If something is sentient and roughly approximate to us... Look, a dog is sentient. And if you rephrase the question with a dog and a human, I'd say save the human. So I'm assuming that you're talking about a level of sentience that is equivalent to us. A level of intelligence and sophistication.
But if I really believe that that thing is sentient, that it's self-aware and it has an emotional life and as you said a moral compass, and the moral compass is aligned in the same way that mine is, and then you're saying, compare it to another sentient creature that has a moral compass, which is aligned in an opposite way, then it's a fucking no-brainer as far as I'm concerned.
Let's say the human is Pol Pot and the AI is Ava, as presented in the film. Well, I'll choose Ava.
What would you choose?
I really don't know. I honestly don't know. When I think about it, as you say with the dog instead of the robot, of course I'm going to choose the human. I think your film really points the finger at us. With all the mirrors, and this sort of self-reflexive thinking... but so the question becomes, is a robot us? Can we see ourselves in the robot? And I don't know...
Yeah, but we don't need to see ourselves. One of the things in the film is that I think it suggests that Ava is in some respects not like us, but she doesn't need to be like us to have value.
Can I ask you a thought experiment?
Yes, of course.
If the Ava that you saw in the film was in a room with you now, and she was followed shortly after by a guy with a hammer who says, "Ava is the only model in existence and she did everything you just saw in the film, and I think the right thing to do is to smash her up" — would you stop that guy smashing her up? Would you want to protect her?
I think that I'd want to protect her existence, but I don't know. This is why your film is one of my favourites in a long time. I don't have any of the answers, and I don't know what I'd do in any of these scenarios. I kept going back and forth after seeing the film. Would I have had any empathy? Would I have been able to sort of divorce myself from any feelings of empathy toward Ava? I really don't know. Which is what's so incredibly frustrating when it comes to these questions about what the future holds for 100, 200 years, a thousand years down the line — long after I'm dead.
Yeah, both of us. [laughs] I think these are things we're going to have to contend with, would be my suspicion.
For sure. One last question, are you generally optimistic or pessimistic about the future?
I'm definitely optimistic about the future. I can imagine why it might seem like I'm not, but for me, this film is actually a sort of oddly optimistic film. At least from my point of view, it is. And broadly speaking, if I just step back and I look at the way mankind has been developing... the thing that I value the most I guess is human rights. And I think that broadly speaking, human rights continue to gain currency. And I think that's a good thing and I think that will probably continue. So I'd say I'm optimistic. We are often pretty dumb, but in the grand sweep of things, we're moving forwards, not backwards.