What Makes AI So Weird, Good And Evil

Artificial intelligence has changed the way we roam the internet, buy things, and in many cases, navigate the world. At the same time, AI can be incredibly weird, such as when an algorithm suggests “Butty Brlomy” as a name for a guinea pig or “Brother Panty Tripel” as a beer name. Few people are more familiar with the quirks of AI than Janelle Shane, a scientist and neural network tamer who lets AI be weird in her spare time and runs the aptly named blog AI Weirdness. She also built an AI astrologer for Gizmodo.

Janelle Shane released a book this month titled You Look Like a Thing And I Love You. It’s a primer for those who want to know more about how artificial intelligence really works, and entertainment for anyone who simply wants to laugh at just how silly a computer can be. We talked with Shane about why she likes AI, how its strangeness affects our lives, and what the future might hold. The book is available on Amazon.

Image: Voracious/Little, Brown

Gizmodo: What first got you interested in AI?

Janelle Shane: Just after high school, when I was deciding what I wanted to do in college, I attended this really fascinating talk by a guy who was studying evolutionary algorithms. What I remember most from the talk are these stories about algorithms solving problems in unexpected ways, or coming up with a solution that was technically right but not really what the scientist had in mind. One of the ones that made it into my book was an anecdote where people tried to get one of these algorithms to design a lens system for a camera or a microscope. It came up with a design that worked really well, but one of the lenses was 50 feet thick. Stories like these really captured my attention.

[Later], I saw examples of AI-generated cookbook recipes, and they were absolutely hilarious. Someone had fed a bunch of cookbook recipes to one of these algorithms, a text-generating neural network. It tried its best to imitate the recipes but ended up imitating only their surface appearance. When you looked at what it generated, it was really clear that it didn’t understand cooking or ingredients at all. It would call for shredded bourbon, or tell you to take a pie out of the oven that you never put into the oven in the first place. That captured my attention all over again and got me interested in doing experiments generating text with AI.
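Shane’s point about imitating the surface of text doesn’t even require a neural network. As a stand-in, here is a character-level Markov chain (a far simpler text model than the ones she describes) trained on a few invented recipe sentences; it produces the same kind of recipe-shaped nonsense because all it ever learns is which characters tend to follow which:

```python
import random
from collections import defaultdict

# A tiny made-up corpus standing in for a cookbook dataset.
corpus = (
    "1 cup shredded cheese. 2 cups flour. Bake the pie at 350 degrees. "
    "1 cup bourbon. Take the pie out of the oven. 2 cups shredded carrots. "
)

# Learn which character tends to follow each 4-character context.
ORDER = 4
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    model[corpus[i:i + ORDER]].append(corpus[i + ORDER])

def babble(n=120, seed=3):
    """Generate recipe-shaped text one character at a time."""
    rng = random.Random(seed)
    out = corpus[:ORDER]
    for _ in range(n):
        followers = model.get(out[-ORDER:])
        if not followers:  # context only ever seen at the end of the corpus
            break
        out += rng.choice(followers)
    return out

print(babble())  # looks recipe-ish, but knows nothing about cooking
```

Every five-character window of the output appears somewhere in the training text, which is exactly the “surface imitation without understanding” failure the recipes showed.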

Gizmodo: What is artificial intelligence, in the simplest terms?

Shane: AI is one of those terms that’s used as a catch-all. The same word gets used for science fiction, for products that are actually using machine learning, all the way to things that are called AI but where real humans are actually giving the answers. The definition I tend to go with is the one that software developers mostly use, which refers to a specific type of program called a machine learning algorithm. Unlike traditional rules-based algorithms, where a programmer has to write step-by-step instructions for the computer to follow, with machine learning you just give it the goal and it tries to solve the problem itself via trial and error. Things like neural networks and genetic algorithms fall under that umbrella, along with a bunch of other technologies.
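A minimal sketch of that “give it the goal and let it try” idea, in the spirit of the evolutionary algorithms mentioned earlier. Everything here (the target value, the mutation size, the number of generations) is invented purely for illustration:

```python
import random

def fitness(x):
    # The goal we hand to the algorithm: get as close to 42 as possible.
    # We never tell it *how*; it just keeps whatever mutations score better.
    return -abs(x - 42)

def evolve(generations=300, seed=0):
    rng = random.Random(seed)
    best = rng.uniform(-100, 100)  # start from a random guess
    for _ in range(generations):
        candidate = best + rng.uniform(-5, 5)   # random mutation
        if fitness(candidate) > fitness(best):  # keep only improvements
            best = candidate
    return best

print(evolve())  # ends up very close to 42
```

No step-by-step instructions appear anywhere; the program only ever compares scores, which is also why it can wander into solutions the programmer never anticipated.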

One of the big differences is that when machine learning algorithms solve a problem, they can’t explain their reasoning to you. It takes a lot of work for the programmer to go back and check that it actually solved the right problem and didn’t completely misinterpret what it was supposed to do. That’s a big difference between a problem solved by humans and one solved by AI. Humans are intelligent in ways we don’t understand. If we give humans a description of the problem, they’ll be able to understand what you’re asking for, or at least ask clarifying questions. An AI isn’t smart enough to understand the context of what you’re asking for, and as a result may end up solving the completely wrong problem.

There’s an example in my book of researchers at Stanford training a machine learning algorithm to recognise skin cancer in pictures, but when they looked back at what the algorithm was doing and what part of the image it was looking at, they discovered it was looking for rulers instead of tumours, because in the training data, a lot of pictures had rulers for scale.
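The ruler anecdote can be reduced to a toy: a one-rule learner that simply picks whichever feature best separates the training labels. All of the data below is invented; because the “ruler” feature happens to be the cleaner signal in it, the learner latches onto the ruler and then fails on a photo without one:

```python
# Invented toy data: each "photo" is a pair of binary features,
# labelled 1 for malignant, 0 for benign. Rulers co-occur with cancer
# in the training photos because they were included for scale.
train = [
    ({"tumour_texture": 1, "ruler": 1}, 1),
    ({"tumour_texture": 0, "ruler": 1}, 1),  # blurry lesion, ruler present
    ({"tumour_texture": 1, "ruler": 0}, 0),  # rough-looking benign mole
    ({"tumour_texture": 0, "ruler": 0}, 0),
]

def best_single_feature(data):
    # A one-rule learner: keep whichever feature best predicts the label.
    features = list(data[0][0])
    def accuracy(f):
        return sum(x[f] == y for x, y in data) / len(data)
    return max(features, key=accuracy)

shortcut = best_single_feature(train)
print(shortcut)  # "ruler": the easiest separator, not the medical one

# In the clinic there is no ruler, so the shortcut silently fails:
new_photo = {"tumour_texture": 1, "ruler": 0}
print(new_photo[shortcut])  # predicts 0 (benign) for a real tumour
```

The learner is doing exactly what it was scored on, which is the whole problem: the goal said “separate the training labels,” not “understand skin cancer.”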

Gizmodo: What did you think about while you were translating this very technical topic for readers?

Shane: It was a bit of a challenge to figure out what I was going to cover and how I was going to talk about AI, which is such a fast-moving world with so many new papers and new products coming out. It’s 2019, and 2017 [when I started writing the book] was ages ago in the world of AI. One of the biggest challenges was how to talk about this stuff in a way that will still be true by the time the book gets published, let alone when people read it in five or 10 years. One of the things that helped was asking what has remained true, and what we saw happening in the earlier days of AI research that’s still happening now. One example is this tendency for machine learning algorithms to come up with alternative solutions to walking. If you let them, their favourite thing to do is assemble themselves into a tall tower and fall over. That’s way easier than walking. There are examples of algorithms doing this in the 1990s and recent examples of them doing it again.

What I really love is this flavour [of results] where AI tends to hack the simulations it’s in. It’s not a product of the algorithms being very sophisticated; if you go back to early, simple simulations, little programs, they will still figure out how to exploit the flaws in the matrix. They’re in a simulation that can’t be perfect: there are shortcuts you have to take in the maths because you can’t do perfectly realistic friction or perfectly realistic physics. Those shortcuts get glommed onto by machine learning algorithms.

One of the examples I love, which illustrates it beautifully, is a programmer in the 1990s who built a program that was supposed to beat other programs at tic-tac-toe. It played on an infinitely large board to make things interesting and would play remotely against all these other opponents. It started winning all of its games. When the programmers looked to see what its strategy was, they found that no matter what the opponent’s first move was, the algorithm’s response was to pick a coordinate really far away, at the farthest reaches of the infinite board it could specify. The opponent’s program would then try to render this hugely expanded tic-tac-toe board, but in trying to build a board that big it would run out of memory, crash, and forfeit the game. In another example, [an AI] was told to eliminate sorting errors. It learned to eliminate the errors by deleting the list entirely.

Gizmodo: Can you get into that a bit more? How do we avoid these negative consequences?

Shane: We sometimes find out that AI algorithms aren’t optimising what we hoped they would. An AI algorithm might figure out that it can increase human engagement on social media by recommending polarising content that sends people down a conspiracy-theory rabbit hole. YouTube has had trouble over this: they want to maximise viewing time, but the algorithm’s way of maximising viewing time isn’t quite what they want. We get all kinds of examples of AI glomming onto things it’s not supposed to know about. One of the tricky parts about trying to build an algorithm that doesn’t pick up human racial bias is that even if you don’t give it information on race or gender in its training data, it’s good at working out those details from clues like zip code or college, and figuring out how to imitate this really strong bias signal it sees in its training data.

When you see companies say, “Don’t worry, we didn’t give our algorithm any information about race, so it can’t be racially biased,” that’s the first sign that you have to worry. They probably haven’t checked whether the algorithm has nevertheless found a shortcut. It doesn’t know not to do this, because it’s not as smart as a human. It doesn’t understand the context of what it’s being asked to do.
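The zip-code shortcut Shane describes can be sketched with made-up loan data. The model below never sees the group column, yet its decisions still split cleanly along it, because zip code is a perfect proxy for group in this toy dataset:

```python
from collections import defaultdict

# Invented applicants: the model is never shown the "group" field,
# but in this toy data zip code perfectly stands in for it.
applicants = [
    {"zip": "11111", "group": "A", "approved_in_history": 1},
    {"zip": "11111", "group": "A", "approved_in_history": 1},
    {"zip": "22222", "group": "B", "approved_in_history": 0},
    {"zip": "22222", "group": "B", "approved_in_history": 0},
]

# "Training": memorise the historical approval rate per zip code.
totals, counts = defaultdict(int), defaultdict(int)
for a in applicants:
    totals[a["zip"]] += a["approved_in_history"]
    counts[a["zip"]] += 1
approval_rate = {z: totals[z] / counts[z] for z in totals}

def predict(zip_code):
    # Approve whenever the zip code's historical rate clears 50%.
    return approval_rate[zip_code] >= 0.5

# No group information in the inputs, yet the old bias is reproduced:
print(predict("11111"), predict("22222"))  # True False
```

Withholding the protected attribute did nothing here: the historical bias was already baked into the approval rates the model memorised.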

There are AI algorithms making decisions about us all the time. AI decides who gets loans or parole, how to tag our photos, or what music to recommend to us. But we get to make decisions about AI, too. We get to decide if our communities will allow facial recognition. We get to decide if we want to use a new service that’s offering to screen babysitters by their social media profiles. There’s an amount of education that we as consumers can really benefit from.

Gizmodo: So, what are some *good,* or at least not bad, applications?

Shane: Personally, I’ve found automatic photo tagging really helpful. The tags are rudimentary and aren’t always perfect, but they’re powerful enough to find a picture of my cat, or pictures of my living room, things like that. A lot of the good applications I see aren’t critical, but they’re convenient. Filtering spam is one of those applications: it doesn’t change my life, but it’s cool to have. The Merlin Bird ID app and the iNaturalist app are good applications, too.

Depending on who you are, the ability of your phone to describe a scene out loud can be really useful if you’re using it as a vision aid of some sort. The ability of machine learning algorithms to produce decent transcriptions of audio is another. Some of these applications are life-changing. Even if they’re not perfect, they’re still filling a need and providing services we didn’t have at all before.

Gizmodo: What does the future of AI look like?

Shane: It’s going to be an increasingly sophisticated tool, but one that will need humans to wield it and humans to act as editors. One example is language translation. Professional translators do use these neural network-guided translations as a first draft. By itself, the machine is not good enough to really give you a finished product, but it can save a whole bunch of time by getting you a lot of the way there. Or cases where algorithms collect research, synthesise information, and build articles from that, producing a first draft that a human editor just has to look at at the end. We’ll see more and more applications of AI looking like that. We’ll see AI working in art and music as well.

Gizmodo: And where did your title, You Look Like a Thing and I Love You, come from?

Shane: An AI was trying to generate pickup lines, and this was one of the things it generated. It was my editor who picked it as the title. I wasn’t quite sure at first, but so far everyone I’ve said the title to has just grinned, whether they’re familiar with how it was generated or not. I’m completely won over and really pleased to have it as my book title.