Stanford University, the bastion of higher education known for manufacturing Silicon Valley’s future, launched the Institute for Human-Centered Artificial Intelligence this week with a massive party. Big names and billionaires like Bill Gates and Gavin Newsom filed into campus to back the stated mission that “the creators and designers of AI must be broadly representative of humanity.”
The new AI institute has more than 100 faculty members listed on its website, and on Thursday, cybersecurity executive Chad Loder noticed that not a single member of Stanford’s new AI faculty was black.
What happened next was a weird feat of public relations.
Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: "The creators and designers of AI must be broadly representative of humanity."
121 faculty members listed.
Not a single faculty member is Black. pic.twitter.com/znCU6zAxui
— Chad Loder ❁ (@chadloder) March 21, 2019
When Gizmodo reached out to Stanford on Thursday morning, the institute’s website was quickly updated to include one previously unlisted faculty member, Juliana Bidadanure, an assistant professor of philosophy.
Bidadanure was not listed among the institute’s staff prior to our email to the school on Thursday, according to a version of the page preserved on the Wayback Machine, but she did speak this week at the institute’s opening event.
In fact, the school appeared to be adding Bidadanure, and later her bio, to the faculty page as I was writing this article.
Based on our count, the institute’s faculty includes 72 white men out of 114 total staffers, or 63 per cent—a figure that apparently can change at any moment. Stanford did not respond to our questions.
An hour’s drive from Stanford, I waited on Wednesday night in a line of 150 people in Berkeley, California, to get into a sold-out auditorium. We all came to hear the Oxford Internet Institute’s Dr. Safiya Noble, author of the 2018 book Algorithms of Oppression, talk about how Silicon Valley’s algorithms—the code driving everything from search engines to artificial intelligence—can reinforce racism.
“It was very difficult to find people who would be on a dissertation committee in 2010 that would be willing to put their name on the line and say we think technology can discriminate or that algorithms can discriminate,” Noble, who began her research a decade ago, said in Berkeley that night.
“What most people were saying at the time was that, ‘It’s just maths. Code can’t discriminate.’ That was the dominant discourse. I took a lot of body-blows trying to argue that there can be racist and sexist bias in our technology platforms. And yet here we are today.”
Today, we live in an age where predictive policing is real and can disproportionately target minority communities, where AI-driven hiring tools can discriminate against women, and where Google and Facebook’s algorithms often decide what information we see and which conspiracy theory YouTube serves up next.
But the algorithms making those decisions are closely guarded company secrets with global impact.
In Silicon Valley and the broader Bay Area, the conversation and the speakers have shifted. It’s no longer a question of whether technology can discriminate. The questions now are who is impacted, how we can fix it, and what we’re even building in the first place.
When a group of mostly white engineers gets together to build these systems, the impact on black communities is particularly stark. Algorithms can reinforce racism in domains like housing and policing.
Algorithmic bias mirrors what we see in the real world: artificial intelligence mirrors its developers and the data sets it’s trained on.
Where there used to be a popular mythology that algorithms were just technology’s way of serving up objective knowledge, there’s now a loud and increasingly global argument about just who is building the tech and what it’s doing to the rest of us.
The emergence of the artificial intelligence industry is pushing that argument even further, as the AI systems that will dominate our lives learn and automate decisions through processes that are increasingly opaque and less accountable.
Last month, over 40 civil rights groups wrote a letter calling on Congress to address data-driven discrimination. And in December, the Electronic Privacy Information Center (EPIC) sent a statement to the House Judiciary Committee detailing the argument that “algorithmic transparency” should be required for tech firms.
“At the intersection of law and technology, knowledge of the algorithm is a fundamental human right,” Marc Rotenberg, EPIC’s president, said on the issue.
The stated goal of Stanford’s new human-AI institute is admirable. But to build a faculty that is truly “broadly representative of humanity,” it still has a long way to go.