Following similar action against Project Maven, the secretive AI drone imaging program for the US military, Googlers are once again organising internally to push back against their leadership — this time around a project dubbed Dragonfly, the proposed search product the company intends to build for the Chinese market in accordance with government-mandated censorship.
The letter, first reported on by The New York Times earlier today, makes several demands for transparency, most bluntly stated as, “Google employees need to know what we’re building.”
Outrage stems both from the nature of Dragonfly — a product that some employees feel violates the AI Principles — and from the fact that many employees only learned about the search product’s existence from news reports, rather than from their own bosses.
We reached out to Google but had not heard back at the time of writing.
Gizmodo has obtained the letter from a source with knowledge of the situation, which can be read in its entirety below (emphasis theirs):
Our industry has entered a new era of ethical responsibility: the choices we make matter on a global scale. Yet most of us only learned about project Dragonfly through news reports [in] early August. Dragonfly is reported to be an effort to provide Search and personalised mobile news to China, in compliance with Chinese government censorship and surveillance requirements. Eight years ago, as Google pulled censored web search out of China, Sergey Brin explained the decision, saying: “in some aspects of [government] policy, particularly with respect to censorship, with respect to surveillance of dissidents, I see some earmarks of totalitarianism.” Dragonfly and Google’s return to China raise urgent moral and ethical issues, the substance of which we are discussing elsewhere.
Here, we address an underlying structural problem: currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment. That the decision to build Dragonfly was made in secret, and progressed even with the AI Principles in place, makes clear that the Principles alone are not enough. We urgently need more transparency, a seat at the table, and a commitment to clear and open processes: Google employees need to know what we’re building.
In the face of these significant issues, we, the undersigned, are calling for a Code Yellow addressing Ethics and Transparency, asking leadership to work with employees to implement concrete transparency and oversight processes, including the following:
1. An ethics review structure that includes rank and file employee representatives;
2. The appointment of ombudspeople, with meaningful input into their selection;
3. A clear plan for transparency sufficient to enable Googlers an individual ethical choice about what they work on; and
4. The publication of “ethical test cases”; an ethical assessment of Dragonfly, Maven, and Airgap GCP with respect to the AI Principles; and regular, official, internally visible communications and assessments regarding any new areas of substantial ethical concern.