Long-simmering concerns over Amazon’s facial recognition software came to a head today during the company’s annual shareholder meeting, where activist investors put forward several agenda items that would increase oversight of the software and limit who could use it.
None of the proposals managed to meet the 50 per cent threshold needed to pass, an Amazon spokesperson confirmed to Gizmodo.
For the shareholders themselves, as well as for civil rights groups that have been raising the alarm about bias, potential for abuse, and lack of transparency with Amazon’s Rekognition software, the failure to pass these resolutions—which would have been non-binding had they been successful—was to be expected.
Had it passed, the first resolution could have pressured Amazon not to sell Rekognition to governments and government agencies—doing so, these shareholders state, “contradicts Amazon’s opposition to facilitating surveillance”—unless the software was found not to “cause or contribute to actual or potential violations of civil and human rights.” The second resolution requested an independent review of Rekognition’s impact.
“The fact that there needed to be a vote on this is an embarrassment for Amazon’s leadership team. It demonstrates shareholders do not have confidence that company executives are properly understanding or addressing the civil and human rights impacts of its role in facilitating pervasive government surveillance,” Shankar Narayan, director of the American Civil Liberties Union of Washington, told Gizmodo in a statement.
“While we have yet to see the exact breakdown of the vote, this shareholder intervention should serve as a wake-up call for the company to reckon with the real harms of face surveillance and to change course.”
That the vote took place at all is more than a symbolic victory, too, as Amazon did not sit idly by while groups like the ACLU, Open MIC, and worker unions like the UFCW agitated in favour of these agenda items. The company, in fact, made a vigorous case to the SEC to exclude these items from the shareholder meeting, though ultimately it did not get its way (a rarity for a company of Amazon’s size).
“Today’s annual meeting makes clear that this issue is not going to go away: Amazon’s refusal to acknowledge and confront the potential harms of Rekognition is ongoing evidence of corporate arrogance. The company has no internal mechanisms to assess the potential harms of Rekognition when the software is deployed by law enforcement and government agencies,” Michael Connor, Open MIC’s executive director, wrote in a statement.
“The company does not have an internal ethics panel to review its artificial intelligence technologies. And the company refuses to engage in substantive dialogue with civil and human rights organisations regarding the risks presented by Rekognition.”
Independent studies have concluded that Rekognition may have biases against women and people of colour; an ACLU study also saw the software falsely match over two dozen members of Congress to photos in a mugshot database.
Amazon countered the validity of these studies by claiming the “confidence thresholds” used by the researchers were inconsistent with what law enforcement customers would use in the field. Gizmodo later reported that the Washington County Sheriff’s Office—the one law enforcement group confirmed to be using Rekognition—wasn’t using confidence thresholds at all.
Amazon declined to provide a comment on the record. Final tallies on these votes are expected to be disclosed on Friday, and proposals that receive 3 per cent of the vote or higher can be reintroduced.