A new report by the Lords Select Committee in the UK claims that Britain is in a strong position to be a world leader in the development of artificial intelligence. But to get there - and to keep AI safe and ethical - tech firms should follow the Committee's newly proposed "AI Code".
Virginia Eubanks made the same mistake most would. In her job working with low-income women struggling to afford housing, she assumed they also struggled to access vital technology, such as the internet. But this technology isn't just accessible to them; it permeates access to the basic resources people in poverty need to survive, and it's often rigged against them. Her new book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, is about how technology has come to define people touched by poverty.
Mattel is cancelling Aristotle, a device described as "Alexa for kids", after facing criticism from lawmakers and parents' groups. In a statement, Mattel said Aristotle did not "fully align with Mattel's new technology strategy" and that it would not bring the device to market, "as part of an ongoing effort to deliver the best possible connected product experience to the consumer".
The company responsible for AlphaGo -- the first AI program to defeat a grandmaster at Go -- has launched an ethics group to oversee the responsible development of artificial intelligence. It's a smooth PR move given recent concerns about super-smart technology, but Google, which owns DeepMind, will need to support and listen to its new group if it truly wants to build safe AI.