Tagged With aiethics

Education and publishing giant Pearson is drawing criticism after using its software to experiment on over 9,000 maths and computer science students across the US. In a paper presented at the American Educational Research Association's annual meeting, Pearson researchers revealed that they tested the effects of encouraging messages on students who used the MyLab Programming educational software during 2017's Autumn semester.

A coalition of civil rights groups filed suit against Facebook yesterday, alleging the already harried company violated the US Fair Housing Act by allowing housing advertisers to discriminate against minority users. The National Fair Housing Alliance, Fair Housing Justice Center, Housing Opportunities Project for Excellence, and the Fair Housing Council of Greater San Antonio filed a joint suit asking a judge to declare Facebook's policies discriminatory and to require the company to change its advertising policies to prevent discrimination.

In the hopes of deterring violence, schools are turning to big data analytics to examine social media posts for the earliest signs of violence - depression, resentment and isolation. Shawsheen Valley Technical High School in Massachusetts has turned to Social Sentinel, a data analytics company that says it can use the type of threat detection police agencies use to identify students at risk. But experts worry student social media mining, even with the best intentions, is a slippery slope to treating students the way we treat suspects.

Advanced surveillance technologies once reserved for international airports and high-security prisons are coming to schools across America. From New York to Arkansas, schools are spending millions to outfit their campuses with some of the most advanced surveillance technology available: face recognition to deter predators, object recognition to detect weapons, and licence plate tracking to deter criminals. Privacy experts are still debating the usefulness of these tools and on whom they should, and should not, be used, but school officials are embracing them as a way to save lives in times of crisis.

Last week, The Times of London teamed up with a tech company and a creative agency to digitally recreate "the speech JFK would have made in Dallas had he not been assassinated." The Dallas Trade Mart speech never happened - Kennedy was killed the day he was supposed to deliver it - but thanks to artificial intelligence, you can now listen to "JFK" give the 22-minute speech in his own voice.

Google was served at least four sweeping search warrants by Raleigh, North Carolina police last year, requesting anonymised location data on all users within areas surrounding crime scenes. In one case, Raleigh police requested information on all Google accounts within 17 acres of a murder, overlapping residences and businesses. Google did not confirm or deny whether it handed over the requested data to police.

In 2014, activists rallied for body cameras after a number of brutal officer-involved shootings in the US. More officers than ever are now wearing cameras, but who gets to see the footage? Upturn, a DC-based policy think tank, recently found that body camera footage of fatal police shootings isn't consistently released to the public. Researchers reviewed 105 cases where body cameras likely recorded footage of officers killing US civilians. In 40 of those cases, the footage was never made public. When it is released, it's usually about a week after the shooting.

Rick Smith, the founder and CEO of Axon (formerly Taser), offered a bold new strategy for preventing school shootings on Thursday: approach the problem like a hackathon. In a letter to the President and an appearance on CNBC, he called for a national "Grand Challenge on School Safety," a DARPA-funded contest where tech companies would compete for a $US5 million prize by pitching "innovative solutions" to gun violence in schools.

Police in West Yorkshire, England are rolling out mobile fingerprint scanners to instantly identify criminal suspects and people "experiencing a medical emergency". Two hundred and fifty mobile devices are being deployed in West Yorkshire, and the British government says the tech will come to 20 police departments across the country by the end of this year. The scanners remotely check a person's fingerprint against criminal and immigration databases, a process experts say bypasses safeguards against police overreach. The check takes less than one minute.

On Tuesday, the British government announced that it plans to release a new AI it claims can detect 94 per cent of ISIS propaganda videos with 99.995 per cent accuracy. The UK's Home Office says that platforms can use its AI to scan videos as they're being uploaded, detecting terrorist content and blocking it from ever appearing online.

A new review of face recognition software found that, when identifying gender, the software is most accurate for men with light skin and least accurate for women with dark skin. Joy Buolamwini, an MIT Media Lab researcher and computer scientist, tested three commercial gender classifiers offered as part of face recognition services. As she found, the software misidentified the gender of dark-skinned females 35 per cent of the time. By contrast, the error rate for light-skinned males was less than one per cent.

Chinese police have begun using glasses equipped with facial recognition-enabled cameras to spot fugitives travelling through train stations. Though Chinese police have said the glasses will spot people using fake IDs or travelling to avoid a warrant, many are concerned about China using the tech to target political advocates and minorities. China has been accused of using face recognition tech to "fence in" the Muslim Uighur minority in northwestern Xinjiang.