An interesting article at the SMH has put forward that Google’s ‘autocomplete’ function is encouraging users to seek out websites that contribute to online hate speech by spreading racism and prejudice.
I mostly ignore autocomplete and rarely, if ever, notice Google’s auto-completion tool at work. However, for a laugh, it’s all too easy to type the words ‘Justin Bieber is…’ and have Google’s artificial brain offer up a host of potential (and often amusing) popular search phrases. In fact, an entire cottage industry is now devoted to the sacred art of ‘autocomplete fails’. Most of the better ones can be found at AutoCompleteFail, which lists a bevy of mostly embarrassing search/autocomplete (mis)matches.
But according to the article, Professor Kevin Dunn from the University of Western Sydney, a lead researcher on the University’s Challenging Racism Project, lays the blame squarely at the feet of Google’s ‘too-clever-for-its-own-boots’ autocomplete software for giving users the wrong idea:
“If people who have negative views about a given group feel like their views are in the majority, they feel much more emboldened to articulate those views or take some sort of nasty action.”
Given that Google isn’t telling anybody what to think per se, is Google really to blame for a random string of search words? Perhaps users should be responsible for their own actions and the websites they click on? Crazy idea, I know.
Be sure to offer your thoughts on this tricky subject in the comments section below.