
Google’s integrity is brought into question over research bias

Google, the ubiquitous search engine we all use for reliable and impartial research, could soon start to suppress information on ‘sensitive topics.’ Is this a censorship storm waiting to happen?

Alphabet’s flagship platform Google is under serious fire this week regarding censorship measures, and concern is growing that bias could soon discredit the platform as a reliable search engine.

Sparked by the forced resignation of Dr Timnit Gebru, an AI researcher at Google, Reuters has since surfaced a Google policy introduced last year which stated that papers on ‘sensitive topics’ would face increasingly stringent vetting before going live.

Dr Timnit Gebru, who previously led a 12-person research team, was supposedly shown the door at Google this month for questioning an order not to publish findings which scrutinised the ethical integrity of AI and its potential to impact disadvantaged communities.

According to Google’s small print, the more rigorous vetting process for research papers was originally designed to stop the disclosure of trade secrets, but recent developments point to a much broader practice than that, and one that may soon lead to big censorship disputes.

The explosion of research into the development and rapid integration of AI tech has led to upwards of 200 published papers in the last year alone, and as a big proprietor of the technology – using AI to personalise YouTube users’ content feeds, for instance – Google is clearly looking to protect its reputation. In fact, former employees at Google have revealed that company officials called for research authors to ‘strike a positive tone’ when discussing in-house developments or tech integration. Judging by the sheer outpouring of disgruntled ex-employees, it seems such interference is common too.

Despite assertions from Google that researchers have ‘substantial’ freedom with their work, senior scientists like Margaret Mitchell are becoming increasingly concerned for the company’s future under this new umbrella of scrutiny.

‘If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,’ Mitchell said in an interview with Reuters. Google has yet to respond to this growing air of trepidation.

Beyond the integration of AI, other topics deemed to fall under the ‘sensitive topics’ remit – and therefore subject to Google’s increased scrutiny – include the oil industry, China, Iran, Israel, Covid-19, insurance, and location data.

You’d have to say plenty on the list are considered sensitive topics for good reason, but others – chiefly artificial intelligence – are raising eyebrows among those who’ve trusted Google implicitly for upwards of a decade. You’d expect the likes of Facebook, Apple, and Microsoft to avoid promoting criticism of lucrative technologies like AI where possible and shout their usefulness from the rooftops.

However, when it comes to a platform we all expect to be entirely factual and without bias, it’s a worry that Google may be weighing the ramifications of publishing research and deliberating whether or not certain studies are in its best interest to release.

For now, you can guarantee Google is taking its time with its response. We’ll have to wait and see.