Google tells scientists to use a positive tone for AI research
Google has told scientists to avoid casting its technology in a negative light. To that end, the company has tightened control over researchers' papers by introducing a 'sensitive topics' review.
In at least three cases, Google has asked authors to avoid presenting its technology in a negative light, according to internal communications and interviews with researchers. Under the new review procedure, researchers must consult with legal, policy, and public relations teams before pursuing topics such as face and sentiment analysis, or the categorization of race, political affiliation, or gender, among others. Advances in technology and the growing complexity of the external environment mean that seemingly innocuous projects can raise unintended ethical, legal, regulatory, and reputational problems. The policy is believed to have been put in place in June 2020 and to have been more widely communicated in December 2020.
The 'sensitive topics' procedure adds a round of scrutiny to Google's standard review of papers, which checks for pitfalls such as the disclosure of trade secrets. For some projects, the company has intervened only in the final stages. According to internal correspondence seen by Reuters, a senior Google manager told authors to 'take great care to strike a positive tone'.
Four staff researchers, including senior scientist Margaret Mitchell, believe Google is now interfering with crucial studies of potential technology harms. Mitchell has said that when researchers study the most appropriate subject given their expertise and are then barred from publishing on grounds that have nothing to do with high-quality peer review, it raises a serious problem of censorship.
The tension between Google and its staff
Tension between Google and some of its staff developed after the abrupt exit of scientist Timnit Gebru. Gebru co-led with Mitchell a 12-person team focused on ethics in artificial intelligence (AI) software. Gebru says she was fired by Google after she questioned an order not to publish research claiming that AI which mimics speech could disadvantage marginalized populations.