Former Google AI research scientist Timnit Gebru speaks on stage on day three of TechCrunch Disrupt SF 2018 at the Moscone Center on September 7, 2018 in San Francisco, California.
Kimberly White | Getty Images
Google struggled on Thursday to limit the fallout from the departure of a leading artificial intelligence researcher after the internet group blocked the publication of a paper on an important AI ethics issue.
Timnit Gebru, co-head of AI ethics at Google, said on Twitter that she was fired after the paper was rejected.
Jeff Dean, Google's head of AI, defended the decision in an internal email to staff on Thursday, saying the paper "did not meet our standards for publication". He also described Gebru's departure as a resignation, in response to Google's refusal to agree to unspecified conditions she had set for staying at the company.
The dispute threatens to cast a harsh light on Google's handling of internal AI research that could hurt its business, as well as on the company's long-running struggles to diversify its workforce.
Before leaving, Gebru complained in an email to colleagues that there was no accountability around Google's claims that it wanted to increase the proportion of women in its ranks. The email, first published by Platformer, also described the decision to block her paper as an attempt to "silence marginalized voices".
One person who worked closely with Gebru said there had been tension with Google management in the past over her advocacy for diversity. However, the immediate cause of her departure was the company's decision not to allow the publication of a research paper she had co-authored, the person added.
The paper examined potential bias in large language models, one of the hottest new areas of natural language research. Systems such as OpenAI's GPT-3 and Google's own BERT attempt to predict missing or subsequent words in a phrase or sentence, a method that has produced surprisingly effective automated writing and has helped Google better interpret complex search queries.
These language models are trained on vast amounts of text, usually drawn from the internet. That has prompted warnings that they could reproduce the racist and other biases contained in the underlying training material.
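As a rough illustration of the technique at issue, the sketch below uses the Hugging Face transformers library (an assumption chosen for illustration; the disputed paper is not tied to this code) to ask a BERT-style model to fill in a hidden word. The ranked guesses come directly from statistical patterns in the model's web-scale training text, which is where the bias concern arises.

```python
# A minimal sketch of masked-word prediction with a BERT-style model,
# using the Hugging Face `transformers` library. This is a generic
# illustration of the technique, not code from the disputed paper.
from transformers import pipeline

# Load a pre-trained BERT model with a fill-mask head.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words for the [MASK] slot. Its guesses
# reflect patterns in the web text it was trained on, including any
# social biases present in that text.
for prediction in fill_mask("The nurse said that [MASK] would be back soon."):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```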
"From the outside, it looks like someone at Google decided that this would harm their interests," said Emily Bender, professor of computational linguistics at the University of Washington, who co-authored the paper.
"Academic freedom is very important – there are risks when [research] takes place in places where [not] this academic freedom exists," giving companies or governments the power to "stop" research they do not approve. She added.
Bender said the authors had hoped to update the paper with newer research in time for it to be accepted at the conference to which it had already been submitted. But she added that it was common for such work to be superseded by newer research, given how quickly work in fields like this moves: "In the research literature, no paper is perfect."
Julien Cornebise, a former AI researcher at DeepMind, the London-based AI group owned by Google's parent company Alphabet, said the dispute "shows the risks of having AI and machine-learning research concentrated in the few hands of powerful industry actors, since it allows censorship of the field by deciding what gets published or not."
He added that Gebru was "extremely talented – we need researchers of her caliber, not filters, on these subjects." Gebru did not immediately respond to requests for comment.
Dean said the paper, which was co-authored with three other Google researchers as well as external collaborators, "does not take into account the latest research on reducing the risk of bias". He added that the paper "talked about the environmental impact of large models but ignored subsequent research showing much greater efficiency".
© 2020 The Financial Times Ltd. All rights reserved. No redistribution, reproduction or modification in any way.