Google firing respected AI researcher Timnit Gebru is cause for concern
The sacking of a respected artificial intelligence (AI) researcher and key member of Google’s Ethical AI team has set off a firestorm. Over 1,400 Google employees and thousands of techies employed in other Silicon Valley businesses have signed a letter of protest about the manner of her dismissal.
Timnit Gebru, a computer scientist of Ethiopian origin, was on leave from her post as co-head of Google’s Ethical AI team when she received an e-mail saying her resignation had been accepted, and found her access to her corporate e-mail account cut off. According to her, she had not resigned but was in the middle of negotiations. Jeff Dean, who heads Google’s AI research, says a paper she co-authored did not “meet the bar for publication”.
The paper in question, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, explores the ethics of building large language models. Gebru’s co-authors are four other Google researchers and Emily Bender, a professor of computational linguistics at the University of Washington. Bender is the only co-author to have spoken on the record. She says the paper is still a preprint at an early stage of review, not an official publication. Hence, the stated reason for the dismissal (quite apart from its manner) sounds thin.
Gebru has a towering reputation in AI circles. She is also known for having been part of the Apple team that developed the iPad. The 37-year-old Stanford-trained electrical engineer also co-founded the group Black in AI, after discovering there were just eight black women (including her) at an 8,500-strong AI conference.
Gebru was a pioneer in pointing out inherent flaws and racist biases in AI-based facial recognition programs, in a paper titled “Gender Shades”. These algorithms are trained and fine-tuned on datasets of many thousands of faces, and the datasets used in training were overwhelmingly white and male. As a result, many AI face-recognition programs have an enduring problem recognising non-white faces and women’s faces, and above all the faces of women of colour.
This can be embarrassing when software highlights white subjects and crops people of colour out of group photographs. It can be far more dangerous when a face-recognition program wrongly identifies an innocent person as a murder suspect. As a more or less direct result of that paper, Amazon, IBM and Microsoft stopped or suspended sales of face-recognition software to the police.
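The method at the heart of “Gender Shades” is what researchers call disaggregated evaluation: instead of quoting a single overall accuracy figure, error rates are computed separately for each demographic subgroup, which is what exposes the gap. Here is a minimal sketch in Python; the field names and the sample numbers are illustrative assumptions, not data from the paper:

```python
from collections import defaultdict

def disaggregated_error_rates(results):
    """Compute a classifier's error rate separately for each
    demographic subgroup, rather than one aggregate number.

    `results` is a list of dicts with hypothetical keys:
    'group', 'predicted' and 'actual'.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Fabricated numbers, purely in the spirit of the Gender Shades finding:
# near-perfect accuracy for lighter-skinned men, far worse for
# darker-skinned women.
sample = (
    [{"group": "lighter-skinned male", "predicted": "m", "actual": "m"}] * 99
    + [{"group": "lighter-skinned male", "predicted": "f", "actual": "m"}] * 1
    + [{"group": "darker-skinned female", "predicted": "f", "actual": "f"}] * 65
    + [{"group": "darker-skinned female", "predicted": "m", "actual": "f"}] * 35
)
print(disaggregated_error_rates(sample))
# {'lighter-skinned male': 0.01, 'darker-skinned female': 0.35}
```

An aggregate accuracy figure over this sample would look respectable at 82 per cent; only the per-group breakdown reveals that one group is failed 35 times as often as another.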
There are many other ethical issues and risks in training AI. The unreleased “parrot” paper has been reviewed (unofficially) by MIT Technology Review. Language models are now trained on massive datasets of natural language to improve their ability to understand and generate language in human-like ways.
The paper points out several risks. One is simply the massive overhead, in terms of computing power, involved in such training. Computing power equates to electricity consumption, which equates to environmental impact. One researcher (not an author of the parrot paper) estimated that a single round of training of Google’s BERT (Bidirectional Encoder Representations from Transformers) program, which helps Google understand search queries, has roughly the carbon footprint of a commercial New York-San Francisco flight. Large language models require multiple rounds of training, which multiplies the environmental impact. Another implication is a “rich society” bias, since only wealthy nations or corporations can provide these resources.
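The arithmetic behind such estimates is straightforward: the power drawn by the hardware, multiplied by the length of the run and the datacentre’s overhead, gives the energy consumed; multiplying that by the carbon intensity of the local electricity grid gives the emissions. A back-of-envelope sketch follows; every input number is an illustrative assumption, not a measured figure:

```python
def training_co2_kg(avg_power_kw: float, hours: float,
                    pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Back-of-envelope CO2 estimate for one training run.

    avg_power_kw        -- average draw of the accelerators (assumed)
    hours               -- wall-clock length of the run (assumed)
    pue                 -- datacentre Power Usage Effectiveness overhead
    grid_kg_co2_per_kwh -- carbon intensity of the local grid (assumed)
    """
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Purely illustrative inputs: a multi-GPU run drawing ~12 kW for four
# days, in a datacentre with PUE 1.1, on a ~0.4 kg CO2/kWh grid.
print(round(training_co2_kg(12, 96, 1.1, 0.4)))  # ~507 kg of CO2
```

Half a tonne of CO2 for a single hypothetical run; repeat it for every round of hyperparameter tuning and every retraining, and the footprint compounds quickly.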
Another problem is a lack of discrimination. Large language models hoover up whatever natural language is available without weeding out abusive, racist, sexist or hateful speech. If an AI is taught that such speech is normal, that is a problem.
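Nor is the weeding easy. The obvious remedy, filtering the training data against a blocklist of offensive words, both misses abuse that only context reveals and erases benign text, such as communities discussing reclaimed terms, a trade-off researchers in this area have flagged. The deliberately naive sketch below shows both failure modes; the word list and documents are hypothetical:

```python
# A deliberately naive blocklist filter, to show why "just weed out
# hate speech" is harder than it sounds.
BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholder terms

def keep_document(text: str) -> bool:
    """Drop any document containing a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(BLOCKLIST)

docs = [
    "A hateful rant using slur1 ...",  # correctly dropped
    "An oral history of how slur1 was reclaimed by the community",
    # ^ wrongly dropped: the blocklist cannot see context, so it also
    #   erases exactly the voices that reclaimed the term
    "Polite-sounding abuse with no blocklisted word at all",
    # ^ wrongly kept: context-dependent hate speech sails through
]
print([keep_document(d) for d in docs])  # [False, False, True]
```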
Moreover, changes in social usage driven by movements such as Black Lives Matter, MeToo or the LGBTQ movement will not be captured, because they form only a small subset of all content. Similarly, language generated in poorer nations will be under-represented, simply because those nations produce a smaller quantum of online content. Finally, of course, there is the issue of mimicry: an AI that “speaks” naturally may be a wonderful tool for scams.
Gebru will have no trouble finding employment. But the opaque process that culminated in her dismissal could have a chilling effect on the entire field of AI research.