Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments. Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn’t true.

While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background.
Artificial intelligence’s struggles with accuracy are now well documented. The list of falsehoods and fabrications produced by the technology includes fake legal decisions that disrupted a court case, a pseudo-historical image of a 20-foot-tall monster standing next to two humans, even sham scientific papers. In its first public demonstration, Google’s Bard chatbot flubbed a question about the James Webb Space Telescope.
The harm is often minimal. Sometimes, however, the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse. Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist.
Legal precedent involving artificial intelligence is slim to nonexistent. The few laws that currently govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court.
An aerospace professor filed a defamation lawsuit against Microsoft this summer, accusing the company’s Bing chatbot of conflating his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on the lawsuit.
Such A.I. hallucinations, including fabricated biographical details and mashed-up identities that some researchers call “Frankenpeople,” can be caused by a dearth of information about a person available online.
To help address mounting concerns, seven leading A.I. companies agreed in July to adopt voluntary safeguards. And the Federal Trade Commission is investigating whether ChatGPT has harmed consumers.
Scott Cambo, who helps run the project, said he expected “a huge increase of cases” involving mischaracterizations of actual people in the future.