Microsoft Corp. is calling on Congress to pass a comprehensive law to crack down on images and audio created with artificial intelligence — known as deepfakes — that aim to interfere in elections or maliciously target individuals.
Noting that the tech sector and nonprofit groups have taken steps to address the problem, Microsoft President Brad Smith on Tuesday said, “It has become apparent that our laws will also need to evolve to combat deepfake fraud.” He urged lawmakers to pass a “deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
The company is also pushing Congress to require that AI-generated content be labeled as synthetic, and for federal and state laws that penalize the creation and distribution of sexually exploitative deepfakes.
The goal, Smith said, is to safeguard elections, thwart scams and protect women and children from online abuses. Congress is currently mulling several proposed bills that would regulate the distribution of deepfakes.
“Civil society plays an important role in ensuring that both government regulation and voluntary industry action uphold fundamental human rights, including freedom of expression and privacy,” Smith said in a statement. “By fostering transparency and accountability, we can build public trust and confidence in AI technologies.”
Manipulated audio and video technology has already created some controversy in this year’s campaign for US president.
In one recent instance, Elon Musk, owner of the social media platform X, shared an altered campaign video that appeared to show the Democratic presidential candidate, Vice President Kamala Harris, criticizing President Joe Biden and her own abilities. Musk didn’t clarify that the video had been digitally manipulated, and later suggested it was intended as satire.
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)