
Identifying bots

A new California law could have far-reaching consequences

Business Standard Editorial Comment
Last Updated : Oct 13 2018 | 8:35 PM IST
On September 28, the state of California in the United States amended its Business & Professions Code to make it mandatory for automated accounts, or bots as they are known, to declare their non-human identity. Under the new law, bots cannot pretend to be real people in order to “incentivise a purchase or sale of goods or services in a commercial transaction, or to influence a vote in an election”. The disclosures must be “clear, conspicuous, and reasonably designed”, which means they cannot be buried in the depths of an end-user licence agreement; they would have to be stated upfront, in a bot’s Twitter bio or Facebook profile, for instance. This provision, which comes into effect on July 1, 2019, could have far-reaching consequences. As things stand, it applies only to platforms with over 10 million unique visitors a month, but even that threshold covers the large social media platforms and e-commerce sites, as well as a host of financial services sites and utilities.
The new law has some obvious commercial applications in that it should stop automated spam calls and emails or, at the least, make it obvious when a client or potential customer is being harassed by a silicon entity. It should also put a stop to unethical marketing tactics, such as a product or service being “endorsed” by bot armies, a practice common enough with Ponzi schemes. However, the law would not impede the legitimate use of bots and artificial intelligence agents by utilities or e-commerce sites to, for example, garner customer feedback, process queries, or conduct surveys. Nor would it delegitimise the use of bots to issue weather reports or earthquake warnings, or to catalogue search results.

The real utility of this law might lie in the realm of sanitising election campaigns. One of the most common ways of amplifying fake news and manipulating opinion in a political context is bot-driven abuse. By setting up a bot army to “like” and “retweet” fake news, or to “like” and link to Facebook pages, it is possible to widen the reach of propaganda and to create an illusion of high engagement and credibility. This tactic is used across the world by many mainstream political parties as well as by some of the more radical terror groups. Bots have also been deployed extensively by “influencers”, such as the bad actors who sought to sway the 2016 US presidential election and the Brexit referendum.
The scale of bot usage for such malicious purposes is huge. Twitter claims it removes close to 10 million bots per week for “potentially spammy behaviour” and is said to be considering labelling automated accounts in any case. A law like this could lend teeth to such efforts. It would also force Facebook and other social media platforms to take similar action to identify and label bots.

Obviously, a law that applies only to California does not have a great deal of traction. However, if it succeeds in reducing bot abuse in that state, it could become a model for others across the world. If it works in California, which is at the “cutting edge of the cutting edge” in terms of tech usage, it should work elsewhere.
From a longer-term perspective, such a law will soon be necessary in any case. Google has demonstrated how its Duplex AI can make restaurant reservations and airline bookings on voice calls by imitating human speech patterns, complete with pauses and hesitations. It would not be difficult to program an AI agent to imitate the voice of a well-known personality to carry out “spoofs”. As the scope for such impersonation grows, a law on these lines will become imperative.