
Crackdown on 'deepfakes' indicates danger to credible media dissemination

Deepfake is a catch-all word to describe a large portfolio of AI-driven video-manipulation technologies

Devangshu Datta
5 min read Last Updated : Jan 28 2020 | 9:03 PM IST
On January 6, Facebook’s VP of Global Policy Management, Monika Bickert, announced that Facebook would crack down on certain categories of manipulated media. It would remove “misleading, manipulated media that had been edited or synthesised in ways not apparent to an average person [to] mislead someone into thinking that a subject of the video said words they did not actually say” and “video that is the product of artificial intelligence or machine learning that merges, replaces or superimposes content, making it appear to be authentic”. Twitter has announced a similar policy, and Reddit has also started to crack down on “deepfakes”, as these are called. Google is also cooperating in this technological struggle.
 
Deepfake is a portmanteau of “Deep Learning” and “Fake”. It is a catch-all word for a large portfolio of AI-driven video-manipulation technologies. Although deepfakes have not been around for very long — the term was coined in 2017 on Reddit — they have already induced legislative action. The recent policy announcements by social media networks were triggered by the impending US presidential election, since deepfakes are a particularly effective tool for the dissemination of fake news.
 
Deepfake is a spinoff from recognition technology. Apart from having unique faces, voices and figures, individuals also have unique postures, facial expressions, vocal cadences and modes of physical movement. If sufficient video footage of an individual exists, AI can analyse these patterns to recognise that person, even if their face is covered or they are deliberately distorting their voice. Taken one step further, AI can extrapolate how that person would say or do something.
 
“Strip” technologies such as DeepNude can accurately guess what a clothed person would look like naked. Ageing and de-ageing technologies allow an AI to guess what someone looked like in the past, or will look like in the future. This has been used to depict a young Harrison Ford, for instance.
 
Deepfake can, therefore, be used to create realistic videos of somebody doing and saying things they have not actually done or said. For example, it would be possible to make deepfake videos in which President Donald Trump is chanting in Sanskrit, Lionel Messi is swimming, and Roger Federer is playing cricket. These could look realistic because the AI can analyse hundreds of hours of footage of speech patterns, body-motion patterns and facial expressions to create such videos. Mark Zuckerberg has himself been the victim of a deepfake video, which went viral.
 
The technology has already been used by fans to make Nicolas Cage “star” in various movies. It has also put dead actors, such as Carrie Fisher and James Dean, onscreen, and the estates of other dead actors have filed lawsuits to prevent this from happening. It is also possible to create “synthetics” — full-body moving, talking images of people who don’t actually exist.
 
These technologies are proliferating in the form of programs anybody can easily use. There are legitimate applications. For example, content clearly labelled as deepfake can be used for parody or entertainment. An aspiring film-maker may make a film starring non-existent people. Deepfake can also be used to train athletes to improve their techniques, or to create satirical content. Indeed, Facebook will allow deepfake attempts at humour, parody or satire (though this can be very subjective).
 
But there are also many nasty applications. Apart from creating disinformation to further political causes, deepfake technology can generate celebrity pornography and “revenge porn” using footage of ex-partners. Porn is the most popular use case — according to estimates by researchers, about 95 per cent of deepfakes in the public domain consist of porn. It is also possible that deepfake will soon be weaponised to “frame” civil activists in undemocratic regimes.
 
The state of California has banned deepfake revenge porn and celebrity porn. It’s an obvious hotspot, since both Silicon Valley and Hollywood are in California. At the federal level, the US Congress has contemplated legislation to regulate and control deepfakes. But this is hard to formulate without stepping on free speech and freedom of expression. In any case, it’s hard to see legislation keeping pace with a fast-evolving technology.
 
Deepfake generation relies heavily on “training” AI through algorithms that analyse footage, and recognise and recreate subjects. It often uses a method called Generative Adversarial Networks (GAN), which sets up two algorithms to battle each other: one algorithm tries to forge convincing deepfakes while the other points out flaws in the forgery. In this way, the AI learns and improves.
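The adversarial loop described above can be sketched in miniature. This is a toy illustration only, not how deepfake software is actually built: the “real” data are samples from a one-dimensional Gaussian, the generator is a single linear map, and the discriminator is a logistic classifier, so the forger-versus-critic dynamic is visible in a few lines. All names and hyperparameters here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow warnings in exp for extreme inputs
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# "Real" data the generator must learn to imitate: samples from N(4, 0.5)
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

w, b = 1.0, 0.0   # generator: g(z) = w*z + b, with noise z ~ N(0, 1)
a, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(a*x + c)
lr, n = 0.05, 64

for step in range(3000):
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = w * z + b

    # Discriminator ascent step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: push D(fake) toward 1 (non-saturating GAN loss)
    d_fake = sigmoid(a * x_fake + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

# After training, the fakes should cluster near the real mean of 4.0
samples = w * rng.normal(0.0, 1.0, 5000) + b
print(f"fake mean = {samples.mean():.2f}, target mean = 4.0")
```

Each round, the critic gets slightly better at spotting fakes, which in turn gives the forger a sharper gradient to follow; real systems play exactly this game, but with deep networks over video frames instead of two scalars.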
 
Identifying deepfakes is not an easy task. Facebook is putting $10 million into creating better recognition systems, and is hiring actors to make videos for training AI algorithms. Google has already put together a publicly available dataset of 3,000-odd deepfake videos to train AI programs to identify deepfakes. This will turn into a race of competing technologies: deepfake technology will improve rapidly even as detection gets more sophisticated.
 
There are many other ways to manipulate media of course. But this one could eventually lead to the complete collapse of credible information dissemination.
 

The AI avatar

  • Deepfake is a spinoff from recognition technology 
  • It can create realistic fake videos of actual people
  • Furthers political causes by spreading disinformation
  • Generates celebrity pornography
  • Accurately creates time-lapse images of people, showing them aged or in childhood


Topics: artificial intelligence, media freedom