The limits of facial recognition

New privacy legislation will be required to limit the use of such technology and to provide checks and balances against abuse

A ‘smart departure’ self-service machine scans a woman’s face to authenticate her identity using face recognition technology, during a demonstration by the Immigration Department at Hong Kong Airport in Hong Kong, China. (Photo: Reuters)
Devangshu Datta
Last Updated: May 06, 2019 | 12:43 AM IST
Recognising people by visually scanning faces is something most human beings do automatically. It is a task computers have struggled to perform. But face recognition (FR) technology is improving and becoming more popular.

Smartphone and PC users deploy FR to control access. There are public-facing FR deployments in airports, public toilets, smart offices and apartment blocks. Police and security agencies use FR. Banks, malls and supermarkets use CCTV systems, and cities have large CCTV deployments in public areas, metro stations, municipal buildings, etc. Footage from these can be linked to FR databases. FR is also used by Chinese security forces to maintain surveillance on the Uighur minority, and it is key to running China’s social credit system.

But FR is not a magic bullet. There are major privacy and ethical issues involved in deployment. There are also huge technical challenges since FR systems often don’t work well except in controlled environments.

A PC or smartphone FR system has a narrow purpose. The user records her face once, and the device’s security software matches the face of whoever logs on against that stored image. In one sense this is a single-person FR database, though the manufacturer may end up with a large one as it collects images.
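
To make that mechanism concrete, here is a minimal sketch of the one-to-one (1:1) check, written in Python against the open-source face_recognition library. The image file names are hypothetical, and real device-unlock systems use their own proprietary pipelines; this is an illustration, not how any particular phone does it.

    import face_recognition

    # Enrolment: the user records one reference image of her face.
    enrolled = face_recognition.load_image_file("enrolled_user.jpg")
    enrolled_encoding = face_recognition.face_encodings(enrolled)[0]

    # Login: whoever is at the camera is compared against that one encoding.
    attempt = face_recognition.load_image_file("login_attempt.jpg")
    attempt_encodings = face_recognition.face_encodings(attempt)

    if attempt_encodings:
        # compare_faces accepts a match when the distance between the two
        # 128-dimensional encodings falls below the tolerance (default 0.6).
        match = face_recognition.compare_faces(
            [enrolled_encoding], attempt_encodings[0], tolerance=0.6
        )[0]
        print("access granted" if match else "access denied")
    else:
        print("no face found in the login image")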

A smart building processes, perhaps, a few thousand images. Airport check-in FR systems have a trickier task: the names, mugshots, flight numbers and other details of passengers must be matched to the people going through check-in.

There are privacy issues here as well. Passengers are either sharing biometrics with private entities (airlines and the private companies running airports), or private entities are accessing government databases for matches. Given the well-documented history of Aadhaar abuse and leaks by private entities connected to that system, it is easy to understand why this is problematic.

FR systems in public toilets don’t match individuals; they are for gender differentiation. This is a very different task from putting a name to a face and, in some ways, actually trickier. FR gender-ID systems are trained on databases of millions of facial images labelled “man”, “woman”, “girl” or “boy”. The error percentage is high because some men “look” like women and vice versa, and teens can be androgynous.
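
A toy sketch of how such a labelled-data classifier is built, using scikit-learn in Python, follows. Real systems train deep networks on millions of images; the randomly generated embeddings and labels below are hypothetical stand-ins, which is why the printed error rate hovers near chance.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: 1,000 face embeddings of 128 dimensions each,
    # labelled 0 ("man") or 1 ("woman") as in the training databases above.
    X = rng.normal(size=(1000, 128))
    y = rng.integers(0, 2, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The error rate is the fraction of held-out faces the model mislabels;
    # on random stand-in data it sits near 50 per cent, i.e. chance.
    print("error rate:", 1 - clf.score(X_test, y_test))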

Those errors are higher if there is hair-style and colour bias in the training databases. Systems trained mostly on white Caucasian faces have very high error rates when identifying the gender of people with darker skins. Transgender people may utterly confuse gender-differentiation systems. (There is a wider issue here, since most nations are struggling to decide which public toilets transgender people should use.)
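
Bias of this kind is usually exposed by measuring error rates per demographic group rather than in aggregate. The sketch below shows that audit in miniature; the labels, predictions and group tags are hypothetical placeholders.

    import numpy as np

    def error_rate_by_group(y_true, y_pred, groups):
        """Misclassification rate computed separately for each group."""
        return {
            g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)
        }

    # Hypothetical placeholders: a model can look accurate overall while
    # failing badly on an under-represented group, as the darker-skin
    # error rates cited above illustrate.
    y_true = np.array([0, 1, 0, 1, 0, 1, 0, 1])
    y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 0])
    groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(error_rate_by_group(y_true, y_pred, groups))  # {'A': 0.0, 'B': 1.0}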

In these cases, the subjects are cooperative and the checks are done in controlled environments. Somebody passing an airport or smart-office check-in “poses” in front of a neutral background, making recognition easier.

Police departments and security agencies have a much more difficult task, across multiple dimensions. They work with unclear images sourced from CCTV, with confusing backgrounds; they often deal with subjects who have been injured or killed, with resulting mutilations; some subjects deliberately obscure their appearances; and so on. The police also work with far larger databases, comprising millions of people and billions of images. There are often large matching errors.
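
This is the one-to-many (1:N) problem: a probe image is searched against an entire gallery, and a match is accepted only if the nearest entry is close enough. The sketch below, on randomly generated stand-in data, shows why scale hurts: every extra gallery entry is another chance for a false match, and the threshold trades false matches against misses.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical gallery: one 128-dimensional embedding per enrolled face.
    # Police-scale galleries run to millions of entries.
    gallery = rng.normal(size=(100_000, 128))
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

    # Probe embedding extracted from, say, a blurry CCTV frame.
    probe = rng.normal(size=128)
    probe /= np.linalg.norm(probe)

    # Exhaustive nearest-neighbour search (real systems use indexed search).
    distances = np.linalg.norm(gallery - probe, axis=1)
    best = int(np.argmin(distances))

    THRESHOLD = 0.6  # stricter thresholds cut false matches but miss true ones
    if distances[best] < THRESHOLD:
        print(f"candidate match: entry {best} at distance {distances[best]:.3f}")
    else:
        print("no match below threshold")  # the likely outcome on random data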

From what we know of the Chinese social credit and surveillance systems, the scales there are larger than anywhere else in the world. According to leaks into the public domain, about 2.5 million Uighurs are tracked constantly, using FR among other things. One smart-city data system that leaked from Alibaba covered a CCTV network in Beijing’s embassy district. Its FR system categorised and labelled people by ethnicity.

Selfies and other pictures shot with Chinese smartphones are often stored on clouds run by the phone manufacturers. This helps them build databases containing hundreds of millions of catalogued faces. The Chinese government has legal access to much of that data.

Chinese municipalities use FR systems to track down and fine citizens for minor offences, like jaywalking. China’s social credit system marks down people who criticise the government, default on debt, or indulge in drunken brawls. Low scorers can be punished by being denied access to airports and railway stations. Once again, FR comes into play.

While other nations might not put similar social credit scoring systems into operation, they will certainly use the technology. New privacy legislation will be required to delineate and limit the use of such technology and to provide checks and balances against abuse.

Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper