Why face recognition technologies are getting the thumbs down

Sunil Mishra | Updated: Saturday, November 07, 2020, 06:24 PM IST

Tech-n-Biz

A fortnightly section on technology and business

Just a few months ago, IBM's CEO announced that the company was taking its focus off facial recognition services. Arvind Krishna, in a letter to the US Congress, wrote that 'IBM no longer offers general purpose IBM facial recognition or analysis software'. Microsoft issued a similar statement, saying it would not sell face recognition technology to US police forces, and Amazon has stopped the sale of similar technology as well. Activists are running a campaign asking Facebook to stop using face recognition altogether, even though Facebook provides an option to disable it by choice.

Many companies are still working on the technology, and no statutory authority has banned its use, but serious ethical questions are being raised about it. The most recent backlash came during the Black Lives Matter protests in the US, where local police controversially used the technology to identify miscreants. It raised the question of racial profiling, with human biases being amplified in AI models.

Mystery behind face recognition

Face recognition was one of the most difficult problems for computers to crack because humans don't know exactly how it works. Human brains have evolved over thousands of years specifically to recognise faces and read emotions, because this was important for our survival. Two crumpled pieces of paper may be geometrically different, but our eyes can't differentiate them easily. Two human faces are 90 per cent similar structurally, yet our eyes easily catch the minute variations and we remember them as different people. It means our brains have special abilities for reading human faces. Sometimes this ability goes awry, resulting in what neurologists call prosopagnosia (face blindness).

Oliver Sacks wrote a seminal book called 'The Man Who Mistook His Wife for a Hat', based on the experiences of people who could not recognise faces even though their vision was fine.

Face recognition has less to do with vision and more to do with complex neural processing. So, when computers adopted convolutional neural networks to extract prominent features and process them at speed, the puzzle was solved. It was all about pattern recognition at a more abstract level of reasoning. With the higher processing power of a computer, loads of pixel data could be abstracted to create a face model with high accuracy. Today, when you upload a picture to Facebook, the AI algorithm scans the faces, matches them against your existing friend list and suggests names based on the closest matches.

Computer vision has now exceeded human abilities in some tasks. In the 2017 ImageNet challenge (a worldwide competition that tracks progress in computer vision), an AI could classify images with a 2.3 per cent error rate, well below the human error rate of about 5 per cent. Moreover, the accuracy of computers has been increasing every year, while human vision is not getting any better, so AI will soon beat humans comprehensively in this area. Most smartphones already use face login, and it works well, even when you grow a beard, have swollen eyes or grow old. The technology relies on the relative positioning of facial features rather than absolute measurements, which is why it works in most situations.
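To make the idea concrete, here is a minimal, hypothetical sketch of the matching step in Python. It assumes a pre-trained neural network has already converted each face photo into a fixed-length embedding vector (random vectors stand in for real embeddings here); recognition then reduces to finding the stored embedding closest to the new one. The names, the 128-dimension size and the 0.6 threshold are illustrative assumptions, not details of any real system.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings; closer to 1.0 means more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_name(query_embedding, known_faces, threshold=0.6):
    # Return the known person whose stored embedding best matches the query
    # face, or None if nothing is similar enough to suggest.
    best_name, best_score = None, -1.0
    for name, embedding in known_faces.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Stand-in embeddings: random vectors used in place of a real model's output.
rng = np.random.default_rng(42)
friends = {"Alice": rng.normal(size=128), "Bob": rng.normal(size=128)}
new_photo = friends["Alice"] + 0.1 * rng.normal(size=128)  # another photo of "Alice"

print(suggest_name(new_photo, friends))  # -> Alice

Because the comparison is between abstract feature vectors rather than raw pixels, small changes such as a beard or ageing shift the embedding only slightly, which is why the match usually still succeeds.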

Unintended consequences

Face technology has some unintended consequences as well. It does not just recognise faces; it can also classify people by race, ethnicity, age, sex and many other demographic and psychographic attributes. Normally, these are considered private data and would require the user's consent before being collected. So it is, in a way, acquiring private data without the explicit knowledge and permission of users.

How this data can be misused is left only to the imagination. With a good algorithm, it can be used for political or even criminal manipulation, whether by private or state agencies. This was the prime reason for the opposition to the use of this technology by police forces in the US.

If you have ever seen videos on social media where Trump or Obama is lip-syncing to a popular song, you might have paused for a moment to wonder whether it was real. In one video from an MIT lab, the faces of various noted personalities were made to sing and move in tune with a song. The technology, known as 'deepfake', uses a neural network to model facial expressions and movements. When Aristotle and Trump are singing the same song, you can safely guess it must be fake, but if the technology is deployed cleverly, it can make impersonation on video child's play. If that happens, you will start doubting whether any YouTube video is real. Only another AI system can tell whether it was faked.

Social scoring

In China, as an experiment, some schools have deployed face recognition using CCTV cameras. The cameras monitor students' behaviour throughout class sessions and assign each student a score based on their actions. The AI model can analyse hours of footage and determine whether a student was attentive in class, and it can be more consistent than the judgement of a teacher, who can only watch so much at once. The same model is being extended beyond schools, where images from street cameras and other public CCTV feeds can be used to infer the public behaviour of any citizen. It seems lifted straight from George Orwell's '1984', where Big Brother is watching every inhabitant all the time. This creepy, all-pervasive state vigilance is an example of a dystopian world in which everyone is as good as living in a glasshouse.
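As a purely illustrative sketch of how such a score could be computed (assuming, hypothetically, that a classifier has already labelled each camera frame with whether a given student looked attentive), the Python snippet below simply takes each student's share of 'attentive' frames; the student IDs and labels are made up.

from collections import defaultdict

def attention_scores(frame_labels):
    # frame_labels: (student_id, looked_attentive) pairs, one per analysed frame.
    # Returns each student's share of frames in which they appeared attentive.
    frames_seen = defaultdict(int)
    frames_attentive = defaultdict(int)
    for student_id, looked_attentive in frame_labels:
        frames_seen[student_id] += 1
        frames_attentive[student_id] += int(looked_attentive)
    return {s: frames_attentive[s] / frames_seen[s] for s in frames_seen}

# Made-up labels for two students across four analysed frames.
labels = [("student_1", True), ("student_1", False),
          ("student_2", True), ("student_1", True)]
print(attention_scores(labels))  # -> {'student_1': 0.666..., 'student_2': 1.0}

The contentious part is not the arithmetic but the assumption that a frame-by-frame classifier can reliably judge attentiveness at all, and that such judgements should be recorded about people in the first place.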

It is this fear of privacy invasion that worries proponents of the ethical use of AI. It is not the potential of AI but our limited ability to deal with its consequences that is driving these self-imposed limits on face recognition technologies. Every new technology has potential advantages but also possible misuse. Just as we were able to restrict the use of nuclear technology in bomb-making and channel it into things like nuclear energy, we should be able to steer computer vision towards use cases that benefit humans in the long run.

Mishra is a software professional with over 20 years of experience with leading IT and consulting companies. He also works with universities as a startup mentor in the area of new technologies.
