How Facial Recognition Needs To Improve To Be Effective

Forbes 2 months ago

Facial recognition is one of the fastest-growing biometric technologies. What began as a surveillance tool to track down criminals and aid border control has been put forward as an authentication method to support the explosion of devices connected to the cloud and the internet of things (IoT). Biometric authentication systems are needed in the digital world, and they require a high level of efficacy to ensure that no one except the legitimate user can access a device. However, as it currently stands, facial recognition could end up making the world resemble the dystopian societies of film and literature. Border control is not the same as the IoT.

Last year, a Dutch nonprofit found that investigators could bypass the facial recognition feature on 42 of the 110 smartphones they tested. The study found that holding up a photo of the phone's owner is enough to unlock devices from Asus, BlackBerry, Huawei, Lenovo, LG, Nokia, Samsung, Sony and Xiaomi. Facial recognition systems on the market today rely on a few simple features that a picture can replicate, and their performance depends heavily on imaging conditions. A NIST study reports that facial recognition works better on standardized samples, such as mugshots from police department databases, than on unconstrained images.

Ethnicity markedly influences performance as well. NIST found that Black women face the highest false match rate. Apple's Face ID avoids some of those pitfalls by using an infrared camera to create a 3-D map of the user's face. But last month at Black Hat USA 2019, researchers from Tencent demonstrated an attack that bypasses Face ID by placing a pair of modified glasses on the victim's face, exposing a weak point: liveness detection. It is unlikely to be the only one.

Aside from those reported limitations, there are more complex issues to consider before further investment in facial recognition makes it the default biometric modality in consumer devices and environments like airports.

If AI Could Learn From Its Mistakes, It Could Learn Better

Most facial recognition systems employ a type of artificial intelligence (AI) known as a neural network to identify a person. Neural networks are systems built to classify data. After being fed thousands of training examples, a network can spot patterns and classify images without human intervention. Facial recognition backed by neural networks can reach up to 98% accuracy in the best case but may fall short depending on the training images. Neural networks are also quite fragile: remove a single cornerstone, and the whole edifice can crumble. Japanese researchers from Kyushu University discovered that the neural networks behind state-of-the-art image recognition systems wrongly labeled 74% of test images after a single pixel was changed (an alteration that would not affect a human's ability to identify the object). On average, altering three to five key pixels reduced the system's accuracy to 0.5%. Those pixels have to be chosen carefully, but an adversarial attack can locate them easily, and anyone in the field can mount one.
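The idea behind such an attack can be sketched in a few lines. This is a deliberately simplified toy, not the Kyushu researchers' actual method: a tiny hand-crafted linear classifier stands in for a real recognition network, and an exhaustive search over single-pixel changes stands in for their differential-evolution search. All names and values here are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an image classifier: two classes scored by summing
# the top half vs. the bottom half of a flattened 8x8 "image".
W = np.vstack([
    np.concatenate([np.ones(32), np.zeros(32)]),  # class 0: top half
    np.concatenate([np.zeros(32), np.ones(32)]),  # class 1: bottom half
])

image = np.full(64, 0.5)
image[0] = 0.6  # class 0 wins by a narrow margin: 16.1 vs. 16.0

def predict(x):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ x))

def one_pixel_attack(x):
    """Try setting each pixel to black (0.0) or white (1.0);
    return the first adversarial image that flips the prediction."""
    base = predict(x)
    for idx in range(x.size):
        for value in (0.0, 1.0):
            adv = x.copy()
            adv[idx] = value
            if predict(adv) != base:
                return adv, idx
    return None, None

adv, idx = one_pixel_attack(image)
print(predict(image), predict(adv), idx)  # class flips after one pixel
```

The point the toy makes is the same one the research makes: when a decision rests on a thin numerical margin, a single well-chosen pixel is enough to push it over the line, even though a human would see an essentially identical image.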

At the moment, AI learns from statistical pattern analysis using hundreds if not thousands of dimensions to extract unseen patterns from large datasets, and this is why neural nets can be easily fooled. However, if AI learned from mistakes, similar to how our brains learn, it could learn from experiences in a self-correcting manner. For example, children do not learn to walk using the input of hundreds of similar children; they fall a few times in the course of a couple of weeks and, one day, they take their first steps. Continuous learning using all the information generated from mistakes is not a technical problem; it is a choice.

Biometrics Belong On Your Device, Not The Cloud

Facial recognition databases supply a treasure trove of facial scans that are vulnerable in the event of a data breach. Just a few weeks ago, a major breach involving security company Suprema exposed the fingerprints of over 1 million people, along with facial recognition information and unencrypted usernames. Security researchers found this information sitting on a publicly accessible database. Unfortunately, once biometric data is exposed, the loss is permanent because, unlike a password, it cannot be changed. To take it a step further, a bad actor could use a single photo of you with Samsung's AI tool to generate a deepfake video and impersonate you online. Nothing stops a bad actor from impersonating anyone across a supposedly secure channel, such as a blockchain application or a banking app.

To prevent bad actors from accessing precious facial data, we need to stop private companies from aggregating personally identifiable information (PII) — especially biometrics — in databases stored on the cloud or any other kind of server that is not secure. The goal is to keep biometric data on the end user’s device at all times, and this is possible because of increased computing power on edge devices. The rise of on-device AI can help keep sensitive data on your device and eliminate vulnerabilities associated with cloud storage and transfer to and from the cloud.
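A minimal sketch of what on-device matching looks like, assuming a local model has already converted each face into a fixed-length embedding vector (the model, the 128-dimension size, and the 0.8 threshold are all illustrative assumptions, not any vendor's actual parameters). The enrolled template and the fresh scan are compared entirely on the device; no raw image or template ever needs to leave it.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled, candidate, threshold=0.8):
    """Accept only if the fresh embedding is close enough to the stored template."""
    return cosine_similarity(enrolled, candidate) >= threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)                          # template kept on-device
same_user = enrolled + rng.normal(scale=0.1, size=128)   # same face, sensor noise
impostor = rng.normal(size=128)                          # unrelated face

print(verify(enrolled, same_user), verify(enrolled, impostor))
```

Because only the comparison result leaves this function, a breach of any cloud service reveals nothing: the template exists in exactly one place, on hardware the user controls, which is the design behind secure enclaves on modern phones.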

Usage Has To Be Defined And Controlled

Facial recognition started as a surveillance tool. Police departments across the United States have already incorporated such systems into their practices. However, this year San Francisco, Oakland and Somerville, Massachusetts, became the first three U.S. cities to ban facial recognition, stating that the technology is error-prone and infringes on people's privacy and civil liberties. Amazon's Rekognition, a service that has been marketed to and adopted by police departments, mistakenly matched 26 of 120 California legislators to mugshots in a criminal database, according to a recently released ACLU test. Before we deploy facial recognition in further capacities, we need to understand its implications and regulate accordingly.

Our facial images are among the most precious data that exist today. The only way to avoid their exploitation is to implement facial recognition effectively and safely. We need to strengthen existing systems by teaching AI to learn from mistakes, keeping sensitive biometric information on the device and defining an appropriate regulatory framework.

