Facial analysis still biased, the press still confusing it with facial recognition

A University of Maryland assistant professor audited major facial recognition services, Government Technology claims, and found significant racial bias, with match rates . . . not reported.

The article is the latest example of the media confusing facial analysis with facial recognition. Indeed, the article notes that it was “emotion recognition technology within three facial recognition services” that was tested.

The researcher, Lauren Rhue, is not quoted using the term “recognition”; rather, she warns that we should ask ourselves “do we need to analyze faces in this way?”

The author of the article, however, uses the term “facial recognition” six times and refers to real-world biometric face-matching applications, but makes no attempt to differentiate between software answering the question “are these pictures of the same person?” and software answering “how does this person feel?”
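To make that distinction concrete, here is a minimal sketch contrasting the two tasks. Everything in it is a hypothetical toy stand-in (no vendor SDK or real model is being called): face verification compares a pair of images and answers an identity question, while emotion classification labels a single image and involves no identity at all.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a face-embedding model (toy, deterministic per image).
    rng = np.random.default_rng(int(image.sum() * 1000) % 2**32)
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def is_same_person(image_a: np.ndarray, image_b: np.ndarray,
                   threshold: float = 0.6) -> bool:
    # Facial recognition (1:1 verification): "are these pictures of the same person?"
    # A real deployment compares embedding similarity against a tuned threshold.
    similarity = float(embed_face(image_a) @ embed_face(image_b))
    return similarity >= threshold

def emotion_label(image: np.ndarray) -> str:
    # Facial analysis: "how does this person feel?" No identity is involved;
    # this toy picks a label deterministically where a real system would classify.
    labels = ["happy", "sad", "angry", "surprised", "neutral"]
    return labels[int(image.mean() * 100) % len(labels)]

a = np.random.rand(64, 64)  # placeholder "face" images
b = np.random.rand(64, 64)
print(is_same_person(a, b))  # identity decision over a *pair* of images
print(emotion_label(a))      # affect label for a *single* image
```

The bias findings in the study concern functions of the second kind; biometric matching deployments use the first.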

Some AI researchers consider emotion recognition closer to phrenology than to biometrics.

The article originally appeared in the Baltimore Sun, meaning that readers of both publications risk being misled by the same kind of imprecise language that biometrics industry insiders, and scientists in general, have long been warning about.

Joy Buolamwini conducted a similar study in 2018, but Buolamwini made it clear that facial analysis software was being tested and drew more general observations about AI and biometrics from there. Those observations have since proven generally accurate.

Millions of dollars and the time of hundreds of researchers have been dedicated to solving the problem of demographic disparities, or bias, in facial recognition since then, with significant progress. That makes generalizing from “emotion recognition” results to facial biometrics all the more misleading.

Arguably worse are the false parallels drawn between the findings and the use of biometrics at the Port of Baltimore, which does not use emotion recognition from the surveyed providers, or any others, to match faces.

The question posed by Rhue above is a reasonable one, and the answer is probably “no”. There is still work to be done to clarify scope and remove bias in machine learning and related fields.

There is also a need to educate the public about what facial recognition means and what it doesn’t.
