AI bias may come from annotation instructions

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (formerly Deep Science), aims to collect some of the most relevant recent discoveries and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

This week in AI, a new study reveals how bias, a common problem in AI systems, can start with the instructions given to the people recruited to annotate the data from which AI systems learn to make predictions. The co-authors find that annotators pick up on patterns in the instructions, which condition them to contribute annotations that then become overrepresented in the data, biasing the AI system toward those annotations.

Today, many AI systems “learn” to make sense of images, videos, text, and audio from examples that have been labeled by annotators. The labels allow the systems to extrapolate the relationships between those examples (for example, the link between the caption “kitchen sink” and a photo of a kitchen sink) to data the systems haven’t seen before (for example, photos of kitchen sinks that weren’t included in the data used to “teach” the model).

It works remarkably well. But annotation is an imperfect approach – annotators bring biases to the table that can bleed into the trained system. For example, studies have shown that the average annotator is more likely to label sentences in African American Vernacular English (AAVE), the informal grammar used by some Black Americans, as toxic, leading AI toxicity detectors trained on those labels to view AAVE as disproportionately toxic.

It turns out that annotators’ predispositions may not be solely responsible for the presence of bias in training labels. In a preprint study from Arizona State University and the Allen Institute for AI, the researchers investigated whether a source of bias could lie in the instructions written by dataset creators to serve as guides for annotators. These instructions usually include a brief description of the task (for example, “Tag all the birds in these photos”) as well as several examples.

Image Credits: Parmar et al.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate, and otherwise analyze or manipulate text. In studying the task instructions given to annotators who worked on the datasets, they found evidence that the instructions prompted the annotators to follow specific patterns, which then propagated to the datasets. For example, more than half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), begin with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
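To make the pattern concrete, here is a minimal sketch of how that kind of overlap could be measured, assuming the instruction examples and the collected annotations are available as plain strings; the toy data and the four-word-prefix heuristic are illustrative stand-ins, not the paper’s actual method.

```python
# Minimal sketch: measure how often annotator-written questions reuse a
# phrase that appears in the annotation instructions. The strings and the
# prefix heuristic are illustrative, not the study's real data or method.
from collections import Counter

instruction_examples = [
    "What is the name of the person who owns the dog?",
    "What is the name of the character who wins the race?",
    "Who does 'she' refer to in the second sentence?",
]

collected_annotations = [
    "What is the name of the author's first novel?",
    "What is the name of the ship that sank?",
    "Who is the narrator's brother?",
    # ...thousands more in a real benchmark
]

def prefix(text: str, n_words: int = 4) -> str:
    """Return the first n_words of a string, lowercased."""
    return " ".join(text.lower().split()[:n_words])

instruction_prefixes = {prefix(ex) for ex in instruction_examples}
prefix_counts = Counter(prefix(a) for a in collected_annotations)

overlap = sum(c for p, c in prefix_counts.items() if p in instruction_prefixes)
print(f"{overlap / len(collected_annotations):.0%} of annotations "
      "start with a phrase taken from the instructions")
```

On a real benchmark the same idea would run over tens of thousands of examples; the point is simply that a handful of instruction phrases can come to dominate what annotators write.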

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias can overestimate the performance of systems, and that these systems often fail to generalize beyond the instruction patterns.

The silver lining is that large systems, like OpenAI’s GPT-3, have been shown to be generally less susceptible to instruction bias. But the research is a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The hard part is uncovering those sources and mitigating their impact downstream.

In a lighter paper, scientists in Switzerland concluded that facial recognition systems aren’t easily fooled by realistic AI-edited faces. “Morphing attacks,” as they’re called, involve using AI to alter the photo on an ID, passport, or other form of identity document in an attempt to bypass security systems. The co-authors created “morphs” using AI (Nvidia’s StyleGAN 2) and tested them against four state-of-the-art facial recognition systems. The morphs posed no significant threat, they claimed, despite their realistic appearance.
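For readers wondering what “fooling” a recognition system means in practice, here is a toy sketch of the acceptance test a morph has to pass: it must match both of the identities it was blended from. The embedding vectors, the averaging step, and the threshold below are placeholders, not the paper’s pipeline; a real evaluation would compare embeddings produced by a face recognition network, and the study found that realistic StyleGAN 2 morphs did not pose a significant threat to the four systems tested.

```python
# Toy sketch of how a morphing attack is judged: the single "morph" photo
# succeeds only if the recognition system matches it to *both* source
# identities. The vectors, averaging, and threshold are placeholders for
# what a real face recognition network and evaluation would provide.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.6  # illustrative decision threshold, not a real system's

# Placeholder embeddings standing in for a network's outputs
person_a = np.array([0.9, 0.1, 0.2])
person_b = np.array([0.1, 0.9, 0.3])
morph = (person_a + person_b) / 2  # naive stand-in for a StyleGAN 2 morph

attack_succeeds = (cosine_similarity(morph, person_a) > MATCH_THRESHOLD and
                   cosine_similarity(morph, person_b) > MATCH_THRESHOLD)
print("morph accepted as both identities:", attack_succeeds)
```

In this contrived setup the averaged vector trivially matches both people; the study’s point is that against real, state-of-the-art systems, even realistic morphs largely failed this test.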

Elsewhere in computer vision, Meta researchers have developed an AI “helper” that can remember the features of a room, including the location and context of objects, in order to answer questions. Detailed in a preprint paper, the work is likely part of Meta’s Project Nazare initiative to develop augmented reality glasses that use AI to analyze their surroundings.

Meta’s egocentric AI

Image Credits: Meta

The researchers’ system, which is designed to be used on any body-worn, camera-equipped device, analyzes footage to build “semantically rich and efficient scene memories” that “encode spatiotemporal information about objects.” The system remembers where objects are and when they appeared in the video footage, and it grounds the answers to questions a user might ask about those objects in that memory. For example, when asked “Where did you last see my keys?”, the system might indicate that the keys were on a side table in the living room that morning.
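Here is a minimal sketch of the idea behind such a scene memory – log what was seen, where, and when, then answer “last seen” questions from the most recent entry. The data class, the hand-written observations, and the query helper are assumptions made for illustration; the actual system builds its memory from video using learned detectors.

```python
# Minimal sketch of a spatiotemporal "scene memory": record where and when
# objects were seen, then answer "where did I last see X?" by returning the
# most recent observation. The observations here are hand-written
# placeholders; a real system would populate them from video detections.
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    label: str         # what was detected, e.g. "keys"
    location: str      # where in the scene it was seen
    seen_at: datetime  # when it appeared in the footage

memory: list[Observation] = [
    Observation("keys", "side table in the living room", datetime(2022, 5, 10, 8, 15)),
    Observation("keys", "kitchen counter", datetime(2022, 5, 9, 19, 40)),
    Observation("mug", "desk in the office", datetime(2022, 5, 10, 9, 2)),
]

def last_seen(label: str) -> Observation | None:
    """Return the most recent observation of an object, if any."""
    sightings = [o for o in memory if o.label == label]
    return max(sightings, key=lambda o: o.seen_at) if sightings else None

obs = last_seen("keys")
if obs:
    print(f"Last saw your {obs.label} on the {obs.location} at {obs.seen_at:%H:%M}.")
```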

Meta, which reportedly aims to launch full-featured AR glasses in 2024, telegraphed its plans for “egocentric” AI last October with the launch of Ego4D, a long-term “egocentric perception” AI research project. The company said at the time that the goal was to teach AI systems, among other tasks, to understand social cues, how an AR device wearer’s actions might affect their surroundings, and how hands interact with objects.

From language and augmented reality to physical phenomena: an AI model played a role in an MIT study of waves – how they break and when. While it may sound a little obscure, wave models are needed both for building structures in and near the water and for modeling how the ocean interacts with the atmosphere in climate models.

Image Credits: MIT

Normally, waves are roughly simulated by a set of equations, but the researchers trained a machine learning model on hundreds of wave examples in a 40-foot tank of water filled with sensors. By observing the waves and making predictions based on empirical evidence, then comparing those predictions to the theoretical models, the AI helped show where the models fell short.
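As a rough illustration of that kind of comparison, the sketch below fits a simple model to made-up “measurements” and scores it against a classical rule of thumb for breaking (the deep-water steepness limit, roughly H/L > 1/7). The synthetic data, the logistic model, and the criterion are stand-ins for illustration, not the MIT team’s setup.

```python
# Rough sketch: compare a data-driven model of wave breaking against a
# classical theoretical criterion (deep-water steepness H/L > 1/7).
# The "measurements" are synthetic placeholders, not the study's tank data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
height = rng.uniform(0.05, 0.6, 500)   # wave height H, meters
length = rng.uniform(1.0, 6.0, 500)    # wavelength L, meters
# Pretend "measured" outcome: breaking roughly follows steepness, plus noise
broke = (height / length + rng.normal(0, 0.02, 500)) > 1 / 7

X = np.column_stack([height, length])
model = LogisticRegression().fit(X, broke)

theory_pred = (height / length) > 1 / 7   # classical steepness criterion
learned_pred = model.predict(X)
print("theory agrees with measurements: ", np.mean(theory_pred == broke))
print("learned model agrees:            ", np.mean(learned_pred == broke))
```

In the real study, the interesting cases are exactly the ones where the measurements and the classical equations disagree; a model trained on sensor data can flag those regimes.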

A startup was born from research at EPFL, where Thibault Asselborn’s doctoral thesis on handwriting analysis turned into a full-fledged educational app. Using algorithms he designed, the app (called School Rebound) can identify habits and corrective measures from just 30 seconds of a child writing on an iPad with a stylus. These are presented to the child in the form of games that help them write more clearly by reinforcing good habits.

“Our scientific model and our rigor are important and set us apart from other existing applications,” Asselborn said in a press release. “We have received letters from teachers who have seen their students improve by leaps and bounds. Some students even come in before class to practice.”

Image Credits: Duke University

Another development in elementary schools involves identifying hearing problems during routine screenings. These screenings, as some readers may remember, often use a device called a tympanometer, which must be operated by trained audiologists. If none is available, for example in an isolated school district, children with hearing problems may never get the help they need in time.

Samantha Robler and Susan Emmett at Duke decided to build a tympanometer that essentially operates itself, sending data to a smartphone app where it is interpreted by an AI model. Anything worrying is flagged, and the child can be referred for further examination. It’s not a replacement for an expert, but it’s far better than nothing, and it can help identify hearing problems much earlier in places that lack the proper resources.
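As a toy illustration of the screening step, the sketch below flags a flat, low-peak tympanogram – a classic sign of possible fluid in the middle ear – for follow-up. The threshold and curves are invented for the example and are not clinical guidance, and the actual device hands its readings to a trained AI model rather than a hard-coded rule.

```python
# Toy illustration of the screening idea: take a tympanogram (admittance
# measured across ear-canal pressure) and flag a flat, low-peak curve for
# follow-up. The threshold and example curves are illustrative only; the
# real device sends its readings to an AI model in a smartphone app.
import numpy as np

def flag_for_followup(admittance_mmho: np.ndarray, min_peak: float = 0.3) -> bool:
    """Return True if the tympanogram's peak admittance looks suspiciously flat."""
    return float(np.max(admittance_mmho)) < min_peak

pressures = np.linspace(-400, 200, 61)                       # ear-canal pressure, daPa
healthy = 0.8 * np.exp(-((pressures - 0.0) / 120.0) ** 2)    # clear peak near 0 daPa
flat = np.full_like(pressures, 0.1)                          # flat curve, no peak

print("healthy-looking ear flagged:", flag_for_followup(healthy))  # False
print("flat curve flagged:         ", flag_for_followup(flat))     # True
```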
