A new AI training program helps robots own their ignorance


More self-aware machines could avoid making dangerous mistakes

WATCH AND LEARN  Shadowing humans on the job could help such autonomous robots as delivery bots (one shown) and self-driving cars recognize shortcomings in their own training.

Elvert Barnes/Flickr (CC BY-SA 2.0)

HONOLULU — A new training scheme could remind artificial intelligence programs that they aren’t know-it-alls.

AI programs that run robots, self-driving cars and other autonomous machines often train in simulated environments before making real-world debuts (SN: 12/8/18, p. 14). But situations that an AI doesn’t encounter in virtual reality can become blind spots in its real-life decision making. For instance, a delivery bot trained in a virtual cityscape with no emergency vehicles may not know that it should pause before entering a crosswalk if it hears sirens.

To create machines that err on the side of caution, computer scientist Ramya Ramakrishnan of MIT and colleagues developed a post-simulation training program in which a human demonstrator helps the AI identify gaps in its education. “This allows the [AI] to safely act in the real world,” says Ramakrishnan, whose work is being presented January 31 at the AAAI Conference on Artificial Intelligence. Engineers could also use information on AI blind spots to design better simulations in the future.

During its probationary period, the AI takes note of environmental factors influencing the human’s actions that it does not recognize from its simulation. When the human does something the AI doesn’t expect — like hesitating to enter a crosswalk despite having the right-of-way — the AI scans its surroundings for previously unknown elements, such as sirens. If the AI detects any of these features, it assumes the human is following some safety protocol it didn’t learn in the virtual world and that it should defer to the human’s judgment in these types of situations.
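The mechanism described above can be sketched in code. This is a minimal illustration of the general idea, not the researchers' actual implementation; all names (`BlindSpotMonitor`, the feature strings, the toy policy) are hypothetical.

```python
# Hypothetical sketch of the blind-spot flagging idea: compare the
# simulation-trained policy's choice against a human demonstrator's,
# and treat disagreements in the presence of unfamiliar features as
# evidence of a training blind spot.

class BlindSpotMonitor:
    """Observes a human demonstrator and records situations where the
    human's action differs from the policy's while novel environmental
    features (unseen in simulation) are present."""

    def __init__(self, policy_action, known_features):
        self.policy_action = policy_action          # state features -> action
        self.known_features = set(known_features)   # features seen in simulation
        self.blind_spots = []                       # (novel features, human action)

    def observe(self, state_features, human_action):
        expected = self.policy_action(state_features)
        novel = set(state_features) - self.known_features
        if human_action != expected and novel:
            # Assume the human is following a safety rule the policy
            # never learned; record the situation and defer to the human.
            self.blind_spots.append((frozenset(novel), human_action))
            return human_action
        return expected

# Toy usage: a delivery bot trained on crosswalks and traffic lights,
# but never on sirens.
policy = lambda features: "wait" if "red_light" in features else "go"
monitor = BlindSpotMonitor(policy, known_features={"red_light", "crosswalk"})

# The human hesitates at a crosswalk because a siren is audible;
# the monitor logs the siren as a blind spot and defers.
action = monitor.observe({"crosswalk", "siren"}, human_action="wait")
```

Here the policy alone would have chosen "go", but the disagreement with the human, combined with the unfamiliar "siren" feature, flags the situation for caution.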

Ramakrishnan and colleagues have tested this setup by first training AI programs in simplistic simulations and then letting them learn their blind spots from human characters in more realistic, but still virtual, worlds. The researchers now need to test the system in the real world.   

Maria Temming

Previously the staff writer for physical sciences at Science News, Maria Temming is the assistant editor at Science News Explores. She has bachelor’s degrees in physics and English, and a master’s in science writing.
