As artificial intelligence explodes in popularity, two of its pioneers have nabbed the 2024 Nobel Prize in physics.
The prize goes to John Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks,” the Royal Swedish Academy of Sciences in Stockholm announced October 8. These computational tools, which seek to mimic the functioning of the human brain, underlie technologies like image recognition algorithms, large language models including ChatGPT, soccer-playing robots and more (SN: 2/1/24; SN: 5/24/24).
The prize surprised many, as these developments are typically associated with computer science rather than physics. But the Nobel committee noted that the techniques were based on physics methods.
Still, no one was more shocked than Hinton himself: “I’m flabbergasted. I had no idea this would happen. I’m very surprised,” he said by phone during the announcement news conference.
The techniques have underpinned a variety of scientific advancements. Neural networks have helped physicists grapple with large amounts of complex data, allowing important advances that include making images of black holes and devising materials for new technologies such as advanced batteries (SN: 4/13/23; SN: 1/16/24). Machine learning has made strides in the biological and medical fields too, with the promise of improving medical imaging and understanding protein folding (SN: 6/17/24; SN: 9/23/23).
“This award solidifies the fact that AI isn’t just a niche technology — it’s a scientific revolution with cross-disciplinary impact,” says AI researcher Craig Ramlal of the University of the West Indies at St. Augustine in Trinidad. “Even more importantly, this award legitimizes AI as a tool for understanding and simulating the natural world, which hopefully will drive more innovations.”
Neural networks are designed to identify patterns in data, rather than making explicitly programmed calculations. They’re built from a web of simple elements called nodes, inspired by the neurons in the brain. Training a neural network by feeding it data hones its ability to draw accurate conclusions, by optimizing the strength of the couplings between the nodes.
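What “optimizing the strength of the couplings” means can be sketched in a few lines. The toy below — every name and number here is illustrative, not part of any real system — trains a single connection weight by nudging it to reduce the error between the node’s output and known targets, the same principle that training scales up to billions of weights:

```python
# Toy sketch: "training" one connection weight so a node's output
# matches known targets. Real networks adjust billions of such weights.
def train_weight(data, steps=200, lr=0.1):
    """data: list of (input, target) pairs; returns the learned weight."""
    w = 0.0  # initial coupling strength
    for _ in range(steps):
        for x, target in data:
            error = w * x - target  # how far off the node's output is
            w -= lr * error * x     # nudge the weight to shrink the error
    return w

# These pairs follow target = 2 * input, so w should settle near 2.
learned = train_weight([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

Feeding the network more data refines the weight further, which is why training sets matter so much.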
In 1982, Hopfield, of Princeton University, created an early type of neural network, called a Hopfield network, that could store and reconstruct patterns in data. The network was similar to magnetic materials in physics, in which atoms have small magnetic fields that can point either up or down, akin to the 0 or 1 values at each node of a Hopfield network. For any given configuration of atoms in a material, scientists can determine its energy. A Hopfield network, after being trained on a variety of patterns, minimizes an analogous energy to uncover which of those patterns is hidden in the input data.
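The magnetic analogy can be made concrete in a short sketch. In the toy below — a minimal illustration, not Hopfield’s own formulation — node states are +1 or −1 (the “up or down” of atomic magnets), a stored pattern sets the coupling weights via a Hebbian rule, and repeatedly aligning each node with the pull of its neighbors lowers the network’s energy until the stored pattern re-emerges from a corrupted input:

```python
# Toy Hopfield network: states are +1/-1, weights store one pattern,
# and energy-lowering updates recover that pattern from noisy input.
def store(pattern):
    n = len(pattern)
    # Hebbian weights: w[i][j] = p[i] * p[j], with no self-connections.
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(weights, state, sweeps=5):
    state = list(state)
    for _ in range(sweeps):
        for i in range(len(state)):
            # Align node i with the "field" from its neighbors;
            # each such update can only lower the network's energy.
            field = sum(weights[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if field >= 0 else -1
    return state

stored = [1, -1, 1, 1, -1]
weights = store(stored)
noisy = [1, -1, -1, 1, -1]   # one node corrupted
recovered = recall(weights, noisy)
```

Here `recovered` matches `stored`: minimizing the energy pulls the corrupted input back to the nearest remembered pattern.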
“In my view, physics is trying to understand how systems work. The systems are made of parts. These parts interact,” Hopfield said in remarks given virtually during a news conference at Princeton.
Hinton, of the University of Toronto, built on that technique, devising a neural network called a Boltzmann machine, which is based on statistical physics, including the work of 19th century Austrian physicist Ludwig Boltzmann. Boltzmann machines contain additional nodes that are hidden — they process the data but do not directly receive input. Different possible states of the model occur with different probabilities, set by the Boltzmann distribution, which describes configurations of many particles such as molecules in a gas.
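The Boltzmann distribution itself is simple to state: each possible state gets a probability proportional to exp(−energy / temperature), so low-energy configurations dominate. The sketch below (the energies are made-up values, used only to illustrate the formula) computes those probabilities for three hypothetical network states:

```python
import math

# The Boltzmann distribution: probability of a state is proportional
# to exp(-energy / temperature), so lower-energy states are likelier.
def boltzmann_probs(energies, temperature=1.0):
    weights = [math.exp(-e / temperature) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical configurations, from lowest to highest energy:
probs = boltzmann_probs([0.0, 1.0, 2.0])
```

The probabilities sum to 1, and the lowest-energy state is the most probable — the statistical backbone that a Boltzmann machine samples from during training.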
“The work that Hopfield and Hinton did has just been transformative, not just for the scholarly communities developing AI and neural networks, but also for many aspects of society,” says computer scientist Rebecca Willett of the University of Chicago.
The two winners will split the prize of 11 million Swedish kronor, or about $1 million.
“I was delighted to hear this, actually. It was a massive surprise,” says AI researcher Max Welling of the University of Amsterdam. “There is a very clear connection to physics.… The models themselves are deeply inspired by physics models.” What’s more, the discovery made many developments in physics possible, he says. “Try to come up with a technology that has had a bigger impact on physics, especially on the methods side. It’s hard.”
Since the 1980s, researchers have vastly improved upon these models and scaled them up dramatically. Deep machine learning models now have many layers of hidden nodes and can boast hundreds of billions of connections between nodes. Researchers train the networks on vast amounts of data, often text or images scraped from the internet.
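The “many layers” structure amounts to passing data through one set of weighted connections after another. The sketch below — with tiny, arbitrary weights chosen purely for illustration — shows a forward pass through two small hidden layers and an output node:

```python
import math

# Sketch of a "deep" forward pass: each layer is a matrix of connection
# weights, and the data flows through the hidden layers in sequence.
def forward(inputs, layers):
    x = inputs
    for weights in layers:
        # Each node sums its weighted inputs, then applies a nonlinearity.
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]
    return x

# Two hidden layers of two nodes each, then a single output node.
layers = [
    [[0.5, -0.2], [0.1, 0.8]],
    [[0.3, 0.7], [-0.6, 0.2]],
    [[1.0, -1.0]],
]
out = forward([1.0, 0.5], layers)
```

Modern models stack dozens of such layers with billions of weights per layer; the principle of composing simple weighted sums is unchanged.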
While AI technologies based on neural networks are capable of feats unimaginable in the 1980s, the technologies still have a multitude of pitfalls. Many researchers focus now on understanding how the prevalence of machine learning could have negative impacts on society, such as reinforcing racial biases, facilitating the spread of misinformation and making plagiarism and cheating quick and easy (SN: 9/10/24; SN: 2/1/24; SN: 4/12/23).
While some scientists, including Hinton, worry that AI could become superintelligent, the way that AI is trained differs from human learning patterns, and many researchers disagree that artificial intelligence is on a path to world domination (SN: 2/28/24). AI models are famous for making laughable mistakes that defy common sense. Scientists are still actively working to define how terms like “understanding” can be applied to machine learning systems, and how best to test their capabilities (SN: 7/10/24).
“There are real concerns about how [AI is] going to affect labor and the job market, how it enables misinformation and manipulation of data,” Willett says. “I think those are very real concerns in the here and now. That’s because humans can take these tools and use them for malicious purposes.”