ral networks,” the Royal Swedish Academy of Sciences said in a statement.

The prize brings Caltech's count of Nobel laureates to 46 alumni, faculty, and postdoctoral scholars, who together have won a total of 47 Nobel Prizes.

“This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning,” the statement read. “John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data.”

Hopfield is credited with creating the Hopfield network, which uses a method for saving and recreating patterns. When fed a distorted image, the network can find the saved image that is most like the distorted one, according to the academy.

In effect, Hopfield’s model describes how the brain recalls memories when fed partial information, similar to the method one’s brain uses to remember a word on the tip of one’s tongue.
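The mechanism described above can be sketched in a few lines of code. The following is a minimal, illustrative Hopfield network in Python, not Hopfield's original formulation: patterns of +1/-1 values are stored with a Hebbian learning rule, and recall repeatedly updates the state until it settles on the nearest stored pattern. The function names and the synchronous update scheme are choices made for clarity here.

```python
import numpy as np

def train(patterns):
    """Hebbian rule: weight W[i][j] grows when neurons i and j agree
    across the stored patterns; the diagonal is zeroed (no self-connections)."""
    W = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Repeatedly update every neuron to the sign of its weighted input,
    stopping once the state no longer changes (an energy minimum)."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# Store one 8-neuron pattern, then recover it from a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = train(stored)
noisy = stored[0].astype(int).copy()
noisy[0] = -noisy[0]          # flip one bit, like a distorted image
print(recall(W, noisy))       # settles back to the stored pattern
```

Flipping a bit corresponds to feeding the network a "distorted image"; the update dynamics pull the state back toward the closest stored memory, which is the associative recall the academy's citation describes.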

“Physics is trying to understand how systems work,” Hopfield said during a press conference hosted by Princeton. “The systems are made of parts. These parts interact, so when you make large systems they get behaviors which are different from the way small systems behave and, in fact, can be fundamentally new things. When you get systems which are rich enough in complexity and size, they can have properties which you can’t possibly intuit from the elementary particles you put in there. You have to say that system contains some new physics.”

At 91, Hopfield, a Chicago native, is the third oldest Nobel laureate in physics.

He began his career at Bell Laboratories in 1958 as a physicist studying the properties of solid matter, but felt limited by the boundaries of his field.

He moved to UC Berkeley as an assistant professor in 1961 and joined the physics faculty at Princeton in 1964. Sixteen years later, he joined Caltech in Pasadena as a professor of chemistry and biology, and in 1997, returned to Princeton, this time in the department of molecular biology.

But it was at Caltech — from 1980 until 1997 — where he did much of his work.

It was there that he doubled down on his fascination with whether digital devices could be taught to process information much like a human brain, a departure from the sequential processing of a traditional computer.

The question was whether computers could think. To that end, while at Caltech, he designed neural networks that linked computers, in effect mirroring the human brain by enabling them to communicate with one another simultaneously.

In the 1980s, his work focused on how the processes of the brain can inform how machines save and reproduce patterns. He explained in an interview that his work came from an initial intrigue with the connections between physics and biology.

“Biology is just a physical system, but a very complicated one,” he said.

In the earlier days, he did not anticipate that his work on neural networks would ever be useful in machine learning.

But there’s a “natural handshake” between questions in artificial intelligence and biology, he said.

The years leading up to the Hopfield network were like an “AI winter,” said Dmitry Krotov, a physicist with the Massachusetts Institute of Technology and IBM who has published several papers with Hopfield in recent years. But Hopfield’s work in 1982 “was the major driving force that ended that period,” he said. “It’s the ground zero for the modern era of neural networks.”

This point was affirmed by the neuroscientist and computer scientist Terry Sejnowski, now at the Salk Institute for Biological Studies, who studied under Hopfield and later became a key collaborator of Hinton's.

Work on the Hopfield network “drew many physicists into the machine learning field,” he said. “In many ways, it helped create the field.”

Fast forward to now, in the age of artificial intelligence, and the world is catching up with the profound implications of his and Hinton’s work.

With their ability to make sense of vast amounts of data, artificial neural networks already have a major role in scientific research, the Nobel committee said, including in physics, where they are used to design new materials, crunch large amounts of data from particle accelerators and help survey the universe.

In a call during the Nobel announcement in Stockholm on Tuesday, Hinton expressed worries over machine learning and said it would have an extraordinary influence on society.

Hopfield’s remarks during Tuesday’s press conference touched on concerns about the uses of artificial intelligence in the future. Hinton, too, left his job as a researcher at Google last year, in part so he could freely discuss his concerns that the AI technologies he helped create could end up harming humanity.

“I worry about anything which says, ‘I’m big, I’m fast, I’m faster than you are, I’m bigger than you are and I can also run you,’ ” Hopfield said. “Now, can you peacefully inhabit with me? I don’t know. I worry.”

The New York Times contributed to this article.