Neural Networks Co-Inventor Hinton Wins Physics Nobel, Fears AI

Photo: A screen shows the laureates of the 2024 Nobel Prize in Physics, US physicist John J. Hopfield and British-Canadian computer scientist and cognitive psychologist Geoffrey E. Hinton, as Chair of the Nobel Committee for Physics Ellen Moons, Secretary General of the Royal Swedish Academy of Sciences Hans Ellegren, and committee member Anders Irbaeck make the announcement at the Royal Swedish Academy of Sciences in Stockholm, Sweden, on October 8, 2024. (Jonathan Nackstrand/AFP via Getty Images)

On October 8, the 2024 Nobel Prize in Physics went to a computer scientist and a physicist “for foundational discoveries and inventions that enable machine learning with artificial neural networks,” according to the New York Times.

Since neural networks fall more in the domain of computer science than physics, this news raised three questions in my mind:

  • Couldn’t the Academy give the award to physicists who discovered something new in physics?
  • Should we cower along with Hinton, who fears his creation — “something smarter than us” — could have “bad consequences”?
  • Will Hinton’s Nobel have any effect on how companies are using AI?

The answers: Possibly; some already do; and no.

2024 Physics Nobel Prize

The Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to two professors: John Hopfield, a physics professor at Princeton University, and Geoffrey Hinton, a University of Toronto computer scientist. The Academy awarded the prize for their discoveries that helped computers learn “more in the way the human brain does,” noted the Times.

While such neural networks are a tremendous breakthrough, their connection to physics is not their chief feature. To be fair, the Nobel committee said neural networks have a major role in scientific research — including in the creation of “new materials with specific properties,” the Times wrote.
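
Since the citation is for “machine learning with artificial neural networks,” a short sketch may help make the idea concrete. The toy code below implements a classic Hopfield network, the associative-memory model that bears Hopfield's name: it stores two binary patterns and recovers one of them from a corrupted cue. The patterns and sizes are illustrative choices of mine, not drawn from the laureates' papers.

```python
import numpy as np

def train(patterns):
    # Hebbian learning: weights are the sum of outer products, diagonal zeroed.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, passes=10):
    # Asynchronous updates: each neuron flips to match the sign of its input.
    s = state.copy()
    for _ in range(passes):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([-1, -1, 1, -1, 1, -1])  # first stored pattern, one bit flipped
print(recall(W, noisy))                   # recovers [ 1 -1  1 -1  1 -1]
```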

By this logic, the Nobel committee might have justified awarding the prize in Physiology or Medicine to AI researchers. How so? After all, among many other such models, Stanford Medicine researchers devised SyntheMol, an AI model that creates “recipes for chemists to synthesize drugs in the lab,” according to Stanford Medicine News Center.

The 2024 Nobel Prize in Physiology or Medicine went to two Massachusetts researchers — Victor Ambros and Gary Ruvkun — for their discovery of microRNA, “which helps determine how cells develop and function,” according to the Times. I wonder why the Nobel Committee did not award the Physics prize to researchers closer to the center of the physics bullseye.

Should We Fear AI’s Bad Consequences?

We should fear AI’s bad consequences — and already do.

Hinton spoke with journalists about the significance of neural networks. “It will be comparable with the Industrial Revolution,” he said, according to the Times. “Instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”

While he envisions the technology will boost health care productivity, he worries about its dark side. He expressed worry “about a number of possible bad consequences, particularly the threat of these things getting out of control,” noted the Times.

This summer I participated in a conversation with retail industry board members and experienced their palpable terror. My interpretation is that these board members are torn between two strong emotional poles.

The bright green light says companies should invest heavily in AI to avoid falling behind rivals, while the glowing red light holds companies back: the risk that AI hallucinations could harm a company’s reputation and prompt lawsuits.

Will This Prize Affect How Companies Use AI?

This Nobel will not affect how companies use AI since the hype is already at a peak.

Meanwhile, those competing emotional poles are keeping many companies from putting generative AI in front of their customers.

How so? Of the 200 to 300 generative AI experiments the typical large company undertakes, about 10 to 15 lead to widespread internal rollouts, and perhaps one or two are released to customers, according to my June Forbes interview with Liran Hason, CEO of Aporia, a startup that sells companies a system that detects AI hallucinations.
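
For perspective, here is the back-of-the-envelope math those figures imply, using the midpoints of the ranges Hason cited; the exact numbers will vary by company.

```python
# Rough funnel math from the figures cited above (midpoints of the ranges).
experiments = 250         # roughly 200 to 300 generative AI experiments
internal_rollouts = 12.5  # about 10 to 15 reach widespread internal use
customer_facing = 1.5     # perhaps one or two reach customers

print(f"Internal rollout rate: {internal_rollouts / experiments:.0%}")  # ~5%
print(f"Customer-facing rate: {customer_facing / experiments:.1%}")     # ~0.6%
```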

In my conversations with company executives since the July publication of my book, Brain Rush, two questions from business leaders have predominated:

  • Which generative AI applications will yield the biggest payoffs?
  • What should be the role of the CEO in leading AI design and deployment?

In Search of High Payoff Generative AI Applications

I have a different idea about what a payoff from AI would look like. Rather than focusing on how long it takes for a company’s profits to cover the cost of building AI, I think business leaders should focus on a different metric.

Specifically, what matters to investors is expectations-beating growth. Hence business leaders should use generative AI to spur the creation of new, fast-growing products and services that can offset the company’s dependence on maturing core products.

Few companies are actually pulling this off. As I noted in my Value Pyramid case study, most generative AI use cases help people overcome creator’s block — such as the anxiety about writing an email. Fewer generative AI applications help improve the productivity of business functions such as customer service or coding. And few, if any, applications of AI chatbots enable companies to add new sources of revenue.

The CEO’s Role In AI Design And Deployment

One reason so few generative AI applications are found at the summit of the value pyramid is the role of the CEO.

To understand why, consider two approaches to developing new products: relay race and rugby.

In the relay race approach, engineers develop a blueprint for a new product. When they are done, they hand the blueprint baton to the head of manufacturing, who protests that the blueprint will be too expensive to manufacture and will produce quality problems.

Once manufacturing has built what the engineers ordered, the head of manufacturing hands the baton to the head of sales — asking them to find customers for the products on the loading dock. The head of sales inspects the products and says they do not have the features customers want and will be hard to sell.

Now consider the rugby approach. Here the CEO pulls together a scrum — consisting of leaders from sales, purchasing, manufacturing, and finance. The scrum visits with early-adopter customers and listens to them talk about their unmet needs.

After that, the scrum develops a prototype — taking into account the concerns of each business function and how well the new product will meet the needs of the early-adopters.

The scrum then gives the prototype to the customer and asks for feedback. Typically, that feedback sends the team back to add new features and delete others; after a few iterations, the result is a product customers are eager to buy.

Last month I did a podcast with retail executives who told me the typical retail CEO uses the relay race approach. When it comes to generative AI, the most common approach is similar — the CEO delegates the design and deployment to a chief AI officer who lacks a deep understanding of the business strategy.

Rather than delegating AI, the CEO must understand how AI can help redesign how a company works with customers to create value — modeled after the rugby approach I described above.

A case in point is TechSee, a company that built a much more effective way to provide remote customer service. The idea came when the company’s founder and CEO, Eitan Cohen, could not help a family member solve a computer problem over the phone, according to my October Inc. column.

His mother-in-law called and said her printer was not working. “People can’t communicate the problem over the telephone,” he told me.

“I would have to drive over to look at the printer to figure out what was wrong and solve it. I didn’t like that. I thought there must be a solution. I could take remote control of the computer. But that would not help me realize that the dog chewed the cable, or it was not plugged in to the right place on the printer,” he added.

TechSee’s solution pays off for customers. “We enable our customers to go from defense to offense,” he said. “Contact center employees can solve customer problems quickly so they can spend time selling products. We reduce average call duration by 20 percent to 50 percent — from an hour to less than 30 minutes,” he added.

TechSee’s solution enables customers to send images of the products for which they are seeking service. Those pictures reveal the source of the problem in a way words can’t. Moreover, the company analyzes the most common problems and uses an AI chatbot to deliver faster problem resolution through self-service.
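
Purely as an illustration of that kind of image-driven self-service flow, here is a hypothetical sketch of my own, not TechSee’s actual system: classify the customer’s photo against known failure modes, then let a chatbot suggest a fix or hand off to an agent.

```python
# Hypothetical sketch only; the issue labels, fixes, and function names are
# my own stand-ins, not TechSee's product or API.
KNOWN_ISSUES = {
    "cable_unplugged": "Plug the cable into the port marked LINE on the printer.",
    "cable_damaged": "The cable looks damaged; request a replacement.",
    "paper_jam": "Open the rear tray and remove the jammed sheet.",
}

def classify_photo(photo: bytes) -> str:
    # Stand-in for a trained vision model that maps a photo to a failure mode.
    return "cable_unplugged"

def self_service_reply(photo: bytes) -> str:
    issue = classify_photo(photo)
    fix = KNOWN_ISSUES.get(issue)
    if fix is None:
        return "We couldn't identify the problem. Connecting you to an agent."
    return f"It looks like: {issue.replace('_', ' ')}. Suggested fix: {fix}"

print(self_service_reply(b"...customer photo bytes..."))
```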

If more companies follow TechSee’s approach to deploying generative AI, there could be more of the good Hinton has helped create and less of the bad.