Elon Musk’s xAI in Funding Talks That Could Value Company at $40 Billion
The financing effort follows that of rival OpenAI, which recently closed a funding round that valued it at $157 billion.
By Mike Isaac (https://www.nytimes.com/by/mike-isaac) and Cade Metz (https://www.nytimes.com/by/cade-metz) · NY Times

Elon Musk’s artificial intelligence start-up, xAI, is in talks to raise new financing that could value it at as much as $40 billion, up from $24 billion five months ago, three people with knowledge of the discussions said.
The talks are in the early stages, two of the people said, and the start-up’s valuation could fall into the mid-$30 billion range if the talks continue. Investors have discussed putting another $5 billion into the company, two of the people said, a figure that could also change as the discussions progress.
The talks follow a huge fund-raising effort by OpenAI, the San Francisco start-up that makes ChatGPT, which said this month that it had closed on financing that valued it at a whopping $157 billion. OpenAI launched the A.I. boom in late 2022 with ChatGPT’s release, spurring a funding surge for other A.I. companies, including xAI.
Enthusiasm among investors for A.I. companies has cooled in recent months, as several high-profile start-ups have essentially been folded into tech giants like Google and Amazon. But xAI and OpenAI are among the few that continue to seek billions in funding to build the leading A.I. technologies.
In May, xAI raised $6 billion, while rival Anthropic raised hundreds of millions of dollars in 2023 and in recent months has floated the idea of more funding with investors.
Mr. Musk did not respond to a request for comment about xAI. Details of the talks were earlier reported by The Wall Street Journal and The Information.
(The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to A.I. systems.)
Mr. Musk helped found OpenAI in 2015, alongside the entrepreneur Sam Altman and a small group of A.I. researchers. He parted ways with the A.I. lab less than three years later, after a dispute over its direction. At the time, OpenAI was structured as a nonprofit organization.
After Mr. Musk left, Mr. Altman transformed OpenAI into a for-profit operation so that it could raise the enormous sums needed to build A.I. technologies. These technologies learn skills by analyzing vast amounts of digital data, which requires at least hundreds of millions of dollars in computing power.
After ChatGPT’s release in 2022, Mr. Musk created xAI to build similar technology. Through his social media service, X, xAI offers a chatbot called Grok. The start-up is using a facility in Memphis to develop its A.I. models on a network of thousands of high-powered computer servers.
Around the same time, Mr. Musk sued OpenAI, arguing that OpenAI and two of its founders, Mr. Altman and Greg Brockman, breached the company’s founding contract by putting commercial interests ahead of the public good. Mr. Musk dropped the suit months later, before reviving it in federal court in August.
While xAI is independent from X, its technology has been integrated into the social media platform and is trained on users’ posts. Users who subscribe to X’s premium features can ask Grok questions and receive responses.
Mr. Musk has said this technology is a path to artificial general intelligence, or A.G.I., a machine that can do anything the human brain can do. OpenAI makes the same argument about its technology. But Mr. Musk has said that unlike OpenAI, his company will “open source” its A.I. technology, sharing the underlying code with people and businesses.
Mr. Musk has argued that this is a safer approach to A.I. development than closing the technology to others. He, like some others in the field, has long warned that A.I. could be dangerous and perhaps even destroy humanity.
The tech industry is deeply divided over whether the code that underlies A.I. should be publicly available. Some engineers argue that the powerful technology must be guarded against interlopers, while others insist that the benefits of transparency outweigh the harms.