SAN FRANCISCO >> SoftBank said Wednesday that it had agreed to pay $6.5 billion for the Silicon Valley chip startup Ampere Computing, doubling down on a bet that technology that originated in smartphones will come to dominate the world’s data centers.

The deal also reflects the Japanese conglomerate’s belief that Ampere’s chips can begin to play a significant role in artificial intelligence, where Nvidia has reaped the most rewards.

Ampere was founded eight years ago to sell chips for data centers based on technology from Arm Holdings, a British company that licenses chip designs that have powered nearly all mobile phones. SoftBank, which bought Arm in 2016, has been working to have chips based on Arm technology used more widely and for different tasks.

“The future of artificial superintelligence requires breakthrough computing power,” Masayoshi Son, SoftBank’s chair and CEO, said in prepared remarks. “Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision, and deepens our commitment to AI innovation in the United States.”

SoftBank said it would operate Ampere as a wholly owned subsidiary under its own name.

The sale comes amid a flurry of deals and shifting alliances driven by a furious demand for the chips used to power AI applications such as OpenAI’s ChatGPT. SoftBank, in particular, has announced a series of transactions in a bid to play a bigger role in the field.

In its splashiest move to date, Son joined President Donald Trump in January to announce an initiative called Stargate, alongside Sam Altman, OpenAI’s chief, and Larry Ellison, chair and founder of the software maker Oracle, which is Ampere’s largest investor and customer.

Son, Altman and Ellison said Stargate would invest as much as $500 billion to build an array of U.S. data centers to power the operations of OpenAI, starting with a location in Texas. Nvidia was listed as a key technology partner for the venture; it supplies chips called graphics processing units, or GPUs, which account for the bulk of AI calculations.

Another kind of chip also plays a central role in AI. These are the microprocessors designed by Intel, Advanced Micro Devices and Arm that handle general-purpose computing calculations. These chips, which work alongside GPUs and are called “host” processors, manage AI jobs such as building special software programs called models. One microprocessor is typically used for every four Nvidia GPUs sold.

These microprocessors are also sometimes used to handle an AI task called “inferencing,” which includes providing answers to queries in chatbots. Up to now, chips from Intel and AMD have accounted for nearly all AI host processors and microprocessors used for inferencing.

But some influential companies want to change that. Nvidia has begun heavily pushing Arm processors as an option for host duties in place of Intel or AMD chips.

A lot of money is at stake. IDC, a market research firm, predicts that the market for microprocessors sold for AI will grow to $33 billion by 2030 from $12.5 billion in 2025.

AMD and Intel have pointed out that shifting to Arm can require laborious changes to software. They added that Nvidia was not exclusively backing Arm technology and still supported their chips as an option along with its latest GPUs.

“Nvidia is still a significant partner of ours,” said Ronak Singhal, chief architect of Intel’s Xeon line of data center chips.