Intel Takes Aim At Nvidia (Again) With New AI Chip And Baidu Partnership

Intel practically owns the business of selling chips for data center servers. IDC pegs its share of the market at 99%.

But Intel doesn't have such a strong grip on the latest, and hottest, slice of the market: artificial intelligence. It faces stiff competition from graphics chip expert Nvidia, whose graphics cards are currently the most popular for powering deep learning neural networks that perform mainstay artificial intelligence tasks like image recognition, voice recognition and natural language processing.

Hoping to push back against Nvidia's inroads, Intel announced on Wednesday a new server processor tailored for artificial intelligence, the third-generation Xeon Phi, code-named "Knights Mill." Not many technical details were revealed, but Intel said the chip would deliver more of the so-called "floating point" performance that is important for powering machine learning algorithms. The company said the Xeon Phi chip would be out sometime in 2017.
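
For a sense of why floating-point throughput matters, consider that the heart of a machine learning model is the matrix multiply, which is nothing more than floating-point multiply-adds at enormous scale. The sketch below, written in Python with NumPy purely for illustration (the article names no software, and all variable names here are hypothetical), counts the operations in a single neural network layer:

```python
import numpy as np

# A single dense layer is essentially one matrix multiply: every entry of the
# output requires a long chain of floating-point multiply-adds, which is why
# a chip's floating-point throughput translates directly into training speed.
batch, d_in, d_out = 64, 1024, 1024
x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
w = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights

y = x @ w  # forward pass of one layer

# Rough FLOP count: each output element needs d_in multiplies and d_in adds.
flops = 2 * batch * d_in * d_out
print(f"{flops / 1e6:.0f} million floating-point ops for a single layer")
```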

Intel hopes the chip will give it more of a chance to compete in the rapidly evolving (but still small) market for machine learning, a subset of AI that allows computers to teach themselves instead of having to be explicitly programmed. Intel said only 7% of all servers are being used for machine learning, and only 0.1% are running deep neural nets, a subset of machine learning that emulates the neurons and synapses of the brain to make sense of unstructured data.

Graphics processing units (or GPUs) from Nvidia have caught on here in part because of their ability to do "parallel computing," a technique in which many calculations happen simultaneously. That makes them much faster at running deep learning neural nets than more generalized processors.
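
The toy sketch below (Python with NumPy, used here only as an illustration) shows the idea: a neural-net operation decomposes into millions of independent calculations, so the same work can be expressed either one element at a time, the way a general-purpose core runs it, or as one bulk operation that parallel hardware can spread across many lanes at once:

```python
import time
import numpy as np

# The work a neural net does (matrix multiplies, element-wise activations)
# decomposes into millions of independent calculations -- exactly the shape
# of workload that a GPU's thousands of cores can run simultaneously.
a = np.random.randn(5_000_000).astype(np.float32)

# Serial version: one element at a time, as a general-purpose core would.
start = time.perf_counter()
out_serial = [max(v, 0.0) for v in a]          # ReLU, element by element
serial_time = time.perf_counter() - start

# Data-parallel version: the same ReLU expressed as one bulk operation that
# a vector unit or GPU can spread across many lanes at once.
start = time.perf_counter()
out_parallel = np.maximum(a, 0.0)
parallel_time = time.perf_counter() - start

print(f"serial {serial_time:.2f}s vs parallel {parallel_time:.4f}s")
```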

But as demand for this kind of computing grows rapidly, GPUs will quickly run into problems, according to Intel.

"A GPU solution won't scale," Diane Bryant, executive vice president and general manager of the Data Center Group for Intel, said in an interview. "The market is still nascent, so the current implementations are small enough that they could use GPUs, but it won't scale in the future."

To bolster its argument, Intel invited executives from Baidu, China's biggest search engine, to appear on stage at the company's annual conference in San Francisco on Wednesday. Baidu announced that it would be using Xeon Phi chips, rather than Nvidia's GPUs, to run its natural language processing service, called Deep Speech. Baidu has been a heavy user of Nvidia's GPUs to power its deep learning models.

The moves suggest Intel is getting more aggressive in its fight with Nvidia over the future of AI. It recently published benchmarks comparing the two companies' technology, claiming that the Xeon Phi processor is 2.3 times faster than Nvidia GPUs for machine learning algorithms. But in a blog post published on Tuesday, Nvidia pushed back against these claims, saying most of the benchmarking relied on outdated software and hardware and so didn't offer a true side-by-side comparison. Nvidia claimed that if Intel had used the latest technology, Nvidia's GPUs would have trained machine learning models 30% faster than Intel's chips.

“It’s understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software,” Nvidia wrote.

(Update: Intel's response to the Nvidia blog: "It is completely understandable that Nvidia is concerned about Intel in this space. Intel routinely publishes performance claims based on publicly available solutions at the time, and we stand by our data.")

Regardless of what the benchmarks show, Intel said GPU accelerators will never be able to compete with a single dedicated chip.

“If you look at how systems use GPUs today, they’re deployed where you have a Xeon processor and a GPU card,” Jason Waxman, Intel's vice president and general manager of cloud, said in an interview. “There’s a lot of downsides to having to offload to another processor.”

Waxman said the new Xeon Phi processor will eliminate the need to shuttle work between a central processor and a GPU accelerator: with a chip like the upcoming Xeon Phi, all the processing required for machine learning tasks takes place on a single chip. "We want to wean people off the dependency on GPUs," said Waxman. "We think it's a suboptimal implementation."
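
As a rough sketch of the offload pattern Waxman is describing, here is what the round trip looks like in code. This example uses PyTorch's CUDA API purely for illustration; the article names no software, and the device transfers below stand in for the CPU-to-accelerator copies that a self-hosted chip would avoid:

```python
import torch

# The offload pattern at issue: data lives with the CPU host, but each heavy
# step must be copied across the bus to the GPU and the result copied back.
# On a single self-hosted chip, both transfers disappear.
x_cpu = torch.randn(4096, 4096)
w_cpu = torch.randn(4096, 4096)

if torch.cuda.is_available():
    x_gpu = x_cpu.to("cuda")      # host -> device transfer (offload cost)
    w_gpu = w_cpu.to("cuda")
    y = (x_gpu @ w_gpu).cpu()     # compute on GPU, then device -> host copy
else:
    y = x_cpu @ w_cpu             # self-hosted: compute in place, no copies
```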

Intel's data center unit makes up an increasingly significant portion of the company's business. Last quarter, data center chips brought in $4 billion in revenue, compared with $7.3 billion from its shrinking (but still dominant) PC business. But the data center unit has seen its growth slow to just 5% year-over-year in the last quarter. With the portion of the server market dedicated to machine learning and deep learning set to boom, there's a lot of business at stake if Intel doesn't catch up.

As part of its AI push, Intel last week announced plans to acquire AI startup Nervana for a reported $408 million.

Nvidia isn't Intel's only rival in machine learning chips. Google has been equipping its servers with a custom chip called the Tensor Processing Unit, which is specifically optimized for running Google's machine learning software, TensorFlow. Google claimed that the chip is the equivalent of moving the performance of its servers about seven years into the future.
