In a new set of test results released on Wednesday, artificial intelligence chips from Qualcomm beat Nvidia's in two of three measures of power efficiency, while a Taiwanese upstart outperformed both in a third.
Nvidia dominates the market for chips that use massive volumes of data to train AI models. But once those models are trained, they are put to broader use in “inference” tasks such as generating text responses to questions and determining whether an image contains a cat.
Experts predict that the market for data center inference chips will expand significantly as companies build AI technology into their products. But corporations such as Alphabet’s Google are already looking for ways to limit the additional costs that doing so will bring.
The test results, released on Wednesday by MLCommons, an engineering consortium that maintains benchmarks widely used in the AI chip industry, measured how many data center server queries each chip can carry out per watt of power consumed. By that measure, Qualcomm’s AI 100 chip beat Nvidia’s flagship H100 chip at classifying images.
Qualcomm’s chips achieved 197.6 server queries per watt in that test, beating Nvidia. Neuchips, a startup founded by renowned Taiwanese chip academic Youn-Long Lin, topped both with 227 queries per watt.
Qualcomm also beat Nvidia in object detection, scoring 3.2 queries per watt to Nvidia’s 2.4. Object detection can be used in applications such as analyzing footage from retail stores to see where customers go most often.
Nvidia, however, won the test of natural language processing, the AI technology most widely used in systems such as chatbots, on both absolute performance and power efficiency. Nvidia achieved 10.8 samples per watt, followed by Neuchips at 8.9 samples per watt and Qualcomm at 7.5 samples per watt.