AI chips: Explosive growth of deep learning is leading to rapid evolution of diverse, dedicated processors

Artificial intelligence (AI) utilization has been accelerating rapidly for more than 10 years, as falling memory, storage and computation costs have made an increasing number of applications cost-effective. Deep learning has emerged as the most useful technique. Large public websites such as Facebook (Nasdaq: FB) and Amazon (Nasdaq: AMZN), which hold enormous stores of data on user behavior and benefit clearly from influencing that behavior, were among the earliest adopters and continue to expand their use of such techniques. Publicly visible applications include speech recognition, natural language processing and image recognition. Other high-value applications include network threat detection, credit fraud detection and pharmaceutical research.

Deep learning techniques are based on neural networks, which are inspired by the structure of animal brains. Neural networks perform successive computations on large amounts of data; each layer operates on the results of the prior one, and this stacking of many layers is why the process is called "deep." Deep learning relies on large amounts of computation. The techniques themselves are well known; the recent growth is driven by decreasing costs of data acquisition, data transmission, data storage and computation. The new processors all aim to lower the cost of computation.
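The layered structure described above can be sketched in a few lines. The following is a minimal illustration, not any production network: each "layer" is a matrix multiplication whose input is the previous layer's output, with a simple nonlinearity in between. All sizes and weights here are arbitrary, made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(x, 0.0)

# Three stacked layers: each is just a weight matrix. The network is
# "deep" because layers are applied in succession, each consuming the
# output of the one before it.
layers = [rng.standard_normal((64, 128)),
          rng.standard_normal((128, 128)),
          rng.standard_normal((128, 10))]

def forward(x, layers):
    for w in layers[:-1]:
        x = relu(x @ w)    # successive computation on prior results
    return x @ layers[-1]  # final layer produces raw scores

batch = rng.standard_normal((32, 64))  # 32 input examples, 64 features each
scores = forward(batch, layers)
print(scores.shape)  # (32, 10)
```

Even this toy forward pass is dominated by matrix multiplications, which is exactly the workload the new AI processors are built to accelerate.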

The new chips are less costly than CPUs for running deep learning workloads

Each individual computation is simple and tolerates relatively low precision, requiring fewer bits than typical CPU operations use. Deep learning computations are mostly tensor operations (predominantly matrix multiplication), and parallel tensor processing is the heart of many specialized AI chips. Traditional CPUs are relatively inefficient at this kind of processing: they cannot execute many operations in parallel, and they supply precision and support for complex computation that these workloads do not need.
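The low-precision point can be demonstrated with NumPy. This is a small sketch, not a benchmark: it runs the same matrix multiplication in 32-bit and 16-bit floating point and compares storage and results. The matrix sizes and random data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
a32 = rng.standard_normal((256, 256), dtype=np.float32)
b32 = rng.standard_normal((256, 256), dtype=np.float32)

# The same operands in half precision: 16 bits per value instead of 32.
a16, b16 = a32.astype(np.float16), b32.astype(np.float16)

full = a32 @ b32
half = (a16 @ b16).astype(np.float32)

# Half precision halves memory use (and thus memory traffic) ...
print(a16.nbytes / a32.nbytes)  # 0.5

# ... while the result stays close to the full-precision answer,
# a trade-off deep learning workloads generally tolerate.
print(np.max(np.abs(full - half)) / np.max(np.abs(full)))
```

Specialized AI chips exploit exactly this tolerance, packing many more low-precision multiply-accumulate units into the same silicon area than a general-purpose CPU devotes to wide, high-precision arithmetic.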

Nvidia (Nasdaq: NVDA) GPUs led the wave of new processors. In 2012, Google announced that its Google Brain deep learning project, which learned to recognize images of cats, was powered by Nvidia GPUs, yielding a hundredfold performance improvement over conventional CPUs. With this kind of endorsement, and with wide recognition of deep learning's importance, many companies large and small are following the money and investing in new types of processors. It is not certain that the GPU will be the long-term winner; FPGAs and TPUs already have plentiful successful applications.
