AI Accelerator

An AI accelerator is a microchip designed specifically to enable rapid processing of artificial intelligence (AI) workloads.
Like other purpose-built accelerators, such as graphics processing units (GPUs), accelerated processing units (APUs) and physics processing units (PPUs), an AI accelerator is designed to perform a specialized task more effectively than a general-purpose CPU, such as the x86 derivatives found in most desktops and notebooks. A purpose-made accelerator offers more performance, more relevant features and greater power efficiency for its given task.

Some computing tasks are highly parallel, and AI involves many of them. A GPU can speed up such tasks by spreading the work across its many simple cores, which are normally used to render pixels on a screen. With general-purpose computing on graphics processing units (GPGPU), a graphics card can be applied to massively parallel workloads such as AI, where it can deliver up to ten times the performance of a CPU.
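To make the idea concrete, here is a minimal sketch in Python (assuming NumPy is installed; the array shapes are purely illustrative, not taken from any particular chip or model). Each output value of a neural-network layer is an independent multiply-accumulate, which is exactly the kind of work that can be spread across many simple cores:

```python
import time
import numpy as np

# A small fully connected layer: output = inputs @ weights.
# Sizes are illustrative only.
inputs = np.random.rand(4096, 1024).astype(np.float32)   # batch of activations
weights = np.random.rand(1024, 1024).astype(np.float32)  # layer weights

# Naive element-at-a-time loop: every output value is an independent dot
# product, so nothing forces them to be computed one after another.
def naive_row(inputs, weights, row):
    return [sum(inputs[row, k] * weights[k, j] for k in range(weights.shape[0]))
            for j in range(weights.shape[1])]

start = time.perf_counter()
_ = naive_row(inputs, weights, 0)   # just one of the 4096 rows
serial_one_row = time.perf_counter() - start

start = time.perf_counter()
_ = inputs @ weights                # all rows, handed to parallel math kernels
parallel_all_rows = time.perf_counter() - start

print(f"one row, pure Python loop:  {serial_one_row:.3f} s")
print(f"all 4096 rows, vectorised:  {parallel_all_rows:.3f} s")
```

On a typical machine the vectorised call finishes all 4,096 rows in less time than the loop takes for a single row; a GPU or dedicated AI accelerator pushes the same data-parallel structure much further in hardware.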

AI accelerator designs also usually focus on multicore implementations. These cores are built for general arithmetic operations, where the number of operations required for a job can be impractical for traditional computing approaches. Google DeepMind's AlphaGo project ran into exactly this issue with the game of Go: the number of possible board positions made a brute-force approach impossible. Even with massive hardware power behind it, many clever adjustments had to be made to the algorithm. With purpose-designed application-specific integrated circuits (ASICs), efficiency is expected to be higher than what GPGPU can achieve, which can benefit AI functions such as autonomous driving.
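As a rough illustration of the combinatorics involved (this is a loose upper bound, not a description of AlphaGo's actual search), each of the 19 x 19 = 361 points on a Go board can be empty, black or white, which already rules out enumerating positions by brute force:

```python
from math import log10

# Loose upper bound on Go board configurations: 3 states per point, 361 points.
# The count of *legal* positions is lower, but still on the order of 10**170.
upper_bound = 3 ** (19 * 19)

print(f"3^361 has about {int(log10(upper_bound)) + 1} digits")  # ~173 digits, i.e. ~10**172
```

No amount of raw arithmetic throughput closes a gap of that size, which is why the algorithmic side (search pruning, learned evaluation) matters as much as the silicon.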

Current hardware for AI acceleration includes Intel Nervana, Google's Tensor Processing Unit (TPU), Adapteva Epiphany, Mobileye EyeQ and Movidius Myriad 2.