IBM discloses latest AI hardware advances

“The increased capabilities of contemporary AI models provide unprecedented recognition accuracy, but often at the expense of larger computational and energetic effort,” IBM Research wrote in a blog post. “Therefore, the development of novel hardware based on radically new processing paradigms is crucial for AI research to progress.” Accordingly, at the 2019 IEEE International Electron Devices Meeting (IEDM) in San Francisco, IBM Research presented a series of AI hardware breakthroughs in papers spanning several key fields.

“Over the last five decades, semiconductor technology has been the engine for computing hardware,” IBM wrote. “[FinFET] technology continues to scale with ever-demanding requirements in density, power, and performance, but not fast enough[.]” Stacked gate-all-around (GAA) nanosheets are IBM’s answer as the demands of AI outpace the capabilities of FinFET semiconductor architectures. The term “nanosheet” was coined only in 2015, and IBM Research is now presenting three papers on nanosheet technology, covering new processes for enabling nanosheet stacking and multiple-voltage cells, as well as a new fabrication strategy. IBM believes that GAA nanosheets will offer “more computing performance and less power consumption” while also allowing for more varied and optimized layouts, enabling more flexible device design.

IBM Research also highlighted a series of papers on phase-change memory (PCM), which “still poses major challenges,” including susceptibility to noise, resistance drift, and reliability concerns. The papers presented work from IBM researchers on new devices, algorithmic and architectural solutions, and a new model-training strategy to help address these issues, improving stability and reliability. Other researchers presented a new neuro-inspired, silicon-integrated prototype chip design for PCM.
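To make the drift problem concrete, here is a minimal Python sketch, not taken from IBM's papers: it uses the standard empirical power-law model of PCM conductance drift, with illustrative parameter values we have assumed, and applies a simple global rescaling of the kind researchers have proposed as a compensation step.

```python
import numpy as np

# Power-law conductance drift, a standard empirical model for PCM:
#   G(t) = G0 * (t / t0) ** (-nu)
# where nu is the drift exponent (values here are illustrative assumptions).
rng = np.random.default_rng(0)

n_cells = 1000
G0 = rng.uniform(5.0, 25.0, n_cells)   # programmed conductances (arbitrary units)
nu = rng.normal(0.05, 0.01, n_cells)   # per-cell drift exponents (assumed spread)
t0, t = 1.0, 1e4                       # reference time and readout time (s)

G_t = (t / t0) ** (-nu) * G0           # drifted conductances at readout time t

# Global drift compensation: rescale every readout by one factor estimated
# from the array's summed conductance, so the collective drift cancels out.
alpha = G0.sum() / G_t.sum()
G_comp = alpha * G_t

print(f"mean error before compensation: {np.mean(np.abs(G_t - G0) / G0):.3%}")
print(f"mean error after compensation:  {np.mean(np.abs(G_comp - G0) / G0):.3%}")
```

Running the sketch shows the common (array-wide) component of drift being removed by the single scaling factor, leaving only the per-cell variation in drift exponents as residual error, which is part of why per-device reliability still matters.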

Finally, IBM detailed its efforts to accelerate deep learning with new memory devices made using preexisting materials found in semiconductor fabs. The resulting electro-chemical random-access memory, or ECRAM, “demonstrates sub-microsecond programming speed, a high conductance change linearity and symmetry, and a 2×2 array configuration without access selectors.” The CMOS-compatible ECRAM was tested on a linear regression task common to the training of deep neural networks. In tandem, IBM Research presented new algorithms to improve the accuracy of predictive AI.
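As a rough illustration of why programming linearity and symmetry matter for training, the hypothetical Python sketch below fits a toy linear regression with weight updates applied as discrete device "pulses." The task, device model, and parameters are our assumptions for illustration, not IBM's actual experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-regression task: recover w_true from noisy observations.
n_samples, n_features = 256, 8
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

def train(asymmetry, lr=0.05, epochs=40, step=0.01):
    """SGD where each weight update is quantized into conductance pulses of
    size `step`; down-steps are scaled by (1 - asymmetry). asymmetry = 0
    models an ideal, symmetric device (the behavior ECRAM targets), while
    asymmetry > 0 models devices whose decreases are weaker than increases."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for i in rng.permutation(n_samples):
            grad = (X[i] @ w - y[i]) * X[i]
            delta = -lr * grad
            pulses = np.round(delta / step)        # discrete device updates
            delta_dev = pulses * step
            delta_dev[delta_dev < 0] *= (1.0 - asymmetry)
            w += delta_dev
    return np.mean((X @ w - y) ** 2)

print(f"symmetric updates  (ECRAM-like): MSE = {train(asymmetry=0.0):.4f}")
print(f"asymmetric updates (25% weaker): MSE = {train(asymmetry=0.25):.4f}")
```

The asymmetric device settles at a biased equilibrium with visibly higher error, which is the kind of training degradation that symmetric, linear conductance updates are meant to avoid.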

“The achievements in these papers,” IBM wrote in an email to HPCwire, “address a critical issue in AI advances: making hardware systems more efficient to keep pace with the demand of AI software and data workloads.”

