A new type of computer chip boosts performance and reduces the energy demands of artificial intelligence systems by employing in-memory computing to break through computational bottlenecks.
The new chip provides speed and efficiency for AI applications such as self-driving cars, smart cities and factories, and healthcare. Such applications rely on deep-learning algorithms that allow computers to perform complex tasks by learning from data sets. The chip’s energy savings make it especially useful for “Edge AI,” where AI operates quickly and securely on smaller devices, ranging from wearables to on-premise servers.
The speed and energy savings derive from rearranging the chip’s architecture so that computation happens in the same location as the computer’s memory, eliminating the time and energy needed to retrieve data stored far away. To place the computational components within the tightly packed memory circuits, Naveen Verma and his students came up with an ingenious solution: introduce tiny electrical components called capacitors right above the memory cells, using the metal wires that already exist in a chip, and harness them to implement a highly precise form of analog computation. While transistors, which are commonly used for computation, suffer from various forms of variation and noise, capacitors can be controlled much more precisely and can be easily integrated into the circuitry of a static random-access memory (SRAM).
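To give a sense of the operation this enables, the following is a minimal, purely illustrative model (not the actual chip design) of a capacitive multiply-accumulate, the core computation in such in-memory architectures: each memory cell drives a unit capacitor, a 1-bit multiply is performed by gating the input onto that capacitor, and shorting all the capacitor plates together charge-shares to a voltage proportional to the dot product. The function name and parameters here are hypothetical.

```python
def capacitive_mac(activations, weights, vdd=1.0):
    """Illustrative model of one analog in-memory MAC column.

    activations: list of 0/1 input bits applied to the column
    weights: list of 0/1 bits stored in the SRAM cells
    Returns the charge-shared output voltage, proportional to the
    dot product of activations and weights.
    """
    assert len(activations) == len(weights)
    # 1-bit multiply as an AND: a capacitor charges to VDD only when
    # both the input bit and the stored weight bit are 1.
    charges = [vdd * (a & w) for a, w in zip(activations, weights)]
    # Shorting equal capacitors together averages their charge:
    # V_out = (dot product) * VDD / N.
    return sum(charges) / len(charges)

# Example: dot product of [1,0,1,1] and [1,1,1,0] is 2, over N = 4
# capacitors, so the shared voltage is 0.5 * VDD.
v_out = capacitive_mac([1, 0, 1, 1], [1, 1, 1, 0])
```

The key property the sketch captures is that the result depends only on capacitor ratios and charge conservation, which is why capacitors tolerate device variation better than transistor-based analog computation.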
The team has integrated the in-memory computing architectures into a programmable chip that works with common computer languages, such as C, as well as the software frameworks commonly used to design deep-learning systems, such as TensorFlow.
“In-memory computing shows a lot of promise in addressing the energy and speed of computing systems. With our design, this promise can scale to the AI applications we really care about.” – Naveen Verma
Innovator: Naveen Verma, Professor of Electrical Engineering
Co-inventors: Graduate students Hongyang Jia, Jin Lee, Murat Ozatay, Rakshit Pathak, and Yinqi Tang; postdoctoral research associate Hossein Valavi
Additional team members: Graduate students Peter Deaville and Bonan Zhan
Development status: Patent protection is pending.
Funding: U.S. Department of Defense, Defense Advanced Research Projects Agency, and Princeton Intellectual Property Accelerator Fund
For licensing inquiries, contact Chris Wright, Technology Licensing Associate.