Artificial brains could point the way to ultra-efficient supercomputers
New research from Sandia National Laboratories suggests that brain-inspired neuromorphic computers are just as adept at solving complex mathematical equations as they are at speeding up neural networks and could eventually pave the way to ultra-efficient supercomputers.
Running on around 20 watts, the human brain is able to process vast quantities of sensory information from our environment without interrupting consciousness. For decades, researchers have been trying to replicate these processes in silicon, in what is commonly referred to as neuromorphic computing.
Sandia has been at the center of much of this research. The lab has deployed numerous neuromorphic systems from the likes of Intel, SpiNNaker, and IBM over the past several years.
Much of the research around these systems has focused on things like artificial intelligence and machine learning. But as it turns out, these brain-inspired chips are much more versatile.
The brain is performing complex computations even if we don't realize it, researchers James Aimone and Brad Theilman explained in a recent Sandia news release.
"Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball. These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply," Aimone explained.
In a paper recently published in the journal Nature Machine Intelligence, the boffins at Sandia demonstrated a novel algorithm for efficiently running a class of problems called partial differential equations (PDEs) on neuromorphic computers, including Intel's Loihi 2 neurochips.
PDEs are at the heart of some of the most complex scientific computing workloads today. They're used to model all manner of phenomena including electrostatic forces between molecules, the flow of water through a turbine, and the way radio frequencies propagate through buildings, the researchers explain.
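The electrostatics example above is governed by one of the best-known PDEs. As a standard illustration (not one drawn from the paper), Poisson's equation relates the electrostatic potential to the charge distribution that produces it:

```latex
% Poisson's equation: the electrostatic potential \phi produced by a
% charge density \rho, with \varepsilon_0 the vacuum permittivity
\nabla^2 \phi = -\frac{\rho}{\varepsilon_0}
```

Solving equations like this over a complicated 3D geometry is what drives the enormous compute demands described below.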
These problems can be incredibly computationally demanding, often requiring the full grunt of modern supercomputers to solve. Neuromorphic computing presents a potential alternative that promises to be far more efficient if it can be made to scale reliably.
While still in their infancy, neuromorphic computers have already demonstrated strong efficiency gains over conventional CPU- and GPU-based systems. Intel's Loihi 2 chips deployed in Sandia's Hala Point and Oheo Gulch systems are reportedly capable of delivering 15 TOPS per watt, around 2.5x the efficiency of modern GPUs like Nvidia's Blackwell chips.
More modern systems, such as the SpiNNaker2-based system deployed at Sandia last summer, tout even greater efficiency, claiming 18x higher performance per watt than modern GPUs.
As exciting as that might sound, the in-memory compute architecture inherent to neuromorphics is notoriously difficult to program, often requiring researchers to invent new algorithms for existing processes.
Here, the researchers developed an algorithm called NeuroFEM, which implements the finite element method (FEM) commonly used to solve PDEs on spiking neuromorphic hardware. Perhaps more importantly, this research wasn't just theoretical. That said, as we understand it, the PDEs solved here are intended more as a proof of concept than a demonstration of neuromorphic superiority.
The researchers were able to solve PDEs using actual neuromorphic hardware, specifically Intel's Oheo Gulch system, which features 32 of its Loihi 2 neurochips.
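To give a flavor of what the finite element method actually does, here is a minimal 1D sketch: solving the Poisson problem -u''(x) = 1 on [0, 1] with zero boundary conditions. This is a conventional dense-matrix solve for illustration only; it is not the spiking NeuroFEM implementation described in the paper, and the problem choice is ours.

```python
# Minimal 1D finite element method (FEM) sketch: solve -u''(x) = 1 on [0, 1]
# with u(0) = u(1) = 0, using piecewise-linear "hat" basis functions.
import numpy as np

n = 64                      # number of elements
h = 1.0 / n                 # element size
nodes = np.linspace(0.0, 1.0, n + 1)

# Assemble the global stiffness matrix K and load vector b from
# per-element contributions.
K = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
k_local = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness
f_local = np.array([0.5, 0.5]) * h                   # element load for f = 1
for e in range(n):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += k_local
    b[idx] += f_local

# Apply Dirichlet boundary conditions u(0) = u(1) = 0 by solving
# only for the interior nodes.
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])

# The exact solution is u(x) = x(1 - x)/2; for this problem, linear FEM
# reproduces it at the nodes up to round-off.
exact = nodes * (1.0 - nodes) / 2.0
print(float(np.max(np.abs(u - exact))))
```

Real workloads replace this tiny 1D mesh with millions of elements in 3D, which is where supercomputers - or, per Sandia, neuromorphic hardware - come in.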
In testing, the lab demonstrated near-ideal strong scaling, meaning that each time the core count was doubled, the time to solution was roughly halved. This scaling isn't immune to Amdahl's law, which describes the limit to which a workload can be efficiently parallelized, but in testing, NeuroFEM was still shown to be 99 percent parallelizable.
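Amdahl's law makes that 99 percent figure concrete. With parallel fraction p, the speedup on N cores is S(N) = 1 / ((1 - p) + p / N), so a 99-percent-parallelizable code tops out at a 100x speedup no matter how many cores are thrown at it. A quick sketch (the core counts are illustrative, not measurements from the paper):

```python
# Amdahl's law: speedup of a workload with parallel fraction p on n cores.
# At p = 0.99 the ceiling is 1 / (1 - p) = 100x, regardless of core count.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

for n in (2, 32, 1024):
    print(f"{n:>5} cores -> {amdahl_speedup(0.99, n):.1f}x speedup")
```

Even at 1,024 cores the predicted speedup is roughly 91x, already closing in on the 100x ceiling.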
What's more, the paper's authors argue the algorithm mitigates many of the programmability problems with neuromorphic systems.
"An important benefit of this approach is that it enables direct use of neuromorphic hardware on a broad class of numerical applications with almost no additional work for the user," they wrote. "The user friendliness of spiking neuromorphic hardware has long been recognized as a serious limitation to broader adoption and our results directly mitigate this problem."
The researchers speculate that by moving to an analog neuromorphic system - Loihi 2 is still a digital chip - even more complex PDEs could be solved faster while using less power.
With that said, neuromorphics may not be the only path forward. Researchers are increasingly exploring ways to use machine learning and generative AI surrogate models to accelerate conventional HPC problems.
"It remains an open question whether neuromorphic hardware can outperform GPUs on deep neural networks, which have largely evolved to benefit from GPUs' single instruction, multiple data architecture," the researchers wrote. ®