Artificial neurons are, in essence, capacitors: they absorb and sum electrical charges, then release them in tiny bursts of electricity. Computer chips called “neuromorphic systems” assemble these neurons into large networks that mimic the human brain, with electrical stimuli causing neurons to fire in no predictable order. This contrasts with the lock-step procedure of most conventional computers, which follow pre-set electronic processes.
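The charge-and-fire behavior described above can be sketched with the textbook integrate-and-fire abstraction. This is a generic illustration, not Sandia's code; the threshold value and input stream are made up:

```python
# Minimal integrate-and-fire neuron: it accumulates incoming "charge"
# and emits a spike (1) only when a threshold is crossed, then
# discharges, mirroring the capacitor-like behavior described above.

def integrate_and_fire(inputs, threshold=1.0):
    """Return a list of 0/1 spikes for a stream of input charges."""
    potential = 0.0
    spikes = []
    for charge in inputs:
        potential += charge
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.4, 0.3, 0.5, 0.1, 0.9]))
```

The neuron stays silent while charge accumulates and produces a burst only at the moments the running total crosses the threshold, which is why its output looks sparse and irregular compared with a conventional processor's steady clocked activity.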
Because of their irregular firing, neuromorphic systems are often slower than conventional computers, but they also require far less energy to operate. They also demand a different approach to programming, because otherwise their artificial neurons fire too often or not often enough, a problem that has hindered their commercialization.
To solve this problem, computer engineers use spiking tools that let artificial neurons release energy in discrete spikes, much as human neurons do. Researchers at Sandia National Laboratories developed a spiking tool they call Whetstone, which acts as supplemental code for conventional software training programs. Whetstone trains and sharpens artificial neurons so that they spike only when a sufficient amount of energy (data) has been collected. This training has improved standard neural networks and is being evaluated for use with neuromorphic systems, which until now have typically been trained in ad hoc ways rather than by a standardized method.
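One way to picture the "sharpening" is an activation function that is gradually steepened during training until it behaves like an all-or-nothing spike. The sketch below shows only that mathematical idea with a sigmoid and a hypothetical sharpness parameter; it is not Whetstone's actual training loop:

```python
import math

def sharpened_activation(x, sharpness):
    """Sigmoid that approaches a hard 0/1 step as sharpness grows.

    At low sharpness the neuron gives graded, "talkative" outputs;
    at high sharpness it effectively answers only yes (1) or no (0),
    like a spiking neuron. The sharpness schedule here is illustrative.
    """
    return 1.0 / (1.0 + math.exp(-sharpness * x))

for s in (1, 10, 100):
    print(round(sharpened_activation(0.2, s), 3))
```

Raising the sharpness pushes the same input from an ambiguous value near 0.5 toward a decisive 1, which is the sense in which training "sharpens" neurons until they fire only on sufficient accumulated evidence.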
Whetstone can be visualized as a way to control a class of talkative students tasked with identifying an object on their teacher’s desk. Prior to Whetstone, the students sent a continuous stream of sensor input to their overwhelmed teacher, who had to listen to all of it before passing a decision into the neural system. Processing this flood of information requires either a lot of local computing power, with a corresponding increase in electrical power, or offloading the work to cloud computing. Both options add time and cost to commercial AI products, reduce security and privacy, and make their adoption less likely.
Under Whetstone, their newly strict teacher pays attention only to a simple “yes” or “no” from each student who raises a hand with a solution, rather than to everything they are saying. Suppose, for example, the intent is to determine whether a piece of green fruit on the desk is an apple. Each student is a sensor that may respond to a different quality of what makes up an apple: Does it have the right smell, taste, texture, and so on? A student looking for red may vote “no,” while another looking for green votes “yes.” When the tally of answers, yea or nay, is electrically strong enough to cross the neuron’s firing threshold, that simple result, instead of endless waffling, enters the overall neural system.
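The classroom analogy can be written out directly: each "student" sensor casts a binary vote, and the neuron fires only if enough yes-votes accumulate to cross its threshold. The sensor roles and the threshold of 3 are invented for illustration:

```python
# Threshold voting, as in the apple example: five hypothetical sensors
# (smell, taste, texture, shape, color) each answer yes (1) or no (0).

def neuron_fires(votes, threshold):
    """Fire when the count of yes-votes reaches the threshold."""
    return sum(votes) >= threshold

# The "red" sensor votes no on a green apple, but the others agree.
votes = [1, 1, 1, 0, 1]
print(neuron_fires(votes, threshold=3))  # → True
```

The teacher never hears the students' reasoning, only the final hand-count, which is exactly the data reduction that makes the network cheap to run.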
Although Whetstone’s simplifications could potentially increase errors, the sheer number of participating neurons, often more than a million, provides enough information to statistically compensate for the inaccuracies introduced by simplifying the data.
Whetstone works best when patched into programs meant to train new artificial intelligence equipment. That’s because Whetstone then doesn’t have to overcome learned patterns with already established energy minimums.
Whetstone has been shown to let neural computer networks process information up to 100 times more efficiently than the current industry standard, say the Sandia researchers who developed it.
It also greatly reduces the amount of circuitry needed to perform autonomous tasks. This should help AI become more popular and useful for mobile phones, self-driving cars, and automated interpretation of images.
The largest AI companies have developed spiking tools for their own products, but none are as fast or efficient as Whetstone. And their tools usually work only on their own hardware and software. Whetstone, in contrast, works on many neural platforms.