
Brain-Inspired AI Code Library Notches Milestone

Brain-inspired AI code library hits a milestone, driving advances in AI.

A new open source code library, snnTorch, has surpassed 100,000 downloads and is used in a wide variety of projects — from NASA satellite tracking efforts to optimizing chips for AI.

Spiking neural networks, a form of low-power, brain-inspired deep learning, are being incorporated into more applications across a variety of fields. (Image: ucsc.edu)

Four years ago, UC Santa Cruz’s Jason Eshraghian developed a Python library that combines neuroscience with artificial intelligence (AI) to create spiking neural networks, a machine learning method that takes inspiration from the brain’s ability to efficiently process data.

Now, his open source code library — snnTorch — has surpassed 100,000 downloads and is used in a wide variety of projects, from NASA satellite tracking efforts to semiconductor companies optimizing chips for AI.

“It’s exciting because it shows people are interested in the brain, and that people have identified that neural networks are really inefficient compared to the brain,” said Assistant Professor Eshraghian. “People are concerned about the environmental impact [of the costly power demands] of neural networks and large language models, and so this is a very plausible direction forward.”

Here is an exclusive Tech Briefs interview with Eshraghian, edited for length and clarity.

Tech Briefs: What was the biggest technical challenge you faced while developing snnTorch?

Eshraghian: Starting any project where the state of research is highly unsettled can be extremely daunting. Then taking that and making it not just functional, but intuitive and user-friendly — that was a constant balancing act.

On the one hand, I wanted to maintain the sophistication and complexity of biological neurons and all the fun stuff neuroscience has to offer. On the other hand, I had to make sure the interface was natural and straightforward for developers. There’s little value in a tool that nobody understands.

For every function I coded up, I probably spent half a day pretending I hadn’t just written it, so that I could give it arguments that felt intuitive, with names that blended in with what PyTorch has to offer. This was critical because snnTorch is meant to be used in conjunction with other deep learning libraries, so it was very important for it to feel syntactically similar to PyTorch while still being distinguishable.
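
To make the point about blending in with PyTorch concrete, here is a minimal sketch of how the two libraries sit side by side; it follows snnTorch’s documented interface, though argument names and defaults may differ across versions.

    # A snnTorch neuron layer is declared much like any torch.nn module,
    # so it drops straight into an ordinary PyTorch model definition.
    import torch.nn as nn
    import snntorch as snn

    fc = nn.Linear(100, 10)     # standard PyTorch connection layer
    lif = snn.Leaky(beta=0.9)   # snnTorch leaky integrate-and-fire neuron layer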

A lot more time went into making it “usable” than “functional.”

Tech Briefs: Can you explain in simple terms how it works?

Eshraghian: Deep learning relies on layers upon layers of “artificial neurons,” which typically communicate using 32-bit floating point values. The brain uses biological neurons. These neurons have memory. They respond to history, they’re robust to noise, and they communicate using voltage bursts known as “action potentials.” These bursts look like sudden spikes that come out of nowhere. So, if we chain up a load of these biological neurons, we call that a “spiking neural network.” Hence the “snn” in snnTorch.
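
As a rough illustration of those ideas, the sketch below implements a single leaky neuron with memory in plain Python. It is not snnTorch’s internal code, and the decay factor and threshold are arbitrary values chosen for demonstration.

    # Illustrative leaky integrate-and-fire neuron (not snnTorch internals).
    def lif_neuron(inputs, beta=0.9, threshold=1.0):
        # beta: fraction of the membrane potential kept each step (the neuron's "memory")
        # threshold: level at which the neuron fires an action potential (a spike)
        mem = 0.0
        spikes = []
        for current in inputs:
            mem = beta * mem + current   # old state decays, new input is integrated
            if mem >= threshold:         # threshold crossed: emit a spike
                spikes.append(1)
                mem -= threshold         # reset after firing
            else:
                spikes.append(0)
        return spikes

    # A steady input current drives the neuron to fire periodic spikes.
    print(lif_neuron([0.3] * 12))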

snnTorch takes neuron models and learning rules developed in computational and theoretical neuroscience, and then introduces some of its own. Many of these neuron models are quite dreadful at being “trained” the same way deep learning trains its artificial neurons, so under the hood, snnTorch makes a lot of modifications to these neurons to make them compatible with the advances in deep learning.
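
A central example of such a modification, described in snnTorch’s documentation, is the surrogate gradient: the spike is a hard, non-differentiable threshold, so training substitutes a smooth stand-in gradient during backpropagation. The sketch below follows the library’s documented interface; exact names and defaults may vary between versions.

    # Attaching a surrogate gradient to a spiking neuron in snnTorch (sketch).
    import snntorch as snn
    from snntorch import surrogate

    # The forward pass still produces hard spikes; the backward pass uses a smooth
    # fast-sigmoid approximation so standard backpropagation can train the network.
    spike_grad = surrogate.fast_sigmoid(slope=25)
    lif = snn.Leaky(beta=0.9, spike_grad=spike_grad)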

As a result, when you combine neuron models from snnTorch with different connection structures in PyTorch, you get a neural network that evolves over time, using neurons that transmit information to each other via spikes.
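
Put together, a small network might look like the sketch below, modeled on the introductory examples in snnTorch’s documentation: PyTorch layers define the connections, snnTorch neurons carry the state, and the whole model is stepped through time. The layer sizes, decay factor, and number of time steps are placeholder values.

    import torch
    import torch.nn as nn
    import snntorch as snn

    class SpikingNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(784, 100)    # PyTorch connection structure
            self.lif1 = snn.Leaky(beta=0.9)   # snnTorch neuron with membrane "memory"
            self.fc2 = nn.Linear(100, 10)
            self.lif2 = snn.Leaky(beta=0.9)

        def forward(self, x, num_steps=25):
            mem1 = self.lif1.init_leaky()     # reset membrane potentials
            mem2 = self.lif2.init_leaky()
            out_spikes = []
            for _ in range(num_steps):        # the network evolves over time
                spk1, mem1 = self.lif1(self.fc1(x), mem1)
                spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
                out_spikes.append(spk2)       # information moves between layers as spikes
            return torch.stack(out_spikes)    # shape: [num_steps, batch, 10]

    net = SpikingNet()
    output_spikes = net(torch.rand(1, 784))   # spike counts per class can act as the prediction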

This is great for brain modelling, but more importantly, it’s also incredibly energy efficient when models are compiled to neuromorphic, or brain-inspired, hardware.

Tech Briefs: The paper says, “Eshraghian is collaborating with people to push the field in a number of ways, from making biological discoveries about the brain, to pushing the limits of neuromorphic chips to handle low-power AI workloads, to facilitating collaboration to bring the spiking neural network-style of computing to other domains such as natural physics.” Do you have any updates you can share?

Eshraghian: The brain is a highly complex dynamical system, composed of neurons that are in themselves highly complex dynamical systems. There are a lot of phenomena in natural physics that are inherently “brain-like.” For example, if I take a resistor, connect it to a capacitor, and inject this circuit with a current, then it actually does a fairly good job of emulating a biological neuron.
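
For reference, that resistor-capacitor circuit obeys the standard leaky-integrator equation, which is the same first-order dynamic used for the passive membrane in the neuron models above:

    tau * dU/dt = -U(t) + R * I_in(t),   with   tau = R * C

Here U plays the role of the membrane potential and tau sets how quickly the circuit forgets past input, the role played by the decay factor in the discrete-time sketches above.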

A lot of similar, interesting dynamics exist at the molecular and atomic level, where materials scientists and physicists are trying to build next-generation nanoelectronics and push beyond some of the limitations of silicon as Moore’s Law slows down.

But bridging the gap between physics and application is incredibly hard. It took six decades of advances in transistors and integrated circuits to build a 24-core microprocessor. From both a functionality and a financial perspective, it can be incredibly hard to justify jumping ship to a totally different material stack and starting to build a new technology.

Tech Briefs: Going from that, what are your next steps?

Eshraghian: Now that we have a software framework that bridges deep learning with neuroscience, and a tool that enables interoperability with exotic hardware, the next steps in my lab are to build that hardware and create tools that make it easy for others to build that hardware.

The deep learning community relies heavily on NVIDIA at the moment, and the modern AI revolution would simply not have been possible without them and the GPUs they sell. But there is room to do better.

My short-term dream is to have chip design follow a process that is almost as simple as training a deep learning model, which has become incredibly easy these days thanks to tools such as TensorFlow and PyTorch. Doing the same thing for AI chips and accelerators, so that we can run our models locally and reduce our overdependence on centralized servers, is the next obvious step.
