What is neuromorphic computing? Everything you need to know about how it is changing the future of computing (2023)

What is neuromorphic computing?

As the name suggests, neuromorphic computing uses a model that's inspired by the workings of the brain.

The brain makes a really appealing model for computing: unlike most supercomputers, which fill rooms, the brain is compact, fitting neatly in something the size of, well... your head.

Brains also need far less energy than most supercomputers: your brain uses about 20 watts, whereas the Fugaku supercomputer needs 28 megawatts -- or to put it another way, a brain needs about 0.00007% of Fugaku's power supply. While supercomputers need elaborate cooling systems, the brain sits in a bony housing that keeps it neatly at 37°C.
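
That percentage is easy to sanity-check. The quick sketch below simply divides the two figures quoted above -- a rough back-of-envelope calculation, nothing more:

```python
# Back-of-envelope check of the brain-vs-Fugaku power comparison above.
brain_watts = 20              # rough power draw of a human brain
fugaku_watts = 28_000_000     # Fugaku draws roughly 28 MW

fraction = brain_watts / fugaku_watts
print(f"{fraction * 100:.5f}%")   # prints 0.00007% (approximately)
```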

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

True, supercomputers make specific calculations at great speed, but the brain wins on adaptability. It can write poetry, pick a familiar face out of a crowd in a flash, drive a car, learn a new language, make decisions both good and bad, and so much more. And with traditional models of computing struggling, harnessing the techniques used by our brains could be the key to vastly more powerful computers in the future.

Why do we need neuromorphic systems?

Most hardware today is based on the von Neumann architecture, which separates out memory and computing. Because von Neumann chips have to shuttle information back and forth between the memory and CPU, they waste time (computations are held back by the speed of the bus between the compute and memory) and energy -- a problem known as the von Neumann bottleneck.

By cramming more transistors onto these von Neumann processors, chipmakers have for a long time been able to keep adding to the amount of computing power on a chip, following Moore's Law. But the difficulty of shrinking transistors any further, their energy requirements, and the heat they throw out mean that, without a change in chip fundamentals, that progress won't continue for much longer.

As time goes on, von Neumann architectures will make it harder and harder to deliver the increases in compute power that we need.

To keep up, a new type of non-von Neumann architecture will be needed: a neuromorphic architecture. Both quantum computing and neuromorphic systems have been claimed as the solution, and it's neuromorphic -- brain-inspired -- computing that's likely to be commercialised sooner.

As well as potentially overcoming the von Neumann bottleneck, a neuromorphic computer could channel the brain's workings to address other problems. While von Neumann systems are largely serial, brains use massively parallel computing. Brains are also more fault-tolerant than computers -- both advantages researchers are hoping to model within neuromorphic systems.

  • Breeding neuromorphic networks for fun and profit: The new reproductive science
  • Intel, partners make new strides in Loihi neuromorphic computing chip development
  • Intel Labs searches for chip giant's next act in quantum, neuromorphic advances

So how can you make a computer that works like the human brain?

First, to understand neuromorphic technology it makes sense to take a quick look at how the brain works.

Messages are carried to and from the brain via neurons, a type of nerve cell. If you step on a pin, pain receptors in the skin of your foot pick up the damage and trigger something known as an action potential -- basically, a signal to activate -- in the neuron connected to the foot. The action potential causes the neuron to release chemicals across a gap called a synapse; this relay repeats from neuron to neuron until the message reaches the brain. Your brain then registers the pain, at which point messages are sent from neuron to neuron until the signal reaches your leg muscles -- and you move your foot.

An action potential can be triggered either by lots of inputs arriving at once (spatial summation) or by input that builds up over time (temporal summation). These mechanisms, plus the brain's huge interconnectivity -- a single neuron might connect to as many as 10,000 others -- mean the brain can transfer information quickly and efficiently.

SEE: Neuromorphic computing finds new life in machine learning

Neuromorphic computing models the way the brain works through spiking neural networks. Conventional computing is based on transistors that are either on or off, one or zero. Spiking neural networks can convey information in the same temporal and spatial ways as the brain, and so can produce more than one of two outputs. Neuromorphic systems can be either digital or analogue, with the role of synapses played by either software or memristors.
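
To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models used in neuromorphic research. The parameter values are illustrative rather than taken from any real chip, but the behaviour shows how input accumulating over time (or arriving all at once) pushes a 'membrane potential' past a threshold and produces a spike:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over a series of inputs."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current   # leak a little, then integrate
        if potential >= threshold:               # threshold crossed: emit a spike
            spikes.append(1)
            potential = reset                    # reset the membrane after firing
        else:
            spikes.append(0)
    return spikes

# A weak input repeated over time eventually crosses the threshold (temporal
# summation), just as several inputs arriving at once would (spatial summation).
print(lif_neuron([0.3] * 10))   # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```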

Memristors could also come in handy for modelling another useful element of the brain: synapses' ability to store information as well as transmit it. Memristors can store a range of values, rather than just the traditional one and zero, allowing them to mimic the way the strength of the connection between two neurons can vary. Adjusting those weights in artificial synapses is one way of allowing these brain-based systems to learn.
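
As a loose illustration of that idea, the sketch below models an artificial synapse as a stored analogue weight that is nudged up or down depending on activity -- a simplified, Hebbian-style stand-in for adjusting something like a memristor's conductance. The update rule and numbers are purely illustrative:

```python
class Synapse:
    """Toy artificial synapse: the weight is an analogue value, not a 0 or 1."""

    def __init__(self, weight=0.5, learning_rate=0.1):
        self.weight = weight
        self.learning_rate = learning_rate

    def update(self, pre_spike, post_spike):
        # Strengthen when both sides fire together, weaken when only the
        # input side fires -- a crude, Hebbian-style rule, illustrative only.
        if pre_spike and post_spike:
            self.weight += self.learning_rate * (1.0 - self.weight)
        elif pre_spike and not post_spike:
            self.weight -= self.learning_rate * self.weight
        return self.weight

syn = Synapse()
for pre, post in [(1, 1), (1, 1), (1, 0), (1, 1)]:
    print(round(syn.update(pre, post), 3))   # the stored weight drifts with activity
```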

Along with memristive technologies -- including phase change memory, resistive RAM, spin-transfer torque magnetic RAM, and conductive bridge RAM -- researchers are also looking at other new ways to model the brain's synapses, such as using quantum dots and graphene.

  • Neuromorphic computing could solve the tech industry's looming crisis
  • Intel is teaching a computer chip to smell
  • This powerful new supercomputer will let scientists ask 'the right questions'

What uses could neuromorphic systems be put to?

For compute-heavy tasks, edge devices like smartphones currently have to hand off processing to a cloud-based system, which processes the query and feeds the answer back to the device. With neuromorphic systems, that query wouldn't have to be shunted back and forth; it could be handled within the device itself.

But perhaps the biggest driving force for investments in neuromorphic computing is the promise it holds for AI.

Current generation AI tends to be heavily rules-based, trained on datasets until it learns to generate a particular outcome. But that's not how the human brain works: our grey matter is much more comfortable with ambiguity and flexibility.

SEE: Neuromorphic computing could solve the tech industry's looming crisis

It's hoped that the next generation of artificial intelligence could deal with a few more brain-like problems, including constraint satisfaction, where a system has to find the optimum solution to a problem with a lot of restrictions.
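
To give a sense of what constraint satisfaction means in practice, here is a toy example: assigning meetings to time slots so that no two clashing meetings share a slot. The brute-force solver below only illustrates the shape of the problem, not how a neuromorphic system would tackle it; the hope is that brain-like hardware can search such spaces far more efficiently:

```python
from itertools import product

# Toy constraint-satisfaction problem: assign three meetings to two time
# slots so that meetings which clash never share a slot. Brute force,
# purely to illustrate the class of problem.
meetings = ["A", "B", "C"]
clashes = {("A", "B"), ("B", "C")}   # pairs that must not share a slot
slots = [1, 2]

for assignment in product(slots, repeat=len(meetings)):
    schedule = dict(zip(meetings, assignment))
    if all(schedule[x] != schedule[y] for x, y in clashes):
        print(schedule)   # first valid assignment, e.g. {'A': 1, 'B': 2, 'C': 1}
        break
```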

Neuromorphic systems are also likely to help develop better AI, because they're better suited to other classes of problem, such as probabilistic computing, where systems have to cope with noisy and uncertain data. Other capabilities, such as causality and non-linear thinking, are still relatively immature in neuromorphic computing systems, but once they're more established they could vastly expand the uses AI could be put to.

Are there neuromorphic computer systems available today?

Yep, academics, startups and some of tech's big names are already making and using neuromorphic systems.

Intel has a neuromorphic chip, called Loihi, and has used 64 of them to build Pohoiki Beach, a system comprising 8 million neurons (the company expects that to reach 100 million neurons in the near future). At the moment, Loihi chips are being used by researchers, including at the Telluride Neuromorphic Cognition Engineering Workshop, where they're being used in the creation of artificial skin and in the development of powered prosthetic limbs.

IBM also has its own neuromorphic system, TrueNorth, launched in 2014 and last seen with 64 million neurons and 16 billion synapses. While IBM has been comparatively quiet about how TrueNorth is developing, it did recently announce a partnership with the US Air Force Research Laboratory to create a 'neuromorphic supercomputer' known as Blue Raven. While the lab is still exploring uses for the technology, one option could be creating smarter, lighter, less energy-demanding drones.

Neuromorphic computing started off in a research lab -- Carver Mead's at Caltech -- and some of the best-known systems are still found in academic institutions. The EU-funded Human Brain Project (HBP), a 10-year project that's been running since 2013, was set up to advance understanding of the brain through six areas of research, including neuromorphic computing.

The HBP has led to two major neuromorphic initiatives, SpiNNaker and BrainScaleS. In 2018, a million-core SpiNNaker system went live -- the largest neuromorphic supercomputer at the time -- and the University of Manchester, which built it, hopes to eventually scale it up to model one billion neurons. BrainScaleS has similar aims to SpiNNaker, and its architecture is now on its second generation, BrainScaleS-2.

  • Neuromorphic computing and the brain that wouldn't die
  • What neuromorphic engineering is, and why it's triggered an analog revolution
  • 10 tech predictions that could mean huge changes ahead

What are the challenges to using neuromorphic systems?

Shifting from von Neumann to neuromorphic computing isn't going to come without substantial challenges.

Computing norms -- how data is encoded and processed, for example -- have all grown up around the von Neumann model, and so will need to be reworked for a world where neuromorphic computing is more common. One example is dealing with visual input: conventional systems treat it as a series of individual frames, while a neuromorphic processor would encode it as changes in a visual field over time.
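
A rough way to picture that difference: instead of storing every pixel of every frame, an event-based representation keeps only the pixels that changed, with a timestamp, which is loosely how neuromorphic vision sensors report 'events' rather than images. The sketch below is a simplification under that assumption, not a model of any particular sensor:

```python
def frames_to_events(frames, threshold=10):
    """Keep only (time, pixel, change) tuples for pixels that changed enough."""
    events = []
    for t in range(1, len(frames)):
        prev, curr = frames[t - 1], frames[t]
        for i, (a, b) in enumerate(zip(prev, curr)):
            if abs(b - a) >= threshold:
                events.append((t, i, b - a))   # timestamp, pixel index, delta
    return events

# Three tiny 4-pixel "frames"; only the moving edge generates events.
frames = [
    [0, 0, 100, 100],
    [0, 100, 100, 100],
    [100, 100, 100, 100],
]
print(frames_to_events(frames))   # -> [(1, 1, 100), (2, 0, 100)]
```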

SEE: Building the bionic brain (free PDF) (TechRepublic)

Programming languages will also need to be rewritten from the ground up. There are challenges on the hardware side, too: new generations of memory, storage and sensor tech will need to be created to take full advantage of neuromorphic devices.

Neuromorphic technology could even require a fundamental change in how hardware and software are developed, because of how tightly different elements are integrated in neuromorphic hardware -- memory and processing, for example.

Do we know enough about the brain to start making brain-like computers?

One side effect of the increasing momentum behind neuromorphic computing is likely to be improvements in neuroscience: as researchers try to recreate our grey matter in electronics, they may uncover more about the brain's inner workings, which in turn could help biologists learn more about the brain.

And similarly, the more we learn about the human brain, the more avenues are likely to open up for neuromorphic computing researchers. For example, glial cells -- the brain's support cells -- don't figure highly in most neuromorphic designs, but as more information comes to light about how these cells are involved in information processing, computer scientists are starting to examine whether they should figure in neuromorphic designs too.

And of course, one of the more interesting questions about the increasingly sophisticated work to model the human brain in silicon is whether researchers may eventually end up recreating -- or creating -- consciousness in machines.

FAQs

What is neuromorphic computing?

Neuromorphic computing is a type of artificial intelligence (AI) that mimics the way the brain works. It uses specialized hardware, such as AI chips and software algorithms, to simulate neurons and synapses to process data more efficiently than traditional computers.

What is the future of neuromorphic computing?

The future of AI also depends on improving the capacity of intelligent systems by using powerful hardware. Hardware such as neuromorphic computing and quantum computing will allow companies to build AI solutions that are extremely fast and can encapsulate more data and knowledge.

What is the need for neuromorphic computing?

Developments in neuromorphic technology could improve the learning capabilities of state-of-the-art autonomous devices, such as driverless cars and drones. Neuromorphic computing is critical to the future of AI.

What is the potential of neuromorphic computing?

As a technology, neuromorphic computing has the potential to significantly advance the field of AI by creating more powerful and efficient models that can process information in ways that resemble the human brain.

What is the theory of neuromorphic computing?

One line of theoretical research frames neuromorphic computing in terms of waves: artificial neural networks that use nonlinear waves -- rogue waves, dispersive shocks, and solitons -- as a computing reservoir, studying the conditions under which such a system can learn a dataset in terms of its output channels and nonlinearity.

What will be the future of computing?

The future of computing is being shaped by transistors made from materials other than silicon. It's being amplified by approaches that have nothing to do with transistor speed, such as deep-learning software and the ability to crowdsource excess computing power to create what amounts to distributed supercomputers.

How fast is neuromorphic computing?

Some neuromorphic chips have been reported to perform data-crunching tasks up to 1,000 times faster than conventional processors like CPUs and GPUs, while using far less power. Basing hardware on brain-like neurons isn't entirely new, though: many AI algorithms already simulate neural networks in software.

What are the limits of neuromorphic computing?

Real biological neurons can have up to 20,000 synapse connections per neuron. Existing neuromorphic chips tend to limit the number of synapses to 256 per neuron.

What does neuromorphic mean today?

The term is broadly used to mean technologies that are inspired by biology, specifically brains, but with the rise of artificial intelligence, technologies claiming to be "brain-inspired" are abundant.

What is the disadvantage of neuromorphic computing?

Critics such as Nvidia chief scientist Bill Dally have pointed out that spikes are an inefficient way of representing numbers, which means spiking systems are not particularly useful for many of the tasks currently done by conventional computers.

Are neuromorphic chips the future?

Neuromorphic computing will not directly replace modern CPUs and GPUs. Instead, the two types of computing will be complementary, each suited to its own sorts of algorithms and applications.

What is the difference between AI and neuromorphic computing?

While artificial intelligence (AI) is a more general area that includes a variety of technologies and strategies for building intelligent machines, neuromorphic technology is a subset of electronics that tries to emulate the operations of the human brain using specialized hardware.

Is neuromorphic computing analogue or digital?

It can be either: Loihi, TrueNorth, and SpiNNaker are purely digital systems, in the sense that both computing and communication are handled digitally, while Neurogrid is a mixed analogue-digital circuit in which synaptic computations are implemented with analogue circuitry.

What are neuromorphic devices?

A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations.

How do you think technology will change in the next 5 years?

We can expect a large transition to cloud computing in the next five years in many organizations, businesses, and industries. There also will be more advances in alternatives to cloud computing, including edge computing (which we detail on this list) and fog computing.

What will computing look like in 2030?

It is estimated that by 2030, global data will be growing by one yottabyte every year. Total general-purpose computing power will see a tenfold increase and reach 3.3 ZFLOPS, and AI computing power will increase by a factor of 500, to more than 100 ZFLOPS.

How is computing changing the world?

With the internet and computers, long-distance communication has also become much more accessible for people worldwide. As technology progressed, simple tasks like shopping, booking tickets, buying a new house, searching for schools, and looking for medical information became much more manageable.

Is neuromorphic computing AI?

Intel Labs' neuromorphic research goes beyond today's deep-learning algorithms by co-designing optimized hardware with next-generation AI software. Built with the help of a growing community, this pioneering research effort seeks to accelerate the future of adaptive AI.

When was neuromorphic computing invented?

Inspired by the human brain and the functioning of the nervous system, neuromorphic computing was a concept introduced in the 1980s.

How does a neuromorphic chip work?

Neuromorphic chips are packed with artificial neurons and artificial synapses that mimic the activity spikes that occur within the human brain—and they handle all this processing on the chip. This results in smarter, far more energy-efficient computing systems.

What is the difference between neuromorphic and neural networks?

The key feature of neuromorphic systems is that they operate on the same principle as neurons in the brain: signals can fire neurons, which in turn send signals on to other neurons. A neural net, by contrast, is a series of nodes connected by weighted links that loosely resembles the neurons in a brain.

What are the disadvantages of computing?

Computers are great tools, but they have their disadvantages too. They can be slow, unreliable, and expensive, and they require constant maintenance and upgrades. Other commonly cited drawbacks include:
  • Online cyber-crime
  • Health issues
  • Fake news
  • E-waste
  • Lack of concentration and irritation

What will Elon Musk's microchip do?

Neuralink's device has a chip that processes and transmits neural signals that could be transmitted to devices like a computer or a phone. The company hopes that a person would potentially be able to control a mouse, keyboard or other computer functions like text messaging with their thoughts.

What will replace computer chips?

The most common 2-D material replacing silicon is graphene. Graphene is an allotrope of carbon consisting of a single layer of atoms arranged in a two-dimensional honeycomb lattice.

What technology will replace silicon for chips?

Silicon carbide is replacing silicon for power electronics at major EV makers including Tesla, since it has three times higher thermal conductivity than silicon despite its lower electrical mobilities.

Is neuromorphic computing a quantum computer?

Quantum neuromorphic computing physically implements neural networks in brain-inspired quantum hardware to speed up their computation. This emerging paradigm could make the best use of existing and near-future intermediate-size quantum computers.

What is the difference between quantum and neuromorphic computing?

 Quantum computing uses specialized hardware and software to exploit the unique features of quantum physics, while neuromorphic computing uses specialized hardware and software to mimic the workings of the human brain.

Is neuromorphic computing quantum computing?

Quantum neuromorphic computing implements neural networks on quantum hardware. Depending on the quantum computing platform, different approaches can be divided into two groups: digital approaches using gate-based quantum computers, and analog approaches using analog quantum computing platforms.

How big is the neuromorphic computing market?

One market report covering 2021 to 2026 gives the following scope:
  • Estimated market size: USD 22,743 thousand
  • Projected market size: USD 550,593 thousand
  • Growth rate: CAGR of 89.1%

What's so exciting about neuromorphic computing?

Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.

Who is making neuromorphic chips?

Intel Labs' second-generation neuromorphic research chip, codenamed Loihi 2, and Lava, an open-source software framework, will drive innovation and adoption of neuromorphic computing solutions. Enhancements over the first generation include up to 10x faster processing capability.
