As artificial intelligence technology advances, attention is turning to neuromorphic computing and its potential to propel AI to new levels of power and performance.
Neuromorphic computing is a type of computing technology that mimics the human brain and nervous system. “It’s a hardware and software computing element that combines multiple specializations such as biology, mathematics, electronics and physics,” explains Abhishek Khandelwal, vice president of life sciences at engineering consultancy Capgemini Engineering.
While current AI technology augments human skills in several areas, such as Level 4 self-driving vehicles and generative models, it still offers only a rough approximation of human and biological capabilities and is useful in only a handful of areas. “Input and output are still tied to digital means,” says Shriram Natarajan, a director at technology research and advisory firm ISG. Neuromorphic approaches, on the other hand, attempt to replicate the underlying biological systems more directly. This could lead to a better understanding of the physical processes involved and could prove more natural for users.
Alternative computer architecture
Neuromorphic computing provides an alternative computing architecture that is fundamentally different from current computing platforms. “Current computer architectures are based on von Neumann principles, such as separate memory and processing and binary representation,” says Khandelwal. “Neuromorphic computing is modeled on brain concepts such as neurons and synapses.”
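To make the “neurons and synapses” framing concrete, here is a minimal, illustrative Python sketch of a leaky integrate-and-fire (LIF) neuron, one of the simplest abstractions used in spiking and neuromorphic models. The class name, constants, and input are assumptions chosen for illustration, not taken from any vendor’s platform.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, a common
# "neuron and synapse" abstraction in neuromorphic models.
# All names and constants are illustrative, not from any product.
import numpy as np

class LIFNeuron:
    def __init__(self, tau=20.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
        self.tau = tau                  # membrane time constant (ms)
        self.v_rest = v_rest            # resting potential
        self.v_threshold = v_threshold  # spike threshold
        self.v_reset = v_reset          # potential after a spike
        self.dt = dt                    # simulation step (ms)
        self.v = v_rest                 # current membrane potential

    def step(self, input_current):
        # The membrane potential leaks toward rest and integrates input current.
        dv = (-(self.v - self.v_rest) + input_current) * (self.dt / self.tau)
        self.v += dv
        if self.v >= self.v_threshold:  # threshold crossed: emit a spike
            self.v = self.v_reset       # and reset, like a biological neuron
            return 1
        return 0

# Drive the neuron with a noisy input current and count its spikes.
rng = np.random.default_rng(0)
neuron = LIFNeuron()
spikes = [neuron.step(1.5 + rng.normal(0.0, 0.2)) for _ in range(200)]
print(f"{sum(spikes)} spikes in 200 steps")
```

Instead of storing and fetching values from a separate memory, state lives in the neuron itself and computation happens only when the threshold is crossed, which is the basic departure from the von Neumann pattern described above.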
Current AI and machine learning (ML) technologies use “networks of neurons” of increasing depth to build human-like understandings of space, imagery, or language, says Natarajan. The technology is designed to reflect human behavior and intuition. Neuromorphic technology has similar goals, but with greater fidelity to the structure of the human brain. “AI systems have been very successful by adopting just a few features of the brain,” he notes. “The expectation for neuromorphic technology is that a deeper copy would be more effective, have broader applicability, and likely require less energy.”
Greater intelligence, lower energy consumption
Neuromorphic computing uses a distributed network of neurons to process information in parallel. “This parallel processing approach is critical because it allows the system to process information faster and more efficiently than traditional computers, and makes it more resilient to errors and noise in the data,” says Khandelwal. “Unlike traditional computing, which needs to be trained on a large amount of data, neuromorphic computing learns and adapts in real time, just like the human brain, and uses less power than traditional AI algorithms.”
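One common way to illustrate the efficiency argument is to compare a conventional dense matrix multiply with event-driven accumulation that only touches the synapses of neurons that actually fired. The sketch below is a simplification built on that assumption; the sizes and variable names are made up for illustration.

```python
# Illustrative-only sketch of why event-driven (spiking) processing can be
# cheaper than conventional dense processing: work is done only when a
# neuron actually fires, so mostly-silent inputs cost almost nothing.
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_outputs = 1024, 256
weights = rng.normal(size=(n_inputs, n_outputs))

# Conventional approach: multiply the full input vector every time step,
# regardless of how many values are actually non-zero.
dense_input = np.zeros(n_inputs)
active = rng.choice(n_inputs, size=20, replace=False)  # ~2% of inputs spike
dense_input[active] = 1.0
dense_output = dense_input @ weights                   # n_inputs * n_outputs multiplies

# Event-driven approach: process only the spikes that occurred, accumulating
# each spiking input's synaptic weights into the output neurons.
event_output = np.zeros(n_outputs)
for i in active:                                       # only 20 * n_outputs adds
    event_output += weights[i]

print(np.allclose(dense_output, event_output))         # same result, far fewer operations
```

Neuromorphic chips push this idea into the hardware itself, but the principle is the same: silent neurons cost nothing, which is where much of the claimed power saving comes from.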
Neuromorphic advocates believe the technology will lead to smarter systems. “Such systems could also learn automatically and self-regulate what they learn and where they learn from,” says Natarajan. Meanwhile, combining neuromorphic technology with neuroprosthetics (like Neuralink) could lead to breakthroughs in prosthetic control and various other types of human assistive and augmentation technologies.
Neuromorphic computer systems can learn and adapt in real time. “Compared to traditional AI algorithms, which require a significant amount of training before they become effective, neuromorphic computing systems can learn and adapt on the fly,” says Khandelwal. “This means they can respond quickly to changing environments and situations, making them ideal for use in applications like robotics and self-driving cars.”
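A hedged sketch of what “learning and adapting on the fly” can look like in spiking systems is a local plasticity rule such as spike-timing-dependent plasticity (STDP), where each synapse adjusts its own weight from the relative timing of spikes rather than from an offline training set. The function and constants below are illustrative assumptions, not a description of any particular chip.

```python
# Simplified spike-timing-dependent plasticity (STDP) rule: a synapse is
# strengthened when the input neuron fires just before the output neuron,
# and weakened when it fires just after. Constants are illustrative.
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Return the updated weight for one pre-spike / post-spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre fired before post: potentiate
        w += a_plus * np.exp(-dt / tau)
    else:           # pre fired after post: depress
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))

# The weight adapts continuously as spike pairs arrive, with no separate
# offline training phase or labeled dataset.
w = 0.5
for t_pre, t_post in [(10, 15), (30, 32), (50, 45), (70, 78)]:
    w = stdp_update(w, t_pre, t_post)
    print(f"pre={t_pre}ms post={t_post}ms -> w={w:.3f}")
```

Because the update depends only on locally observed spike times, adaptation can happen continuously while the system runs, which is the property Khandelwal points to for robotics and self-driving applications.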
Complex and challenging to develop
The development of neuromorphic computing systems is a complex and challenging task that requires significant expertise and innovation, as well as a deep understanding of neuroscience, computer science and engineering. “The primary engineering challenge is to design and build systems that can accurately simulate the behavior of biological neurons and synapses,” says Khandelwal. “This requires a deep understanding of the underlying principles of neural networks and the ability to translate this knowledge into practical technological solutions.”
Neuromorphic computing is a relatively new field, and much work remains to be done before it can even remotely achieve its full promise. “As neuromorphic computing aims to replicate the structure and function of the human brain in hardware and software, it has the potential to revolutionize computing by enabling machines to process information more efficiently and accurately, and to learn and adapt like humans,” says Khandelwal. “The aim is to overcome the limitations of artificial intelligence and robotics, which still have challenges with autonomy, creativity and sociality, and to be able to integrate aspects such as haptics and tactile perception into the overall analysis and decision-making processes.”
Neuromorphic Leaders and Evolution
According to Natarajan, the market leaders for neuromorphic computing include Qualcomm, Intel and IBM. But before the technology can enter the commercial mainstream, visual, acoustic, and spatial sensors must be developed or improved. “Even with these advances, it will take a lot more basic research, computation, and simulation to get viable neuromorphic solutions off the ground,” he says.
Despite the challenges, Khandelwal believes neuromorphic computing research is progressing on multiple fronts. “Advances in neural networks like World Models or large language models (LLMs) like GPT-4 expand what will be possible for real-world use cases,” he says. “Computational psychology and other fields that mix neuroscience, computer science, and cognitive science are pushing the boundaries of what is possible with neuromorphic computing.”