Revolutionary chips from Dresden: AI is now becoming energy efficient!
With the “AI Pro” chip, TUM is developing novel, energy-efficient solutions for AI applications that conserve resources.

In the fast-moving world of artificial intelligence (AI), the efficiency of the underlying hardware plays a central role. This is where an innovation from the Technical University of Munich (TUM) comes into play: a new chip called “AI Pro”, manufactured in collaboration with GlobalFoundries in Dresden. The chip is distinguished by properties that set it apart from conventional AI chips.
The “AI Pro” chip combines computing and memory units on a single chip, a significant departure from traditional architectures, which constantly shuttle data back and forth between separate processor and memory. It operates on the principle of “hyperdimensional computing”, which allows it to recognize similarities and patterns without requiring training on millions of data sets. For example, the chip can infer that a car has four wheels and drives on the road without first being shown countless images. This capability not only enables faster data processing but also opens up enormous energy-saving potential.
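The following Python sketch illustrates the general idea behind hyperdimensional computing; it is not the “AI Pro” implementation, and the vector dimension and the toy “car” attributes are illustrative assumptions. Concepts are encoded as very long random vectors, combined with simple element-wise operations, and compared by similarity rather than learned from millions of examples.

```python
# Minimal, illustrative sketch of hyperdimensional computing (not "AI Pro" code).
import numpy as np

D = 10_000                      # dimensionality of the hypervectors
rng = np.random.default_rng(0)

def hv():
    """Random bipolar hypervector with +1/-1 entries."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Associate two concepts (element-wise multiplication, self-inverse)."""
    return a * b

def bundle(*vs):
    """Superpose several concepts (element-wise majority via sign of the sum)."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    """Normalized dot product: ~0 for unrelated vectors, 1 for identical ones."""
    return float(a @ b) / D

# Hypothetical toy example: describe a "car" as a bundle of bound
# attribute/value pairs, then query the bundle for one attribute.
HAS_WHEELS, FOUR, DRIVES_ON, ROAD = hv(), hv(), hv(), hv()
car = bundle(bind(HAS_WHEELS, FOUR), bind(DRIVES_ON, ROAD))

print(similarity(bind(car, HAS_WHEELS), FOUR))   # clearly positive (~0.5): "four" is recoverable
print(similarity(bind(car, HAS_WHEELS), ROAD))   # close to zero: unrelated pairing
```

Because binding with a bipolar vector is its own inverse, querying the bundled “car” vector with the wheels attribute recovers something close to the “four” vector, which is the kind of pattern recognition the paragraph above describes.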
Energy efficiency in focus
With an energy consumption of just 24 microjoules per defined training task, the “AI Pro” chip requires remarkably little energy compared with other chips, which consume 10 to 100 times more. Current AI technologies, especially the powerful chips from Nvidia, have an enormous appetite for energy. These chips power large data centers that use as much electricity as small towns - a typical data center requires about as much energy as 100,000 homes. In this context, the effort to develop neuromorphic computers modeled on the human brain is particularly relevant. The human brain runs on roughly the power of a light bulb, a stark contrast to the energy costs incurred today.
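A short back-of-the-envelope calculation puts these figures side by side; the per-household consumption used below is an assumption chosen purely for illustration, not a value from the article.

```python
# Back-of-the-envelope comparison of the figures quoted in the article.
ai_pro_uj = 24                               # microjoules per defined training task
low, high = 10 * ai_pro_uj, 100 * ai_pro_uj  # "10 to 100 times more energy"
print(f"Conventional chips: roughly {low}-{high} microjoules for the same task")

# Scale of the data-center comparison. The ~3,000 kWh per household and year
# is an assumed average for illustration only, not a figure from the article.
households = 100_000
kwh_per_household_year = 3_000
total_gwh = households * kwh_per_household_year / 1e6
print(f"'100,000 homes' corresponds to roughly {total_gwh:.0f} GWh per year")
```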
Engineers in Dresden are working on solutions that increase energy efficiency significantly. Their neuromorphic computers use artificial neurons that are only activated when they are actually needed. Steve Furber, an advocate for neuromorphic systems, sees this technology as a potential answer to the high energy consumption currently associated with AI development. Particular emphasis is placed on devices that operate directly at the edge, that is, close to the user. This not only lowers costs but also minimizes the need to transfer data to the cloud.
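To make the “only activated when needed” principle concrete, here is a minimal, illustrative Python sketch of an event-driven (leaky integrate-and-fire) neuron. It is a simplified textbook model, not code for SpiNNaker or any specific neuromorphic chip, and the threshold, weights, and event times are made up for the example.

```python
# Illustrative event-driven neuron: work happens only when an input spike arrives.
from dataclasses import dataclass

@dataclass
class LIFNeuron:
    threshold: float = 1.0   # membrane potential at which the neuron emits a spike
    leak: float = 0.9        # decay applied per incoming event (a simplification;
                             # real models decay the potential with elapsed time)
    potential: float = 0.0

    def receive(self, weight: float) -> bool:
        """Process one input event; return True if the neuron fires a spike."""
        self.potential = self.potential * self.leak + weight
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return True
        return False

# Hypothetical input: a sparse stream of (time, synaptic weight) events instead
# of a dense signal. The neuron only does work when an event actually arrives.
neuron = LIFNeuron()
for t, w in [(1, 0.4), (2, 0.5), (7, 0.6), (8, 0.7)]:
    fired = neuron.receive(w)
    print(f"t={t}: potential={neuron.potential:.2f}, fired={fired}")
```

In hardware built around this principle, silence costs almost nothing: between spikes there is no computation and therefore very little energy use, which is the contrast with conventional processors drawn in the paragraph above.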
Data security considerations
Another advantage of the “AI Pro” chip is that data is processed directly on the device. Sensitive data stays on the device, which reduces both cybersecurity risks and the dependence on an internet connection. Such chips are particularly suited to applications such as processing vital signs on smartwatches or navigating drones. Professor Hussam Amrouch from TUM emphasizes that the future belongs to those who own the hardware, which underlines the importance of local data processing.
Along with advances in neuromorphic technology, more and more applications are emerging that require specialized chips. The Dresden-based company SpiNNcloud has taken another significant step towards mimicking human brain functions with the neuromorphic supercomputer SpiNNaker 2. The system will be available from May 2025 and will make it possible to simulate up to ten billion artificial neurons and thereby work even more efficiently. Still, the technology lags behind GPU-based computers, particularly in performance on training-intensive tasks such as language models. Work is underway to improve response times, but a “killer app” that clearly demonstrates the benefits is still missing.
Global demand for more sustainable technologies will increase as data volumes continue to grow. Neuromorphic computing therefore offers not only a promising path to energy efficiency but also a way to integrate AI into battery-powered devices. Initiatives such as the “Neuromorphic Computing” project at Fraunhofer IIS show the focus on developing scalable and configurable solutions that are intended to keep pace with Moore’s Law. The idea of processing resource-intensive AI tasks locally could significantly improve data protection and reduce the amount of energy data centers require in the future.
We may therefore be at the beginning of a new era of energy efficiency in AI, shaped decisively by the ongoing development of neuromorphic systems and the optimization of chips like “AI Pro”.