Hi All,
I’ve been thinking about possible solutions to the “deterministic” GPU issues with on-chain AI. This in turn has led me down the path of neuromorphic quantum computing (what a mouthful).
Thought I would share it here; maybe it will help, or at least get a few neurons going.
Firstly, what is neuromorphic quantum computing?
Neuromorphic quantum computing is an emerging field that combines the principles of neuromorphic computing and quantum computing. It aims to create computing systems that are inspired by the human brain and can perform quantum computations. This approach could enable advances in fields such as artificial intelligence, drug discovery, and materials science.
While traditional quantum computers rely on quantum bits (qubits) to perform calculations, neuromorphic quantum computers use artificial neurons and synapses to mimic the brain’s neural networks. This allows for highly parallel processing, as well as the ability to learn and adapt to new information.
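To make the neuron-and-synapse picture concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard building block in neuromorphic models. It is plain Python with made-up parameter values, purely to show the idea of integrating inputs and spiking at a threshold; it is not how any particular chip (or Dynex) implements it.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a standard abstraction used in
# neuromorphic computing. Parameter values are illustrative, not tied to any real chip.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate weighted input over time; emit a spike when the membrane
    potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:               # fire...
            spikes.append(t)
            potential = 0.0                      # ...and reset
    return spikes

# Example: a constant input drive produces a regular spike train.
print(simulate_lif([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```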
One of the key advantages of neuromorphic quantum computing is its potential to overcome limitations of classical computers, such as energy consumption and processing power. By harnessing quantum mechanics together with the efficiency of neuromorphic computing, neuromorphic quantum computers could enable breakthroughs in fields that are currently out of reach.
Dynex utilizes a unique approach to quantum computing, known as neuromorphic quantum computing, that leverages ion drift in memristive circuit elements. This differs from traditional superconducting-based quantum computing: the memristive elements respond rapidly to changes, facilitating efficient solution finding.
This all happens at room temperature, with no special cooling or cryogenic requirements.
While specific hardware details are not publicly disclosed, it’s understood that Dynex’s technology relies on specialized quantum circuits designed to harness the principles of ion drift and quantum phenomena. This approach offers potential advantages in terms of scalability and error correction compared to traditional quantum computing methods.
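Since Dynex has not disclosed its circuits, the following is only a rough mental model: a sketch of the widely used linear ion-drift memristor model (the “HP memristor”), which captures how a device’s resistance changes as charged dopants drift under an applied voltage. All parameter values below are illustrative assumptions, not Dynex specifications.

```python
# Linear ion-drift memristor model: the device's resistance depends on the
# position w of a dopant boundary, which drifts as charge flows through it.
# Parameter values are purely illustrative.

R_ON, R_OFF = 100.0, 16000.0     # low/high resistance states (ohms)
D = 10e-9                        # device thickness (m)
MU_V = 1e-14                     # dopant mobility (m^2 / (V*s))

def step(w, voltage, dt):
    """Advance the internal state w (0..D) for one time step of applied voltage."""
    resistance = R_ON * (w / D) + R_OFF * (1 - w / D)
    current = voltage / resistance
    w = w + MU_V * (R_ON / D) * current * dt     # ion drift moves the boundary
    w = min(max(w, 0.0), D)                      # state stays inside the device
    return w, current

# A sustained positive bias gradually lowers the resistance: the device
# "remembers" the history of charge that has passed through it.
w = 0.1 * D
for _ in range(1000):
    w, i = step(w, voltage=1.0, dt=1e-3)
print(f"final resistance ~ {R_ON * (w / D) + R_OFF * (1 - w / D):.0f} ohms")
```

The point of the sketch is that resistance encodes the device’s history, which is what lets memristive elements act as combined memory-and-processing elements rather than separate ones.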
Some benefits are:
• Inherently parallel operation: all neurons and synapses in a neuromorphic computer can potentially operate simultaneously, although, compared with parallelized von Neumann systems, each neuron and synapse performs a relatively simple computation.
• Memory and processing are co-located: in neuromorphic hardware there is no separation between memory and processing. Although neurons are sometimes thought of as processing units and synapses as memory units, in many implementations both perform processing and store values in tandem. Co-locating processor and memory mitigates the von Neumann bottleneck, which otherwise limits maximum throughput, and it reduces the need for accesses to main memory, which consume far more energy than the computation itself [5].
• Neuromorphic computers have inherent scalability: adding more neuromorphic chips increases the number of neurons and synapses, and multiple physical chips can be treated as a single large neuromorphic system in order to run larger and larger networks. Several large-scale neuromorphic hardware systems have been successfully implemented, including SpiNNaker [6,7] and Loihi [8].
• Neuromorphic computers use event-driven computation (computing only when data is available) and temporally sparse activity to achieve extremely high computational efficiency [9,10]. Neurons and synapses perform no work unless there are spikes to process, and spikes are typically relatively sparse during network operation (a minimal sketch of event-driven processing follows this list).
• Stochasticity can be incorporated into neuromorphic computers, for instance in the timing of neuron firing, to accommodate noise.
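To illustrate the event-driven and stochastic points above, here is a small sketch in which work happens only when a spike event is processed, and firing above threshold is probabilistic. The three-neuron network, weights, and firing probability are invented for illustration and do not correspond to any real neuromorphic system.

```python
import heapq
import random

# Event-driven sketch: neurons are updated only when a spike reaches them,
# rather than on every clock tick. Network, weights and noise level are
# illustrative assumptions, not taken from real hardware.

random.seed(0)

THRESHOLD = 1.0
FIRE_PROBABILITY = 0.9          # stochastic firing: even above threshold, fire with p < 1
weights = {                     # synapse weights: pre-neuron -> {post-neuron: weight}
    "A": {"B": 0.6, "C": 0.8},
    "B": {"C": 0.5},
    "C": {},
}
potential = {n: 0.0 for n in weights}

# The event queue holds (time, neuron) spike events; nothing else runs.
events = [(0, "A"), (1, "A"), (3, "A")]
heapq.heapify(events)
fired = []

while events:
    t, pre = heapq.heappop(events)
    fired.append((t, pre))
    for post, w in weights[pre].items():          # only downstream neurons do any work
        potential[post] += w
        if potential[post] >= THRESHOLD and random.random() < FIRE_PROBABILITY:
            potential[post] = 0.0
            heapq.heappush(events, (t + 1, post)) # schedule the resulting spike event

print(fired)
```

Between spike events nothing is computed at all, which is the source of the efficiency claim; the random draw on firing is one simple way stochasticity can be folded in.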
Neuromorphic computers are well documented in the literature, and these features are often cited as motivating factors for their implementation and use [11,12,13,14]. A particularly attractive feature is their extremely low power consumption: they can consume orders of magnitude less power than conventional computers, because they are event-driven and massively parallel, with only a small portion of the entire system active at any given time. Energy efficiency alone is a compelling reason to investigate neuromorphic computers, given the rising energy costs of computing and the growing number of energy-constrained applications (e.g. edge computing). Because neuromorphic computers inherently implement neural-network-style computation, they are a natural platform for many of today’s artificial intelligence and machine learning applications, and their computational properties can also be leveraged for a wide variety of other computations [15].
References:
5. Sze, V., Chen, Y.-H., Emer, J., Suleiman, A. & Zhang, Z. Hardware for machine learning: challenges and opportunities. In 2017 IEEE Custom Integrated Circuits Conference (CICC) 1–8 (IEEE, 2017).
6. Mayr, C., Hoeppner, S. & Furber, S. SpiNNaker 2: a 10 million core processor system for brain simulation and machine learning. Preprint at arXiv:1911.02385 (2019).
7. Furber, S. B., Galluppi, F., Temple, S. & Plana, L. A. The SpiNNaker project. Proc. IEEE 102, 652–665 (2014).
8. Davies, M. et al. Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).
9. Mostafa, H., Müller, L. K. & Indiveri, G. An event-based architecture for solving constraint satisfaction problems. Nat. Commun. 6, 1–10 (2015).
10. Amir, A. et al. A low power, fully event-based gesture recognition system. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 7388–7397 (IEEE, 2017).
11. Schuman, C. D. et al. A survey of neuromorphic computing and neural networks in hardware. Preprint at arXiv:1705.06963 (2017).
12. James, C. D. et al. A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications. Biol. Inspired Cogn. Archit. 19, 49–64 (2017).
13. Strukov, D., Indiveri, G., Grollier, J. & Fusi, S. Building brain-inspired computing. Nat. Commun. 10, 4838 (2019).
14. Davies, M. et al. Advancing neuromorphic computing with Loihi: a survey of results and outlook. Proc. IEEE 109, 911–934 (2021).
15. Aimone, J. B. et al. Non-neural network applications for spiking neuromorphic hardware. In Proc. 3rd International Workshop on Post Moores Era Supercomputing 24–26 (PMES, 2018).
Furthermore, neuromorphic computers offer several potential advantages over deterministic GPUs for on-chain AI:
**Lower Power Consumption:** Due to their event-driven nature and massively parallel architecture, neuromorphic computers consume significantly less power than conventional GPUs. This is crucial for on-chain AI, where energy efficiency is a major concern.
**Improved Scalability:** Neuromorphic hardware scales well by adding more chips, allowing larger and more complex neural networks on the blockchain. This is essential for handling increasingly demanding AI tasks.
**Event-Driven Computation:** Neuromorphic computers only perform computations when data is available, leading to higher efficiency compared to constantly running GPUs. This is particularly beneficial for the sparse-data scenarios common in on-chain AI.
**Potential for Non-Neural Network Applications:** While traditionally used for neural networks, neuromorphic hardware can handle various other computations. This opens doors for exploring new on-chain AI algorithms beyond standard neural networks (a small illustrative sketch follows below).
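As one hypothetical example of a non-neural-network workload, combinatorial optimisation problems are commonly written as QUBO (quadratic unconstrained binary optimisation) instances before being handed to annealing-style or neuromorphic solvers. The sketch below formulates a tiny max-cut problem as a QUBO and solves it by brute force, purely to show what such a formulation looks like; it does not call any vendor SDK or real hardware, and the example graph is made up.

```python
from itertools import product

# Illustrative only: express a tiny max-cut problem as a QUBO and solve it by
# brute force. Real annealing-style or neuromorphic hardware would minimise the
# same objective, but this sketch runs entirely on the CPU.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small example graph
n = 4

# Max-cut as QUBO: minimise sum over edges of (-x_i - x_j + 2*x_i*x_j),
# which contributes -1 exactly when an edge's endpoints land on different sides.
Q = {}
for i, j in edges:
    Q[(i, i)] = Q.get((i, i), 0) - 1
    Q[(j, j)] = Q.get((j, j), 0) - 1
    Q[(i, j)] = Q.get((i, j), 0) + 2

def energy(x):
    return sum(v * x[i] * x[j] for (i, j), v in Q.items())

best = min(product([0, 1], repeat=n), key=energy)
cut_size = sum(1 for i, j in edges if best[i] != best[j])
print(best, "cuts", cut_size, "edges")
```

On real hardware the same Q matrix would be submitted to the solver instead of being enumerated exhaustively; the brute-force loop here is only to keep the example self-contained.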
Thanks for reading
Kurt