Thesis defence of Nikhil GARG
Defence: 5 December 15:00
Jury
Fabien ALIBART | Research Director | Nanotechnologies & Nanosystems Laboratory (LN2), CNRS, Université de Sherbrooke | Thesis Director
Sylvain SAIGHI | Full Professor | IMS Bordeaux | Rapporteur
Elisa VIANELLO | Senior Scientist | CEA-Leti | Rapporteur
Dominique DROUIN | Professor | Université de Sherbrooke | Thesis Co-Director
Laura BEGON-LOURS | Assistant Professor | ETH Zurich | Examiner
Jean-Michel PORTAL | Professor | Aix-Marseille University | Examiner
Abstract:
Integrating artificial intelligence (AI) into edge computing (EC) and portable devices presents significant challenges due to stringent constraints on computational power and energy consumption. Neuromorphic computing, inspired by the brain's energy-efficient design and continuous learning capabilities, offers a promising solution for these applications. This thesis proposes a flexible algorithm-circuit co-design framework that addresses both algorithm development and hardware design, facilitating the efficient deployment of AI on specialized, ultra-low-power hardware.
The first part focuses on algorithm development and introduces voltage-dependent synaptic plasticity (VDSP), a brain-inspired unsupervised learning rule. VDSP implements Hebbian plasticity online using nanoscale memristive synapses. These devices mimic biological synapses by adjusting their resistance based on past electrical activity, enabling efficient online learning. VDSP updates synaptic conductance based on the membrane potential of the postsynaptic neuron, eliminating the need for additional memory to store spike timings. This approach enables online learning without the complex pulse-shaping circuits typically required to implement spike-timing-dependent plasticity (STDP) with memristors. We show how VDSP can be advantageously adapted to three types of memristive devices, including metal-oxide filamentary synapses and ferroelectric tunnel junctions, each with distinctive analog switching characteristics. System-level simulations of spiking neural networks using these devices validated their performance on MNIST pattern-recognition tasks, achieving up to 90% accuracy with improved adaptability and reduced hyperparameter tuning compared to STDP. Additionally, we evaluated device variability and proposed mitigation strategies to enhance robustness.
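To make the idea concrete, the following is a minimal Python sketch of a VDSP-style update. It is illustrative only, not the exact rule derived in the thesis: the function name, learning rate, and the soft-bound factors emulating memristive switching are assumptions. What it demonstrates is the key property stated above: the update is triggered by a presynaptic spike and depends only on the instantaneous postsynaptic membrane potential, so no spike timings need to be stored.

```python
import numpy as np

# Minimal sketch of a VDSP-style update (illustrative; constants are assumed).
# On each presynaptic spike, the synapse is potentiated or depressed based on
# the postsynaptic membrane potential alone -- no spike-time memory is needed.

V_REST = 0.0             # resting membrane potential (arbitrary units)
G_MIN, G_MAX = 0.1, 1.0  # conductance range of the memristive device

def vdsp_update(g, v_post, lr=0.01):
    """Update synaptic conductance g when the presynaptic neuron spikes.

    If the postsynaptic neuron is depolarized (v_post > V_REST), the input
    likely contributed to an upcoming output spike -> potentiate (LTP).
    If it is hyperpolarized (e.g., just after its own spike), the input
    arrived too late -> depress (LTD). The (G_MAX - g) and (g - G_MIN)
    factors emulate the soft-bounded, state-dependent switching of
    analog memristive devices.
    """
    if v_post > V_REST:
        g = g + lr * (v_post - V_REST) * (G_MAX - g)   # LTP
    else:
        g = g - lr * (V_REST - v_post) * (g - G_MIN)   # LTD
    return float(np.clip(g, G_MIN, G_MAX))

# Example: a presynaptic spike arriving while the neuron is depolarized
g = vdsp_update(0.5, v_post=0.4)   # -> potentiation
```

The soft-bound factors are one way to fold a device's nonlinear conductance response into the rule; in practice the update would be mapped onto the programming characteristics of each specific device family.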
In the second part, we implement an analog leaky integrate-and-fire (LIF) neuron, accompanied by a voltage regulator and a current attenuator, to seamlessly interface CMOS neurons with memristive synapses. The neuron design features dual leakage, facilitating local learning through VDSP. We also propose a configurable adaptation mechanism that allows adaptive LIF neurons to be reconfigured at run time. These versatile circuits can interface with a range of synaptic devices, allowing the processing of signals with a variety of temporal dynamics. Integrating these neurons into a network, we present a self-learning CMOS-memristor neural building block (NBB), consisting of analog circuits for crossbar readout and LIF neurons, along with digital circuits for switching between inference and learning modes. Compact neural networks that can self-adapt, learn in real time, and process environmental data, when realized on ultra-low-power hardware, open new possibilities for AI in edge computing. Advances in both hardware (circuits) and algorithms (online learning) will greatly accelerate the deployment of AI applications by leveraging analog computing and nanoscale memory technologies.
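A behavioural Python sketch of such a neuron, as it might appear in a system-level simulation, is given below. This is an assumption-laden toy model, not the CMOS circuit itself: the class name, time constants, and reset value are illustrative, and "dual leakage" is modelled as one plausible reading (separate decay rates above and below rest). It shows the two features described above: a membrane that leaks back toward rest from both directions, with a below-rest reset that VDSP can read to trigger depression, and a configurable spike-frequency adaptation realized as a decaying threshold offset.

```python
# Behavioural sketch of an adaptive LIF neuron with dual leakage (illustrative
# only; the thesis realizes this as an analog CMOS circuit, not software).

class AdaptiveLIF:
    def __init__(self, v_rest=0.0, v_th=1.0, v_reset=-0.5,
                 tau_mem=20.0, tau_recover=40.0,
                 tau_adapt=100.0, adapt_step=0.2, dt=1.0):
        self.v_rest, self.v_reset = v_rest, v_reset
        self.v_th0, self.th = v_th, v_th
        self.tau_mem, self.tau_recover = tau_mem, tau_recover
        self.tau_adapt, self.adapt_step, self.dt = tau_adapt, adapt_step, dt
        self.v = v_rest

    def step(self, i_in):
        """Advance one time step with input current i_in; return True on a spike."""
        # 'Dual leakage': one decay path pulls a depolarized membrane down to
        # rest, a second recovers a hyperpolarized (post-spike) membrane up.
        tau = self.tau_mem if self.v >= self.v_rest else self.tau_recover
        self.v += self.dt * ((self.v_rest - self.v) / tau + i_in)
        # The adaptive threshold relaxes back toward its baseline.
        self.th += self.dt * (self.v_th0 - self.th) / self.tau_adapt
        if self.v >= self.th:
            self.v = self.v_reset        # hyperpolarize below rest (read by VDSP)
            self.th += self.adapt_step   # spike-frequency adaptation (0 disables it)
            return True
        return False

# Example: constant input; adaptation progressively slows the firing rate
neuron = AdaptiveLIF()
spike_times = [t for t in range(200) if neuron.step(i_in=0.08)]
```

Setting `adapt_step` to zero recovers a plain LIF neuron, which mirrors the run-time reconfigurability of the adaptation mechanism described above.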