{"id":71585,"date":"2024-11-26T12:31:11","date_gmt":"2024-11-26T10:31:11","guid":{"rendered":"https:\/\/www.iemn.fr\/?p=71585"},"modified":"2024-11-26T16:29:26","modified_gmt":"2024-11-26T14:29:26","slug":"these-nikhil-garg","status":"publish","type":"post","link":"https:\/\/www.iemn.fr\/en\/a-la-une\/these-nikhil-garg.html","title":{"rendered":"Thesis by Nikhil GARG: \"neuromorphic in-memory learning with analog integrated circuits and nanoscale memristive devices\" 5\/12 15:00"},"content":{"rendered":"<div id='layer_slider_1'  class='avia-layerslider main_color avia-shadow  avia-builder-el-0  el_before_av_heading  avia-builder-el-first  container_wrap sidebar_right'  style='height: 261px;'  ><div id=\"layerslider_58_1schm1utdxu33\" data-ls-slug=\"homepageslider\" class=\"ls-wp-container fitvidsignore ls-selectable\" style=\"width:1140px;height:260px;margin:0 auto;margin-bottom: 0px;\"><div class=\"ls-slide\" data-ls=\"duration:6000;transition2d:5;\"><img loading=\"lazy\" decoding=\"async\" width=\"2600\" height=\"270\" src=\"https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1.jpg\" class=\"ls-bg\" alt=\"\" srcset=\"https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1.jpg 2600w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-300x31.jpg 300w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-768x80.jpg 768w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-1030x107.jpg 1030w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-1500x156.jpg 1500w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-705x73.jpg 705w\" sizes=\"auto, (max-width: 2600px) 100vw, 2600px\" \/><ls-layer style=\"font-size:14px;text-align:left;font-style:normal;text-decoration:none;text-transform:none;font-weight:700;letter-spacing:0px;border-style:solid;border-color:#000;background-position:0% 
0%;background-repeat:no-repeat;width:180px;height:30px;left:0px;top:231px;line-height:32px;color:#ffffff;border-radius:6px 6px 6px 6px;padding-left:50px;background-color:rgba(0, 0, 0, 0.57);\" class=\"ls-l ls-ib-icon ls-text-layer\" data-ls=\"minfontsize:0;minmobilefontsize:0;\"><i class=\"fa fa-quote-right\" style=\"color:#ffffff;margin-right:0.8em;font-size:1em;transform:translateY( -0.125em );\"><\/i>ACTUALITES<\/ls-layer><\/div><\/div><\/div><div id='after_layer_slider_1'  class='main_color av_default_container_wrap container_wrap sidebar_right'  ><div class='container av-section-cont-open' ><div class='template-page content  av-content-small alpha units'><div class='post-entry post-entry-type-page post-entry-71585'><div class='entry-content-wrapper clearfix'>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-k5dohoxw-c5c154f8bb91fd8571675e7f849fb184\">\n#top .av-special-heading.av-k5dohoxw-c5c154f8bb91fd8571675e7f849fb184{\nmargin:0 0 10px 0;\npadding-bottom:4px;\n}\nbody .av-special-heading.av-k5dohoxw-c5c154f8bb91fd8571675e7f849fb184 .av-special-heading-tag .heading-char{\nfont-size:25px;\n}\n.av-special-heading.av-k5dohoxw-c5c154f8bb91fd8571675e7f849fb184 .av-subheading{\nfont-size:15px;\n}\n<\/style>\n<div  class='av-special-heading av-k5dohoxw-c5c154f8bb91fd8571675e7f849fb184 av-special-heading-h4  avia-builder-el-1  el_after_av_layerslider  el_before_av_hr  avia-builder-el-first  av-linked-heading'><h4 class='av-special-heading-tag'  itemprop=\"headline\"  >Nikhil GARG's thesis: \"neuromorphic in-memory learning with analog integrated circuits and nanoscale memristive devices\" 5\/12 3pm<\/h4><div class=\"special-heading-border\"><div class=\"special-heading-inner-border\"><\/div><\/div><\/div>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-18u73nj-dad6a947580930e400fc42ba200e80f1\">\n#top 
.hr.av-18u73nj-dad6a947580930e400fc42ba200e80f1{\nmargin-top:5px;\nmargin-bottom:5px;\n}\n.hr.av-18u73nj-dad6a947580930e400fc42ba200e80f1 .hr-inner{\nwidth:100%;\n}\n<\/style>\n<div  class='hr av-18u73nj-dad6a947580930e400fc42ba200e80f1 hr-custom  avia-builder-el-2  el_after_av_heading  el_before_av_textblock  hr-left hr-icon-no'><span class='hr-inner inner-border-av-border-thin'><span class=\"hr-inner-style\"><\/span><\/span><\/div>\n<section  class='av_textblock_section av-jriy64i8-fd5f2e9d63bf552d6910d12f255eb26e'   itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/BlogPosting\" itemprop=\"blogPost\" ><div class='avia_textblock'  itemprop=\"text\" >\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-13ewzjw-68e036126b913e5028f77311dc66b825\">\n.av_font_icon.av-13ewzjw-68e036126b913e5028f77311dc66b825{\ncolor:#bfbfbf;\nborder-color:#bfbfbf;\n}\n.av_font_icon.av-13ewzjw-68e036126b913e5028f77311dc66b825 .av-icon-char{\nfont-size:60px;\nline-height:60px;\n}\n<\/style>\n<span  class='av_font_icon av-13ewzjw-68e036126b913e5028f77311dc66b825 avia_animate_when_visible av-icon-style- avia-icon-pos-left avia-icon-animate'><span class='av-icon-char' aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello' ><\/span><\/span>\n<p><strong>Thesis of Nikhil GARG<br \/>\n<\/strong><\/p>\n<p>Defence: 5 December, 15:00<strong><br \/>\n<\/strong>Full video<\/p>\n<\/div><\/section>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-jtefqx33-6db969b2e204313ebd62331ed4fc69ec\">\n#top .av_textblock_section.av-jtefqx33-6db969b2e204313ebd62331ed4fc69ec .avia_textblock{\nfont-size:15px;\n}\n<\/style>\n<section  class='av_textblock_section av-jtefqx33-6db969b2e204313ebd62331ed4fc69ec'   itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/BlogPosting\" itemprop=\"blogPost\" ><div class='avia_textblock'  itemprop=\"text\" ><div  class='hr av-kjh3zw-4dff888f744b728a1aca9b3a0971493a hr-default  
avia-builder-el-6  avia-builder-el-no-sibling'><span class='hr-inner'><span class=\"hr-inner-style\"><\/span><\/span><\/div>\n<h5><strong><span style=\"color: #800000;\">Jury<\/span><\/strong><\/h5>\n<table>\n<tbody>\n<tr>\n<td>Fabien ALIBART<\/td>\n<td>Research Director<\/td>\n<td>Nanotechnologies &amp; Nanosystems Laboratory (LN2), CNRS, Universit\u00e9 de Sherbrooke<\/td>\n<td>Thesis Director<\/td>\n<\/tr>\n<tr>\n<td>Sylvain SAIGHI<\/td>\n<td>Full professor<\/td>\n<td>IMS Bordeaux<\/td>\n<td>Rapporteur<\/td>\n<\/tr>\n<tr>\n<td>Elisa VIANELLO<\/td>\n<td>Senior scientist<\/td>\n<td>CEA-Leti<\/td>\n<td>Rapporteur<\/td>\n<\/tr>\n<tr>\n<td>Dominique DROUIN<\/td>\n<td>Professor<\/td>\n<td>Universit\u00e9 de Sherbrooke<\/td>\n<td>Thesis Co-Director<\/td>\n<\/tr>\n<tr>\n<td>Laura BEGON-LOURS<\/td>\n<td>Assistant professor<\/td>\n<td>ETH Zurich<\/td>\n<td>Examiner<\/td>\n<\/tr>\n<tr>\n<td>Jean-Michel PORTAL<\/td>\n<td>Professor<\/td>\n<td>Aix-Marseille University<\/td>\n<td>Examiner<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h5>Summary:<\/h5>\n<p>Integrating artificial intelligence (AI) into edge computing (EC) and wearable devices presents significant challenges due to strict constraints on computing power and energy consumption. Neuromorphic computing, inspired by the brain's energy-efficient design and continuous learning capabilities, offers a promising solution for these applications. 
This thesis proposes a flexible algorithm-circuit co-design framework that addresses both algorithm development and hardware design, facilitating the efficient deployment of AI on specialised ultra-low power hardware.<\/p>\n<p>The first part focuses on algorithm development and introduces voltage-dependent synaptic plasticity (VDSP), an unsupervised learning rule inspired by the brain. VDSP aims to implement Hebb's plasticity mechanism online using nanoscale memristive synapses. These devices mimic biological synapses by adjusting their resistance according to past electrical activity, enabling efficient online learning. VDSP updates synaptic conductance according to the neuron's membrane potential, eliminating the need for additional memory to store spike timings. This approach allows online learning without the complex pulse-shaping circuitry usually required for spike-timing-dependent plasticity (STDP) with memristors. We show how VDSP can be advantageously adapted to three types of memristive devices (metal-oxide filamentary synapses and ferroelectric tunnel junctions) with distinctive analogue switching characteristics. System-level simulations of spiking neural networks using these devices validated their performance on pattern recognition tasks on MNIST, achieving up to 90% accuracy with improved scalability and reduced hyperparameter tuning compared to STDP. In addition, we evaluated the variability of the devices and proposed mitigation strategies to improve robustness.<\/p>\n<p>In the second part, we implement a leaky integrate-and-fire (LIF) analogue neuron, together with a voltage regulator and current attenuator, to smoothly interface CMOS neurons with memristive synapses. The neuron design includes dual leakage, facilitating local learning via VDSP. We also propose a configurable adaptation mechanism that allows adaptive LIF neurons to be reconfigured in real time. 
These versatile circuits can interface with a range of synaptic devices, enabling the processing of signals with a variety of temporal dynamics. By integrating these neurons into a network, we present a self-learning CMOS-memristor neural building block (NBB), composed of analogue circuits for crossbar reading and LIF neurons, as well as digital circuits for switching between inference and learning modes. Compact neural networks that can self-adapt, learn in real time and process environmental data, when implemented on ultra-low power hardware, open up new prospects for AI in edge computing. Advances in both hardware (circuits) and algorithms (online learning) will significantly accelerate the deployment of AI applications by exploiting analogue computing and nanoscale memory technologies.<\/p>\n<h5>Abstract:<\/h5>\n<p>Integrating artificial intelligence (AI) into edge computing (EC) and portable devices presents significant challenges due to stringent constraints on computational power and energy consumption. Neuromorphic computing, inspired by the brain's energy-efficient design and continuous learning capabilities, offers a promising solution for these applications. This thesis proposes a flexible algorithm-circuit co-design framework that addresses both algorithm development and hardware design, facilitating the efficient deployment of AI on specialized, ultra-low-power hardware.<\/p>\n<p>The first part focuses on algorithm development and introduces voltage-dependent synaptic plasticity (VDSP), a brain-inspired unsupervised learning rule. VDSP is aimed at the online implementation of Hebb's plasticity mechanism using nanoscale memristive synapses. These devices mimic biological synapses by adjusting their resistance based on past electrical activity, enabling efficient online learning. 
VDSP updates synaptic conductance based on the membrane potential of the neuron, eliminating the need for additional memory to store spike timings. This approach allows for online learning without the complex pulse-shaping circuits typically required for spike-timing-dependent plasticity (STDP) with memristors. We show how VDSP can be advantageously adapted to three types of memristive devices (metal-oxide filamentary synapses and ferroelectric tunnel junctions) with distinctive analog switching characteristics. System-level simulations of spiking neural networks using these devices validated their performance on MNIST pattern recognition tasks, achieving up to 90% accuracy with improved adaptability and reduced hyperparameter tuning compared to STDP. Additionally, we evaluated device variability and proposed mitigation strategies to enhance robustness.<\/p>\n<p>In the second part, we implement an analog leaky integrate-and-fire (LIF) neuron, accompanied by a voltage regulator and current attenuator, to seamlessly interface CMOS neurons with memristive synapses. The neuron design features dual leakage, facilitating local learning through VDSP. We also propose a configurable adaptation mechanism that allows adaptive LIF neurons to be reconfigured at run time. These versatile circuits can interface with a range of synaptic devices, allowing the processing of signals with a variety of temporal dynamics. Integrating these neurons into a network, we present a CMOS-memristor self-learning neural building block (NBB), consisting of analog circuits for crossbar reading and LIF neurons, along with digital circuits for switching between inference and learning modes. Compact neural networks that can self-adapt, learn in real time, and process environmental data, when realized on ultra-low-power hardware, open new possibilities for AI in edge computing. 
Advances in both hardware (circuits) and algorithms (online learning) will greatly accelerate the deployment of AI applications by leveraging analog computing and nanoscale memory technologies.<\/p>\n<\/div><\/section>","protected":false},"excerpt":{"rendered":"","protected":false},"author":20,"featured_media":71447,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,8,319,65,87,84],"tags":[],"class_list":["post-71585","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-a-la-une","category-actualites","category-actualites2022","category-agenda","category-agenda-en","category-agenda-en-en"],"_links":{"self":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts\/71585","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/users\/20"}],"replies":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/comments?post=71585"}],"version-history":[{"count":0,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts\/71585\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/media\/71447"}],"wp:attachment":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/media?parent=71585"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/categories?post=71585"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/tags?post=71585"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}