{"id":56902,"date":"2023-03-23T11:55:14","date_gmt":"2023-03-23T09:55:14","guid":{"rendered":"https:\/\/www.iemn.fr\/?p=56902"},"modified":"2023-03-23T11:55:14","modified_gmt":"2023-03-23T09:55:14","slug":"these-k-herisse-multiplication-de-vecteur-matrice-a-signaux-mixtes-dans-la-memoire-a-tres-faible-consommation-pour-lapprentissage-machine-embarque","status":"publish","type":"post","link":"https:\/\/www.iemn.fr\/en\/theses-2022\/these-k-herisse-multiplication-de-vecteur-matrice-a-signaux-mixtes-dans-la-memoire-a-tres-faible-consommation-pour-lapprentissage-machine-embarque.html","title":{"rendered":"THESE K. HERISSE \u00ab\u00a0Multiplication de vecteur-matrice \u00e0 signaux mixtes dans la m\u00e9moire \u00e0 tr\u00e8s faible consommation pour l\u2019apprentissage machine embarqu\u00e9 \u00ab\u00a0"},"content":{"rendered":"<div id='layer_slider_1'  class='avia-layerslider main_color avia-shadow  avia-builder-el-0  el_before_av_heading  avia-builder-el-first  container_wrap sidebar_right'  style='height: 261px;'  ><div id=\"layerslider_58_sx6bvaablzay\" data-ls-slug=\"homepageslider\" class=\"ls-wp-container fitvidsignore ls-selectable\" style=\"width:1140px;height:260px;margin:0 auto;margin-bottom: 0px;\"><div class=\"ls-slide\" data-ls=\"duration:6000;transition2d:5;\"><img loading=\"lazy\" decoding=\"async\" width=\"2600\" height=\"270\" src=\"https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1.jpg\" class=\"ls-bg\" alt=\"\" srcset=\"https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1.jpg 2600w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-300x31.jpg 300w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-768x80.jpg 768w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-1030x107.jpg 1030w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-1500x156.jpg 1500w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-705x73.jpg 705w\" 
sizes=\"auto, (max-width: 2600px) 100vw, 2600px\" \/><ls-layer style=\"font-size:14px;text-align:left;font-style:normal;text-decoration:none;text-transform:none;font-weight:700;letter-spacing:0px;border-style:solid;border-color:#000;background-position:0% 0%;background-repeat:no-repeat;width:180px;height:30px;left:0px;top:231px;line-height:32px;color:#ffffff;border-radius:6px 6px 6px 6px;padding-left:50px;background-color:rgba(0, 0, 0, 0.57);\" class=\"ls-l ls-ib-icon ls-text-layer\" data-ls=\"minfontsize:0;minmobilefontsize:0;\"><i class=\"fa fa-quote-right\" style=\"color:#ffffff;margin-right:0.8em;font-size:1em;transform:translateY( -0.125em );\"><\/i>ACTUALITES<\/ls-layer><\/div><\/div><\/div><div id='after_layer_slider_1'  class='main_color av_default_container_wrap container_wrap sidebar_right'  ><div class='container av-section-cont-open' ><div class='template-page content  av-content-small alpha units'><div class='post-entry post-entry-type-page post-entry-56902'><div class='entry-content-wrapper clearfix'>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-lfkxnunf-17c3367d8d4633258573afccdf8455bc\">\n#top .av-special-heading.av-lfkxnunf-17c3367d8d4633258573afccdf8455bc{\nmargin:0 0 10px 0;\npadding-bottom:4px;\n}\nbody .av-special-heading.av-lfkxnunf-17c3367d8d4633258573afccdf8455bc .av-special-heading-tag .heading-char{\nfont-size:25px;\n}\n.av-special-heading.av-lfkxnunf-17c3367d8d4633258573afccdf8455bc .av-subheading{\nfont-size:15px;\n}\n<\/style>\n<div  class='av-special-heading av-lfkxnunf-17c3367d8d4633258573afccdf8455bc av-special-heading-h2  avia-builder-el-1  el_after_av_layerslider  el_before_av_hr  avia-builder-el-first'><h2 class='av-special-heading-tag'  itemprop=\"headline\"  >THESE K. 
HERISSE \u00ab\u00a0Multiplication de vecteur-matrice \u00e0 signaux mixtes dans la m\u00e9moire \u00e0 tr\u00e8s faible consommation pour l\u2019apprentissage machine embarqu\u00e9 \u00ab\u00a0<\/h2><div class=\"special-heading-border\"><div class=\"special-heading-inner-border\"><\/div><\/div><\/div>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-18u73nj-dad6a947580930e400fc42ba200e80f1\">\n#top .hr.av-18u73nj-dad6a947580930e400fc42ba200e80f1{\nmargin-top:5px;\nmargin-bottom:5px;\n}\n.hr.av-18u73nj-dad6a947580930e400fc42ba200e80f1 .hr-inner{\nwidth:100%;\n}\n<\/style>\n<div  class='hr av-18u73nj-dad6a947580930e400fc42ba200e80f1 hr-custom  avia-builder-el-2  el_after_av_heading  el_before_av_textblock  hr-left hr-icon-no'><span class='hr-inner inner-border-av-border-thin'><span class=\"hr-inner-style\"><\/span><\/span><\/div>\n<section  class='av_textblock_section av-jriy64i8-2f4600354c0449b610997916bbd9b6bc'   itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/BlogPosting\" itemprop=\"blogPost\" ><div class='avia_textblock'  itemprop=\"text\" >\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-13ewzjw-68e036126b913e5028f77311dc66b825\">\n.av_font_icon.av-13ewzjw-68e036126b913e5028f77311dc66b825{\ncolor:#bfbfbf;\nborder-color:#bfbfbf;\n}\n.av_font_icon.av-13ewzjw-68e036126b913e5028f77311dc66b825 .av-icon-char{\nfont-size:60px;\nline-height:60px;\n}\n<\/style>\n<span  class='av_font_icon av-13ewzjw-68e036126b913e5028f77311dc66b825 avia_animate_when_visible av-icon-style- avia-icon-pos-left avia-icon-animate'><span class='av-icon-char' aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello' ><\/span><\/span>\n<p><strong>Kevin HERISSE<br \/>\n<\/strong><\/p>\n<p>Soutenance : 16 D\u00e9cembre 2022<strong><br \/>\n<\/strong>Th\u00e8se de doctorat en Electronique, micro\u00e9lectronique, nano\u00e9lectronique et micro-ondes, Universit\u00e9 de Lille, ENGSYS Sciences de 
l\u2019ing\u00e9nierie et des syst\u00e8mes,<\/p>\n<\/div><\/section>\n<section  class='av_textblock_section av-jtefqx33-628129dba2299b2ecd65ebfc92eac29d'   itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/BlogPosting\" itemprop=\"blogPost\" ><div class='avia_textblock'  itemprop=\"text\" ><div  class='hr av-kjh3zw-4dff888f744b728a1aca9b3a0971493a hr-default  avia-builder-el-6  avia-builder-el-no-sibling'><span class='hr-inner'><span class=\"hr-inner-style\"><\/span><\/span><\/div>\n<h5><strong><span style=\"color: #800000;\">Jury :<\/span><\/strong><\/h5>\n<h5>R\u00e9sum\u00e9 :<\/h5>\n<p>Les applications de l\u2019intelligence artificielle embarqu\u00e9e sont nombreuses et couvrent de multiples domaines, tels que l\u2019\u00e9lectronique grand public, la domotique, la sant\u00e9 et l\u2019industrie. Elles n\u00e9cessitent des puces d\u00e9di\u00e9es apportant l\u2019intelligence \u00e0 proximit\u00e9 du capteur tout en maintenant une faible consommation d\u2019\u00e9nergie. Bien qu\u2019il existe de nombreux types de r\u00e9seaux neuronaux (Neural Networks \u2013 NN), ils reposent tous sur les m\u00eames calculs de base, \u00e0 savoir des multiplications matrice-vecteur (MMV) compos\u00e9es d\u2019op\u00e9rations de multiplication et d\u2019accumulation (MAC). L\u2019optimisation de l\u2019efficacit\u00e9 \u00e9nerg\u00e9tique des op\u00e9rations MAC est un excellent levier pour r\u00e9duire la consommation \u00e9nerg\u00e9tique globale. Dans une architecture Von Neumann classique, la limitation li\u00e9e \u00e0 l\u2019acc\u00e8s aux donn\u00e9es plafonne l\u2019efficacit\u00e9 \u00e0 10 TOPS\/W en consid\u00e9rant une consommation d\u2019\u00e9nergie de 50 fJ\/byte pour le d\u00e9placement des donn\u00e9es. 
Le traitement en m\u00e9moire (In-Memory Computing \u2013 IMC) permet de r\u00e9duire la surcharge \u00e9nerg\u00e9tique li\u00e9e \u00e0 l\u2019acc\u00e8s aux donn\u00e9es en les traitant \u00e0 proximit\u00e9 de l\u2019endroit o\u00f9 elles sont stock\u00e9es. Cette th\u00e8se analyse l\u2019\u00e9tat de l\u2019art des architectures NN et les travaux pour la d\u00e9tection d\u2019activit\u00e9 vocale (Voice Activity Detection \u2013 VAD) et le rep\u00e9rage de mots-cl\u00e9s (Keyword Spotting \u2013 KWS), pour montrer que la consommation d\u2019\u00e9nergie et la pr\u00e9cision sont des param\u00e8tres plus importants que le d\u00e9bit pour les applications embarqu\u00e9es. En outre, l\u2019analyse de l\u2019\u00e9tat de l\u2019art de l\u2019IMC montre que le temps disponible pour effectuer les op\u00e9rations du NN peut \u00eatre avantageusement exploit\u00e9. Ce travail pr\u00e9sente un concept d\u2019IMC analogique bas\u00e9 sur le temps et le courant, o\u00f9 des sources de courant chargent\/d\u00e9chargent une ligne capacitive pendant un temps pond\u00e9r\u00e9 par le produit de deux nombres, r\u00e9alisant ainsi des op\u00e9rations MAC multi-bits dans le domaine temporel. Une mise en \u0153uvre de l\u2019architecture propos\u00e9e dans une technologie FDSOI de 28 nm est pr\u00e9sent\u00e9e. Le prototype de circuit int\u00e9gr\u00e9 int\u00e8gre 4 neurones avec 100 entr\u00e9es chacun, et des entr\u00e9es et poids de 5 bits. La structure ex\u00e9cute la MMV multi-bits en utilisant la m\u00e9thode IMC analogique propos\u00e9e, bas\u00e9e sur le temps et le courant, avec une latence maximale de 4,5 \u00b5s, parfaitement adapt\u00e9e \u00e0 la plupart des applications embarqu\u00e9es. 
L\u2019efficacit\u00e9 \u00e9nerg\u00e9tique mesur\u00e9e permet d\u2019envisager une efficacit\u00e9 sup\u00e9rieure \u00e0 50 TOPS\/W si le circuit est d\u00e9ploy\u00e9 sur un r\u00e9seau de 100 neurones.<\/p>\n<h5>Abstract:<\/h5>\n<p>The applications of embedded artificial intelligence are numerous and cover many fields, such as consumer electronics, home automation, health and industry. They require dedicated chips that bring intelligence close to the sensor while maintaining low power consumption. Although there are many types of Neural Networks (NN), they all rely on the same basic computations, namely matrix-vector multiplications (MMV) composed of multiply-accumulate (MAC) operations. Optimizing the energy efficiency of MAC operations is an excellent lever for reducing overall energy consumption. In a classical Von Neumann architecture, the data-access limitation caps the efficiency at 10 TOPS\/W, assuming an energy cost of 50 fJ\/byte for data movement. In-Memory Computing (IMC) reduces the energy overhead of data access by processing data close to where it is stored. This thesis analyzes the state of the art of NN architectures and of work on Voice Activity Detection (VAD) and Keyword Spotting (KWS), showing that energy consumption and accuracy are more important parameters than throughput for embedded applications. Furthermore, the analysis of the state of the art of IMC shows that the time available to perform NN operations can be advantageously exploited. This work presents a time- and current-based analog IMC design, where current sources charge\/discharge a capacitive line for a time weighted by the product of two numbers, thus performing multi-bit MAC operations in the time domain. An implementation of the proposed architecture in a 28 nm FDSOI technology is presented. The prototype IC integrates 4 neurons with 100 inputs each, with 5-bit inputs and weights. 
The structure executes the multi-bit MMV using the proposed time- and current-based analog IMC method, with a maximum latency of 4.5 \u00b5s, well suited to most embedded applications. The measured energy efficiency suggests that an efficiency above 50 TOPS\/W could be reached if the circuit were deployed on a network of 100 neurons.<\/p>\n<\/div><\/section>","protected":false},"excerpt":{"rendered":"","protected":false},"author":20,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[316],"tags":[],"class_list":["post-56902","post","type-post","status-publish","format-standard","hentry","category-theses-2022"],"_links":{"self":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts\/56902","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/users\/20"}],"replies":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/comments?post=56902"}],"version-history":[{"count":0,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts\/56902\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/media?parent=56902"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/categories?post=56902"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/tags?post=56902"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}