{"id":55510,"date":"2022-12-01T12:04:46","date_gmt":"2022-12-01T10:04:46","guid":{"rendered":"https:\/\/www.iemn.fr\/?p=55510"},"modified":"2022-12-01T12:04:46","modified_gmt":"2022-12-01T10:04:46","slug":"these-a-mimouna-exploration-de-la-fusion-de-donnees-pour-la-detection-dobjets-multiples-dans-les-systemes-de-transport-intelligents-a-laide-de-lapprentissage-profond","status":"publish","type":"post","link":"https:\/\/www.iemn.fr\/en\/these\/these-2021\/these-a-mimouna-exploration-de-la-fusion-de-donnees-pour-la-detection-dobjets-multiples-dans-les-systemes-de-transport-intelligents-a-laide-de-lapprentissage-profond.html","title":{"rendered":"THESE : A. MIMOUNA \u2013 Exploration de la fusion de donn\u00e9es pour la d\u00e9tection d\u2019objets multiples dans les syst\u00e8mes de transport intelligents \u00e0 l\u2019aide de l\u2019apprentissage profond"},"content":{"rendered":"<div id='layer_slider_1'  class='avia-layerslider main_color avia-shadow  avia-builder-el-0  el_before_av_heading  avia-builder-el-first  container_wrap sidebar_right'  style='height: 261px;'  ><div id=\"layerslider_58_jczd9uibczrl\" data-ls-slug=\"homepageslider\" class=\"ls-wp-container fitvidsignore ls-selectable\" style=\"width:1140px;height:260px;margin:0 auto;margin-bottom: 0px;\"><div class=\"ls-slide\" data-ls=\"duration:6000;transition2d:5;\"><img loading=\"lazy\" decoding=\"async\" width=\"2600\" height=\"270\" src=\"https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1.jpg\" class=\"ls-bg\" alt=\"\" srcset=\"https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1.jpg 2600w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-300x31.jpg 300w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-768x80.jpg 768w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-1030x107.jpg 1030w, https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-1500x156.jpg 1500w, 
https:\/\/www.iemn.fr\/wp-content\/uploads\/2019\/01\/sliders_news1-705x73.jpg 705w\" sizes=\"auto, (max-width: 2600px) 100vw, 2600px\" \/><ls-layer style=\"font-size:14px;text-align:left;font-style:normal;text-decoration:none;text-transform:none;font-weight:700;letter-spacing:0px;border-style:solid;border-color:#000;background-position:0% 0%;background-repeat:no-repeat;width:180px;height:30px;left:0px;top:231px;line-height:32px;color:#ffffff;border-radius:6px 6px 6px 6px;padding-left:50px;background-color:rgba(0, 0, 0, 0.57);\" class=\"ls-l ls-ib-icon ls-text-layer\" data-ls=\"minfontsize:0;minmobilefontsize:0;\"><i class=\"fa fa-quote-right\" style=\"color:#ffffff;margin-right:0.8em;font-size:1em;transform:translateY( -0.125em );\"><\/i>ACTUALITES<\/ls-layer><\/div><\/div><\/div><div id='after_layer_slider_1'  class='main_color av_default_container_wrap container_wrap sidebar_right'  ><div class='container av-section-cont-open' ><div class='template-page content  av-content-small alpha units'><div class='post-entry post-entry-type-page post-entry-55510'><div class='entry-content-wrapper clearfix'>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-av_heading-d96e84b80d43c3f81150224645a8ed05\">\n#top .av-special-heading.av-av_heading-d96e84b80d43c3f81150224645a8ed05{\nmargin:0 0 10px 0;\npadding-bottom:4px;\n}\nbody .av-special-heading.av-av_heading-d96e84b80d43c3f81150224645a8ed05 .av-special-heading-tag .heading-char{\nfont-size:25px;\n}\n.av-special-heading.av-av_heading-d96e84b80d43c3f81150224645a8ed05 .av-subheading{\nfont-size:15px;\n}\n<\/style>\n<div  class='av-special-heading av-av_heading-d96e84b80d43c3f81150224645a8ed05 av-special-heading-h2  avia-builder-el-1  el_after_av_layerslider  el_before_av_hr  avia-builder-el-first'><h2 class='av-special-heading-tag'  itemprop=\"headline\"  >THESE : A. 
MIMOUNA \u2013 Exploration de la fusion de donn\u00e9es pour la d\u00e9tection d\u2019objets multiples dans les syst\u00e8mes de transport intelligents \u00e0 l\u2019aide de l\u2019apprentissage profond <\/h2><div class=\"special-heading-border\"><div class=\"special-heading-inner-border\"><\/div><\/div><\/div>\n\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-18u73nj-dad6a947580930e400fc42ba200e80f1\">\n#top .hr.av-18u73nj-dad6a947580930e400fc42ba200e80f1{\nmargin-top:5px;\nmargin-bottom:5px;\n}\n.hr.av-18u73nj-dad6a947580930e400fc42ba200e80f1 .hr-inner{\nwidth:100%;\n}\n<\/style>\n<div  class='hr av-18u73nj-dad6a947580930e400fc42ba200e80f1 hr-custom  avia-builder-el-2  el_after_av_heading  el_before_av_textblock  hr-left hr-icon-no'><span class='hr-inner inner-border-av-border-thin'><span class=\"hr-inner-style\"><\/span><\/span><\/div>\n<section  class='av_textblock_section av-jriy64i8-2f4600354c0449b610997916bbd9b6bc'   itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/BlogPosting\" itemprop=\"blogPost\" ><div class='avia_textblock'  itemprop=\"text\" >\n<style type=\"text\/css\" data-created_by=\"avia_inline_auto\" id=\"style-css-av-13ewzjw-68e036126b913e5028f77311dc66b825\">\n.av_font_icon.av-13ewzjw-68e036126b913e5028f77311dc66b825{\ncolor:#bfbfbf;\nborder-color:#bfbfbf;\n}\n.av_font_icon.av-13ewzjw-68e036126b913e5028f77311dc66b825 .av-icon-char{\nfont-size:60px;\nline-height:60px;\n}\n<\/style>\n<span  class='av_font_icon av-13ewzjw-68e036126b913e5028f77311dc66b825 avia_animate_when_visible av-icon-style- avia-icon-pos-left avia-icon-animate'><span class='av-icon-char' aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello' ><\/span><\/span>\n<p><strong>A. 
MIMOUNA<\/strong><\/p>\n<p>Defense: <strong>25 May 2021<br \/>\n<\/strong>PhD thesis in Electronics, Acoustics and Telecommunications, Universit\u00e9 Polytechnique Hauts-de-France<\/p>\n<\/div><\/section>\n<section  class='av_textblock_section av-jtefqx33-628129dba2299b2ecd65ebfc92eac29d'   itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/BlogPosting\" itemprop=\"blogPost\" ><div class='avia_textblock'  itemprop=\"text\" ><div  class='hr av-kjh3zw-4dff888f744b728a1aca9b3a0971493a hr-default  avia-builder-el-6  avia-builder-el-no-sibling'><span class='hr-inner'><span class=\"hr-inner-style\"><\/span><\/span><\/div>\n<h5>Summary:<\/h5>\n<p>Une perception fiable de l\u2019environnement est une t\u00e2che cruciale pour la conduite autonome, en particulier dans les zones de trafic dense. La recherche dans ce domaine \u00e9volue rapidement. Cependant, nous sommes au d\u00e9but d\u2019une voie de recherche vers une future g\u00e9n\u00e9ration de syst\u00e8mes de transport intelligents. En effet, les principales pr\u00e9occupations lors du d\u00e9veloppement de tels syst\u00e8mes sont les conditions de conduite, la surveillance des infrastructures et la r\u00e9ponse pr\u00e9cise du syst\u00e8me en temps r\u00e9el. Les r\u00e9centes am\u00e9liorations et perc\u00e9es dans la compr\u00e9hension de l\u2019environnement pour les syst\u00e8mes de transport intelligents reposent principalement sur l\u2019apprentissage profond et la fusion de diff\u00e9rentes modalit\u00e9s. Dans ce contexte, nous introduisons tout d\u2019abord OLIMP : A heterOgeneous MuLtimodal Dataset for Advanced EnvIronMent Perception. C\u2019est la premi\u00e8re base de donn\u00e9es publique, multimodale et synchronis\u00e9e qui comprend des donn\u00e9es radar ultra large bande (ULB), des donn\u00e9es acoustiques, des donn\u00e9es radar \u00e0 bande \u00e9troite et des images.
OLIMP comprend 407 sc\u00e8nes et 47 354 donn\u00e9es synchronis\u00e9es, r\u00e9parties en quatre cat\u00e9gories : pi\u00e9tons, cyclistes, voitures et tramways. L\u2019ensemble de donn\u00e9es pr\u00e9sente divers d\u00e9fis li\u00e9s au trafic urbain dense, tels que des environnements encombr\u00e9s et des conditions m\u00e9t\u00e9orologiques vari\u00e9es. Pour d\u00e9montrer l\u2019utilit\u00e9 de la base introduite, nous proposons ensuite un framework de fusion qui combine les quatre modalit\u00e9s pour la d\u00e9tection multi-objets. Les r\u00e9sultats obtenus sont prometteurs et incitent \u00e0 de futures recherches. Dans les applications \u00e0 courte port\u00e9e, les radars ULB repr\u00e9sentent une technologie prometteuse pour la construction de syst\u00e8mes de d\u00e9tection d\u2019obstacles fiables, car ils sont robustes aux conditions environnementales. Cependant, ces radars souffrent d\u2019un d\u00e9fi de segmentation : localiser les r\u00e9gions d\u2019int\u00e9r\u00eat (ROIs) pertinentes dans leurs signaux. Par cons\u00e9quent, nous mettons en avant une approche de segmentation pour d\u00e9tecter les ROIs dans un contexte d\u00e9di\u00e9 \u00e0 la perception de l\u2019environnement ; c\u2019est notre troisi\u00e8me contribution. Plus pr\u00e9cis\u00e9ment, nous mettons en \u0153uvre une analyse d\u2019entropie diff\u00e9rentielle pour d\u00e9tecter les ROIs. Les r\u00e9sultats obtenus montrent des performances sup\u00e9rieures en termes de d\u00e9tection d\u2019obstacles par rapport aux techniques de l\u2019\u00e9tat de l\u2019art, et une robustesse m\u00eame avec des signaux de faible amplitude. Par la suite, nous proposons un nouveau framework bas\u00e9 sur l\u2019apprentissage profond qui exploite un r\u00e9seau de neurones r\u00e9currents avec les signaux ULB pour la d\u00e9tection multiple d\u2019obstacles routiers.
Les caract\u00e9ristiques sont extraites du domaine temps-fr\u00e9quence \u00e0 l\u2019aide de la transform\u00e9e en ondelettes discr\u00e8te et sont transmises au r\u00e9seau r\u00e9current \u00e0 m\u00e9moire court et long terme (LSTM). Les r\u00e9sultats obtenus montrent que le syst\u00e8me bas\u00e9 sur le LSTM surpasse les autres techniques impl\u00e9ment\u00e9es en termes de d\u00e9tection d\u2019obstacles.<\/p>\n<h5>Abstract:<\/h5>\n<p>Reliable perception of the environment is a crucial task for autonomous driving, especially in dense traffic areas. Research in this area is evolving rapidly, yet we are only at the beginning of a research path towards a future generation of intelligent transportation systems. Indeed, the main concerns in the development of such systems are driving conditions, infrastructure monitoring and accurate real-time system response. Recent improvements and breakthroughs in understanding the environment for intelligent transportation systems rely mainly on deep learning and the fusion of different modalities. In this context, we first introduce OLIMP: A heterOgeneous MuLtimodal Dataset for Advanced EnvIronMent Perception. It is the first public, multimodal and synchronized database that includes ultra-wideband (ULB) radar data, acoustic data, narrowband radar data and images. OLIMP comprises 407 scenes and 47,354 synchronized samples, spanning four categories: pedestrians, cyclists, cars and trams. The dataset presents various challenges related to dense urban traffic, such as cluttered environments and varying weather conditions. To demonstrate the usefulness of the introduced database, we subsequently propose a fusion framework that combines the four modalities for multi-object detection. The results obtained are promising and encourage future research.
In short-range applications, ULB radars represent a promising technology for building reliable obstacle detection systems because they are robust to environmental conditions. However, these radars suffer from a segmentation challenge: locating the relevant regions of interest (ROIs) in their signals. Therefore, as a third contribution, we put forward a segmentation approach to detect ROIs in a setting dedicated to environment perception. More precisely, we implement a differential entropy analysis to detect the ROIs. The results obtained show superior performance in terms of obstacle detection compared to state-of-the-art techniques, and robustness even with low-amplitude signals. Subsequently, we propose a new deep-learning-based framework that exploits a recurrent neural network with ULB signals for multiple road obstacle detection. The features are extracted from the time-frequency domain using the discrete wavelet transform and fed to a long short-term memory (LSTM) recurrent network. The results obtained show that the LSTM-based system outperforms the other implemented techniques in terms of obstacle
detection.<\/p>\n<\/div><\/section>","protected":false},"excerpt":{"rendered":"","protected":false},"author":20,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[317],"tags":[],"class_list":["post-55510","post","type-post","status-publish","format-standard","hentry","category-these-2021"],"_links":{"self":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts\/55510","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/users\/20"}],"replies":[{"embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/comments?post=55510"}],"version-history":[{"count":0,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/posts\/55510\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/media?parent=55510"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/categories?post=55510"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.iemn.fr\/en\/wp-json\/wp\/v2\/tags?post=55510"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}