Malmö University Publications
End-to-End Federated Learning for Autonomous Driving Vehicles
Chalmers University of Technology.
Chalmers University of Technology.
Malmö University, Faculty of Technology and Society (TS), Department of Computer Science and Media Technology (DVMT). ORCID iD: 0000-0002-7700-1816
2021 (English). In: Proceedings of the International Joint Conference on Neural Networks, IEEE, 2021. Conference paper, Published paper (Refereed).
Abstract [en]

In recent years, as the computation capability of edge devices has grown, companies are eager to investigate and utilize suitable ML/DL methods to improve their service quality. With the traditional learning strategy, however, companies first need to build a powerful data center to collect and analyze data from the edge and then perform centralized model training, which turns out to be inefficient. Federated Learning has been introduced to address this challenge. Because of characteristics such as model-only exchange and parallel training, the technique not only preserves user data privacy but also accelerates model training. It can handle real-time data generated at the edge without consuming large amounts of valuable network transmission resources. In this paper, we introduce an approach to end-to-end on-device Machine Learning based on Federated Learning. We validate our approach on an important industrial use case in the field of autonomous driving vehicles: wheel steering angle prediction. Our results show that Federated Learning can significantly improve the quality of local edge models and reach the same accuracy level as the traditional centralized Machine Learning approach without its negative effects. Furthermore, Federated Learning accelerates model training and reduces communication overhead, which demonstrates the strength of this approach when deploying ML/DL components to various real-world embedded systems.
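
The abstract describes the core Federated Learning loop: each vehicle trains on its own local data, and only model weights are exchanged and aggregated. The sketch below illustrates that loop with a FedAvg-style weighted average on synthetic data; the linear model, client count, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal FedAvg-style sketch for a steering-angle-like regression task.
# All shapes, data, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.01, epochs=5):
    """Run a few epochs of gradient descent on one vehicle's local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Aggregate client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Synthetic "edge" data: each client holds its own (features -> angle) samples.
n_features = 8
true_w = rng.normal(size=n_features)
clients = []
for _ in range(4):
    X = rng.normal(size=(200, n_features))
    y = X @ true_w + 0.1 * rng.normal(size=200)
    clients.append((X, y))

global_w = np.zeros(n_features)
for rnd in range(10):                       # communication rounds
    updates, sizes = [], []
    for X, y in clients:                    # in practice this runs in parallel on devices
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    global_w = fed_avg(updates, sizes)      # only weights cross the network, never raw data

print("weight error:", np.linalg.norm(global_w - true_w))
```

The design point mirrored here is the one the abstract emphasizes: raw sensor data never leaves the client, and training happens in parallel across devices, which is what reduces communication overhead relative to shipping edge data to a central data center.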

Place, publisher, year, edition, pages
IEEE, 2021.
Series
Proceedings of ... International Joint Conference on Neural Networks, ISSN 2161-4393, E-ISSN 2161-4407
Keywords [en]
Federated Learning, Machine learning, Heterogeneous computation, Software engineering, Autonomous vehicles, Autonomous driving, Embedded systems, Quality of service, Service quality, Computation software, End to end, Learning strategy, Model training, Traditional learning, Training speed, Data privacy
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mau:diva-48761
DOI: 10.1109/IJCNN52387.2021.9533808
ISI: 000722581704014
Scopus ID: 2-s2.0-85115839157
ISBN: 978-1-6654-3900-8 (electronic)
ISBN: 978-1-6654-4597-9 (print)
OAI: oai:DiVA.org:mau-48761
DiVA, id: diva2:1623193
Conference
2021 International Joint Conference on Neural Networks (IJCNN), 18-22 July 2021, Shenzhen, China
Available from: 2021-12-28. Created: 2021-12-28. Last updated: 2022-08-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Olsson, Helena Holmström
