Malmö University Publications
Neural networks in context: challenges and opportunities: a critical inquiry into prerequisites for user trust in decisions promoted by neural networks
Malmö University, Faculty of Technology and Society (TS), Department of Computer Science and Media Technology (DVMT). Malmö University, Internet of Things and People (IOTAP). ORCID iD: 0000-0001-5676-1931
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Artificial intelligence, and machine learning (ML) in particular, increasingly impacts human life by creating value from collected data. This assetisation affects all aspects of human life, from choosing a significant other to recommending a product for us to consume. This type of ML-based system thrives because it predicts human behaviour based on average-case performance metrics (like accuracy). However, its usefulness is more limited when it comes to being transparent about its internal knowledge representations for singular decisions; for example, it is not good at explaining why it has suggested a particular decision in a specific context.

The goal of this work is to let end users be in command of how ML systems are used and thereby combine the strengths of humans and machines – machines which can propose transparent decisions. Artificial neural networks are an interesting candidate for a setting of this type, given that this technology has been successful in building knowledge representations from raw data. A neural network can be trained by exposing it to data from the target domain. It can then internalise knowledge representations from the domain and perform contextual tasks. In these situations, the fragment of the actual world internalised in an ML system has to be contextualised by a human to be useful and trustworthy in non-static settings.

This setting is explored through the overarching research question: What challenges and opportunities can emerge when an end user uses neural networks in context to support singular decision-making? To address this question, Research through Design is used as the central methodology, as this research approach matches the openness of the research question. Through six design experiments, I explore and expand on challenges and opportunities in settings where singular contextual decisions matter. The initial design experiments focus on opportunities in settings that augment human cognitive abilities. Thereafter, the experiments explore challenges related to settings where neural networks can enhance human cognitive abilities. This part concerns approaches intended to explain promoted decisions.

This work contributes in three ways: 1) exploring learning related to neural networks in context to put forward a core terminology for contextual decision-making using ML systems, wherein the terminology includes the generative notions of true-to-the-domain, concept, out-of-distribution and generalisation; 2) presenting a number of design guidelines; and 3) showing the need to align internal knowledge representations with concepts if neural networks are to produce explainable decisions. I also argue that training neural networks to generalise basic concepts like shapes and colours, concepts easily understandable by humans, is a path forward. This research direction leads towards neural network-based systems that can produce more complex explanations that build on basic generalisable concepts.
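
To make the setting the abstract describes more concrete – a neural network exposed to domain data, internalising representations of basic, human-understandable concepts, and then proposing a singular decision with a confidence attached – the sketch below is a minimal, hedged illustration. It is not taken from the thesis: the synthetic colour data, the tiny PyTorch network and all names are illustrative assumptions.

```python
# Minimal sketch (not from the thesis): a small network trained to
# internalise a basic, human-understandable concept -- colour.
# All data, names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

COLOUR_CONCEPTS = ["red", "green", "blue"]  # assumed basic concepts

def make_colour_data(n=3000):
    """Synthetic RGB vectors labelled by their dominant channel."""
    x = torch.rand(n, 3)
    y = x.argmax(dim=1)  # 0 = red, 1 = green, 2 = blue
    return x, y

model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x, y = make_colour_data()
for epoch in range(20):            # expose the network to data from the "domain"
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

# A singular, contextual decision: the network proposes a concept label,
# and the confidence hints at how true-to-the-domain the proposal is.
sample = torch.tensor([[0.9, 0.1, 0.2]])
probs = torch.softmax(model(sample), dim=1)
print(COLOUR_CONCEPTS[int(probs.argmax())], float(probs.max()))
```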

Abstract [sv]

Artificial intelligence, and machine learning (ML) in particular, strongly affects people's lives through its ability to create monetary value from data. This productisation of collected data influences our lives in many ways, from the choice of a partner to the recommendation of the next product to consume. ML-based systems work well in this role because they can predict human behaviour based on average-case performance metrics, but their usefulness is more limited in situations where transparency about the knowledge representations underlying an individual decision is important.

The goal of this work is to combine the strengths of humans and machines through a clear power relation in which an end user is in command. This power relation builds on the use of ML systems that are transparent about the underlying reasons for a proposed decision. Artificial neural networks are an interesting choice of ML technology for this task since they can build internal knowledge representations from raw data and can therefore be trained without specialised ML expertise. This means that a neural network can be trained by being exposed to data from a target domain and, in that process, internalise relevant knowledge representations. The network can then present contextual decision proposals based on these representations. In non-static situations, the fragment of the real world internalised in the ML system needs to be contextualised by a human for the system to be useful and trustworthy.

In this work, the area described above is explored through an overarching research question: What challenges and opportunities can emerge when an end user uses neural networks to support singular decisions in a well-defined context?

To answer this research question, Research through Design is used as the methodology, since this methodology matches the openness of the research question. Through six design experiments, challenges and opportunities are explored in situations where singular contextual decisions matter. The initial design experiments focus mainly on opportunities in situations where neural networks perform on a par with human cognitive abilities, and the later experiments explore challenges in situations where neural networks surpass human cognitive abilities. The second part focuses mainly on methods that aim to explain decisions proposed by the neural network.

This work contributes to existing knowledge in three ways: (1) an exploration of learning related to neural networks, with the goal of presenting a terminology useful for contextual decision-making supported by ML systems, where the proposed terminology includes generative notions such as true-to-the-domain, concept, out-of-distribution and generalisation; (2) a number of design guidelines; and (3) the need to align the internal knowledge representations of neural networks with concepts, which could enable neural networks to produce explainable decisions. I also propose that a viable research strategy is to train neural networks starting from basic concepts, such as shapes and colours, so that the networks can generalise from these general concepts across domains. The proposed research direction aims at producing more complex explanations from neural networks based on basic generalisable concepts.

Place, publisher, year, edition, pages
Malmö: Malmö University Press, 2023, p. 70
Series
Studies in Computer Science ; 22
Keywords [en]
Explainable AI, Machine Learning, Neural Network, Concept, Generalisation, Out-of-Distribution
Keywords [sv]
Explainable AI, Machine Learning, Neural Networks, Concepts, Generalisation, Out-of-Distribution
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mau:diva-58450
DOI: 10.24834/isbn.9789178773503
ISBN: 978-91-7877-351-0 (print)
ISBN: 978-91-7877-350-3 (electronic)
OAI: oai:DiVA.org:mau-58450
DiVA, id: diva2:1744266
Public defence
2023-04-13, Orkanen, D138 or livestream, Nordenskiöldsgatan 10, Malmö, 14:00 (English)
Note

Papers IV and VIII are included in the dissertation as manuscripts.

Available from: 2023-03-17 Created: 2023-03-17 Last updated: 2024-02-29. Bibliographically approved
List of papers
1. Evaluating Interpretability in Machine Teaching
2020 (English). In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness: The PAAMS Collection / [ed] Springer, Springer, 2020, Vol. 1233, p. 54-65. Conference paper, Published paper (Other academic)
Abstract [en]

Building interpretable machine learning agents is a challenge that needs to be addressed to make the agents trustworthy and align the usage of the technology with human values. In this work, we focus on how to evaluate interpretability in a machine teaching setting, a setting that involves a human domain expert as a teacher in relation to a machine learning agent. By using a prototype in a study, we discuss the interpretability definition and show how interpretability can be evaluated on the functional, human and application levels. We end the paper by discussing open questions and suggestions on how our results can be transferable to other domains.

Place, publisher, year, edition, pages
Springer, 2020
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1233
National Category
Human Computer Interaction
Research subject
Interaktionsdesign
Identifiers
urn:nbn:se:mau:diva-18380 (URN), 10.1007/978-3-030-51999-5_5 (DOI), 2-s2.0-85088540310 (Scopus ID), 978-3-030-51998-8 (ISBN), 978-3-030-51999-5 (ISBN)
Conference
PAAMS: International Conference on Practical Applications of Agents and Multi-Agent Systems, 7-9 October 2020, L’Aquila, Italy
Available from: 2020-09-23 Created: 2020-09-23 Last updated: 2023-07-06. Bibliographically approved
2. Contextual machine teaching
2020 (English). In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert's explanatory powers to the machine learning system.
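
As a rough illustration of the machine teaching idea described above – a domain expert, rather than an ML expert, supplies the examples the system learns from – the following is a hedged sketch only. The teacher function, the features and the incremental model are invented for illustration and are not the paper's commuting implementation.

```python
# Hedged sketch of a machine-teaching loop: a domain expert (the "teacher")
# supplies labelled examples one at a time and the model is updated
# incrementally. Features, labels and the stopping rule are assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])            # e.g. "take the bus" vs "take the bike"
model = SGDClassifier()

def teacher_provides_example(step: int):
    """Stand-in for a domain expert labelling one contextual example."""
    features = np.random.rand(1, 3)   # e.g. time of day, weather, distance
    label = np.array([int(features[0, 0] > 0.5)])
    return features, label

for step in range(100):               # the expert teaches, one example at a time
    x, y = teacher_provides_example(step)
    model.partial_fit(x, y, classes=classes)

print(model.predict(np.array([[0.8, 0.2, 0.5]])))
```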

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Machine learning, Machine Teaching, Human in the loop
National Category
Computer Systems
Identifiers
urn:nbn:se:mau:diva-17116 (URN), 10.1109/PerComWorkshops48775.2020.9156132 (DOI), 000612838200047 (), 2-s2.0-85091989967 (Scopus ID), 978-1-7281-4716-1 (ISBN), 978-1-7281-4717-8 (ISBN)
Conference
PerCom, Workshop on Context and Activity Modeling and Recognition (CoMoReA). March 23-27, 2020. Austin, Texas, USA.
Available from: 2020-04-23 Created: 2020-04-23 Last updated: 2024-02-05. Bibliographically approved
3. The Role of Explanations in Human-Machine Learning
2021 (English). In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021, p. 1006-1013. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we study explanations in a setting where human capabilities are in parity with Machine Learning (ML) capabilities. If an ML system is to be trusted in this situation, limitations in the trained ML model's abilities have to be exposed to the end user. A majority of current approaches focus on the task of creating explanations for a proposed decision, but less attention is given to the equally important task of exposing limitations in the ML model's capabilities, limitations that in turn affect the validity of created explanations. Using a small-scale design experiment, we compare human explanations with explanations created by an ML system. This paper explores and presents how the structure and terminology of scientific explanations can expose limitations in the ML model's knowledge and be used as an approach for research and design in the area of explainable artificial intelligence.

Place, publisher, year, edition, pages
IEEE, 2021
Series
Conference proceedings - IEEE International Conference on Systems, Man, and Cybernetics, ISSN 1062-922X, E-ISSN 2577-1655
Keywords
Training, Terminology, Conferences, Neural networks, Machine learning, Knowledge representation, Iterative methods
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-50672 (URN), 10.1109/SMC52423.2021.9658610 (DOI), 000800532000156 (), 2-s2.0-85124332156 (Scopus ID), 978-1-6654-4207-7 (ISBN)
Conference
Systems, Man, and Cybernetics (SMC), Melbourne, Australia 2021
Available from: 2022-03-17 Created: 2022-03-17 Last updated: 2024-03-04. Bibliographically approved
4. A Conceptual Approach to Explainable Neural Networks
(English). Manuscript (preprint) (Other academic)
Abstract [en]

The success of neural networks largely builds on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches that extract and present these representations, in order to explain a neural network's decision, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we performed a targeted literature review focusing on research that aims to associate internal representations with human-understandable concepts. Using deductive-nomological explanations combined with causality theories as an analytical lens, we analyse nine carefully selected research papers. We find our analytical lens, the explanation structure and causality, useful for understanding what can, and cannot, be expected from explanations inferred from neural networks. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal: is it (a) understanding the ML model, (b) understanding the training data, or (c) actionable explanations that are true-to-the-domain?

Keywords
neural networks, causality, scientific explanations, explainable artificial intelligence
National Category
Human Computer Interaction; Computer Engineering
Research subject
Interaktionsdesign
Identifiers
urn:nbn:se:mau:diva-58464 (URN)
Available from: 2023-03-01 Created: 2023-03-01 Last updated: 2023-03-17. Bibliographically approved
5. More Sanity Checks for Saliency Maps
2022 (English). In: ISMIS 2022: Foundations of Intelligent Systems / [ed] Michelangelo Ceci; Sergio Flesca; Elio Masciari; Giuseppe Manco; Zbigniew W. Raś, Springer, 2022, p. 175-184. Conference paper, Published paper (Refereed)
Abstract [en]

Concepts are powerful human mental representations used to explain, reason and understand. In this work, we use theories on concepts as an analytical lens to compare internal knowledge representations in neural networks to human concepts. In two image classification studies we find an unclear alignment between the two; more pronounced, however, is the need to further develop explanation methods that incorporate concept ontologies.
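
To give a sense of the kind of explanation method this paper scrutinises, the sketch below produces a vanilla-gradient saliency map, one of the simplest members of the saliency-map family. It is a hedged illustration only: the pretrained ResNet-18 and the random placeholder image are assumptions, not the study's actual models or data.

```python
# Hedged sketch of a vanilla-gradient saliency map (illustrative only).
# The pretrained model and the random "image" are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input
score = model(image).max()        # logit of the predicted class
score.backward()                  # gradients w.r.t. the input pixels

# Per-pixel importance: largest absolute gradient across the colour channels.
saliency = image.grad.abs().max(dim=1)[0]   # shape (1, 224, 224)
print(saliency.shape)
```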

Place, publisher, year, edition, pages
Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13515
Keywords
Explainable AI, Understandable AI, Human-centric AI
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:mau:diva-54924 (URN), 10.1007/978-3-031-16564-1_17 (DOI), 000886990100017 (), 2-s2.0-85140462679 (Scopus ID), 978-3-031-16564-1 (ISBN), 978-3-031-16563-4 (ISBN)
Conference
26th International Symposium on Methodologies for Intelligent Systems, ISMIS 2022, Cosenza, Italy, October 3–5, 2022
Available from: 2022-09-14 Created: 2022-09-14 Last updated: 2023-12-15. Bibliographically approved
6. "When can i trust it?": contextualising explainability methods for classifiers
2023 (English). In: CMLT '23: Proceedings of the 2023 8th International Conference on Machine Learning Technologies, ACM Digital Library, 2023, p. 108-115. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
ACM Digital Library, 2023
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-58441 (URN), 10.1145/3589883.3589899 (DOI), 001050779800016 (), 2-s2.0-85167805603 (Scopus ID), 9781450398329 (ISBN)
Conference
International Conference on Machine Learning Technologies (ICMLT) Stockholm, Sweden | March 10-12, 2023
Available from: 2023-03-01 Created: 2023-03-01 Last updated: 2023-09-12. Bibliographically approved
7. Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts
2023 (English). In: Proceedings of Eighth International Congress on Information and Communication Technology / [ed] Yang, XS., Sherratt, R.S., Dey, N., Joshi, A., Springer, 2023, Vol. 1, p. 155-171. Conference paper, Published paper (Refereed)
Abstract [en]

The currently dominant artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning processes. Being void of knowledge that can be used deductively, these systems cannot distinguish exemplars that are part of the target domain from those that are not. This ability is critical when the aim is to build human trust in real-world settings and essential to avoid usage in domains wherein a system cannot be trusted. In the work presented here, we conduct two qualitative contextual user studies and one controlled experiment to uncover research paths and design openings for the sought distinction. Through our experiments, we find a need to refocus from average-case metrics and benchmarking datasets toward systems that can be falsified. The work uncovers and lays bare the need to incorporate and internalise a domain ontology in the systems and/or present evidence for a decision in a fashion that allows a human to use their unique knowledge and reasoning capability. Additional material and code to reproduce our experiments can be found at https://github.com/k3larra/ood.
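
To illustrate the out-of-distribution problem the abstract raises, the sketch below applies a common generic baseline, thresholding the maximum softmax probability; it is not necessarily the method used in the paper (whose own code is at https://github.com/k3larra/ood), and the pretrained model, placeholder images and threshold are assumptions.

```python
# Hedged sketch of a generic out-of-distribution baseline: flag inputs
# whose maximum softmax probability falls below an assumed threshold.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def max_softmax_score(image_batch: torch.Tensor) -> torch.Tensor:
    """Confidence of the most likely class for each image in the batch."""
    with torch.no_grad():
        probs = torch.softmax(model(image_batch), dim=1)
    return probs.max(dim=1).values

THRESHOLD = 0.5                        # assumed cut-off; a real system would calibrate it
batch = torch.rand(4, 3, 224, 224)     # placeholder images
scores = max_softmax_score(batch)
print(["in-distribution" if s >= THRESHOLD else "out-of-distribution" for s in scores])
```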

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370, E-ISSN 2367-3389 ; 693
Keywords
Trustworthy Machine Learning, Explainable AI, Neural Networks, Concepts, Generalisation, Out of Distribution
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-58465 (URN), 10.1007/978-981-99-3243-6_13 (DOI), 2-s2.0-85174720293 (Scopus ID), 978-981-99-3242-9 (ISBN), 978-981-99-3243-6 (ISBN)
Conference
International Congress on Information and Communication Technology (ICICT), London, 2023
Available from: 2023-03-01 Created: 2023-03-01 Last updated: 2024-02-05. Bibliographically approved
8. Deep Learning, generalisation and concepts
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Central to deep learning is an ability to generalise within a target domain in a way consistent with human beliefs within the same domain. A label inferred by the neural network then maps to a human mental representation of the concept corresponding to that label. If an explanation of why a specific decision is promoted is to be given, it is important that we move from average-case performance metrics towards interpretable explanations that build on human-understandable concepts connected to the promoted label. In this work, we use Explainable Artificial Intelligence (XAI) methods to investigate whether internal knowledge representations in trained neural networks are aligned with, and generalise in correspondence to, human mental representations. Our findings indicate an epistemic misalignment in neural networks between machine and human knowledge representations. Consequently, if the goal is classifications explainable for end users, we can question the usefulness of neural networks trained without considering concept alignment.

National Category
Computer Engineering
Identifiers
urn:nbn:se:mau:diva-58467 (URN)
Available from: 2023-03-01 Created: 2023-03-01 Last updated: 2023-03-17. Bibliographically approved

Open Access in DiVA

fulltext (34630 kB)
File name: FULLTEXT01.pdf
File size: 34630 kB
Checksum (SHA-512): 85d118854d8a68238e2f61e55fcf08778a99dae0f8d72952d34cb2e59c85ab334c38766807059ac6dff71682af89e1e212a77b63453f12268df259ce3f6c0611
Type: fulltext
Mimetype: application/pdf

Authority records

Holmberg, Lars
