Publications from Malmö University
Neural networks in context: challenges and opportunities: a critical inquiry into prerequisites for user trust in decisions promoted by neural networks
Holmberg, Lars. Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT); Malmö universitet, Internet of Things and People (IOTAP). ORCID iD: 0000-0001-5676-1931
2023 (English). Doctoral thesis, comprising articles (Other academic)
Abstract [en]

Artificial intelligence, and machine learning (ML) in particular, increasingly impacts human life by creating value from collected data. This assetisation affects all aspects of human life, from choosing a significant other to recommending a product for us to consume. This type of ML-based system thrives because it predicts human behaviour based on average-case performance metrics (like accuracy). However, its usefulness is more limited when it comes to being transparent about its internal knowledge representations for singular decisions; for example, it is not good at explaining why it has suggested a particular decision in a specific context.

The goal of this work is to let end users be in command of how ML systems are used and thereby combine the strengths of humans and machines – machines which can propose transparent decisions. Artificial neural networks are an interesting candidate for a setting of this type, given that this technology has been successful in building knowledge representations from raw data. A neural network can be trained by exposing it to data from the target domain. It can then internalise knowledge representations from the domain and perform contextual tasks. In these situations, the fragment of the actual world internalised in an ML system has to be contextualised by a human to be useful and trustworthy in non-static settings.

This setting is explored through the overarching research question: What challenges and opportunities can emerge when an end user uses neural networks in context to support singular decision-making? To address this question, Research through Design is used as the central methodology, as this research approach matches the openness of the research question. Through six design experiments, I explore and expand on challenges and opportunities in settings where singular contextual decisions matter. The initial design experiments focus on opportunities in settings that augment human cognitive abilities. Thereafter, the experiments explore challenges related to settings where neural networks can enhance human cognitive abilities. This part concerns approaches intended to explain promoted decisions.

This work contributes in three ways: 1) exploring learning related to neural networks in context to put forward a core terminology for contextual decision-making using ML systems, wherein the terminology includes the generative notions of true-to-the-domain, concept, out-of-distribution and generalisation; 2) presenting a number of design guidelines; and 3) showing the need to align internal knowledge representations with concepts if neural networks are to produce explainable decisions. I also argue that training neural networks to generalise basic concepts like shapes and colours, concepts easily understandable by humans, is a path forward. This research direction leads towards neural network-based systems that can produce more complex explanations that build on basic generalisable concepts.

Abstract [sv]

Artificial intelligence, and machine learning (ML) in particular, strongly affects human life by creating monetary value from data. This productisation of collected data influences our lives in many ways, from the choice of a partner to recommending the next product to consume. ML-based systems work well in this role because they can predict human behaviour based on average performance metrics, but their usefulness is more limited in situations where transparency about the knowledge representations underlying an individual decision is important.

The goal of this work is to combine the strengths of humans and machines through a clear power relation in which an end user is in command. This power relation builds on the use of ML systems that are transparent about the underlying reasons for a proposed decision. Artificial neural networks are an interesting choice of ML technology for this task since they can build internal knowledge representations from raw data and can therefore be trained without specialised ML expertise. This means that a neural network can be trained by exposing it to data from a target domain and, in that process, internalise relevant knowledge representations. The network can then present contextual proposals for decisions based on these representations. In non-static situations, the fragment of the real world internalised in the ML system needs to be contextualised by a human for the system to be useful and trustworthy.

This work explores the area described above through an overarching research question: What challenges and opportunities can emerge when an end user uses neural networks as support for singular decisions in a well-defined context?

To address this research question, Research through Design is used as the methodology, since it matches the openness of the research question. Through six design experiments, challenges and opportunities are explored in situations where singular contextual decisions matter. The initial design experiments focus mainly on opportunities in situations where neural networks perform on par with human cognitive abilities, and the later experiments explore challenges in situations where neural networks surpass human cognitive abilities. The second part focuses mainly on methods intended to explain decisions proposed by the neural network.

This work contributes to existing knowledge in three ways: (1) an exploration of learning related to neural networks with the goal of presenting a terminology useful for contextual decision-making supported by ML systems, where the terminology includes generative notions such as true-to-the-domain, concept, out-of-distribution and generalisation; (2) a number of design guidelines; and (3) the need to align the internal knowledge representations of neural networks with concepts, which could enable neural networks to produce explainable decisions. I also propose that a viable research strategy is to train neural networks on basic concepts, such as shapes and colours, so that the networks can generalise from these general concepts across domains. The proposed research direction aims at producing more complex explanations from neural networks based on basic generalisable concepts.
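The abstracts above propose training neural networks to generalise basic, human-understandable concepts such as shapes and colours. The sketch below is a minimal illustration of that idea, not code from the dissertation: it assumes PyTorch and Pillow, builds a synthetic coloured-shape dataset, trains a toy CNN on the shape concept, and tests on a held-out colour to probe whether the learned concept generalises beyond the colours seen during training.

import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

def make_image(shape: str, colour: tuple, size: int = 32) -> np.ndarray:
    # Draw a single coloured shape on a black background.
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    if shape == "circle":
        draw.ellipse([8, 8, 24, 24], fill=colour)
    else:
        draw.rectangle([8, 8, 24, 24], fill=colour)
    return np.asarray(img, dtype=np.float32) / 255.0

def make_split(colours, n=200):
    xs, ys = [], []
    for _ in range(n):
        label = np.random.randint(2)  # 0 = circle, 1 = square
        colour = colours[np.random.randint(len(colours))]
        xs.append(make_image("circle" if label == 0 else "square", colour))
        ys.append(label)
    x = torch.tensor(np.stack(xs)).permute(0, 3, 1, 2)  # NCHW
    return x, torch.tensor(ys)

train_colours = [(255, 0, 0), (0, 255, 0)]  # red and green seen in training
test_colours = [(0, 0, 255)]                # blue held out entirely
x_tr, y_tr = make_split(train_colours)
x_te, y_te = make_split(test_colours)

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):  # full-batch training on the tiny synthetic set
    opt.zero_grad()
    loss = loss_fn(model(x_tr), y_tr)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (model(x_te).argmax(1) == y_te).float().mean().item()
print(f"accuracy on shapes in an unseen colour: {acc:.2f}")  # probes concept generalisation

The point of the sketch is the evaluation split: if the shape concept has been internalised, accuracy should survive the colour the network never saw.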

Place, publisher, year, edition, pages
Malmö: Malmö University Press, 2023, p. 70
Series
Studies in Computer Science ; 22
Keywords [en]
Explainable AI, Machine Learning, Neural Network, Concept, Generalisation, Out-of-Distribution
Keywords [sv]
Förklaringsbar AI, Maskininlärning, Neurala Nätverk, Koncept, Generalisering, Utanför-distributionen
HSV category
Identifiers
URN: urn:nbn:se:mau:diva-58450, DOI: 10.24834/isbn.9789178773503, ISBN: 978-91-7877-351-0 (print), ISBN: 978-91-7877-350-3 (digital), OAI: oai:DiVA.org:mau-58450, DiVA id: diva2:1744266
Public defence
2023-04-13, Orkanen, D138 or livestream, Nordenskiöldsgatan 10, Malmö, 14:00 (English)
Opponent
Supervisor
Note

Papers IV and VIII are included in the dissertation as manuscripts.

Available from: 2023-03-17. Created: 2023-03-17. Last updated: 2025-03-17. Bibliographically approved.
List of papers
1. Evaluating Interpretability in Machine Teaching
2020 (English). In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness: The PAAMS Collection, Springer, 2020, Vol. 1233, pp. 54-65. Conference paper, published paper (Other academic)
Abstract [en]

Building interpretable machine learning agents is a challenge that needs to be addressed to make the agents trustworthy and to align the usage of the technology with human values. In this work, we focus on how to evaluate interpretability in a machine teaching setting, a setting that involves a human domain expert as a teacher in relation to a machine learning agent. By using a prototype in a study, we discuss the interpretability definition and show how interpretability can be evaluated on a functional, human and application level. We end the paper by discussing open questions and suggestions on how our results can be transferred to other domains.

Place, publisher, year, edition, pages
Springer, 2020
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1233
HSV category
Research subject
Interaction design
Identifiers
urn:nbn:se:mau:diva-18380 (URN), 10.1007/978-3-030-51999-5_5 (DOI), 2-s2.0-85088540310 (Scopus ID), 978-3-030-51998-8 (ISBN), 978-3-030-51999-5 (ISBN)
Conference
PAAMS: International Conference on Practical Applications of Agents and Multi-Agent Systems, 7-9 October 2020, L’Aquila, Italy
Available from: 2020-09-23. Created: 2020-09-23. Last updated: 2023-07-06. Bibliographically approved.
2. Contextual machine teaching
2020 (English). In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2020. Conference paper, published paper (Refereed)
Abstract [en]

Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert’s explanatory powers to the machine learning system.

Place, publisher, year, edition, pages
IEEE, 2020
Series
International Conference on Pervasive Computing and Communications, ISSN 2474-2503
Keywords
Machine learning, Machine Teaching, Human in the loop
HSV category
Identifiers
urn:nbn:se:mau:diva-17116 (URN), 10.1109/PerComWorkshops48775.2020.9156132 (DOI), 000612838200047, 2-s2.0-85091989967 (Scopus ID), 978-1-7281-4716-1 (ISBN), 978-1-7281-4717-8 (ISBN)
Conference
PerCom, Workshop on Context and Activity Modeling and Recognition (CoMoReA). March 23-27, 2020. Austin, Texas, USA.
Available from: 2020-04-23. Created: 2020-04-23. Last updated: 2025-02-04. Bibliographically approved.
3. The Role of Explanations in Human-Machine Learning
2021 (English). In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021, pp. 1006-1013. Conference paper, published paper (Refereed)
Abstract [en]

In this paper, we study explanations in a setting where human capabilities are in parity with Machine Learning (ML) capabilities. If an ML system is to be trusted in this situation, limitations in the trained ML model’s abilities have to be exposed to the end user. A majority of current approaches focus on the task of creating explanations for a proposed decision, but less attention is given to the equally important task of exposing limitations in the ML model’s capabilities, limitations that in turn affect the validity of created explanations. Using a small-scale design experiment, we compare human explanations with explanations created by an ML system. This paper explores and presents how the structure and terminology of scientific explanations can expose limitations in the ML model’s knowledge and be used as an approach for research and design in the area of explainable artificial intelligence.

Place, publisher, year, edition, pages
IEEE, 2021
Series
Conference proceedings - IEEE International Conference on Systems, Man, and Cybernetics, ISSN 1062-922X, E-ISSN 2577-1655
Keywords
Training, Terminology, Conferences, Neural networks, Machine learning, Knowledge representation, Iterative methods
HSV category
Identifiers
urn:nbn:se:mau:diva-50672 (URN), 10.1109/SMC52423.2021.9658610 (DOI), 000800532000156, 2-s2.0-85124332156 (Scopus ID), 978-1-6654-4207-7 (ISBN)
Conference
Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 2021
Available from: 2022-03-17. Created: 2022-03-17. Last updated: 2024-03-04. Bibliographically approved.
4. A Conceptual Approach to Explainable Neural Networks
(English). Manuscript (preprint) (Other academic)
Abstract [en]

The success of neural networks largely builds on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extracting and presenting these representations, in order to explain a neural network’s decision, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we performed a targeted literature review focusing on research that aims to associate internal representations with human-understandable concepts. Using deductive-nomological explanations combined with causality theories as an analytical lens, we analyse nine carefully selected research papers. We find our analytical lens, the explanation structure and causality, useful for understanding what can and cannot be expected from explanations inferred from neural networks. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal: is it (a) to understand the ML model, (b) to understand the training data, or (c) to produce actionable explanations that are true-to-the-domain?

Keywords
neural networks, causality, scientific explanations, explainable artificial intelligence
HSV category
Research subject
Interaction design
Identifiers
urn:nbn:se:mau:diva-58464 (URN)
Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2023-03-17. Bibliographically approved.
5. More Sanity Checks for Saliency Maps
2022 (English). In: ISMIS 2022: Foundations of Intelligent Systems / [ed] Michelangelo Ceci; Sergio Flesca; Elio Masciari; Giuseppe Manco; Zbigniew W. Raś, Springer, 2022, pp. 175-184. Conference paper, published paper (Refereed)
Abstract [en]

Concepts are powerful human mental representations used to explain, reason and understand. In this work, we use theories of concepts as an analytical lens to compare internal knowledge representations in neural networks to human concepts. In two image classification studies we find an unclear alignment between the two; more pronounced, however, is the need to further develop explanation methods that incorporate concept ontologies.
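For readers unfamiliar with the attribution methods such sanity checks target, the sketch below shows a plain vanilla-gradient saliency map in PyTorch. It is an illustration only, not code from the paper; the random input and untrained weights are placeholders (a real check would use a trained model and real images).

import torch
from torchvision.models import resnet18

model = resnet18(weights=None)  # placeholder; load pretrained weights for real use
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image
logits = model(x)
target = int(logits.argmax())  # class whose evidence we want to attribute

# Gradient of the target logit with respect to the input pixels.
logits[0, target].backward()
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) heat map

# A model-randomisation sanity check, in the spirit of "Sanity Checks for Saliency
# Maps" (Adebayo et al.): recompute the map after re-initialising the weights; if
# the two maps look alike, the method is not actually explaining the model.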

Place, publisher, year, edition, pages
Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13515
Keywords
Explainable AI, Understandable AI, Human-centric AI
HSV category
Identifiers
urn:nbn:se:mau:diva-54924 (URN), 10.1007/978-3-031-16564-1_17 (DOI), 000886990100017, 2-s2.0-85140462679 (Scopus ID), 978-3-031-16564-1 (ISBN), 978-3-031-16563-4 (ISBN)
Conference
26th International Symposium on Methodologies for Intelligent Systems, ISMIS 2022, Cosenza, Italy, October 3–5, 2022
Available from: 2022-09-14. Created: 2022-09-14. Last updated: 2023-12-15. Bibliographically approved.
6. "When can i trust it?": contextualising explainability methods for classifiers
2023 (English). In: CMLT '23: Proceedings of the 2023 8th International Conference on Machine Learning Technologies, ACM Digital Library, 2023, pp. 108-115. Conference paper, published paper (Refereed)
Place, publisher, year, edition, pages
ACM Digital Library, 2023
HSV category
Identifiers
urn:nbn:se:mau:diva-58441 (URN), 10.1145/3589883.3589899 (DOI), 001050779800016, 2-s2.0-85167805603 (Scopus ID), 9781450398329 (ISBN)
Conference
International Conference on Machine Learning Technologies (ICMLT), Stockholm, Sweden, March 10-12, 2023
Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2023-09-12. Bibliographically approved.
7. Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts
2023 (English). In: Proceedings of Eighth International Congress on Information and Communication Technology / [ed] Yang, X.S., Sherratt, R.S., Dey, N., Joshi, A., Springer, 2023, Vol. 1, pp. 155-171. Conference paper, published paper (Refereed)
Abstract [en]

The currently dominant artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning processes. Being void of knowledge that can be used deductively, these systems cannot distinguish exemplars that are part of the target domain from those that are not. This ability is critical when the aim is to build human trust in real-world settings and essential to avoid usage in domains wherein a system cannot be trusted. In the work presented here, we conduct two qualitative contextual user studies and one controlled experiment to uncover research paths and design openings for the sought distinction. Through our experiments, we find a need to refocus from average-case metrics and benchmarking datasets towards systems that can be falsified. The work uncovers and lays bare the need to incorporate and internalise a domain ontology in the systems and/or present evidence for a decision in a fashion that allows a human to use their unique knowledge and reasoning capability. Additional material and code to reproduce our experiments can be found at https://github.com/k3larra/ood.
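As a point of reference for the out-of-distribution problem discussed above, the sketch below shows the common maximum-softmax-probability baseline, which flags low-confidence inputs as possibly out-of-distribution. It is not the method of the paper or of the linked repository; the paper in fact argues that confidence scores alone are insufficient without domain knowledge. The model, threshold and random batches are placeholders.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # placeholder; use a model trained on the target domain
model.eval()

@torch.no_grad()
def msp_score(batch: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability per input; low values hint at out-of-distribution.
    return F.softmax(model(batch), dim=1).max(dim=1).values

in_batch = torch.rand(4, 3, 224, 224)    # stand-in for in-domain images
odd_batch = torch.randn(4, 3, 224, 224)  # stand-in for out-of-domain inputs
threshold = 0.5                          # would be tuned on validation data

print(msp_score(in_batch) < threshold)   # True = treat as out-of-distribution
print(msp_score(odd_batch) < threshold)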

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370, E-ISSN 2367-3389 ; 693
Keywords
Trustworthy Machine Learning, Explainable AI, Neural Networks, Concepts, Generalisation, Out of Distribution
HSV category
Identifiers
urn:nbn:se:mau:diva-58465 (URN), 10.1007/978-981-99-3243-6_13 (DOI), 2-s2.0-85174720293 (Scopus ID), 978-981-99-3242-9 (ISBN), 978-981-99-3243-6 (ISBN)
Conference
International Congress on Information and Communication Technology (ICICT), London, 2023
Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2024-02-05. Bibliographically approved.
8. Deep Learning, generalisation and concepts
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Central to deep learning is an ability to generalise within a target domain in a way that is consistent with human beliefs about the same domain. A label inferred by the neural network then maps to a human mental representation of the concept corresponding to that label. If an explanation of why a specific decision is promoted is required, it is important that we move from average-case performance metrics towards interpretable explanations that build on human-understandable concepts connected to the promoted label. In this work, we use Explainable Artificial Intelligence (XAI) methods to investigate whether internal knowledge representations in trained neural networks are aligned with, and generalise in correspondence to, human mental representations. Our findings indicate an epistemic misalignment in neural networks between machine and human knowledge representations. Consequently, if the goal is classifications that are explainable for end users, we can question the usefulness of neural networks trained without considering concept alignment.
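One crude way to probe the kind of concept (mis)alignment described above is to intervene on a basic human concept, such as colour, and check whether the model's classifications survive. The sketch below is a hypothetical illustration, not the manuscript's method; the untrained model and random inputs are placeholders for a trained classifier and real images.

import torch
from torchvision.models import resnet18
from torchvision.transforms.functional import rgb_to_grayscale

model = resnet18(weights=None)  # placeholder; use a trained classifier in practice
model.eval()

x = torch.rand(8, 3, 224, 224)                       # stand-ins for real images
x_gray = rgb_to_grayscale(x, num_output_channels=3)  # remove the colour concept

with torch.no_grad():
    unchanged = (model(x).argmax(1) == model(x_gray).argmax(1)).float().mean()

# If predictions flip when only colour is removed, the decision leans on a cue a
# human would not consider part of the labelled concept.
print(f"fraction of predictions unchanged without colour: {unchanged:.2f}")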

HSV category
Identifiers
urn:nbn:se:mau:diva-58467 (URN)
Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2023-03-17. Bibliographically approved.

Open Access in DiVA
fulltext (FULLTEXT01.pdf, application/pdf, 34630 kB)
