Publications from Malmö University
1 - 17 of 17
  • 1.
    Holmberg, Lars
    Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3).
    Ageing and sexing birds (2023). Conference paper (Other academic)
    Abstract [en]

    Ageing and sexing birds require specialist knowledge and training concerning which characteristics to focus on for different species. An expert can formulate an explanation for a classification using these characteristics and, additionally, identify anomalies. Some characteristics require practical training, for example, the difference between moulted and non-moulted feathers, while some knowledge, like feather taxonomy and moulting patterns, can be learned without extensive practical training. An explanation formulated for a classification by a human stands in sharp contrast to an explanation produced by a trained neural network. These machine explanations are more an answer to a how-question, related to the inner workings of the neural network, than an answer to a why-question presenting domain-related characteristics useful for a domain expert. For machine-created explanations to be trustworthy, neural networks require a static use context and representative, independent and identically distributed training data. These prerequisites seldom hold in real-world settings. Some challenges related to this are neural networks' inability to identify exemplars outside the training distribution and the difficulty of aligning internal knowledge creation with characteristics used in the target domain. These types of questions are central in the active research field of explainable artificial intelligence (XAI), but there is a lack of hands-on experiments involving domain experts. This work aims to address the above issues with the goal of producing a prototype where domain experts can train a tool that builds on human expert knowledge in order to produce useful explanations. By using internalised domain expertise we aim at a tool that can produce useful explanations and even new insights for the domain. By working together with domain experts from Ottenby Observatory, our goal is to address central XAI challenges and, at the same time, add new perspectives useful for determining the age and sex of birds.

  • 2.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts (2023). In: Proceedings of Eighth International Congress on Information and Communication Technology / [ed] Yang, XS., Sherratt, R.S., Dey, N., Joshi, A., Springer, 2023, Vol. 1, pp. 155-171. Conference paper (Refereed)
    Abstract [en]

    The currently dominating artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning processes. Being void of knowledge that can be used deductively, these systems cannot distinguish exemplars that are part of the target domain from those that are not. This ability is critical when the aim is to build human trust in real-world settings, and essential to avoid usage in domains where a system cannot be trusted. In the work presented here, we conduct two qualitative contextual user studies and one controlled experiment to uncover research paths and design openings for the sought distinction. Through our experiments, we find a need to refocus from average-case metrics and benchmarking datasets toward systems that can be falsified. The work uncovers and lays bare the need to incorporate and internalise a domain ontology in the systems and/or present evidence for a decision in a fashion that allows a human to use our unique knowledge and reasoning capability. Additional material and code to reproduce our experiments can be found at https://github.com/k3larra/ood.
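    A common baseline for the sought distinction in the out-of-distribution literature is to gate classifications on the maximum softmax probability. The sketch below is illustrative only, not the method or code of the paper or its companion repository; the function name and threshold are our assumptions.

        import torch
        import torch.nn.functional as F

        @torch.no_grad()
        def classify_or_abstain(model: torch.nn.Module, x: torch.Tensor,
                                threshold: float = 0.8):
            """Max-softmax gate: predict a class per input, or abstain (None)
            when confidence is low -- a weak but standard signal that an
            exemplar may lie outside the training distribution. The threshold
            is illustrative and must be calibrated per model and domain."""
            model.eval()
            probs = F.softmax(model(x), dim=1)  # (batch, num_classes)
            scores, preds = probs.max(dim=1)    # top probability and class
            return [(int(p), float(s)) if s >= threshold else (None, float(s))
                    for p, s in zip(preds, scores)]

    Such a gate abstains rather than misclassifies; that it can itself be fooled is part of the argument for falsifiable systems made above.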

  • 3.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Neural networks in context: challenges and opportunities: a critical inquiry into prerequisites for user trust in decisions promoted by neural networks (2023). Doctoral thesis, compilation (Other academic)
    Abstract [sv]

    Artificial intelligence, and machine learning (ML) in particular, strongly affects people's lives through its ability to create monetary value from data. This productification of collected data affects our lives in many ways, from the choice of a partner to the recommendation of the next product to consume. ML-based systems perform well in this role since they can predict human behaviour based on average performance metrics, but their usefulness is more limited in situations where transparency about the knowledge representations an individual decision is based on is important.

    The goal of this work is to combine the strengths of humans and machines through a clear power relation in which the end user is in command. This power relation builds on the use of ML systems that are transparent about the underlying reasons for a proposed decision. Artificial neural networks are an interesting choice of ML technology for this task, since they can build internal knowledge representations from raw data and can therefore be trained without specialised ML knowledge. This means that a neural network can be trained by being exposed to data from a target domain, internalising relevant knowledge representations in the process. The network can then present contextual decision proposals based on these representations. In non-static situations, the fragment of the real world internalised in the ML system needs to be contextualised by a human for the system to be useful and trustworthy.

    This work explores the area described above through an overarching research question: Which challenges and opportunities can arise when an end user uses neural networks as support for individual decisions in a well-defined context?

    To answer the research question above, the methodology research through design is used, since the chosen methodology matches the openness of the research question. Through six design experiments, challenges and opportunities are explored in situations where individual contextual decisions are important. The initial design experiments focus primarily on opportunities in situations where neural networks perform on par with human cognitive abilities, and the later experiments explore challenges in situations where neural networks surpass human cognitive abilities. The second part focuses primarily on methods that aim to explain decisions proposed by the neural network.

    This work contributes to existing knowledge in three ways: (1) an exploration of learning related to neural networks, with the goal of presenting a terminology useful for contextual decision-making supported by ML systems; the proposed terminology includes generative concepts such as true-to-the-domain, concept, out-of-distribution and generalisation; (2) a number of design guidelines; (3) the identified need to align internal knowledge representations in neural networks with concepts, which could enable neural networks to produce explainable decisions. I also propose that a viable research strategy is to train neural networks starting from basic concepts, such as shapes and colours, so that the networks can generalise from these general concepts in different domains. The proposed research direction aims to produce more complex explanations from neural networks based on basic generalisable concepts.

    List of papers
    1. Evaluating Interpretability in Machine Teaching
    2020 (English). In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness: The PAAMS Collection / [ed] Springer, Springer, 2020, Vol. 1233, pp. 54-65. Conference paper, Published paper (Other academic)
    Abstract [en]

    Building interpretable machine learning agents is a challenge that needs to be addressed to make the agents trustworthy and align the usage of the technology with human values. In this work, we focus on how to evaluate interpretability in a machine teaching setting, a setting that involves a human domain expert as a teacher in relation to a machine learning agent. By using a prototype in a study, we discuss the interpretability definition and show how interpretability can be evaluated on a functional, human and application level. We end the paper by discussing open questions and suggestions on how our results can be transferable to other domains.

    Place, publisher, year, edition, pages
    Springer, 2020
    Series
    Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1233
    National subject category
    Human-computer interaction (interaction design)
    Research subject
    Interaction design
    Identifiers
    urn:nbn:se:mau:diva-18380 (URN), 10.1007/978-3-030-51999-5_5 (DOI), 2-s2.0-85088540310 (Scopus ID), 978-3-030-51998-8 (ISBN), 978-3-030-51999-5 (ISBN)
    Conference
    PAAMS: International Conference on Practical Applications of Agents and Multi-Agent Systems, 7-9 October 2020, L’Aquila, Italy
    Available from: 2020-09-23. Created: 2020-09-23. Last updated: 2023-07-06. Bibliographically reviewed.
    2. Contextual machine teaching
    2020 (English). In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2020. Conference paper, Published paper (Refereed)
    Abstract [en]

    Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert’s explanatory powers to the machine learning system.

    Place, publisher, year, edition, pages
    IEEE, 2020
    Keywords
    Machine learning, Machine Teaching, Human in the loop
    National subject category
    Computer systems
    Identifiers
    urn:nbn:se:mau:diva-17116 (URN), 10.1109/PerComWorkshops48775.2020.9156132 (DOI), 000612838200047 (), 2-s2.0-85091989967 (Scopus ID), 978-1-7281-4716-1 (ISBN), 978-1-7281-4717-8 (ISBN)
    Conference
    PerCom, Workshop on Context and Activity Modeling and Recognition (CoMoReA). March 23-27, 2020. Austin, Texas, USA.
    Available from: 2020-04-23. Created: 2020-04-23. Last updated: 2024-02-05. Bibliographically reviewed.
    3. The Role of Explanations in Human-Machine Learning
    2021 (English). In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021, pp. 1006-1013. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper, we study explanations in a setting where human capabilities are in parity with Machine Learning (ML) capabilities. If an ML system is to be trusted in this situation, limitations in the trained ML model’s abilities have to be exposed to the end-user. A majority of current approaches focus on the task of creating explanations for a proposed decision, but less attention is given to the equally important task of exposing limitations in the ML model’s capabilities, limitations that in turn affect the validity of created explanations. Using a small-scale design experiment we compare human explanations with explanations created by an ML system. This paper explores and presents how the structure and terminology of scientific explanations can expose limitations in the ML model’s knowledge and be used as an approach for research and design in the area of explainable artificial intelligence.

    Place, publisher, year, edition, pages
    IEEE, 2021
    Series
    Conference proceedings - IEEE International Conference on Systems, Man, and Cybernetics, ISSN 1062-922X, E-ISSN 2577-1655
    Keywords
    Training, Terminology, Conferences, Neural networks, Machine learning, Knowledge representation, Iterative methods
    National subject category
    Computer science
    Identifiers
    urn:nbn:se:mau:diva-50672 (URN), 10.1109/SMC52423.2021.9658610 (DOI), 000800532000156 (), 2-s2.0-85124332156 (Scopus ID), 978-1-6654-4207-7 (ISBN)
    Conference
    Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 2021
    Available from: 2022-03-17. Created: 2022-03-17. Last updated: 2024-03-04. Bibliographically reviewed.
    4. A Conceptual Approach to Explainable Neural Networks
    (English). Manuscript (preprint) (Other academic)
    Abstract [en]

    The success of neural networks largely builds on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extract and present these representations, in order to explain a neural network’s decision, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we performed a targeted literature review focusing on research that aims to associate internal representations with human-understandable concepts. By using deductive nomological explanations combined with causality theories as an analytical lens, we analyse nine carefully selected research papers. We find our analytical lens, the explanation structure and causality, useful for understanding what can be expected, and not expected, from explanations inferred from neural networks. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal: is it (a) understanding the ML model, (b) the training data or (c) actionable explanations that are true-to-the-domain?

    Keywords
    neural networks, causality, scientific explanations, explainable artificial intelligence
    National subject category
    Human-computer interaction (interaction design); Computer engineering
    Research subject
    Interaction design
    Identifiers
    urn:nbn:se:mau:diva-58464 (URN)
    Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2023-03-17. Bibliographically reviewed.
    5. More Sanity Checks for Saliency Maps
    2022 (English). In: ISMIS 2022: Foundations of Intelligent Systems / [ed] Michelangelo Ceci; Sergio Flesca; Elio Masciari; Giuseppe Manco; Zbigniew W. Raś, Springer, 2022, pp. 175-184. Conference paper, Published paper (Refereed)
    Abstract [en]

    Concepts are powerful human mental representations used to explain, reason and understand. In this work, we use theories on concepts as an analytical lens to compare internal knowledge representations in neural networks to human concepts. In two image classification studies we find an unclear alignment between the two but, more pronounced, we find the need to further develop explanation methods that incorporate concept ontologies.

    Place, publisher, year, edition, pages
    Springer, 2022
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13515
    Keywords
    Explainable AI, Understandable AI, Human-centric AI
    National subject category
    Human-computer interaction (interaction design)
    Identifiers
    urn:nbn:se:mau:diva-54924 (URN), 10.1007/978-3-031-16564-1_17 (DOI), 000886990100017 (), 2-s2.0-85140462679 (Scopus ID), 978-3-031-16564-1 (ISBN), 978-3-031-16563-4 (ISBN)
    Conference
    26th International Symposium on Methodologies for Intelligent Systems, ISMIS 2022, Cosenza, Italy, October 3–5, 2022
    Available from: 2022-09-14. Created: 2022-09-14. Last updated: 2023-12-15. Bibliographically reviewed.
    6. "When can i trust it?": contextualising explainability methods for classifiers
    2023 (English). In: CMLT '23: Proceedings of the 2023 8th International Conference on Machine Learning Technologies, ACM Digital Library, 2023, pp. 108-115. Conference paper, Published paper (Refereed)
    Place, publisher, year, edition, pages
    ACM Digital Library, 2023
    National subject category
    Computer science
    Identifiers
    urn:nbn:se:mau:diva-58441 (URN), 10.1145/3589883.3589899 (DOI), 001050779800016 (), 2-s2.0-85167805603 (Scopus ID), 9781450398329 (ISBN)
    Conference
    International Conference on Machine Learning Technologies (ICMLT), Stockholm, Sweden, March 10-12, 2023
    Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2023-09-12. Bibliographically reviewed.
    7. Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts
    2023 (English). In: Proceedings of Eighth International Congress on Information and Communication Technology / [ed] Yang, XS., Sherratt, R.S., Dey, N., Joshi, A., Springer, 2023, Vol. 1, pp. 155-171. Conference paper, Published paper (Refereed)
    Abstract [en]

    The currently dominating artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning processes. Being void of knowledge that can be used deductively, these systems cannot distinguish exemplars that are part of the target domain from those that are not. This ability is critical when the aim is to build human trust in real-world settings, and essential to avoid usage in domains where a system cannot be trusted. In the work presented here, we conduct two qualitative contextual user studies and one controlled experiment to uncover research paths and design openings for the sought distinction. Through our experiments, we find a need to refocus from average-case metrics and benchmarking datasets toward systems that can be falsified. The work uncovers and lays bare the need to incorporate and internalise a domain ontology in the systems and/or present evidence for a decision in a fashion that allows a human to use our unique knowledge and reasoning capability. Additional material and code to reproduce our experiments can be found at https://github.com/k3larra/ood.

    Place, publisher, year, edition, pages
    Springer, 2023
    Series
    Lecture Notes in Networks and Systems, ISSN 2367-3370, E-ISSN 2367-3389 ; 693
    Keywords
    Trustworthy Machine Learning, Explainable AI, Neural Networks, Concepts, Generalisation, Out of Distribution
    National subject category
    Computer science
    Identifiers
    urn:nbn:se:mau:diva-58465 (URN), 10.1007/978-981-99-3243-6_13 (DOI), 2-s2.0-85174720293 (Scopus ID), 978-981-99-3242-9 (ISBN), 978-981-99-3243-6 (ISBN)
    Conference
    International Congress on Information and Communication Technology (ICICT), London, 2023
    Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2024-02-05. Bibliographically reviewed.
    8. Deep Learning, generalisation and concepts
    (English). Manuscript (preprint) (Other academic)
    Abstract [en]

    Central to deep learning is an ability to generalise within a target domain consistent with human beliefs within the same domain. A label inferred by the neural network then maps to a human mental representation of a concept corresponding to that label. If an explanation of why a specific decision is promoted is to be given, it is important that we move from average-case performance metrics towards interpretable explanations that build on human-understandable concepts connected to the promoted label. In this work, we use Explainable Artificial Intelligence (XAI) methods to investigate if internal knowledge representations in trained neural networks are aligned and generalise in correspondence to human mental representations. Our findings indicate an epistemic misalignment between machine and human knowledge representations in neural networks. Consequently, if the goal is classifications explainable for end users, we can question the usefulness of neural networks trained without considering concept alignment.

    National subject category
    Computer engineering
    Identifiers
    urn:nbn:se:mau:diva-58467 (URN)
    Available from: 2023-03-01. Created: 2023-03-01. Last updated: 2023-03-17. Bibliographically reviewed.
  • 4.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    "When can i trust it?": contextualising explainability methods for classifiers2023Ingår i: CMLT '23: Proceedings of the 2023 8th International Conference on Machine Learning Technologies, ACM Digital Library, 2023, s. 108-115Konferensbidrag (Refereegranskat)
  • 5.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Davidsson, Paul
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Linde, Per
    Malmö universitet, Fakulteten för kultur och samhälle (KS), Collaborative Future Making (CFM). Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3). Malmö universitet, Internet of Things and People (IOTAP).
    Mapping Knowledge Representations to Concepts: A Review and New Perspectives (2022). In: Explainable Agency in Artificial Intelligence Workshop Proceedings, 2022, pp. 61-70. Conference paper (Refereed)
    Abstract [en]

    The success of neural networks builds to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extract and present these representations, in order to explain the neural network's decisions, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we have performed a targeted review focusing on research that aims to associate internal representations with human-understandable concepts. In doing this, we added a perspective on the existing research by using primarily deductive nomological explanations as a proposed taxonomy. We find this taxonomy, and theories of causality, useful for understanding what can be expected, and not expected, from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability: is it understanding the ML model, or is it actionable explanations useful in the deployment domain?

  • 6.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Helgstrand, Carl Johan
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Hultin, Niklas
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    More Sanity Checks for Saliency Maps (2022). In: ISMIS 2022: Foundations of Intelligent Systems / [ed] Michelangelo Ceci; Sergio Flesca; Elio Masciari; Giuseppe Manco; Zbigniew W. Raś, Springer, 2022, pp. 175-184. Conference paper (Refereed)
    Abstract [en]

    Concepts are powerful human mental representations used to explain, reason and understand. In this work, we use theories on concepts as an analytical lens to compare internal knowledge representations in neural networks to human concepts. In two image classification studies we find an unclear alignment between the two but, more pronounced, we find the need to further develop explanation methods that incorporate concept ontologies.
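    For orientation, the kind of explanation method examined here can be as simple as a vanilla gradient saliency map. The following is a minimal illustrative sketch in PyTorch, not the paper's experimental code:

        import torch

        def gradient_saliency(model: torch.nn.Module, image: torch.Tensor,
                              target: int) -> torch.Tensor:
            """Vanilla gradient saliency: |d score[target] / d pixel| for a
            single (C, H, W) image, collapsed over colour channels to an
            (H, W) map of pixels the class score is most sensitive to."""
            model.eval()
            x = image.clone().unsqueeze(0).requires_grad_(True)  # batch dim
            score = model(x)[0, target]   # scalar score for the target class
            score.backward()              # gradients accumulate in x.grad
            return x.grad[0].abs().max(dim=0).values

    Whether such a map lines up with any human concept is precisely what the paper's sanity checks probe.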

  • 7.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Human In Command Machine Learning (2021). Licentiate thesis, compilation (Other academic)
    Abstract [en]

    Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly and exciting unexplored design spaces are constantly laid bare. The focus in this work is one of these areas: ML systems where decisions concerning ML model training, usage and selection of target domain lie in the hands of domain experts.

    This work is thus about ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML). To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts potentially can independently train an ML model and, in an iterative fashion, interact with it and interpret and understand its decisions.

    HIC-ML should be seen as a governance principle that focuses on adding value and meaning for users. In this work, concrete application areas are presented and discussed. To open up the area for the design of ML-based products, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented, built by imposing structure and rigidity derived from scientific explanations. Together, this opens up a contextual shift in ML and makes new application areas probable, areas that naturally couple the usage of AI technology to human virtues and can potentially, as a consequence, result in a democratisation of the usage of, and knowledge concerning, this powerful technology.

    List of papers
    1. Evaluating Interpretability in Machine Teaching
    2020 (English). In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness: The PAAMS Collection / [ed] Springer, Springer, 2020, Vol. 1233, pp. 54-65. Conference paper, Published paper (Other academic)
    Abstract [en]

    Building interpretable machine learning agents is a challenge that needs to be addressed to make the agents trustworthy and align the usage of the technology with human values. In this work, we focus on how to evaluate interpretability in a machine teaching setting, a setting that involves a human domain expert as a teacher in relation to a machine learning agent. By using a prototype in a study, we discuss the interpretability definition and show how interpretability can be evaluated on a functional, human and application level. We end the paper by discussing open questions and suggestions on how our results can be transferable to other domains.

    Place, publisher, year, edition, pages
    Springer, 2020
    Series
    Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1233
    National subject category
    Human-computer interaction (interaction design)
    Research subject
    Interaction design
    Identifiers
    urn:nbn:se:mau:diva-18380 (URN), 10.1007/978-3-030-51999-5_5 (DOI), 2-s2.0-85088540310 (Scopus ID), 978-3-030-51998-8 (ISBN), 978-3-030-51999-5 (ISBN)
    Conference
    PAAMS: International Conference on Practical Applications of Agents and Multi-Agent Systems, 7-9 October 2020, L’Aquila, Italy
    Available from: 2020-09-23. Created: 2020-09-23. Last updated: 2023-07-06. Bibliographically reviewed.
    2. Contextual machine teaching
    2020 (English). In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2020. Conference paper, Published paper (Refereed)
    Abstract [en]

    Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert’s explanatory powers to the machine learning system.

    Place, publisher, year, edition, pages
    IEEE, 2020
    Keywords
    Machine learning, Machine Teaching, Human in the loop
    National subject category
    Computer systems
    Identifiers
    urn:nbn:se:mau:diva-17116 (URN), 10.1109/PerComWorkshops48775.2020.9156132 (DOI), 000612838200047 (), 2-s2.0-85091989967 (Scopus ID), 978-1-7281-4716-1 (ISBN), 978-1-7281-4717-8 (ISBN)
    Conference
    PerCom, Workshop on Context and Activity Modeling and Recognition (CoMoReA). March 23-27, 2020. Austin, Texas, USA.
    Available from: 2020-04-23. Created: 2020-04-23. Last updated: 2024-02-05. Bibliographically reviewed.
    3. The Role of Explanations in Human-Machine Learning
    2021 (English). In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021, pp. 1006-1013. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper, we study explanations in a setting where human capabilities are in parity with Machine Learning (ML) capabilities. If an ML system is to be trusted in this situation, limitations in the trained ML model’s abilities have to be exposed to the end-user. A majority of current approaches focus on the task of creating explanations for a proposed decision, but less attention is given to the equally important task of exposing limitations in the ML model’s capabilities, limitations that in turn affect the validity of created explanations. Using a small-scale design experiment we compare human explanations with explanations created by an ML system. This paper explores and presents how the structure and terminology of scientific explanations can expose limitations in the ML model’s knowledge and be used as an approach for research and design in the area of explainable artificial intelligence.

    Place, publisher, year, edition, pages
    IEEE, 2021
    Series
    Conference proceedings - IEEE International Conference on Systems, Man, and Cybernetics, ISSN 1062-922X, E-ISSN 2577-1655
    Keywords
    Training, Terminology, Conferences, Neural networks, Machine learning, Knowledge representation, Iterative methods
    National subject category
    Computer science
    Identifiers
    urn:nbn:se:mau:diva-50672 (URN), 10.1109/SMC52423.2021.9658610 (DOI), 000800532000156 (), 2-s2.0-85124332156 (Scopus ID), 978-1-6654-4207-7 (ISBN)
    Conference
    Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 2021
    Available from: 2022-03-17. Created: 2022-03-17. Last updated: 2024-03-04. Bibliographically reviewed.
  • 8.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Generalao, Stefan
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Hermansson, Adam
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    The Role of Explanations in Human-Machine Learning (2021). In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021, pp. 1006-1013. Conference paper (Refereed)
    Abstract [en]

    In this paper, we study explanations in a setting where human capabilities are in parity with Machine Learning (ML) capabilities. If an ML system is to be trusted in this situation, limitations in the trained ML model’s abilities have to be exposed to the end-user. A majority of current approaches focus on the task of creating explanations for a proposed decision, but less attention is given to the equally important task of exposing limitations in the ML model’s capabilities, limitations that in turn affect the validity of created explanations. Using a small-scale design experiment we compare human explanations with explanations created by an ML system. This paper explores and presents how the structure and terminology of scientific explanations can expose limitations in the ML model’s knowledge and be used as an approach for research and design in the area of explainable artificial intelligence.

  • 9.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Davidsson, Paul
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Linde, Per
    Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3). Malmö universitet, Internet of Things and People (IOTAP).
    A Feature Space Focus in Machine Teaching (2020). In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 2020. Conference paper (Refereed)
    Abstract [en]

    Contemporary Machine Learning (ML) often focuses on large existing and labeled datasets and on metrics around accuracy and performance. In pervasive online systems, conditions change constantly and there is a need for systems that can adapt. In Machine Teaching (MT), a human domain expert is responsible for the knowledge transfer and can thus address this. In my work, I focus on domain experts and the importance, for the ML system, of the available features and the space they span. This space confines the fragment of the physical world that is observable to the ML system. My investigation of the feature space is grounded in a conducted study and related theories. The result of this work is applicable when designing systems where domain experts have a key role as teachers.
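    To make the feature-space notion tangible: the features exposed to the ML system define everything it can "see". A small illustrative sketch; the fields are our assumptions about a commuting setting, not taken from the paper:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class CommuteContext:
            """The full feature space available to the ML system: whatever the
            physical world offers beyond these fields is invisible to it."""
            weekday: int   # 0-6
            hour: int      # 0-23
            activity: str  # e.g. "walking", "still"
            cell: str      # coarse location cell id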

  • 10.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Davidsson, Paul
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Olsson, Carl Magnus
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Linde, Per
    Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3). Malmö universitet, Internet of Things and People (IOTAP).
    Contextual machine teaching (2020). In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2020. Conference paper (Refereed)
    Abstract [en]

    Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert’s explanatory powers to the machine learning system.
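    Read concretely, machine teaching means the domain expert supplies and corrects labeled examples while the model updates incrementally. A minimal sketch using scikit-learn's incremental SGDClassifier; the contextual features (weekday, hour, location cell) and journey labels are our illustrative assumptions about the commuting setting, not taken from the paper:

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        JOURNEYS = np.array([0, 1, 2])  # ids of the commuter's known journeys
        model = SGDClassifier(loss="log_loss")

        def teach(weekday: int, hour: int, cell: int, journey: int) -> None:
            """One teaching step: the commuter confirms or corrects a
            prediction and the model updates incrementally."""
            x = np.array([[weekday, hour, cell]], dtype=float)
            model.partial_fit(x, [journey], classes=JOURNEYS)

        def suggest(weekday: int, hour: int, cell: int) -> int:
            """Predict the most likely journey for the current context."""
            x = np.array([[weekday, hour, cell]], dtype=float)
            return int(model.predict(x)[0])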

  • 11.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Davidsson, Paul
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Linde, Per
    Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3).
    Evaluating Interpretability in Machine Teaching (2020). In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness: The PAAMS Collection / [ed] Springer, Springer, 2020, Vol. 1233, pp. 54-65. Conference paper (Other academic)
    Abstract [en]

    Building interpretable machine learning agents is a challenge that needs to be addressed to make the agents trustworthy and align the usage of the technology with human values. In this work, we focus on how to evaluate interpretability in a machine teaching setting, a setting that involves a human domain expert as a teacher in relation to a machine learning agent. By using a prototype in a study, we discuss the interpretability definition and show how interpretability can be evaluated on a functional, human and application level. We end the paper by discussing open questions and suggestions on how our results can be transferable to other domains.

  • 12.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Human in Command Machine Learning – Poster version (2020). Conference paper (Other (popular science, debate, etc.))
  • 13.
    Ghajargar, Maliheh
    et al.
    Malmö universitet, Internet of Things and People (IOTAP). Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3).
    Persson, Jan A.
    Malmö universitet, Internet of Things and People (IOTAP). Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Bardzell, Jeffrey
    Pennsylvania State University.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Tegen, Agnes
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    The UX of Interactive Machine Learning (2020). In: NordiCHI 2020, 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, New York, USA: Association for Computing Machinery (ACM), 2020, Article No. 138. Conference paper (Refereed)
    Abstract [en]

    Machine Learning (ML) has been a prominent area of research within Artificial Intelligence (AI). ML uses mathematical models to recognize patterns in large and complex data sets to aid decision making in different application areas, such as image and speech recognition, consumer recommendations, fraud detection and more. ML systems typically go through a training period in which the system encounters and learns about the data; further, this training often requires some degree of human intervention. Interactive machine learning (IML) refers to ML applications that depend on continuous user interaction. From an HCI perspective, how humans interact with and experience ML models in training is the main focus of this workshop proposal. In this workshop we focus on the user experience (UX) of Interactive Machine Learning, a topic with implications not only for usability but also for the long-term success of the IML systems themselves.

  • 14.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Interactive Machine Learning for Commuters: Achieving Personalised Travel Planners through Machine Teaching (2019). Conference paper (Other (popular science, debate, etc.))
    Abstract [en]

    Mobile apps are an increasingly important part of public transport, and can be seen as part of the journey experience. Personalisation of the app is one aspect of that experience; it can, for example, give travellers the possibility to save favourite journeys for easy access. Such a list of journeys can become extensive and inaccurate if it doesn’t consider the traveller’s context. Making an app context-aware so that it presents upcoming journeys moves the app experience in a personal direction, especially for commuters. By using historical personal contextual data, a travel app can present probable journeys or accurately predict and present an upcoming journey with departure times. The predictions can take place when the app is started, or be used to remind a commuter when it is time to leave in order to catch a regularly travelled bus or train.

    To address this research opportunity we have created an individually trained Machine Learning (ML) agent that we added to a publicly available commuter app. The added part of the app uses weekday, time, user activity and location to predict a user’s upcoming journey. Predictions are made when the app starts, and departure times for the most probable transport are presented to the traveller. In our case a commuter only makes a few journey searches in the app every day, which implies that, based on our contextual parameters, it will take at least some weeks to create journey patterns that can give acceptable prediction accuracy. In the work we present here, we focus on how to handle this cold start problem, i.e. the situation when no, or only inaccurate, historical data is available for the Machine Learning agent to train from. These situations occur both initially, when no data exists, and due to concept drift originating from changes in travel patterns. In these situations, no predictions, or only inaccurate predictions, of upcoming journeys can be made.

    We present experiences and evaluate results gathered when designing the interactions needed for the MT session, as well as design decisions for the ML pipeline and the ML agent. The user’s interaction with the ML agent during the teaching session is a crucial factor for success. During the teaching session, information on what the agent has already learnt has to be presented to the user, as well as possibilities to unlearn obsolete commute patterns and to teach new ones. We present a baseline that shows an idealised situation and the amount of training data that the user needs to add in an MT session to reach acceptable prediction accuracy. Our main contribution is user-evaluated design proposals for the MT session.

    Using individually trained ML agents opens up opportunities to protect personal data, and this approach can be used to create mobile applications that are independent of local transport providers and can thus act on open data on a global scale.
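    A minimal sketch of the prediction-with-cold-start behaviour described above, assuming the stated contextual parameters (weekday, time, location); the names and the evidence threshold are illustrative:

        from collections import Counter, defaultdict
        from datetime import datetime

        # Journey history keyed by context: (weekday, hour, location).
        history: defaultdict = defaultdict(Counter)

        def record_journey(ts: datetime, location: str, journey: str) -> None:
            """Store one observed journey search under its context key."""
            history[(ts.weekday(), ts.hour, location)][journey] += 1

        def predict_journey(ts: datetime, location: str, min_evidence: int = 3):
            """Return the most frequent journey for this context, or None
            during cold start -- no (or too little) history, the situation
            the machine teaching session is designed to repair."""
            counts = history[(ts.weekday(), ts.hour, location)]
            if sum(counts.values()) < min_evidence:
                return None  # cold start / concept drift: defer to the user
            return counts.most_common(1)[0][0]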

  • 15.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Human-Technology relations in a machine learning based commuter app (2018). In: Workshop on Interactive Adaptive Learning (IAL@ECML PKDD), 2018, pp. 73-76. Conference paper (Other academic)
  • 16.
    Holmberg, Lars
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    A Conceptual Approach to Explainable Neural Networks. Manuscript (preprint) (Other academic)
    Abstract [en]

    The success of neural networks largely builds on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extract and present these representations, in order to explain a neural network’s decision, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we performed a targeted literature review focusing on research that aims to associate internal representations with human-understandable concepts. By using deductive nomological explanations combined with causality theories as an analytical lens, we analyse nine carefully selected research papers. We find our analytical lens, the explanation structure and causality, useful for understanding what can be expected, and not expected, from explanations inferred from neural networks. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal: is it (a) understanding the ML model, (b) the training data or (c) actionable explanations that are true-to-the-domain?

  • 17.
    Holmberg, Lars
    et al.
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT). Malmö universitet, Internet of Things and People (IOTAP).
    Alvarez, Alberto
    Malmö universitet, Fakulteten för teknik och samhälle (TS), Institutionen för datavetenskap och medieteknik (DVMT).
    Deep Learning, generalisation and concepts. Manuscript (preprint) (Other academic)
    Abstract [en]

    Central to deep learning is an ability to generalise within a target domain consistent with human beliefs within the same domain. A label inferred by the neural network then maps to a human mental representation of a concept corresponding to that label. If an explanation of why a specific decision is promoted is to be given, it is important that we move from average-case performance metrics towards interpretable explanations that build on human-understandable concepts connected to the promoted label. In this work, we use Explainable Artificial Intelligence (XAI) methods to investigate if internal knowledge representations in trained neural networks are aligned and generalise in correspondence to human mental representations. Our findings indicate an epistemic misalignment between machine and human knowledge representations in neural networks. Consequently, if the goal is classifications explainable for end users, we can question the usefulness of neural networks trained without considering concept alignment.
