Malmö University Publications
Publications (10 of 17)
Holmberg, L. (2023). Ageing and sexing birds. Paper presented at International Forum for Computer Vision in Ecology and Evolutionary Biology, Lund University, 18-20 September, 2023.
Ageing and sexing birds
2023 (English) Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Ageing and sexing birds require specialist knowledge and training concerning which characteristics to focus on for different species. An expert can formulate an explanation for a classification using these characteristics and, additionally, identify anomalies. Some characteristics require practical training, for example, the difference between moulted and non-moulted feathers, while some knowledge, like feather taxonomy and moulting patterns, can be learned without extensive practical training. An explanation formulated for a classification by a human stands in sharp contrast to an explanation produced by a trained neural network. These machine explanations are more an answer to a how-question, related to the inner workings of the neural network, than an answer to a why-question presenting domain-related characteristics useful for a domain expert. For machine-created explanations to be trustworthy, neural networks require a static use context and representative, independent and identically distributed training data. These prerequisites seldom hold in real-world settings. Related challenges include neural networks' inability to identify exemplars outside the training distribution and the difficulty of aligning internal knowledge creation with the characteristics used in the target domain. These types of questions are central in the active research field of explainable artificial intelligence (XAI), but there is a lack of hands-on experiments involving domain experts. This work aims to address the above issues with the goal of producing a prototype through which domain experts can train a tool that builds on human expert knowledge in order to produce useful explanations. By using internalised domain expertise, we aim at a tool that can produce useful explanations and even new insights for the domain.
By working together with domain experts from Ottenby Observatory, our goal is to address central XAI challenges and, at the same time, add new perspectives useful for determining the age and sex of birds.

Keywords
Birds, Explainable Artificial Intelligence, Neural Networks
National Category
Biological Sciences; Human Computer Interaction; Computer Engineering
Research subject
Interaction Design
Identifiers
urn:nbn:se:mau:diva-65068 (URN)
Conference
International Forum for Computer Vision in Ecology and Evolutionary Biology, Lund University, 18-20 September, 2023
Funder
The Crafoord Foundation, 20220631
Available from: 2024-01-17 Created: 2024-01-17 Last updated: 2024-01-19. Bibliographically approved
Holmberg, L. (2023). Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts. In: Yang, X.S., Sherratt, R.S., Dey, N., Joshi, A. (Eds.), Proceedings of Eighth International Congress on Information and Communication Technology. Paper presented at International Congress on Information and Communication Technology (ICICT), London, 2023 (pp. 155-171). Springer, Vol. 1
Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts
2023 (English) In: Proceedings of Eighth International Congress on Information and Communication Technology / [ed] Yang, X.S., Sherratt, R.S., Dey, N., Joshi, A., Springer, 2023, Vol. 1, p. 155-171. Conference paper, Published paper (Refereed)
Abstract [en]

The currently dominating artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning processes. Because these systems lack knowledge that can be used deductively, they cannot distinguish exemplars that are part of the target domain from those that are not. This ability is critical when the aim is to build human trust in real-world settings, and essential to avoid usage in domains where a system cannot be trusted. In the work presented here, we conduct two qualitative contextual user studies and one controlled experiment to uncover research paths and design openings for the sought distinction. Through our experiments, we find a need to refocus from average-case metrics and benchmarking datasets toward systems that can be falsified. The work lays bare the need to incorporate and internalise a domain ontology in the systems and/or present evidence for a decision in a fashion that allows humans to use their unique knowledge and reasoning capability. Additional material and code to reproduce our experiments can be found at https://github.com/k3larra/ood.
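The inability to single out exemplars outside the target domain can be illustrated with a common confidence-based baseline: flag an input as possibly out-of-distribution when the maximum softmax probability is low. This is a minimal sketch of that widely used heuristic, not the method of the paper; the function names and the threshold value are assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_out_of_distribution(logits, threshold=0.7):
    """Flag inputs whose maximum softmax probability falls below a
    confidence threshold as possibly out-of-distribution."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold

# A peaked prediction passes; a near-uniform, uncertain one is flagged.
in_dist = np.array([[6.0, 0.5, 0.2]])   # confident logits
ood = np.array([[1.1, 1.0, 0.9]])       # near-uniform logits
print(flag_out_of_distribution(in_dist))  # [False]
print(flag_out_of_distribution(ood))      # [ True]
```

Note that this baseline only inspects the network's own confidence; as the abstract argues, such purely inductive signals cannot by themselves establish domain membership.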

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370, E-ISSN 2367-3389 ; 693
Keywords
Trustworthy Machine Learning, Explainable AI, Neural Networks, Concepts, Generalisation, Out of Distribution
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-58465 (URN)
10.1007/978-981-99-3243-6_13 (DOI)
2-s2.0-85174720293 (Scopus ID)
978-981-99-3242-9 (ISBN)
978-981-99-3243-6 (ISBN)
Conference
International Congress on Information and Communication Technology (ICICT), London, 2023
Available from: 2023-03-01 Created: 2023-03-01 Last updated: 2024-02-05. Bibliographically approved
Holmberg, L. (2023). Neural networks in context: challenges and opportunities: a critical inquiry into prerequisites for user trust in decisions promoted by neural networks. (Doctoral dissertation). Malmö: Malmö University Press
Neural networks in context: challenges and opportunities: a critical inquiry into prerequisites for user trust in decisions promoted by neural networks
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Artificial intelligence and machine learning (ML) in particular increasingly impact human life by creating value from collected data. This assetisation affects all aspects of human life, from choosing a significant other to recommending a product for us to consume. This type of ML-based system thrives because it predicts human behaviour based on average case performance metrics (like accuracy). However, its usefulness is more limited when it comes to being transparent about its internal knowledge representations for singular decisions; for example, it is not good at explaining why it has suggested a particular decision in a specific context. The goal of this work is to let end users be in command of how ML systems are used and thereby combine the strengths of humans and machines – machines which can propose transparent decisions. Artificial neural networks are an interesting candidate for a setting of this type, given that this technology has been successful in building knowledge representations from raw data. A neural network can be trained by exposing it to data from the target domain. It can then internalise knowledge representations from the domain and perform contextual tasks. In these situations, the fragment of the actual world internalised in an ML system has to be contextualised by a human to be useful and trustworthy in non-static settings. This setting is explored through the overarching research question: What challenges and opportunities can emerge when an end user uses neural networks in context to support singular decision-making? To address this question, Research through Design is used as the central methodology, as this research approach matches the openness of the research question. Through six design experiments, I explore and expand on challenges and opportunities in settings where singular contextual decisions matter. The initial design experiments focus on opportunities in settings that augment human cognitive abilities.
Thereafter, the experiments explore challenges related to settings where neural networks can enhance human cognitive abilities. This part concerns approaches intended to explain promoted decisions. This work contributes in three ways: 1) exploring learning related to neural networks in context to put forward a core terminology for contextual decision-making using ML systems, wherein the terminology includes the generative notions of true-to-the-domain, concept, out-of-distribution and generalisation; 2) presenting a number of design guidelines; and 3) showing the need to align internal knowledge representations with concepts if neural networks are to produce explainable decisions. I also argue that training neural networks to generalise basic concepts like shapes and colours, concepts easily understandable by humans, is a path forward. This research direction leads towards neural network-based systems that can produce more complex explanations that build on basic generalisable concepts.

Abstract [sv]

Artificial intelligence, and machine learning (ML) in particular, greatly affects human life through the ability to create monetary value from data. This productisation of collected data affects our lives in many ways, from the choice of a partner to the recommendation of the next product to consume. ML-based systems work well in this role because they can predict human behaviour based on average performance metrics, but their usefulness is more limited in situations where transparency about the knowledge representations behind an individual decision is important.

The goal of this work is to combine the strengths of humans and machines through a clear power relation in which an end user is in command. This power relation builds on the use of ML systems that are transparent about the underlying reasons for a proposed decision. Artificial neural networks are an interesting choice of ML technology for this task since they can build internal knowledge representations from raw data and can therefore be trained without specialised ML knowledge. This means that a neural network can be trained by being exposed to data from a target domain, internalising relevant knowledge representations in the process. The network can then present contextual decision proposals based on these representations. In non-static situations, the fragment of the real world internalised in the ML system needs to be contextualised by a human for the system to be useful and trustworthy.

This work explores the area described above through an overarching research question: What challenges and opportunities can emerge when an end user uses neural networks to support singular decisions in a well-defined context?

To answer the research question above, the methodology Research through Design is used, since this methodology matches the openness of the research question. Through six design experiments, challenges and opportunities are explored in situations where singular contextual decisions are important. The initial design experiments focus mainly on opportunities in situations where neural networks perform on a par with human cognitive abilities, and the later experiments explore challenges in situations where neural networks surpass human cognitive abilities. The second part focuses mainly on methods that aim to explain decisions proposed by the neural network.

This work contributes to existing knowledge in three ways: (1) an exploration of learning related to neural networks, with the goal of presenting a terminology useful for contextual decision-making supported by ML systems, where the terminology includes generative notions such as true-to-the-domain, concept, out-of-distribution and generalisation; (2) a number of design guidelines; and (3) the need to align internal knowledge representations in neural networks with concepts, which could enable neural networks to produce explainable decisions. I also propose that a viable research strategy is to train neural networks starting from basic concepts, such as shapes and colours, so that the networks can generalise from these general concepts across domains. The proposed research direction aims at producing more complex explanations from neural networks based on basic generalisable concepts.

Place, publisher, year, edition, pages
Malmö: Malmö University Press, 2023. p. 70
Series
Studies in Computer Science ; 22
Keywords
Explainable AI, Machine Learning, Neural Network, Concept, Generalisation, Out-of-Distribution, Förklaringsbar AI, Maskininlärning, Neurala Nätverk, Koncept, Generalisering, Utanför-distributionen
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-58450 (URN)
10.24834/isbn.9789178773503 (DOI)
978-91-7877-351-0 (ISBN)
978-91-7877-350-3 (ISBN)
Public defence
2023-04-13, Orkanen, D138 or livestream, Nordenskiöldsgatan 10, Malmö, 14:00 (English)
Opponent
Supervisors
Note

Papers IV and VIII are included in the dissertation as manuscripts

Available from: 2023-03-17 Created: 2023-03-17 Last updated: 2024-02-29. Bibliographically approved
Holmberg, L. (2023). "When can i trust it?": contextualising explainability methods for classifiers. In: CMLT '23: Proceedings of the 2023 8th International Conference on Machine Learning Technologies. Paper presented at International Conference on Machine Learning Technologies (ICMLT), Stockholm, Sweden, March 10-12, 2023 (pp. 108-115). ACM Digital Library
"When can i trust it?": contextualising explainability methods for classifiers
2023 (English) In: CMLT '23: Proceedings of the 2023 8th International Conference on Machine Learning Technologies, ACM Digital Library, 2023, p. 108-115. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
ACM Digital Library, 2023
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-58441 (URN)
10.1145/3589883.3589899 (DOI)
001050779800016 ()
2-s2.0-85167805603 (Scopus ID)
9781450398329 (ISBN)
Conference
International Conference on Machine Learning Technologies (ICMLT), Stockholm, Sweden, March 10-12, 2023
Available from: 2023-03-01 Created: 2023-03-01 Last updated: 2023-09-12. Bibliographically approved
Holmberg, L., Davidsson, P. & Linde, P. (2022). Mapping Knowledge Representations to Concepts: A Review and New Perspectives. In: Explainable Agency in Artificial Intelligence Workshop Proceedings. Paper presented at 36th AAAI Conference on Artificial Intelligence, February 28-March 1, 2022, Vancouver, Canada (pp. 61-70).
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
2022 (English) In: Explainable Agency in Artificial Intelligence Workshop Proceedings, 2022, p. 61-70. Conference paper, Published paper (Refereed)
Abstract [en]

The success of neural networks builds to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extracting and presenting these representations, in order to explain the neural network's decisions, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we have performed a targeted review focusing on research that aims to associate internal representations with human-understandable concepts. In doing this, we added a perspective on the existing research by using primarily deductive-nomological explanations as a proposed taxonomy. We find this taxonomy, together with theories of causality, useful for understanding what can and cannot be expected from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability: is it understanding the ML model, or is it actionable explanations useful in the deployment domain?

National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-64797 (URN)
10.48550/arXiv.2301.00189 (DOI)
Conference
36th AAAI Conference on Artificial Intelligence, February 28-March 1, 2022, Vancouver, Canada
Available from: 2023-12-29 Created: 2023-12-29 Last updated: 2023-12-29. Bibliographically approved
Holmberg, L., Helgstrand, C. J. & Hultin, N. (2022). More Sanity Checks for Saliency Maps. In: Michelangelo Ceci; Sergio Flesca; Elio Masciari; Giuseppe Manco; Zbigniew W. Raś (Eds.), ISMIS 2022: Foundations of Intelligent Systems. Paper presented at 26th International Symposium on Methodologies for Intelligent Systems, ISMIS 2022, Cosenza, Italy, October 3–5, 2022 (pp. 175-184). Springer
More Sanity Checks for Saliency Maps
2022 (English) In: ISMIS 2022: Foundations of Intelligent Systems / [ed] Michelangelo Ceci; Sergio Flesca; Elio Masciari; Giuseppe Manco; Zbigniew W. Raś, Springer, 2022, p. 175-184. Conference paper, Published paper (Refereed)
Abstract [en]

Concepts are powerful human mental representations used to explain, reason and understand. In this work, we use theories on concepts as an analytical lens to compare internal knowledge representations in neural networks to human concepts. In two image classification studies, we find an unclear alignment between the two but, more pronouncedly, a need to further develop explanation methods that incorporate concept ontologies.
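Sanity checks for saliency maps are often built around parameter randomisation: an attribution method whose output does not change when the model's weights are randomised cannot be explaining the learned model. The sketch below illustrates that idea for a plain linear classifier using gradient saliency; the toy model, weights and the rank-correlation measure are illustrative assumptions, not the experiments reported in the paper.

```python
import numpy as np

def gradient_saliency(w, target):
    # For a linear model with scores w @ x, the gradient of the
    # target-class score with respect to the input x is w[target];
    # its magnitude serves as a simple saliency map.
    return np.abs(w[target])

def rank_correlation(a, b):
    # Spearman rank correlation, computed as Pearson correlation
    # of the ranks of the two saliency maps.
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
trained_w = np.array([[3.0, 0.1, 0.0, 2.0],   # class 0 weights
                      [0.0, 2.5, 1.0, 0.1]])  # class 1 weights
random_w = rng.normal(size=trained_w.shape)   # randomised parameters

s_trained = gradient_saliency(trained_w, target=0)
s_random = gradient_saliency(random_w, target=0)

# A faithful attribution method should depend on the learned
# parameters, so randomising the weights should change the map.
print(rank_correlation(s_trained, s_random))
```

If a method produced (nearly) identical maps for both weight sets, it would fail this sanity check regardless of how visually plausible its maps look.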

Place, publisher, year, edition, pages
Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13515
Keywords
Explainable AI, Understandable AI, Human-centric AI
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:mau:diva-54924 (URN)
10.1007/978-3-031-16564-1_17 (DOI)
000886990100017 ()
2-s2.0-85140462679 (Scopus ID)
978-3-031-16564-1 (ISBN)
978-3-031-16563-4 (ISBN)
Conference
26th International Symposium on Methodologies for Intelligent Systems, ISMIS 2022, Cosenza, Italy, October 3–5, 2022
Available from: 2022-09-14 Created: 2022-09-14 Last updated: 2023-12-15. Bibliographically approved
Holmberg, L. (2021). Human In Command Machine Learning. (Licentiate dissertation). Malmö: Malmö universitet
Human In Command Machine Learning
2021 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly, and exciting unexplored design spaces are constantly laid bare. The focus in this work is one of these areas: ML systems where decisions concerning ML model training, usage and selection of the target domain lie in the hands of domain experts.

This work thus concerns ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML) systems. To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts potentially can independently train an ML model and, in an iterative fashion, interact with it and interpret and understand its decisions.

HIC-ML should be seen as a governance principle that focuses on adding value and meaning to users. In this work, concrete application areas are presented and discussed. To open up for designing ML-based products in the area, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented by imposing structure and rigidity derived from scientific explanations. Together, this opens up for a contextual shift in ML and makes new application areas probable, areas that naturally couple the usage of AI technology to human virtues and can potentially, as a consequence, result in a democratisation of the usage of, and knowledge concerning, this powerful technology.

Place, publisher, year, edition, pages
Malmö: Malmö universitet, 2021. p. 136
Series
Studies in Computer Science ; 16
Keywords
Human-centered AI/ML, Explainable AI, Machine Learning, Human In the Loop ML
National Category
Human Computer Interaction; Computer Engineering
Research subject
Interaction Design
Identifiers
urn:nbn:se:mau:diva-42576 (URN)
10.24834/isbn.9789178771875 (DOI)
978-91-7877-186-8 (ISBN)
978-91-7877-187-5 (ISBN)
Presentation
2021-06-17, 13:00 (English)
Supervisors
Available from: 2021-06-03 Created: 2021-06-02 Last updated: 2023-07-06. Bibliographically approved
Holmberg, L., Generalao, S. & Hermansson, A. (2021). The Role of Explanations in Human-Machine Learning. In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Paper presented at IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 2021 (pp. 1006-1013). IEEE
The Role of Explanations in Human-Machine Learning
2021 (English) In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021, p. 1006-1013. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we study explanations in a setting where human capabilities are in parity with Machine Learning (ML) capabilities. If an ML system is to be trusted in this situation, limitations in the trained ML model's abilities have to be exposed to the end user. A majority of current approaches focus on the task of creating explanations for a proposed decision, but less attention is given to the equally important task of exposing limitations in the ML model's capabilities, limitations that in turn affect the validity of created explanations. Using a small-scale design experiment, we compare human explanations with explanations created by an ML system. This paper explores and presents how the structure and terminology of scientific explanations can expose limitations in the ML model's knowledge and be used as an approach for research and design in the area of explainable artificial intelligence.

Place, publisher, year, edition, pages
IEEE, 2021
Series
Conference proceedings - IEEE International Conference on Systems, Man, and Cybernetics, ISSN 1062-922X, E-ISSN 2577-1655
Keywords
Training, Terminology, Conferences, Neural networks, Machine learning, Knowledge representation, Iterative methods
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-50672 (URN)
10.1109/SMC52423.2021.9658610 (DOI)
000800532000156 ()
2-s2.0-85124332156 (Scopus ID)
978-1-6654-4207-7 (ISBN)
Conference
Systems, Man, and Cybernetics (SMC), Melbourne, Australia 2021
Available from: 2022-03-17 Created: 2022-03-17 Last updated: 2024-02-05. Bibliographically approved
Holmberg, L., Davidsson, P. & Linde, P. (2020). A Feature Space Focus in Machine Teaching. In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). Paper presented at PerCom 2020 PhD forum, March 23-27, 2020, Austin, Texas, USA.
A Feature Space Focus in Machine Teaching
2020 (English) In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Contemporary Machine Learning (ML) often focuses on large existing and labeled datasets and on metrics around accuracy and performance. In pervasive online systems, conditions change constantly and there is a need for systems that can adapt. In Machine Teaching (MT), a human domain expert is responsible for the knowledge transfer and can thus address this. In my work, I focus on domain experts and on the importance of the features available to the ML system and the space they span. This space confines the fragment of the physical world observable to the ML system. My investigation of the feature space is grounded in a conducted study and related theories. The result of this work is applicable when designing systems where domain experts have a key role as teachers.
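The point that the feature space confines what the ML system can observe can be made concrete with a toy sketch. The scenario and feature names below are illustrative assumptions, not taken from the conducted study: two situations that differ in the real world but not in the exposed features are indistinguishable to any model trained on them.

```python
# The model observes only the features the designer exposes, here
# (weekday, hour). An ordinary Tuesday and a public holiday that
# falls on a Tuesday collapse to the same point in feature space,
# so no ML model trained on these features can tell them apart.
def featurise(weekday: int, hour: int) -> tuple:
    return (weekday, hour)

ordinary_tuesday = featurise(weekday=1, hour=8)
holiday_tuesday = featurise(weekday=1, hour=8)  # the holiday is not encoded

print(ordinary_tuesday == holiday_tuesday)  # True
```

A domain expert acting as teacher is exactly the person who can spot such gaps and argue for extending the feature space (here, with a holiday flag).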

Keywords
Machine learning, Machine Teaching, Human in the loop
National Category
Computer Systems
Identifiers
urn:nbn:se:mau:diva-17165 (URN)
10.1109/PerComWorkshops48775.2020.9156175 (DOI)
000612838200082 ()
2-s2.0-85091981537 (Scopus ID)
978-1-7281-4716-1 (ISBN)
978-1-7281-4717-8 (ISBN)
Conference
PerCom 2020 PhD forum. March 23-27, 2020. Austin, Texas, USA.
Available from: 2020-05-05 Created: 2020-05-05 Last updated: 2024-02-05. Bibliographically approved
Holmberg, L., Davidsson, P., Olsson, C. M. & Linde, P. (2020). Contextual machine teaching. In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). Paper presented at PerCom, Workshop on Context and Activity Modeling and Recognition (CoMoReA), March 23-27, 2020, Austin, Texas, USA. IEEE
Contextual machine teaching
2020 (English) In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert's explanatory powers to the machine learning system.

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Machine learning, Machine Teaching, Human in the loop
National Category
Computer Systems
Identifiers
urn:nbn:se:mau:diva-17116 (URN)
10.1109/PerComWorkshops48775.2020.9156132 (DOI)
000612838200047 ()
2-s2.0-85091989967 (Scopus ID)
978-1-7281-4716-1 (ISBN)
978-1-7281-4717-8 (ISBN)
Conference
PerCom, Workshop on Context and Activity Modeling and Recognition (CoMoReA). March 23-27, 2020. Austin, Texas, USA.
Available from: 2020-04-23 Created: 2020-04-23 Last updated: 2024-02-05. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-5676-1931
