Publications from Malmö University
Tangible XAI
Malmö universitet, Internet of Things and People (IOTAP). Malmö universitet, Fakulteten för kultur och samhälle (KS), Institutionen för konst, kultur och kommunikation (K3). ORCID iD: 0000-0003-1852-3937
Indiana University Bloomington, USA. ORCID iD: 0000-0002-6864-6065
Dataminr, USA.
Royal Institute of Technology (KTH).
2022 (English). Other (popular science, debate, etc.)
Abstract [en]

Computational systems are becoming increasingly smart and automated. Artificial intelligence (AI) systems perceive things in the world, produce content, make decisions for and about us, and serve as emotional companions. From music recommendations to higher-stakes scenarios such as policy decisions, drone-based warfare, and automated driving directions, automated systems affect us all.

But researchers and other experts are asking: How well do we understand this alien intelligence? If even AI developers don’t fully understand how their own neural networks make decisions, what chance does the public have to understand AI outcomes? For example, AI systems decide whether a person should get a loan; so what should, and what can, that person understand about how the decision was made? And if we can’t understand it, how can any of us trust AI?

The emerging area of explainable AI (XAI) addresses these issues by helping to disclose how an AI system arrives at its outcomes. But the nature of the disclosure depends in part on the audience, or who needs to understand the AI. A car, for example, can send warnings to consumers (“Tire Pressure Low”) and also send highly technical diagnostic codes that only trained mechanics can understand. Explanation modality is also important to consider. Some people might prefer spoken explanations over visual ones. Physical forms afford natural interaction with some smart systems, like vehicles and vacuums, but whether tangible interaction can support AI explanation has not yet been explored.

In the summer of 2020, a group of multidisciplinary researchers collaborated on a studio proposal for the 2021 ACM Tangible, Embedded, and Embodied Interaction (TEI) conference. The basic idea was to link conversations about tangible and embodied interaction and product semantics to XAI. Here, we first describe the background and motivation for the workshop and then report on its outcomes and offer some discussion points.

Place, publisher, year, pages
New York, USA: Association for Computing Machinery (ACM), 2022.
Keywords [en]
Explainable AI, Tangible Embodied Interaction, Human-Centred AI
HSV category
Research subject
Interaction Design
Identifiers
URN: urn:nbn:se:mau:diva-50374, OAI: oai:DiVA.org:mau-50374, DiVA id: diva2:1641101
Note

TANGIBLE XAI Blogs Posted: Tue, February 15, 2022 

Available from: 2022-02-28. Created: 2022-02-28. Last updated: 2022-12-13. Bibliographically checked.

Open Access in DiVA

Full text is not available in DiVA

Other links

Fulltext

Person

Ghajargar, Maliheh

Search in DiVA

By author/editor
Ghajargar, Maliheh; Bardzell, Jeffrey