Malmö University Publications
Towards Machine Learning Explainability in Text Classification for Fake News Detection
Malmö University, Faculty of Technology and Society (TS), Department of Computer Science and Media Technology (DVMT).
2020 (English). In: 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

The digital media landscape has in recent years been exposed to an increasing number of deliberately misleading news stories and disinformation campaigns, a phenomenon popularly referred to as fake news. In an effort to combat the dissemination of fake news, designing machine learning models that can classify text as fake or not has become an active line of research. While new models are continuously being developed, the focus so far has mainly been on improving the accuracy of the models for given datasets. Hence, little research has been done on the explainability of the deep learning (DL) models constructed for the task of fake news detection.

In order to add a level of explainability, several aspects have to be taken into consideration. For instance, the pre-processing phase, as well as the length and complexity of the text, play an important role in achieving a successful classification. These aspects need to be considered in conjunction with the model's architecture. All of these issues are addressed and analyzed in this paper. Visualizations are further employed to gain a better understanding of how different models distribute their attention when classifying fake news texts. In addition, statistical data is gathered to deepen the analysis and to provide insights with respect to the model's interpretability.
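The attention-based explainability the abstract describes can be illustrated with a minimal sketch. The token embeddings, the query vector, and the scoring function below are hypothetical placeholders, not the paper's actual models or data; they only show the general mechanism of inspecting how attention weights distribute over input tokens:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """Dot-product attention: score each token key against the query,
    then normalize the scores into a probability distribution."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

if __name__ == "__main__":
    # Hypothetical 2-d embeddings for four tokens of a headline.
    tokens = ["breaking", "shocking", "report", "confirms"]
    keys = [[0.9, 0.1], [1.2, 0.0], [0.2, 0.5], [0.1, 0.4]]
    query = [1.0, 0.0]

    # The resulting weights can be visualized (e.g. as a heatmap over the
    # text) to see which tokens the classifier attends to most.
    for tok, w in zip(tokens, attention_weights(query, keys)):
        print(f"{tok:>10s}  {w:.3f}")
```

In practice the visualizations would be drawn from the attention layers of the trained DL models themselves; this sketch only mirrors the underlying computation.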

Place, publisher, year, edition, pages
IEEE, 2020.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mau:diva-51518
DOI: 10.1109/icmla51294.2020.00127
Scopus ID: 2-s2.0-85102496989
ISBN: 978-1-7281-8470-8 (electronic)
ISBN: 978-1-7281-8471-5 (print)
OAI: oai:DiVA.org:mau-51518
DiVA, id: diva2:1658984
Conference
2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14-17 Dec. 2020
Available from: 2022-05-18. Created: 2022-05-18. Last updated: 2024-02-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Mihailescu, Radu-Casian
