Malmö University Publications
Publications (10 of 22)
Asghari Varzaneh, Z., Wölner-Hanssen, N. & Khoshkangini, R. (2025). A Lightweight Transformer Approach for Predicting Blastocyst Formation on Limited Embryo Images. In: International Conference on Visual Communications and Image Processing, Klagenfurt, Austria, Dec 1-4, 2025. Paper presented at International Conference on Visual Communications and Image Processing.
2025 (English). In: International Conference on Visual Communications and Image Processing, Klagenfurt, Austria, Dec 1-4, 2025, 2025. Conference paper, Published paper (Other academic)
Abstract [en]

In vitro fertilisation (IVF) is a widely adopted assisted reproductive technology that facilitates embryo development in cases of infertility. A critical step in this process is identifying which embryos successfully develop into blastocysts. This step normally relies on continuous video monitoring, which is time-consuming and expensive. In this paper, we present a deep learning method that predicts blastocyst formation using just one image per day from each embryo rather than full videos. Our method uses DINOv2 to extract detailed features from embryo images and a lightweight Video Vision Transformer (ViViT) variant to analyze and classify the image sequences. Our experiments show that the proposed model can predict blastocyst and non-blastocyst formation with 95.7% accuracy. The proposed approach minimizes the reliance on costly continuous video monitoring, indicating that more affordable equipment could be used without compromising precision. This improves both the practicality and cost-efficiency of implementation in IVF clinics, supporting embryologists in making more confident embryo selection decisions.
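The approach the abstract describes (per-day images embedded by DINOv2, then a lightweight transformer classifying the short sequence) can be illustrated with a minimal numpy sketch. All shapes, weights, and the single attention layer here are invented for illustration; the paper's actual model is a ViViT variant, not this code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool_classify(feats, Wq, Wk, Wv, w_out):
    """Score one embryo from a short sequence of per-day feature vectors.

    feats: (T, d) array, e.g. one image embedding per day of culture.
    A single self-attention layer lets each day attend to the others,
    then mean pooling and a logistic read-out yield one probability.
    """
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (T, T) attention map
    pooled = (scores @ V).mean(axis=0)                # aggregate over days
    return 1.0 / (1.0 + np.exp(-pooled @ w_out))      # sigmoid probability
```

A trained model would stack several such layers with learned weights and feed it the actual DINOv2 embeddings; the sketch only shows how attention pools a handful of daily feature vectors into one blastocyst score.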

National Category
Medical and Health Sciences
Research subject
Health and society studies
Identifiers
urn:nbn:se:mau:diva-82547 (URN)
Conference
International Conference on Visual Communications and Image Processing
Available from: 2026-02-06. Created: 2026-02-06. Last updated: 2026-02-09. Bibliographically approved.
Tajgardan, M., Shiranzaei, A., Jamali, M., Khoshkangini, R. & Rabbani, M. (2025). Advanced Stock Market Prediction Using Unsupervised Federated Learning Techniques. In: 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025. Paper presented at 29th International Computer Conference, Computer Society of Iran, CSICC 2025, 05-06 Feb 2025, Tehran, Iran. Institute of Electrical and Electronics Engineers Inc.
2025 (English). In: 2025 29th International Computer Conference, Computer Society of Iran, CSICC 2025, Institute of Electrical and Electronics Engineers Inc., 2025. Conference paper, Published paper (Refereed)
Abstract [en]

In the realm of stock market prediction, traditional supervised learning approaches often struggle with the vast and diverse nature of financial data, coupled with privacy concerns. This paper explores a novel methodology that combines unsupervised learning techniques with a federated learning system to enhance stock market prediction models. We present a comprehensive system where local models, trained using unsupervised methods, contribute to a global model through federated aggregation. By leveraging federated learning, our approach allows multiple financial institutions to collaboratively train models on their decentralized data while preserving data privacy. This approach addresses the challenges of data heterogeneity and communication efficiency, providing a robust and scalable solution for advanced stock market forecasting. Our experiments demonstrate that integrating unsupervised learning with federated learning not only improves predictive accuracy but also enhances the model’s ability to identify emerging market trends and anomalies. Finally, we compare our distributed data model with other machine learning models that use local data.
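The federated aggregation step described above is commonly implemented as weighted parameter averaging in the style of FedAvg. The sketch below is an illustrative reconstruction of that idea, not the paper's exact protocol; parameter names and shapes are made up:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate locally trained parameters into a global model.

    client_weights: list of dicts mapping parameter name -> numpy array.
    client_sizes: number of local samples per client. Larger clients
    contribute proportionally more, and raw data never leaves a client:
    only the trained parameters are shared with the aggregator.
    """
    total = sum(client_sizes)
    global_w = {}
    for name in client_weights[0]:
        global_w[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_w
```

Each institution would train its local (here, unsupervised) model, send only the weights, and receive the averaged global model back for the next round.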

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
Keywords
Federated Learning, Financial Market Forecasting, Unsupervised Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-76099 (URN)
10.1109/CSICC65765.2025.10967449 (DOI)
2-s2.0-105005140827 (Scopus ID)
9798331523114 (ISBN)
Conference
29th International Computer Conference, Computer Society of Iran, CSICC 2025, 05-06 Feb 2025, Tehran, Iran
Available from: 2025-05-27. Created: 2025-05-27. Last updated: 2025-05-28. Bibliographically approved.
Asghari Varzaneh, Z., Mousavi, S. M., Khoshkangini, R. & Moosavi Khaliji, S. M. (2025). An ensemble model based on transfer learning for the early detection of Alzheimer’s disease. Scientific Reports, 15(1), Article ID 34634.
2025 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 15, no 1, article id 34634. Article in journal (Refereed). Published.
Abstract [en]

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by the gradual decline in cognitive functions, particularly memory and reasoning. Early detection, especially during the mild cognitive impairment (MCI) stage, is crucial for timely intervention and management. Enhanced diagnostic methods are essential for facilitating early identification and improving patient outcomes. This study presents a robust deep learning framework for the early detection of Alzheimer’s disease. It employs transfer learning and hyperparameter tuning of the InceptionResNetV2, InceptionV3, and Xception architectures to enhance feature extraction by leveraging their pre-trained capabilities. An ensemble voting mechanism has been integrated to combine predictions from the different models, optimizing both accuracy and robustness. The proposed ensemble voting approach demonstrated exceptional performance, achieving 98.96% accuracy and 100% precision for the Mildly Demented and Moderately Demented classes. It outperformed baseline and state-of-the-art models, highlighting its potential as a reliable tool for early diagnosis and intervention.
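The ensemble voting mechanism mentioned in the abstract can be sketched as soft voting: average the per-class probabilities produced by the individual networks and take the argmax. This is an illustrative reconstruction (the paper does not publish its code here), with made-up probabilities:

```python
import numpy as np

def soft_vote(prob_sets):
    """Soft-voting ensemble over several classifiers.

    prob_sets: (n_models, n_samples, n_classes) array of softmax
    outputs, one slice per model. Averaging the class probabilities
    before taking the argmax lets confident models outweigh
    uncertain ones, unlike hard majority voting on labels.
    """
    avg = np.mean(prob_sets, axis=0)   # (n_samples, n_classes)
    return avg.argmax(axis=1)          # predicted class per sample
```

With three fine-tuned backbones, `prob_sets` would stack their softmax outputs over the same test images.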

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Alzheimer’s disease, Convolutional neural network, Ensemble learning, Medical imaging, Transfer learning
National Category
Neurosciences
Identifiers
urn:nbn:se:mau:diva-80005 (URN)
10.1038/s41598-025-22025-y (DOI)
001587520600011 ()
41044176 (PubMedID)
2-s2.0-105017802590 (Scopus ID)
Available from: 2025-10-14. Created: 2025-10-14. Last updated: 2026-02-09. Bibliographically approved.
Jamali, M., Davidsson, P., Khoshkangini, R., Ljungqvist, M. G. & Mihailescu, R.-C. (2025). Context in object detection: a systematic literature review. Artificial Intelligence Review, 58(6), Article ID 175.
2025 (English). In: Artificial Intelligence Review, ISSN 0269-2821, E-ISSN 1573-7462, Vol. 58, no 6, article id 175. Article in journal (Refereed). Published.
Abstract [en]

Context is an important factor in computer vision as it offers valuable information to clarify and analyze visual data. Utilizing the contextual information inherent in an image or a video can improve the precision and effectiveness of object detectors. For example, where recognizing an isolated object might be challenging, context information can improve comprehension of the scene. This study explores the impact of various context-based approaches to object detection. Initially, we investigate the role of context in object detection and survey it from several perspectives. We then review and discuss the most recent context-based object detection approaches and compare them. Finally, we conclude by addressing research questions and identifying gaps for further studies. More than 265 publications are included in this survey, covering different aspects of context in different categories of object detection, including general object detection, video object detection, small object detection, camouflaged object detection, zero-shot, one-shot, and few-shot object detection. This literature review presents a comprehensive overview of the latest advancements in context-based object detection, providing valuable contributions such as a thorough understanding of contextual information and effective methods for integrating various context types into object detection, thus benefiting researchers.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Computer vision, Context, Contextual information, Object detection, Object recognition
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mau:diva-75029 (URN)
10.1007/s10462-025-11186-x (DOI)
001448979900001 ()
2-s2.0-105000389895 (Scopus ID)
Available from: 2025-04-01. Created: 2025-04-01. Last updated: 2025-10-10. Bibliographically approved.
Khoshkangini, R., Mangrio, E. & Johnsson, M. (2025). Enhancing In Vitro Fertilization with Environment Optimization Utilizing Artificial Intelligence (EIVF-AI). In: Haridimos Kondylakis; Andreas Triantafyllidis (Ed.), Pervasive Computing Technologies for Healthcare: 18th EAI International Conference, PervasiveHealth 2024, Heraklion, Crete, Greece, September 17–18, 2024, Proceedings, Part II. Paper presented at 18th EAI International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth 2024, 17-18 Sep 2024, Heraklion, Crete, Greece (pp. 151-158). Springer Nature
2025 (English). In: Pervasive Computing Technologies for Healthcare: 18th EAI International Conference, PervasiveHealth 2024, Heraklion, Crete, Greece, September 17–18, 2024, Proceedings, Part II / [ed] Haridimos Kondylakis; Andreas Triantafyllidis, Springer Nature, 2025, p. 151-158. Conference paper, Published paper (Refereed)
Abstract [en]

In vitro fertilization (IVF) is of great aid to couples who are struggling to conceive. The IVF clinics, where couples undergo fertility treatments, require a carefully controlled environment to ensure the effectiveness of the procedures. In recent years, IVF has seen significant progress, thanks to new technologies and methods that improve success rates and expand options for infertile couples. One notable advancement involves combining pre-implantation genetic testing (PGT) with time-lapse imaging technology, which allows continuous monitoring of embryo development with minimal disturbance. This innovation improves the selection of healthy embryos for transfer, increasing success rates and reducing the risk of multiple pregnancies. However, maintaining a stable environment remains a key challenge. Fluctuations in temperature, humidity, air quality, and particulate matter can affect IVF success rates by disrupting the embryo’s delicate environment and potentially causing implantation failure. We discuss in this position paper our approach to alleviate such environmental problems in our project EIVF-AI funded by the Swedish funding agency Vinnova.

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, ISSN 1867-8211, E-ISSN 1867-822X ; 612
Keywords
Artificial intelligence, In vitro fertilization (IVF), Machine Learning, Optimization
National Category
Gynaecology, Obstetrics and Reproductive Medicine
Identifiers
urn:nbn:se:mau:diva-76111 (URN)
10.1007/978-3-031-85575-7_8 (DOI)
001484285000008 ()
2-s2.0-105004255453 (Scopus ID)
978-3-031-85574-0 (ISBN)
978-3-031-85575-7 (ISBN)
Conference
18th EAI International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth 2024, 17-18 Sep 2024, Heraklion, Crete, Greece
Available from: 2025-05-27. Created: 2025-05-27. Last updated: 2026-01-27. Bibliographically approved.
Madhavan, M., Nkhoma, P., Khoshkangini, R., Jamali, M., Davidsson, P., Åberg, J. & Ljungqvist, M. (2025). Object Detection and Human Activity Recognition for Improved Patient Mobility and Caregiver Ergonomics. Journal of WSCG, 33(1-2), 11-20
2025 (English). In: Journal of WSCG, ISSN 1213-6972, E-ISSN 1213-6964, Vol. 33, no 1-2, p. 11-20. Article in journal (Refereed). Published.
Abstract [en]

This study explores the use of machine learning to enhance patient mobility and caregiver ergonomics by optimizing the use of mobility aids. Traditional manual assessments can be subjective and inaccurate, so this research develops a data-driven model for object detection and human activity recognition. A computer vision dataset was created using video recordings of controlled caregiving scenarios. The study leverages advanced machine learning models, including YOLO for object detection, pose estimation, ResNet-18 for frame classification, Inception-v4 for feature extraction, and LSTM for sequence modeling. The findings provide valuable insights into integrating machine learning into mobility aids, improving both patient outcomes and caregiver well-being.

Place, publisher, year, edition, pages
University of West Bohemia, 2025
Keywords
Caregiver, Ergonomics, Machine Learning, Mobility aid, Musculoskeletal disorders
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mau:diva-79119 (URN)
10.24132/JWSCG.2025-2 (DOI)
2-s2.0-105013121738 (Scopus ID)
Available from: 2025-08-28. Created: 2025-08-28. Last updated: 2025-09-02. Bibliographically approved.
Tajgardan, M., Shamsi, M., Khoshkangini, R. & Kenari, A. R. (2025). Optimizing ICU Hospitalization Prediction Models for COVID-19 Patients Using Pattern Discovery and Machine Learning. SCJ: Soft Computing Journal
2025 (English). In: SCJ: Soft Computing Journal, E-ISSN 2322-3707. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The COVID-19 pandemic has underscored the critical challenges faced by healthcare systems worldwide, particularly in meeting the escalating demand for resources such as ICU beds, specialized care, and medical equipment. This shortfall has resulted in significant loss of life, highlighting the urgent need for accurate and timely diagnosis to optimize patient outcomes and reduce healthcare costs. In response to these challenges, our research focuses on developing a machine learning system capable of predicting whether patients will require ICU admission or can be managed remotely at home during peak periods of demand. Leveraging a novel two-dimensional reduction approach that combines evolutionary algorithms, Pattern Discovery, and machine learning techniques, we aim to streamline patient-collected data to train predictive models capable of forecasting ICU needs and remote care requirements. By providing healthcare systems with the ability to anticipate patient needs during critical phases of the pandemic, our predictive model empowers healthcare providers to allocate resources more effectively, optimize patient care delivery, and mitigate the impact of healthcare crises. The results of our experimental evaluation demonstrate the promising potential of our approach in addressing the pressing challenges posed by the COVID-19 pandemic and similar public health emergencies.

Place, publisher, year, edition, pages
University of Kashan, 2025
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-79974 (URN)
10.22052/scj.2025.255123.1256 (DOI)
Available from: 2025-10-10. Created: 2025-10-10. Last updated: 2025-10-10. Bibliographically approved.
Jamali, M., Davidsson, P., Khoshkangini, R., Ljungqvist, M. G. & Mihailescu, R.-C. (2025). RetinaGate: A Gated Feature Pyramid Network for Improved Object Detection with SE-based Attention. In: Sławomir Nowaczyk; Anna Vettoruzzo (Ed.), Proceedings of Swedish AI Society Workshop 2025 (SAIS 2025). Paper presented at Swedish AI Society Workshop 2025 (SAIS 2025), Halmstad, Sweden, 16-17 June 2025. (pp. 1-11). CEUR
2025 (English). In: Proceedings of Swedish AI Society Workshop 2025 (SAIS 2025) / [ed] Sławomir Nowaczyk; Anna Vettoruzzo, CEUR, 2025, p. 1-11. Conference paper, Published paper (Refereed)
Abstract [en]

Object detection is a critical task in computer vision with wide-ranging applications, from autonomous driving to surveillance systems. Despite notable progress, challenges such as detecting small objects, managing occlusions, and effectively integrating multiscale features persist. We propose RetinaGate, a novel object detection architecture that introduces a Gated Feature Pyramid Network (G-FPN) to adaptively fuse multi-scale features, enhanced by Squeeze-and-Excitation-based channel attention for improved accuracy. As a plug-and-play module, G-FPN can be seamlessly integrated into existing detection models to enhance their accuracy. These enhancements strengthen the model’s capacity to capture fine-grained details and leverage contextual information more effectively. Experimental results on three benchmark datasets demonstrate that RetinaGate outperforms the baseline RetinaNet in terms of detection accuracy, particularly in challenging detection scenarios such as underwater imagery.
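Squeeze-and-Excitation channel attention of the kind the abstract mentions is commonly implemented as a squeeze (global average pool per channel), a small bottleneck, and a per-channel sigmoid gate. The sketch below illustrates that standard recipe with made-up shapes; it is not the paper's G-FPN code:

```python
import numpy as np

def se_gate(feat_map, W1, W2):
    """Squeeze-and-Excitation channel gating on a (C, H, W) feature map.

    Squeeze: global average pool collapses each channel to one scalar.
    Excite: a two-layer bottleneck (ReLU then sigmoid) produces one
    gate in (0, 1) per channel, which rescales the map so informative
    channels dominate before fusion in the feature pyramid.
    """
    z = feat_map.mean(axis=(1, 2))                 # squeeze: (C,)
    h = np.maximum(0.0, W1 @ z)                    # bottleneck with ReLU
    gate = 1.0 / (1.0 + np.exp(-(W2 @ h)))         # per-channel sigmoid
    return feat_map * gate[:, None, None]          # reweight channels
```

In a gated FPN, a block like this would sit on each pyramid level, letting the network learn which channels of the fused features to emphasize.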

Place, publisher, year, edition, pages
CEUR, 2025
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 4037
Keywords
Object Detection, RetinaNet, FPN, Gated Fusion, RetinaGate, SEBlock
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:mau:diva-79975 (URN)
2-s2.0-105017747184 (Scopus ID)
Conference
Swedish AI Society Workshop 2025 (SAIS 2025) Halmstad, Sweden, 16-17 June 2025.
Available from: 2025-10-10. Created: 2025-10-10. Last updated: 2025-10-14. Bibliographically approved.
Soulaimani, A., Schwaiger, C., Khoshkangini, R., Johnsson, M. & Ebner, T. (2025). Transformers to Predict Embryo Quality Using Images and External Factors. In: Sławomir Nowaczyk; Anna Vettoruzzo (Ed.), Proceedings of Swedish AI Society Workshop 2025 (SAIS 2025). Paper presented at Swedish AI Society Workshop 2025 (SAIS 2025), Halmstad, Sweden, 16-17 June 2025. (pp. 93-104). CEUR
2025 (English). In: Proceedings of Swedish AI Society Workshop 2025 (SAIS 2025) / [ed] Sławomir Nowaczyk; Anna Vettoruzzo, CEUR, 2025, p. 93-104. Conference paper, Published paper (Refereed)
Abstract [en]

This study integrates embryo images and environmental laboratory factors to predict embryo quality using a machine learning model. A key challenge was data misalignment, which was addressed by using a Random Forest Regressor to synthesise values for a complete dataset. A fine-tuned Inception V3 model augmented with transformer-style attention mechanisms was used for multitask learning to predict three embryo-quality scores: cell expansion (EXP), inner cell mass (ICM) and trophectoderm (TE). The model achieved accuracy and F1 scores of 92.76% and 92.74% for EXP, 72.63% and 59.52% for TE, and 63.69% and 10.77% for ICM. These results indicate strong performance for two of the three scores and lay the foundation for a reliable embryo-quality prediction model.
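The multitask set-up described above (one shared backbone embedding feeding separate heads for EXP, ICM and TE) can be sketched as follows. The feature dimension and head weights are illustrative placeholders; only the task names come from the abstract:

```python
import numpy as np

def multitask_scores(shared_feat, heads):
    """Predict several quality scores from one shared embedding.

    shared_feat: (d,) backbone feature for one embryo image, e.g.
    from a fine-tuned Inception V3. heads: dict mapping a task name
    ('EXP', 'ICM', 'TE') to a (k, d) weight matrix; each head yields
    class logits for its own grading scale, so all tasks share the
    expensive feature extractor but keep independent classifiers.
    """
    out = {}
    for task, W in heads.items():
        logits = W @ shared_feat
        out[task] = int(np.argmax(logits))
    return out
```

During training, the per-task losses would be summed (possibly weighted) so gradients from all three scores shape the shared backbone.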

Place, publisher, year, edition, pages
CEUR, 2025
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 4037
Keywords
Transformers, Attention Mechanism, Multitask Learning, Transfer Learning, Inception V3, Embryo Quality
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-79976 (URN)
2-s2.0-105017563221 (Scopus ID)
Conference
Swedish AI Society Workshop 2025 (SAIS 2025) Halmstad, Sweden, 16-17 June 2025.
Available from: 2025-10-10. Created: 2025-10-10. Last updated: 2025-10-14. Bibliographically approved.
Jamali, M., Davidsson, P., Khoshkangini, R., Mihailescu, R.-C., Sexton, E., Johannesson, V. & Tillström, J. (2025). Video-Audio Multimodal Fall Detection Method. In: Rafik Hadfi; Patricia Anthony; Alok Sharma; Takayuki Ito; Quan Bai (Ed.), PRICAI 2024: Trends in Artificial Intelligence: 21st Pacific Rim International Conference on Artificial Intelligence, PRICAI 2024, Kyoto, Japan, November 18–24, 2024, Proceedings, Part IV. Paper presented at 21st Pacific Rim International Conference on Artificial Intelligence, PRICAI 2024, Kyoto, Japan, November 18–24, 2024 (pp. 62-75). Springer
2025 (English). In: PRICAI 2024: Trends in Artificial Intelligence: 21st Pacific Rim International Conference on Artificial Intelligence, PRICAI 2024, Kyoto, Japan, November 18–24, 2024, Proceedings, Part IV / [ed] Rafik Hadfi; Patricia Anthony; Alok Sharma; Takayuki Ito; Quan Bai, Springer, 2025, p. 62-75. Conference paper, Published paper (Refereed)
Abstract [en]

Falls frequently present substantial safety hazards to those who are alone, particularly the elderly. Deploying a rapid and reliable fall detection method is a highly effective way to address this hidden risk. Most existing fall detection methods rely on either visual data or wearable devices, both of which have drawbacks. This research presents a multimodal approach that integrates video and audio modalities to address the limitations of existing fall detection systems and enhance detection accuracy in challenging environmental conditions. The approach, which leverages attention mechanisms in both the video and audio streams, combines features from both modalities through feature-level fusion to detect falls in unfavorable conditions where visual systems alone cannot. We assessed the performance of our multimodal fall detection model on the Le2i and UP-Fall datasets and compared our findings with other fall detection methods. The results indicate that our multimodal model outperforms single-modality fall detection models.
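Feature-level fusion as described, concatenating video and audio embeddings before a joint classifier, can be sketched in a few lines. All shapes and weights here are illustrative placeholders, not the paper's trained model:

```python
import numpy as np

def fuse_and_classify(video_feat, audio_feat, W, b):
    """Feature-level fusion of two modality embeddings.

    The embeddings are concatenated into one joint representation
    before classification, so when the visual stream is degraded
    (e.g. poor lighting), the audio features still carry signal
    into the fused vector rather than being voted on separately.
    """
    fused = np.concatenate([video_feat, audio_feat])
    logit = W @ fused + b
    return 1.0 / (1.0 + np.exp(-logit))  # fall probability
```

In the test below, a zeroed video embedding stands in for an uninformative visual stream, and the prediction is driven entirely by the audio feature.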

Place, publisher, year, edition, pages
Springer, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15284
Keywords
Audio classification, Fall detection, Multimodal, Video classification, Video analysis, Detection methods, Detection models, Effective approaches, Multi-modal, Multi-modal approach, Performance, Safety hazards
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:mau:diva-72628 (URN)
10.1007/978-981-96-0125-7_6 (DOI)
001540369300006 ()
2-s2.0-85210317498 (Scopus ID)
978-981-96-0124-0 (ISBN)
978-981-96-0125-7 (ISBN)
Conference
21st Pacific Rim International Conference on Artificial Intelligence, PRICAI 2024, Kyoto, Japan, November 18–24, 2024
Available from: 2024-12-10. Created: 2024-12-10. Last updated: 2025-09-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3797-4605
