A mobile stroke unit (MSU) is an advanced ambulance equipped with specialized technology and trained healthcare personnel to provide on-site diagnosis and treatment for stroke patients. Providing efficient and viable access to healthcare requires optimizing the placement of MSUs. In this study, we propose a time-efficient method based on a genetic algorithm (GA) to find the most suitable ambulance sites for the placement of MSUs, given the number of MSUs and a set of potential sites. We designed an efficient encoding scheme for the input data (the number of MSUs and the potential sites) and developed custom selection, crossover, and mutation operators tailored to the characteristics of the MSU allocation problem. We present a case study on the Southern Health Care Region in Sweden to demonstrate the generality and robustness of our proposed GA method. In particular, we demonstrate the method's flexibility and adaptability through a series of experiments across multiple settings. For the considered scenario, our method outperforms exhaustive search, finding the best locations within 0.16, 1.44, and 10.09 minutes for the deployment of three, four, and five MSUs, corresponding to 8.75x, 16.36x, and 24.77x speedups, respectively. Furthermore, we validate the method's robustness by running the GA multiple times and reporting its average fitness score (performance convergence). In addition, we show the effectiveness of our method by evaluating key hyperparameters, namely population size, mutation rate, and the number of generations.
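The abstract does not detail the GA design; purely as an illustration, the sketch below shows a GA of this general shape for placing MSUs at candidate sites. The demand points, site coordinates, Manhattan-distance fitness, tournament selection, union-based crossover, and site-swap mutation are all invented assumptions, not the operators described in the paper.

```python
import random

random.seed(0)

# Hypothetical demand points (e.g., municipality centroids) and candidate sites.
DEMAND = [(2, 3), (8, 1), (5, 9), (1, 7), (9, 8), (4, 4)]
SITES = [(1, 1), (3, 5), (6, 2), (7, 7), (2, 8), (9, 4), (5, 5), (8, 9)]
N_MSUS = 3

def fitness(placement):
    # Lower total distance from each demand point to its nearest MSU is better.
    return -sum(min(abs(dx - sx) + abs(dy - sy)
                    for sx, sy in (SITES[i] for i in placement))
                for dx, dy in DEMAND)

def tournament(pop, k=3):
    # Pick the fittest of k randomly drawn individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Child inherits a random subset of the parents' combined sites.
    pool = list(set(a) | set(b))
    return tuple(random.sample(pool, N_MSUS))

def mutate(ind, rate=0.2):
    # Occasionally swap one chosen site for an unused one.
    if random.random() < rate:
        unused = [i for i in range(len(SITES)) if i not in ind]
        ind = list(ind)
        ind[random.randrange(N_MSUS)] = random.choice(unused)
    return tuple(ind)

def run_ga(pop_size=20, generations=40):
    pop = [tuple(random.sample(range(len(SITES)), N_MSUS))
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
        pop[0] = best  # elitism: keep the best individual found so far
        best = max(pop, key=fitness)
    return best

best = run_ga()
```

With elitism, the best placement found never degrades across generations, which also makes averaging over repeated runs (as in the robustness experiments) straightforward.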
The Internet of Things (IoT) involves intelligent, heterogeneous, autonomous, and often distributed things that interact and collaborate to achieve common goals. A useful concept for supporting this effort is the Emergent Configuration (EC): a dynamic set of things, with their functionalities and services, that cooperate temporarily to achieve a goal. In this paper, we introduce a commitment-based approach to realizing ECs. More specifically, (i) we present a conceptual model for commitment-based ECs, and (ii) we use a smart meeting room scenario to illustrate how ECs are realized via commitments.
Mobile stroke units (MSUs) are specialized ambulances that can diagnose and treat stroke patients, hence reducing the time to treatment. Optimally placing MSUs in a geographic region makes it possible to maximize access to treatment for stroke patients. We contribute a mathematical model for optimally placing MSUs in a geographic region. The objective function of the model balances the efficiency and equity perspectives on MSU placement, and solving the optimization problem yields an MSU placement that is optimal for the chosen tradeoff between the two perspectives. We applied the model to the Blekinge and Kronoberg counties of Sweden to illustrate its applicability. The experimental findings show both the correctness of the suggested model and the benefits of placing MSUs in the considered regions.
Constructing simulation models can be a complex and time-consuming task, in particular if the models are constructed from scratch or if a general-purpose simulation modeling tool is used. In this paper, we propose a model construction framework that aims to simplify the process of constructing discrete event simulation models for emergency medical service (EMS) policy analysis. The main building blocks of the framework are a set of general activities that can be used to represent different EMS care chains modeled as flowcharts. The framework allows models to be built simply by specifying input data, including demographic and statistical data, and by providing a care chain of activities and decisions. In a case study, we evaluated the framework by using it to construct a model for the simulation of the EMS activities related to acute stroke. Our evaluation shows that the predefined activities included in the framework are sufficient to build a simulation model for the rather complex case of acute stroke.
A mobile stroke unit (MSU) is a special type of ambulance in which stroke patients can be diagnosed and given intravenous treatment, thereby cutting the time to treatment. We present a discrete event simulation (DES) model to study the potential benefits of using MSUs in the Southern Health Care Region of Sweden (SHR). We included the activities and actions used in the SHR for stroke patient transportation as events in the DES model, and we generated a synthetic set of stroke patients as input for the simulation model. In a scenario study, we compared two scenarios, each including three MSUs, with the current situation, which has only regular ambulances. We also performed a sensitivity analysis to further evaluate the presented DES model. For both MSU scenarios, our simulation results indicate that the average time to treatment is expected to decrease for the whole region and for each municipality of the SHR. For example, the average time to treatment in the SHR is reduced from 1.31 h in the baseline scenario to 1.20 h and 1.23 h in the two MSU scenarios. In addition, the share of stroke patients who are expected to receive treatment within one hour increases by a factor of about 3 in both MSU scenarios.
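As a toy illustration of the event-driven modeling style described above (not the SHR model itself), the following sketch queues synthetic stroke incidents and compares the average time to treatment with and without on-scene MSU treatment; all distributions and constants are made up.

```python
import heapq
import random

random.seed(1)

def simulate(n_patients, msu_share):
    """Minimal discrete event simulation: incidents arrive over time, a vehicle
    is dispatched, and treatment starts either on scene (MSU) or after
    transport to hospital (regular ambulance). All times are in hours and
    purely illustrative."""
    events = []  # priority queue of (incident_time, patient_id)
    t = 0.0
    for pid in range(n_patients):
        t += random.expovariate(2.0)  # ~2 incidents per hour
        heapq.heappush(events, (t, pid))
    delays = []
    while events:
        _, pid = heapq.heappop(events)       # process events in time order
        travel = random.uniform(0.2, 0.8)    # vehicle travel to the scene
        if random.random() < msu_share:      # MSU dispatched: treat on scene
            delays.append(travel + 0.25)
        else:                                # ambulance: transport first
            delays.append(travel + random.uniform(0.3, 1.0) + 0.25)
    return sum(delays) / len(delays)

baseline = simulate(1000, msu_share=0.0)   # only regular ambulances
with_msus = simulate(1000, msu_share=0.5)  # half the incidents served by MSUs
```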
A mobile stroke unit (MSU) is a specialized ambulance that shortens the time to diagnosis and treatment for stroke patients. In this paper, we present a simulation-based approach to study the potential impact of the collaborative use of regular ambulances and MSUs in prehospital transportation for stroke patients, denoted co-dispatching. We integrated a co-dispatch policy into an existing modeling framework for constructing emergency medical services simulation models. In a case study, we applied the extended framework to southern Sweden to evaluate the effectiveness of the co-dispatch policy for different types of stroke. The results indicate reduced time to diagnosis and treatment for stroke patients when using the co-dispatch policy compared to assigning either a regular ambulance or an MSU to a stroke incident.
Pervasive technologies permeating our immediate surroundings provide a wide variety of means for sensing and actuating in our environment, with great potential to impact the way we live, but also how we work. In this paper, we address the problem of activity recognition in office environments as a means of inferring contextual information in order to automatically and proactively assist people in their daily activities. To this end, we employ state-of-the-art image processing techniques and evaluate their capabilities in a real-world setup. Traditional machine learning assumes that the training and test data share the same distribution; when this is not the case, the performance of the learned model deteriorates. However, data is often expensive or difficult to collect and label, so it is important to develop techniques that make the best possible use of existing datasets from domains related to the target domain. To this end, we further investigate transfer learning techniques in deep learning architectures for the task of activity recognition in office settings. We provide a solution model that attains 94% accuracy under the right conditions.
This study presents lessons learned based on practical experiences of connecting devices to internet-of-things platforms in the context of research and academic coursework. The experiences are gathered from six research projects, one undergraduate course, and a few undergraduate theses over a three-year period. The lessons learned include: the trade-off of rapid prototyping over security is very common; example source code is not up to production standards; adherence to standards speeds development; debugging support for IoT systems is lacking; open source licenses vary; platform interoperability is poor; and the array of service fees among platform providers obstructs cost comparisons.
Deep learning (DL) models have emerged in recent years as the state-of-the-art technique across numerous machine learning application domains. In particular, image processing-related tasks have seen significant performance improvements due to the increased availability of large datasets and the extensive growth of computing power. In this paper, we investigate the problem of group activity recognition in office environments using a multimodal deep learning approach, fusing audio and visual data from video. Group activity recognition is a complex classification task, given that it extends beyond identifying the activities of individuals to the combinations of activities and the interactions between them. The proposed fusion network was trained on the audio-visual stream of the AMI Corpus dataset. The procedure consists of two steps: first, we extract a joint audio-visual feature representation for activity recognition; second, we account for the temporal dependencies in the video in order to complete the classification task. We provide a comprehensive set of experimental results showing that our proposed multimodal deep network architecture outperforms previous approaches designed for unimodal analysis on the aforementioned AMI dataset.
In this paper, we study temporal logic for finite linear structures and surjective bounded morphisms between them. We give a characterisation of such structures by modal formulas and show that every pair of linear structures with a bounded morphism between them can be uniquely characterised by a temporal formula up to an isomorphism. As the main result, we prove Kripke completeness of the logic with respect to the class of finite linear structures with bounded morphisms between them.
We propose an optimization model to tackle the problem of assigning projects to student groups based on a bidding procedure. To improve the student experience in project-based learning, we actively involve students in a transparent and unbiased project allocation process. To evaluate our work, we collected the students' own views on how our approach influenced their level of learning and overall learning experience, and we provide a detailed analysis of the results. The evaluation shows that the large majority of students (91%) increased or maintained their satisfaction ratings with the proposed procedure after the assignment was concluded, compared to their attitude towards the process before the project assignment occurred.
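The abstract leaves the allocation model unspecified; the snippet below is only a naive greedy stand-in over hypothetical bids, showing the kind of input and output such a procedure has (the paper uses a proper optimization model, which a greedy pass merely approximates).

```python
# Hypothetical bids: groups spread points over projects (higher = stronger preference).
bids = {
    "group_a": {"proj_1": 5, "proj_2": 3, "proj_3": 2},
    "group_b": {"proj_1": 4, "proj_2": 5, "proj_3": 1},
    "group_c": {"proj_1": 2, "proj_2": 2, "proj_3": 6},
}

def allocate(bids):
    """Greedy allocation: repeatedly grant the highest remaining bid, giving
    each group exactly one project and each project to at most one group."""
    entries = sorted(((b, g, p) for g, prefs in bids.items()
                      for p, b in prefs.items()), reverse=True)
    assigned, taken = {}, set()
    for bid, group, project in entries:
        if group not in assigned and project not in taken:
            assigned[group] = project
            taken.add(project)
    return assigned

result = allocate(bids)
```

Here every group happens to receive its top bid; a real optimization model would additionally guarantee, e.g., maximum total bid value over all feasible assignments.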
Object detection is a critical task in computer vision with applications across various domains, ranging from autonomous driving to surveillance systems. Despite extensive research on improving the performance of object detection systems, identifying all objects across different types of places remains a challenge. Traditional object detection approaches focus primarily on extracting and analyzing visual features without considering contextual information about where objects are located. However, entities in many real-world scenarios closely relate to their surrounding environment, which provides crucial contextual cues for accurate detection. This study investigates the importance and impact of the place depicted in an image (indoor or outdoor) on object detection accuracy. To this end, we propose an approach that first categorizes images into two distinct categories: indoor and outdoor. We then train and evaluate three object detection models (indoor, outdoor, and general) based on YOLOv5, using 19 classes of the PASCAL VOC dataset and 79 classes of the COCO dataset, organized by place. The experimental evaluations show that the specialized indoor and outdoor models achieve higher mAP (mean Average Precision) when detecting objects in their specific environments than the general model that detects objects found both indoors and outdoors. Indeed, the network detects objects more accurately in similar places with common characteristics, owing to the semantic relationships between objects and their surroundings, and misdetections are reduced. All results were analyzed statistically with t-tests.
The digital media landscape has been exposed in recent years to an increasing number of deliberately misleading news items and disinformation campaigns, a phenomenon popularly referred to as fake news. In an effort to combat the dissemination of fake news, designing machine learning models that can classify text as fake or not has become an active line of research. While new models are continuously being developed, the focus so far has mainly been on improving the accuracy of the models for given datasets. Hence, little research has been done on the explainability of the deep learning (DL) models constructed for the task of fake news detection. In order to add a level of explainability, several aspects have to be taken into consideration. For instance, the pre-processing phase, as well as the length and complexity of the text, plays an important role in achieving a successful classification. These aspects need to be considered in conjunction with the model's architecture. All of these issues are addressed and analyzed in this paper. Visualizations are further employed to gain a better understanding of how different models distribute their attention when classifying fake news texts. In addition, statistical data is gathered to deepen the analysis and to provide insights with respect to the models' interpretability.
A mobile stroke unit (MSU) is a type of ambulance deployed to promote the rapid delivery of stroke care. We present a computational study using a time-to-treatment estimation model to analyze the potential benefits of using MSUs in Sweden's Southern Health Care Region (SHR). In particular, we developed two scenarios (MSU1 and MSU2), each including three MSUs, which we compared with a baseline scenario containing only regular ambulances. For each MSU scenario, we assessed how much the expected time to treatment is estimated to decrease for the whole region and each subregion of the SHR, and how the population is expected to benefit from the deployment of MSUs. For example, the average time to treatment in the SHR decreased by 20.4 and 15.6 minutes, respectively, in the two MSU scenarios. Moreover, our computational results show that the locations of the MSUs significantly influence what benefits can be expected. While MSU1 is expected to improve the situation for a higher share of the population, MSU2 is expected to have a higher impact on the patients who currently have the longest time to treatment.
A mobile stroke unit (MSU) is an ambulance in which stroke patients can be diagnosed and treated. Recently, the placement of MSUs has been studied focusing on either maximum population coverage or equal service for all patients, termed efficiency and equity, respectively. In this study, we propose an unconstrained optimization model for the placement of MSUs, designed to introduce a tradeoff between efficiency and equity. The tradeoff is based on the concepts of the weighted average time to treatment and the difference between the expected times to treatment for different geographical areas. We conduct a case study for Sweden's Southern Health Care Region (SHR), generating three scenarios (MSU1, MSU2, and MSU3) including 1, 2, and 3 MSUs, respectively. We show that our proposed optimization model can tune the tradeoff between the efficiency and equity perspectives for MSU allocation. This enables a high level of equal service for most inhabitants, as well as a reduced time to treatment for most inhabitants of a geographic region. In particular, when placing three MSUs in the SHR with the proposed tradeoff, the share of inhabitants who are expected to receive treatment within an hour improved by about a factor of 14 in our model.
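As a hedged sketch of how an efficiency–equity tradeoff objective can be expressed (the paper's exact formulation is not given in the abstract), the following combines a population-weighted mean time to treatment with the gap between the best- and worst-served areas, under a hypothetical tradeoff weight `lam` and invented per-area numbers:

```python
# Hypothetical expected time to treatment (hours) and population per area.
areas = {
    "area_1": (0.8, 50_000),
    "area_2": (1.4, 20_000),
    "area_3": (2.1, 5_000),
}

def objective(areas, lam):
    """Blend efficiency (population-weighted mean time to treatment) with
    equity (spread between best- and worst-served areas); lam in [0, 1]."""
    times = [t for t, _ in areas.values()]
    total_pop = sum(p for _, p in areas.values())
    efficiency = sum(t * p for t, p in areas.values()) / total_pop
    equity = max(times) - min(times)  # smaller gap = more equal service
    return lam * efficiency + (1 - lam) * equity

pure_efficiency = objective(areas, lam=1.0)  # ignores the equity gap
pure_equity = objective(areas, lam=0.0)      # ignores the weighted mean
```

Minimizing this objective over candidate placements, with `lam` chosen by the decision maker, is one way to tune such a tradeoff; the weighting scheme here is an assumption for illustration only.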
Nearly every real-world deployment of machine learning models suffers from some form of shift between the distribution of the training data and that of the data encountered in production. This aspect is particularly pronounced when dealing with streaming data or dynamic settings (e.g., changes in data sources, behaviour, and the environment). As a result, the performance of the models degrades during deployment. To account for these contextual changes, domain adaptation techniques have been designed for scenarios where the aim is to learn a model from a source data distribution that performs well on a different, but related, target data distribution. In this paper, we introduce a variational autoencoder-based multimodal approach to domain adaptation that can be trained on a large amount of labelled data from the source domain, coupled with a comparably small amount of labelled data from the target domain. We demonstrate our approach in the context of human activity recognition using various IoT sensing modalities and report superior results when benchmarking against the effective mSDA method for domain adaptation.
Pervasive technologies permeating our immediate surroundings provide a wide variety of low-cost means of sensing and actuating in our environment. This paper presents an approach for leveraging insights into the lifestyle and routines of users in order to control heating in a smart home through individual climate zones, while ensuring system efficiency at a grid-level scale. Organizing smart living spaces into controllable individual climate zones allows us to exert a more fine-grained level of control. Thus, the system benefits from a higher degree of freedom to adjust the heat demand according to the system objectives. Whereas district heating planning is only concerned with balancing heat demand among buildings, we extend the reach of these systems inside the home through pervasive sensing and actuation. That is, we bridge the gap between traditional district heating systems and in-home pervasive technologies designed to maintain the thermal comfort of the user, in order to increase efficiency. The objective is to automate heating based on the user's preferences and behavioral patterns. The proposed control scheme applies a learning algorithm that takes advantage of sensing data inside the home, in combination with an optimization procedure designed to trade off the discomfort experienced by the user against heating supply costs. We report on preliminary simulation results showing the effectiveness of our approach and describe the setup of our forthcoming field study.
Recent proliferation of surveillance systems is mostly attributed to advances in both image-processing techniques and hardware enhancement of smart cameras, as well as the ubiquity of sensor-driven architectures. Owing to these capabilities, new aspects are coming to the forefront. This paper addresses the current state-of-the-art and provides researchers with an overview of existing surveillance solutions, analyzing their properties as a system and drawing attention to relevant challenges when developing, deploying and managing them. Also, some of the more prominent application domains are highlighted here. In an effort to understand the development of the advanced solutions, based on their most distinctive characteristics, we propose a taxonomy for surveillance systems to help classify them and reveal gaps in existing research. We conclude by identifying promising future research lines.
The ubiquity of sensor infrastructures in urban environments poses new challenges in managing the vast amount of data being generated and, even more importantly, in deriving insights that are relevant and actionable for its users and stakeholders. We argue that understanding the context in which people and things are connected and interacting is of key importance to this end. In this position paper, we present ongoing work on the design of a multiagent model based on immunity theory concepts, with the scope of enhancing sensor-driven architectures with context-aware capabilities. We aim to demonstrate our approach in a real-world scenario for processing streams of sensor data in a smart building.
With the steady rise of home and building automation management systems, it is becoming paramount to gain access to information that reflects consumption patterns with device-level granularity. Various application-level services can then make use of this data for monitoring and controlling purposes in an efficient manner. In this paper, we report on the design and development of an Internet of Things (IoT) end-to-end solution for electric appliance recognition that can operate in real time and entails low hardware cost. For the task of identifying various appliance signatures, we also provide a comparative analysis: on the one hand, we investigate the suitability of several machine learning approaches on publicly available datasets, which generally provide months' worth of data at a relatively low sampling frequency; on the other hand, we evaluate their discriminative effectiveness for our particular scenario, where the goal is to rapidly identify an appliance signature in real time based on a reduced training dataset (few-shot learning). This is particularly important in the context of appliance recognition: due to the high variance in consumption patterns within each class, achieving high accuracy often requires collecting data points for each individual appliance or device that would later need to be identified. This data collection process is often expensive and difficult to perform, especially in large-scale settings, hence few-shot learning is key. Besides presenting our end-to-end IoT solution that meets the above-mentioned desiderata, the paper also provides an analysis of the computational demand of such an approach with regard to cost and real-time performance, which is often critical to low-powered IoT solutions. © 2020 The Authors. Published by Elsevier B.V.
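One simple way to realize few-shot appliance recognition of the kind described, shown here purely as an assumption-laden sketch (the paper's actual models are not specified in the abstract), is a nearest-centroid classifier over a handful of labelled power-draw snippets per appliance:

```python
import math

# Hypothetical few-shot training data: a few power-draw snippets (watts)
# per appliance class, far fewer than a months-long dataset would hold.
train = {
    "kettle": [[2000, 2100, 1950], [1980, 2050, 2020]],
    "fridge": [[120, 130, 110], [125, 118, 122]],
    "laptop": [[45, 60, 50], [55, 48, 52]],
}

def centroid(samples):
    # Element-wise mean of the snippets for one appliance class.
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def classify(snippet, centroids):
    """Nearest-centroid few-shot classifier over raw consumption snippets."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(snippet, centroids[label]))

centroids = {label: centroid(samples) for label, samples in train.items()}
label = classify([2010, 2080, 1990], centroids)  # a new, unseen snippet
```

Nearest-centroid methods need only a few examples per class and are cheap to evaluate, which matches the low-powered, real-time constraints discussed above; whether they suffice for real appliance data depends on the within-class variance the paper highlights.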
In this paper we address the problem of automatic sensor composition for servicing human-interpretable high-level tasks. To this end, we introduce multi-level distributed intelligent virtual sensors (multi-level DIVS) as an overlay framework for a given mesh of physical and/or virtual sensors already deployed in the environment. The goal for multi-level DIVS is two-fold: (i) to provide a convenient way for the user to specify high-level sensing tasks; and (ii) to construct the computational graph that provides the correct output for a specific sensing task. For (i) we resort to a conversational user interface, which offers an intuitive and user-friendly way for the user to express the sensing problem as natural language queries, while for (ii) we propose a deep learning approach that establishes the correspondence between the natural language queries and their virtual sensor representation. Finally, we evaluate and demonstrate the feasibility of our approach in the context of a smart city setup.
In this work, we focus on one particular area of the smart grid, namely the challenges faced by distribution network operators in securing the balance between supply and demand in the intraday market, as a growing number of load-controllable devices and small-scale, intermittent renewable generators are expected to pervade the system. We introduce a multiagent design to facilitate coordination among the various actors in the grid. The underpinning of our approach is an online cooperation scheme, ECOOP, in which agents learn a prediction model of potential coalition partners and can thus respond in an agile manner to situations occurring in the grid, by negotiating and formulating speculative solutions with respect to the estimated behavior of the system. We provide a computational characterization of our solution in terms of complexity, as well as an empirical analysis against real consumption datasets, based on the macro-model of the Australian energy market, showing a performance improvement of about 17%.
The recent advent of 'Internet of Things' technologies is set to bring a plethora of heterogeneous data sources to our immediate environment. In this work, we put forward the novel concept of dynamic intelligent virtual sensors (DIVS) to support the creation of services designed to tackle complex problems based on reasoning about various types of data. While in most works presented in the literature virtual sensors are concerned with homogeneous data and/or static aggregation of data sources, we define DIVS to integrate heterogeneous and distributed sensors in a dynamic manner. This paper illustrates how to design and build such systems based on a smart building case study. Moreover, we propose a versatile framework that supports collaboration between DIVS, via a semantics-empowered search heuristic, aimed at improving their performance.
The Internet of Things (IoT) is envisioned as a global network of connected things enabling ubiquitous machine-to-machine (M2M) communication. With estimations of billions of sensors and devices to be connected in the coming years, the IoT has been advocated as having great potential to impact the way we live, but also how we work. However, the connectivity aspect in itself only accounts for the underlying M2M infrastructure. In order to properly support engineering IoT systems and applications, it is key to orchestrate heterogeneous 'things' in a seamless, adaptive and dynamic manner, such that the system can exhibit a goal-directed behaviour and take appropriate actions. Yet, this form of interaction between things needs to take a user-centric approach and by no means elude the users' requirements. To this end, contextualisation is an important feature of the system, allowing it to infer user activities and prompt the user with relevant information and interactions even in the absence of intentional commands. In this work we propose a role-based model for emergent configurations of connected systems as a means to model, manage, and reason about IoT systems, including the user's interaction with them. We put a special focus on integrating the user perspective in order to guide the emergent configurations such that system goals are aligned with the users' intentions. We discuss related scientific and technical challenges and provide several use cases outlining the concept of emergent configurations.
Biometric solutions for access control are an active line of research. When it comes to facial identification for access control, these systems can pose privacy concerns, for instance for people who do not want to use the facial identification module. This work focuses on implementing an intent-aware system that uses a hand gesture trigger to initiate the identification process. To evaluate the system, test cases were performed to verify the accuracy of each hand gesture. Thereafter, a scenario was created to simulate an activation of the prototype system. The evaluation was used to assess the convenience of the approach and to provide guidance for implementing intent-aware systems.
This paper concerns the novel concept of an Interactive Dynamic Intelligent Virtual Sensor (IDIVS), which extends virtual/soft sensors towards making use of user input through interactive machine learning (IML) and transfer learning. Many studies can be found on using machine learning in this domain, but not much on using IML. This paper contributes by highlighting how this can be done, along with the associated positive potential effects and challenges. An IDIVS provides a sensor-like output, achieved through the fusion of sensor values or the output values of other IDIVSs. We focus on settings where people are present in different roles: from basic service users in the environment being sensed to interactive service users supporting the learning of the IDIVS, as well as configurators of the IDIVS and explicit IDIVS teachers. The IDIVS aims at managing situations where sensors may disappear and reappear and may be of heterogeneous types. We refer to and recap the major findings from related experiments and validation in complementing work. Further, we point at several application areas: smart building, smart mobility, smart learning, and smart health. The information properties and capabilities needed in the IDIVS, with extensions towards information security, are introduced and discussed.
As connectivity has been introduced to the car industry, automotive companies have in-use cars that are connected to the internet. A key concern in this context is the difficulty of knowing how the connection quality changes over time and whether there are associated issues. In this work, we describe the use of CDR data from connected cars supplied by Volvo to build and study forecasting models that predict how relevant KPIs change over time. Our experiments show promising results for this predictive task, which can lead to improved user experience of connectivity in smart vehicles.
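The forecasting models themselves are not described in the abstract; as a minimal illustrative stand-in (not the models studied with the Volvo data), simple exponential smoothing produces a one-step-ahead forecast of a hypothetical connectivity KPI series:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: the next value is forecast as a
    recursively weighted average of past observations, with recent
    observations weighted more heavily (controlled by alpha in (0, 1])."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical daily connection-quality KPI values (e.g., % successful sessions).
kpi = [97.0, 96.5, 95.8, 96.2, 94.9]
forecast = ses_forecast(kpi)
```

Real KPI forecasting would typically also model trend and seasonality, but the recursive update above is the common building block of such smoothing-based methods.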
Although the availability of sensor data is becoming prevalent across many domains, it remains a challenge to make sense of sensor data efficiently and effectively in order to provide users with relevant services. The concept of virtual sensors provides a step towards this goal; however, they are often used to denote homogeneous types of data, generally retrieved from a predetermined group of sensors. The DIVS (Dynamic Intelligent Virtual Sensors) concept was introduced in previous work to extend and generalize the notion of a virtual sensor to a dynamic setting with heterogeneous sensors. This paper introduces a refined version of the DIVS concept by integrating an interactive machine learning mechanism, which enables the system to take input from both the user and the physical world. The paper empirically validates some of the properties of the DIVS concept. In particular, we are concerned with the distribution of different budget allocations for labelled data, as well as proactive labelling strategies for users. We report results suggesting that relatively good accuracy can be achieved despite a limited budget in an environment with dynamic sensor availability, while proactive labelling ensures further improvements in performance.
This paper explores the problem of determining the time shown on an analogue wristwatch by developing two systems and conducting a comparative study. The first system uses OpenCV to find the watch hands and applies geometric techniques to calculate the time. The second system uses machine learning, building a neural network in TensorFlow to classify images using a multi-labelling approach. The results show that, in a controlled environment, the geometry-based approach performs better than the machine learning model: the geometric system predicted the time correctly with an accuracy of 80%, whereas the best machine learning model achieved only 74%. Experiments show that the accuracy of the neural network model increased when using data augmentation; however, there was no significant improvement when adding synthetic data to our training set.
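The geometric step can be illustrated concretely: once the hand angles have been extracted (e.g., with OpenCV), mapping them to a time of day is straightforward arithmetic. The function below is a sketch under the assumption that angles are measured in degrees clockwise from the 12 o'clock position; it is not the paper's implementation.

```python
def angles_to_time(hour_angle, minute_angle):
    """Recover the displayed time from detected hand angles (degrees,
    clockwise from 12 o'clock)."""
    minute = round(minute_angle / 6) % 60  # 360 deg / 60 min = 6 deg per minute
    hour = int(hour_angle // 30) % 12      # 360 deg / 12 h = 30 deg per hour
    return hour, minute

# E.g., hands detected at 100 deg (hour) and 120 deg (minute) correspond to 3:20,
# since at 3:20 the hour hand sits at 3*30 + 20*0.5 = 100 deg.
```

Using floor division for the hour is deliberate: the hour hand drifts continuously between hour marks, so its angle only ever overshoots the current hour's mark, never the next one.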