Connected smart homes (CSH) have benefited immensely from emerging Internet of Things (IoT) technology. CSH is intended to support everyday life in the private seclusion of the home, and typically covers the integration of smart devices such as smart meters, heating, ventilation, and air conditioning (HVAC), intelligent lighting, and voice-activated assistants, among others. Nevertheless, the risks associated with CSH assets are often of high concern. For instance, energy consumption monitoring through smart meters can reveal sensitive information that may pose a privacy risk to home occupants if not properly managed. Existing risk assessment approaches for CSH tend to focus on qualitative risk assessment methodologies, such as operationally critical threat, asset, and vulnerability evaluation (OCTAVE). However, security risk assessment, particularly for IoT environments, demands both qualitative and quantitative risk assessment. This paper proposes an asset-based risk assessment model that integrates both qualitative and quantitative risk assessment to determine the risk related to assets in CSH when a specific service is used. We apply the fuzzy Analytic Hierarchy Process (fuzzy AHP) to address the subjective assessments of IoT risk analysts and stakeholders. The applicability of the proposed model is illustrated through a use case that constitutes a scenario connected to service delivery in CSH. The proposed model provides a guideline to researchers and practitioners on how to quantify the risks targeting assets in CSH.
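As a rough illustration of the fuzzy AHP weighting step mentioned above, the sketch below derives crisp asset weights from a triangular fuzzy pairwise comparison matrix using Buckley's geometric mean method. The assets, the judgment values, and the choice of Buckley's method are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three hypothetical CSH assets:
# smart meter vs. HVAC controller vs. voice assistant (illustrative judgments only).
tfn = np.array([
    [[1, 1, 1],         [2, 3, 4],       [4, 5, 6]],
    [[1/4, 1/3, 1/2],   [1, 1, 1],       [1, 2, 3]],
    [[1/6, 1/5, 1/4],   [1/3, 1/2, 1],   [1, 1, 1]],
])

# Buckley's method: fuzzy geometric mean of each row, then fuzzy normalization.
geo_mean = tfn.prod(axis=1) ** (1.0 / tfn.shape[0])   # one (l, m, u) triple per asset
total = geo_mean.sum(axis=0)                           # fuzzy sum of the means
fuzzy_weights = geo_mean / total[::-1]                 # divide by (u, m, l) for fuzzy division
crisp_weights = fuzzy_weights.mean(axis=1)             # centre-of-area defuzzification
crisp_weights /= crisp_weights.sum()

print(dict(zip(["smart meter", "HVAC", "voice assistant"], crisp_weights.round(3))))
```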
The international community has largely recognized that the Earth's climate is changing, and mitigating its global effects requires international action. The European Union (EU) is leading several initiatives aimed at reducing these problems. Specifically, the Climate Action seeks to both decrease EU greenhouse gas emissions and improve energy efficiency by reducing the amount of primary energy consumed, and it has pointed to the development of efficient building energy management systems as key. In traditional buildings, households are responsible for continuously monitoring and controlling the installed Heating, Ventilation, and Air Conditioning (HVAC) system. Unnecessary energy consumption might occur, for example, when devices are left turned on, and the need to tune devices manually can overwhelm users. Nowadays, smart buildings automate this process by tuning HVAC systems according to user preferences in order to improve user satisfaction and optimize energy consumption. Towards achieving this goal, in this paper, we compare 36 Machine Learning algorithms that could be used to forecast indoor temperature in a smart building. More specifically, we run experiments using real data to compare their accuracy in terms of R-coefficient and Root Mean Squared Error and their performance in terms of Friedman rank. The results reveal that the ExtraTrees regressor obtained the highest average accuracy (0.97) and performance (0.058) over all horizons.
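A minimal sketch of the kind of regressor comparison described above, using scikit-learn; the synthetic features stand in for the real building data, and the baseline model and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for lagged sensor features (outdoor temp, humidity, occupancy, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=1000)  # indoor temperature proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("ExtraTrees", ExtraTreesRegressor(n_estimators=200, random_state=0)),
                    ("Linear", LinearRegression())]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.3f}")
```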
The integration of the Internet of Things (IoT) and Machine Learning (ML) technologies has opened up the development of novel types of systems and services. Federated Learning (FL) enables systems to collaboratively train their ML models while preserving the privacy of the data collected by their IoT devices and objects. Several FL frameworks have been developed; however, they do not enable FL in open, distributed, and heterogeneous IoT environments. Specifically, they do not support systems that collect similar data to dynamically discover each other, communicate, and negotiate about the training terms (e.g., accuracy, communication latency, and cost). Towards bridging this gap, we propose ART4FL, an end-to-end framework that enables FL in open IoT settings. The framework enables systems' users to configure agents that participate in FL on their behalf. Those agents negotiate and make commitments (i.e., contractual agreements) to dynamically form federations. To perform FL, the framework deploys the needed services dynamically, monitors the training rounds, and calculates agents' trust scores based on the established commitments. ART4FL exploits a blockchain network to maintain the trust scores, and it provides those scores to negotiating agents during the federation formation phase.
Internet of Things (IoT) environments encompass different types of devices and objects that offer a wide range of services. The dynamicity and uncertainty of those environments, including the mobility of users and devices, make it hard to foresee at design time the available devices, objects, and services. For users to benefit from such environments, they should be offered services that are relevant to the specific context and can be provided by the available things. Moreover, environments should be configured automatically based on users' preferences. To address these challenges, we propose an approach that leverages Artificial Intelligence techniques to recognize users' activities and provide relevant services that support users in performing their activities. Moreover, our approach learns users' preferences and configures their environments accordingly by dynamically forming, enacting, and adapting goal-driven IoT systems. In this paper, we present a conceptual model, a multi-tier architecture, and the processes of our approach. Moreover, we report on how we validated the feasibility and evaluated the scalability of the approach through a prototype that we developed and used.
The Internet of Things (IoT) involves intelligent, heterogeneous, autonomous, and often distributed things which interact and collaborate to achieve common goals. A useful concept for supporting this effort is the Emergent Configuration (EC), which consists of a dynamic set of things, with their functionalities and services, that cooperate temporarily to achieve a goal. In this paper we introduce an approach that exploits the concept of commitments to realize ECs. More specifically, (i) we present a conceptual model for commitment-based ECs, and (ii) we use the smart meeting room scenario to illustrate how ECs are realized via commitments.
Multimodal journey planners are used worldwide to support travelers in planning and executing their journeys. Generated travel plans usually involve local mobility service providers, consider some of the travelers' preferences, and provide travelers with information about the routes' current status and expected delays. However, those planners cannot fully consider the special situations of individual cities when providing travel planning services. Specifically, authorities of different cities might define customizable regulations or constraints on movement in the cities (e.g., due to construction works or pandemics). Moreover, with the transformation of traditional cities into smart cities, travel planners could leverage advanced monitoring features. Finally, most planners do not consider relevant information impacting travel plans, for instance, information that might be provided by travelers (e.g., a crowded square) or by mobility service providers (e.g., a change in a bus timetable). To address the aforementioned shortcomings, in this paper, we propose ROUTE, a framework for customizable smart mobility planners that better serve the needs of travelers, local authorities, and mobility service providers in the dynamic ecosystem of smart cities. ROUTE is composed of an architecture, a process, and a prototype developed to validate the feasibility of the framework. Experimental results show that the framework scales well in both centralized and distributed deployment settings.
The Internet of Things (IoT) pervades more and more aspects of our lives and often involves many types of smart connected objects and devices. Users' IoT environments change dynamically, e.g., due to the mobility of the users and devices. Users can fully benefit from the IoT only when they can effortlessly interact with it. To accomplish this in a dynamic and heterogeneous environment, we make use of Emergent Configurations (ECs), which consist of a set of things that connect and cooperate temporarily through their functionalities, applications, and services, to achieve a user goal. In this paper, we: (i) present the IoT-FED architectural approach to enable the automated formation and enactment of ECs. IoT-FED exploits heterogeneous and independently developed things, IoT services, and applications which are modeled as Domain Objects (DOs), a service-based formalism. Additionally, we (ii) discuss the prototype we developed and the experiments run in our IoT lab for validation purposes.
Engineering Internet of Things (IoT) systems is a challenging task partly due to the dynamicity and uncertainty of the environment including the involvement of the human in the loop. Users should be able to achieve their goals seamlessly in different environments, and IoT systems should be able to cope with dynamic changes. Several approaches have been proposed to enable the automated formation, enactment, and self-adaptation of goal-driven IoT systems. However, they do not address deployment issues. In this paper, we propose a goal-driven approach for deploying self-adaptive IoT systems in the Edge-Cloud continuum. Our approach supports the systems to cope with the dynamicity and uncertainty of the environment including changes in their deployment topologies, i.e., the deployment nodes and their interconnections. We describe the architecture and processes of the approach and the simulations that we conducted to validate its feasibility. The results of the simulations show that the approach scales well when generating and adapting the deployment topologies of goal-driven IoT systems in smart homes and smart buildings.
The Internet of Things (IoT) has enabled physical objects and devices, often referred to as things, to connect and communicate. This has opened up the development of novel types of services that improve the quality of our daily lives. The dynamicity and uncertainty of IoT environments, including the mobility of users and devices, make it hard to foresee at design time the available things and services. Further, users should be able to achieve their goals seamlessly in arbitrary environments. To address these challenges, we exploit Artificial Intelligence (AI) to engineer smart IoT systems that can achieve user goals and cope with the dynamicity and uncertainty of their environments. More specifically, the main contribution of this paper is an approach that leverages the notion of Belief-Desire-Intention agents and Machine Learning (ML) techniques to realize Emergent Configurations (ECs) in the IoT. An EC is an IoT system composed of a dynamic set of things that connect and cooperate temporarily to achieve a user goal. The approach enables the distributed formation, enactment, and adaptation of ECs, as well as conflict resolution among them. We present a conceptual model of the entities of the approach, its underlying processes, and the guidelines for using it. Moreover, we report on the simulations conducted to validate the feasibility of the approach and evaluate its scalability.
The Internet of Things (IoT) has great potential to change our lives. Billions of heterogeneous, distributed, intelligent, and sometimes mobile devices will be connected and offer new types of applications and ways to interact. The dynamic environment of the IoT, the involvement of the human in the loop, and the runtime interactions among devices and applications place additional requirements on the systems' architecture. In this paper, we use the Emergent Configurations (ECs) concept as a way to engineer IoT systems and propose an architecture for ECs. More specifically, we discuss (i) how connected devices and applications form ECs to achieve users' goals and (ii) how applications are run and adapted in response to runtime context changes including, e.g., the sudden unavailability of devices, by exploiting the Smart Meeting Room case.
During the last decade, a large number of different definitions and taxonomies of Internet of Things (IoT) systems have been proposed. This has resulted in a fragmented picture and a lack of consensus about IoT systems and their constituents. To provide a better understanding of this issue and a way forward, we have conducted a Systematic Mapping Study (SMS) of existing IoT System taxonomies. In addition, we propose a characterization of IoT systems synthesized from the existing taxonomies, which provides a more holistic view of IoT systems than previous taxonomies. It includes seventeen characteristics, divided into two groups: elements and quality aspects. Finally, by analyzing the results of the SMS, we draw future research directions.
The rapid proliferation of the Internet of Things (IoT) is changing the way we live our everyday life and the society in general. New devices get connected to the Internet every day and, similarly, new IoT services and applications exploiting them are developed across a wide range of domains. The IoT environment is typically very dynamic: devices might suddenly become unavailable and new ones might appear. Similarly, users enter and/or leave the IoT environment while being interested in fulfilling their individual needs. These key aspects must be considered when designing and realizing IoT systems. In this paper we propose ECo-IoT, an architectural approach to enable the automated formation and adaptation of Emergent Configurations (ECs) in the IoT. An EC is formed by a set of things, with their services, functionalities, and applications, to realize a user goal. ECs are adapted in response to (un)foreseen context changes, e.g., changes in available things or changing or evolving user goals. In the paper, we describe: (i) an architecture and a process for realizing ECs; (ii) a prototype we implemented; and (iii) the validation of ECo-IoT through an IoT scenario that we use throughout the paper.
Systems of Systems (SoS) and the Internet of Things (IoT) have many common characteristics. For example, their constituents are heterogeneous, often autonomous, and distributed. Moreover, both IoT systems and SoS achieve their intended goals by means of the dynamic collaboration and coordination among their constituents. In this paper, by using the notion of Emergent Configurations (ECs) as a means to engineer IoT systems, we show how ECs in the IoT can be regarded both as systems and SoS by exploiting two scenarios.
The Internet of Things (IoT) is revolutionizing our environments with novel types of services and applications by exploiting the large number of diverse connected things. One of the main challenges in the IoT is to engineer systems that support human users in achieving their goals in dynamic and uncertain environments. For instance, the mobility of both users and devices makes it infeasible to always foresee the available things in the users' current environments. Moreover, users' activities and/or goals might change suddenly. To support users in such environments, we developed an initial approach that exploits the notion of Emergent Configurations (ECs) and mixed-initiative techniques to engineer self-configuring IoT systems. An EC is a goal-driven IoT system composed of a dynamic set of temporarily connecting and cooperating things. ECs are more flexible and usable than IoT systems whose constituents and interfaces are fully specified at design time.
Systems of Systems (SoS) and the Internet of Things (IoT) have many common characteristics. For example, their constituents are heterogeneous, autonomous and often distributed. Moreover, both IoT and SoS achieve intended goals by means of the highly dynamic cooperation among their constituents. In this paper we study the relation between IoT and SoS. We discuss the characteristics of both concepts and highlight common aspects. Furthermore, we introduce the concept System of Emergent Configurations (SoECs) to describe IoT-based SoS.
The rapidly evolving Internet of Things (IoT) includes applications which might generate a huge amount of data, which requires appropriate platforms and support methods. Cloud computing offers attractive computational and storage solutions to cope with these issues. However, sending all the data generated at the edge of the network to centralized servers causes latency, energy consumption, and high bandwidth demand. Performing some computations at the edge of the network, known as Edge computing, and using a hybrid Edge-Cloud architecture can help address these challenges. While such an architecture may provide new opportunities to distribute IoT applications, making optimal decisions regarding where to deploy the different application components is not an easy and straightforward task for designers. Supporting designers' decisions by considering the key quality attributes impacting them in an Edge-Cloud architecture has not been investigated yet. In this paper, we: explore the importance of decision support for designers, discuss how different attributes impact the decisions, and describe the required steps toward a decision support framework for IoT application designers.
Many Internet of Things (IoT) systems generate a massive amount of data needing to be processed and stored efficiently. Cloud computing solutions are often used to handle these tasks. However, the increasing availability of computational resources close to the edge has prompted the idea of using these for distributed computing and storage. Edge computing may help to improve IoT systems regarding important quality attributes like latency, energy consumption, privacy, and bandwidth utilization. However, deciding where to deploy the various application components is not a straightforward task. This is largely due to the trade-offs between the quality attributes relevant for the application. We have performed a systematic mapping study of 98 articles to investigate which quality attributes have been used in the literature for assessing IoT systems using edge computing. The analysis shows that time behavior and resource utilization are the most frequently used quality attributes; further, response time, turnaround time, and energy consumption are the most used metrics for quantifying these quality attributes. Moreover, simulation is the main tool used for the assessments, and the studied trade-offs are mainly between only two qualities. Finally, we identified a number of research gaps that need further study.
The deployment of Internet of Things (IoT) applications is complex since many quality characteristics should be taken into account, for example, performance, reliability, and security. In this study, we investigate to what extent the current edge computing simulators support the analysis of qualities that are relevant to IoT architects who are designing an IoT system. We first identify the quality characteristics and metrics that can be evaluated through simulation. Then, we study the available simulators in order to assess which of the identified qualities they support. The results show that while several simulation tools for edge computing have been proposed, they focus on a few qualities, such as time behavior and resource utilization. Most of the identified qualities are not considered and we suggest future directions for further investigation to provide appropriate support for IoT architects.
For the efficient execution of Deep Neural Networks (DNN) in the Internet of Things, computation tasks can be distributed and deployed on edge nodes. In contrast to deploying all computation to the cloud, the use of Distributed DNN (DDNN) often results in a reduced amount of data that is sent through the network and thus might increase the overall performance of the system. However, finding an appropriate deployment scenario is often a complex task and requires considering several criteria. In this paper, we introduce a multi-criteria decision-making method based on the Analytical Hierarchy Process for the comparison and selection of deployment alternatives. We use the RECAP simulation framework to model and simulate DDNN deployments on different scales to provide a comprehensive assessment of deployments to system designers. In a case study, we apply the method to a smart city scenario where different distributions and deployments of a DNN are analyzed and compared.
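For context, the core AHP step for ranking deployment alternatives can be sketched as below; the three alternatives, the pairwise judgments, and the single criterion shown are assumptions for illustration, not the case study's actual values.

```python
import numpy as np

# Pairwise comparison of three hypothetical DDNN deployment alternatives
# (all-cloud, all-edge, hybrid split) with respect to one criterion, e.g., latency.
A = np.array([
    [1.0, 1/3, 1/5],
    [3.0, 1.0, 1/2],
    [5.0, 2.0, 1.0],
])

# Priority vector from the principal eigenvector (standard AHP prioritization).
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, eigvals.real.argmax()])
w = w / w.sum()

# Consistency ratio CI / RI, with random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print("priorities:", w.round(3), "consistency ratio:", round(ci / 0.58, 3))
```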
Although smart homes are tasked with an increasing number of everyday activities to keep users safe, healthy, and entertained, privacy concerns arise due to the large amount of personal data in flux. Privacy is widely acknowledged to be contextually dependent; however, the interrelated stakeholders involved in developing and delivering smart home services – IoT developers, companies, users, and lawmakers, to name a few – might approach the smart home context differently. This paper considers smart homes as digital ecosystems to support a contextual analysis of smart home privacy. A conceptual model and an ecosystem ontology are proposed through the design science research methodology to systematize the analyses. Four privacy-oriented scenarios of surveillance in smart homes are discussed to demonstrate the utility of the digital ecosystem approach. The concerns pertain to power dynamics among users (such as main users, smart home bystanders, parent-child dynamics, and intimate partner relationships) and to the responsibility of both companies and public organizations to ensure privacy and the ethical use of IoT devices over time. Continuous evaluation of the approach is encouraged to support the complex challenge of ensuring user privacy in smart homes.
This paper explores the topic of model credibility of Agent-based Models and how they should be evaluated prior to application in policy-making. Specifically, this involves analyzing bordering literature from different fields to: (1) establish a definition of model credibility -- a measure of confidence in the model's inferential capability -- and to (2) assess how model credibility can be strengthened through Verification, Validation, and Accreditation (VV&A) prior to application, as well as through post-application evaluation. Several studies have highlighted severe shortcomings in how V&V of Agent-based Models is performed and documented, and few public administrations have an established process for model accreditation. To address the first issue, we examine the literature on model V&V and, based on this review, introduce and outline the usage of a V&V plan. To address the second issue, we take inspiration from a practical use case of model accreditation applied by a government institution to propose a framework for the accreditation of ABMs for policy-making. The paper concludes with a discussion of the risks associated with improper assessments of model credibility.
The aim of this study is to analyze collaborations involving agent-based modellers and policymakers in order to identify potential challenges that need to be overcome to facilitate simulation-based policy-making. To achieve this, we examined 18 publications reporting on joint projects where Agent-based Modelling (ABM) was carried out in conjunction with modellers, policymakers, and other stakeholders to support policy-making. This study focuses on the challenges that modellers experienced during their collaboration, e.g., disagreement about model specification, political obstacles, unrealistic expectations regarding the insights provided by ABM as well as the limitations of the models, and impatience of stakeholders when waiting for results. We identified and categorized these challenges into five themes: Challenges of Scope, Politics, Management, Understandability, and Credibility. These challenges were analyzed and used to formulate five recommendations, presented as a single approach that takes ethical considerations of policy modelling into account, so that these insights can be used to facilitate future simulation-based policy collaborations.
Social phenomena emerge from agent-environment interactions, rendering many statistical models unsuitable. Agent-based Models (ABMs) offer a viable alternative for exploring policy implications. While recent crises like the COVID-19 pandemic may have increased ABM awareness, their use in policy-making has a long history. To better understand the potential challenges and opportunities of using ABMs to inform policy-making, we conducted a systematic literature review and identified 34 articles describing the use of ABMs involving policymakers. This review revealed that ABMs have been implemented to support policymakers across a range of policy areas, but also identified low levels of model traceability and formal communication. Moreover, the review showed that the model's purpose and type tend to influence how validation is performed. The review concludes that models that have undergone little validation and lack proper documentation, while being informally communicated, may hinder policymakers from effectively motivating their decision-making.
Deep neural networks for positioning can improve accuracy by adapting to inhomogeneous environments. However, they are still susceptible to noisy data, often resulting in invalid positions. A related task, map matching, can be used for reducing geographical invalid positions by aligning observations to a model of the real world. In this paper, we propose an approach for positioning, enhanced with map matching, within a single deep neural network model. We introduce a novel way of reducing the number of invalid position estimates by adding map information to the input of the model and using a map-based loss function. Evaluating on real-world Received Signal Strength Indicator data from an asset tracking application, we show that our approach gives both increased position accuracy and a decrease of one order of magnitude in the number of invalid positions.
Deep neural networks have the ability to generalize beyond observed training data. However, for some applications they may produce output that a priori is known to be invalid. If prior knowledge of valid output regions is available, one way of imposing constraints on deep neural networks is by introducing these priors in a loss function. In this paper, we introduce a novel way of constraining neural network output by using encoded regions with a loss function based on gradient interpolation. We evaluate our method in a positioning task where a region map is used in order to reduce invalid position estimates. Results show that our approach is effective in decreasing invalid outputs for several geometrically complex environments.
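A simplified sketch of the general idea, assuming PyTorch: a validity map is bilinearly interpolated at the predicted position so that a penalty for invalid regions remains differentiable. This is only an approximation of the approach and not the paper's exact region encoding or loss.

```python
import torch

def bilinear_lookup(validity_map, xy):
    """Bilinearly interpolate a (H, W) validity grid (1 = valid, 0 = invalid) at positions xy (N, 2)."""
    H, W = validity_map.shape
    x = xy[:, 0].clamp(0, W - 1 - 1e-6)
    y = xy[:, 1].clamp(0, H - 1 - 1e-6)
    x0, y0 = x.floor().long(), y.floor().long()
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0.float(), y - y0.float()
    return (validity_map[y0, x0] * (1 - wx) * (1 - wy)
            + validity_map[y0, x1] * wx * (1 - wy)
            + validity_map[y1, x0] * (1 - wx) * wy
            + validity_map[y1, x1] * wx * wy)

def map_constrained_loss(pred_xy, true_xy, validity_map, lam=1.0):
    """Mean squared position error plus a penalty for predictions falling in invalid map regions."""
    mse = ((pred_xy - true_xy) ** 2).sum(dim=1).mean()
    invalid_penalty = (1.0 - bilinear_lookup(validity_map, pred_xy)).mean()
    return mse + lam * invalid_penalty
```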
Prototyping Futures gives you a glimpse of what collaborating with academia might look like. Medea and its co-partners share their stories about activities happening at the research centre – projects, methods, tools, and approaches – what challenges lie ahead, and how these can be tackled. Examples of highlighted topics include: What is a living lab and how does it work? What are the visions behind the Connectivity Lab at Medea? And, how can prototyping-methods be used when sketching scenarios for sustainable futures? Other topics are: What is the role of the body when designing technology? What is collaborative media and how can this concept help us understand contemporary media practices? Prototyping Futures also discusses the open-hardware platform Arduino, and the concepts of open data and the Internet of Things, raising questions on how digital media and connected devices can contribute to more sustainable lifestyles, and a better world.
This article provides an overview of recent research on edge-cloud architectures in hybrid energy management systems (HEMSs). It delves into the typical structure of an IoT system, consisting of three key layers: the perception layer, the network layer, and the application layer. The edge-cloud architecture adds two more layers: the middleware layer and the business layer. This article also addresses challenges in the proposed architecture, including standardization, scalability, security, privacy, regulatory compliance, and infrastructure maintenance. Privacy concerns can hinder the adoption of HEMS. Therefore, we also provide an overview of these concerns and recent research on edge-cloud solutions for HEMS that addresses them. This article concludes by discussing the future trends of edge-cloud architectures for HEMS. These trends include increased use of artificial intelligence on an edge level to improve the performance and reliability of HEMS and the use of blockchain to improve the security and privacy of edge-cloud computing systems.
This paper explores the potential of Tiny Machine Learning (TinyML) for privacy-preserving building energy management systems on mobile devices. While TinyML offers reduced latency and improved privacy, its effectiveness in predicting building energy consumption on mobile devices is not well studied. The proposed approach prioritizes user privacy by processing and storing energy data locally on users' mobile devices, leveraging smartphones, tablets, edge nodes, and secure cloud storage. This empowers users with control over their data and adheres to privacy regulations. Predicting building energy usage on mobile devices is crucial because it offers portability, accessibility, and privacy, as well as fosters user engagement. Mobile predictions allow users to conveniently monitor and regulate energy consumption, improving accessibility. Additionally, processing data locally ensures privacy by keeping sensitive information under user control. The paper also investigates the feasibility of converting a TensorFlow-based long short-term memory (LSTM) neural network model for energy prediction to a CoreML or TensorFlow Lite model for deployment on mobile devices. The results indicate a significant degradation in model accuracy after conversion to a CoreML model and almost no degradation after conversion to a TensorFlow Lite model. Further research is recommended to explore optimization techniques for the conversion process and to compare the models using other criteria.
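The TensorFlow Lite side of the conversion study can be sketched with the public converter API as follows; the LSTM architecture and input window are illustrative assumptions, not the paper's actual model.

```python
import tensorflow as tf

# Illustrative LSTM for next-step energy prediction from a 24-step input window.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
# ... train the model on locally stored consumption data ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Allowing select TF ops can help LSTM conversion succeed on some TensorFlow versions.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
with open("energy_lstm.tflite", "wb") as f:
    f.write(tflite_model)
```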
This paper delves into the challenges encountered in decision-making processes within Hybrid Energy Systems (HES), placing a particular emphasis on the critical aspect of data integration. Decision-making processes in HES are inherently complex due to the diverse range of tasks involved in their management. We argue that to overcome these challenges, it is imperative to possess a comprehensive understanding of the HES architecture and how different processes and interaction layers synergistically operate to achieve the desired outcomes. These decision-making processes encompass a wealth of information and insights pertaining to the operation and performance of HES. Furthermore, these processes encompass systems for planning and management that facilitate decisions by providing a centralized platform for data collection, storage, and analysis. The success of HES largely hinges upon its capacity to receive and integrate various types of information. This includes real-time data on energy demand and supply, weather data, performance data derived from different system components, and historical data, all of which contribute to informed decision-making. The ability to accurately integrate and fuse this diverse range of data sources empowers HES to make intelligent decisions and accurate predictions. Consequently, this data integration capability allows HES to provide a multitude of services to customers. These services include valuable recommendations on demand response strategies, energy usage optimization, energy storage utilization, and much more. By leveraging the integrated data effectively, HES can deliver customized and tailored services to meet the specific needs and preferences of its customers.
This paper compares machine learning models for short-term heat demand forecasting in residential and multi-family buildings, evaluating model suitability, the impact of data on accuracy, computation time, and methods for improving accuracy. The findings are relevant for energy suppliers, researchers, and decision-makers in optimizing energy management and improving heat demand forecasting. The models included in the study are k-NN, Polynomial Regression, and LSTM, with weather data, building type, and time index as input variables. Single-dimensional models (Autoregression, SARIMA, and Prophet) based on historical consumption are also studied. LSTM consistently outperforms the other models in accuracy across different input variable combinations, measured using mean absolute percentage error (MAPE). The incorporation of historical consumption data improved the performance of the k-NN and Polynomial Regression models. The paper also explores the impact of dataset volume on accuracy and compares training and prediction times. k-NN has the shortest prediction times, Polynomial Regression takes longer, and LSTM requires the most time. All models exhibit acceptable prediction times for heat consumption forecasting. LSTM outperforms the single-dimensional models in accuracy and has lower prediction times compared to the AR, SARIMA, and Prophet models.
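For reference, the MAPE metric used to compare the models can be computed as in this small sketch; the demand values and forecasts are made up for illustration.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative hourly heat demand (kWh) and forecasts from two hypothetical models.
actual = [52.0, 47.5, 60.2, 55.1]
lstm   = [51.2, 48.0, 59.5, 54.8]
knn    = [49.0, 50.3, 63.0, 52.0]
print(f"LSTM MAPE: {mape(actual, lstm):.2f} %")
print(f"k-NN MAPE: {mape(actual, knn):.2f} %")
```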
The home environment is rapidly becoming more complex with the introduction of numerous and heterogeneous Internet of Things devices. This development into smart connected homes brings with it challenges when it comes to gaining a deeper understanding of the home environment as a socio-technical system. A better understanding of the home is essential to build robust, resilient, and secure smart home systems. In this regard, we developed a novel method for classifying smart home devices in a logical and coherent manner according to their functionality. Unlike other approaches, we build the categorization empirically by mining the technical specifications of 1,193 commercial devices. Moreover, we identify twelve capabilities that can be used to characterize home devices. Alongside the classification, we also quantitatively analyze the entire spectrum of commercial smart home devices according to their functionality and capabilities. Overall, the categorization and analysis provide a foundation for identifying opportunities for generalizations and common solutions for the smart home.
Smart connected homes are integrated with heterogeneous Internet-connected devices interacting with the physical environment and human users. While they have become an established research area, there is no common understanding of what composes such a pervasive environment, making it challenging to perform a scientific analysis of the domain. This is especially evident when it comes to discourse about privacy threats. Recognizing this, we aim to describe a generic smart connected home, including the data it deals with, in a novel privacy-centered system model. This is done using concepts borrowed from the theory of Contextual Integrity. Furthermore, we represent privacy threats formally using the proposed model. To illustrate the usage of the model, we apply it to the design of an ambient-assisted living use case and demonstrate how it can be used for identifying and analyzing the privacy threats directed at smart connected homes.
Smart connected home systems aim to enhance the comfort, convenience, security, entertainment, and health of the householders and their guests. Despite their advantages, their interconnected characteristics make smart home devices and services prone to various cybersecurity and privacy threats. In this paper, we analyze six classes of malicious threat agents for smart connected homes. We also identify four different motives and three distinct capability levels that can be used to group the different intruders. Based on this, we propose a new threat model that can be used for threat profiling. Both hypothetical and real-life examples of attacks are used throughout the paper. In reflecting on this work, we also observe motivations and agents that are not covered in standard agent taxonomies.
The increasing presence of heterogeneous Internet of Things devices inside the home brings with it added convenience and value to the householders. At the same time, these devices tend to be Internet-connected and continuously monitor and collect data about the residents and their daily lifestyle activities. Such data can be of a sensitive nature, given that the house is the place where privacy is naturally expected. To gain insight into this state of affairs, we empirically investigate the privacy policies of 87 different categories of commercial smart home devices in terms of data being collected. This is done using a combination of manual and data mining techniques. The overall contribution of this work is a model that identifies and categorizes smart connected home data in terms of its collection mode, collection method, and collection phase. Our findings bring up several implications for smart connected home privacy, which include the need for better security controls to safeguard the privacy of the householders.
Smart connected home systems bring different privacy challenges to residents. The contribution of this paper is a novel privacy grounded classification of smart connected home systems that is focused on personal data exposure. This classification is built empirically through k-means cluster analysis from the technical specification of 81 commercial Internet of Things (IoT) systems as featured in PrivacyNotIncluded – an online database of consumer IoT systems. The attained classification helps us better understand the privacy implications and what is at stake with different smart connected home systems. Furthermore, we survey the entire spectrum of analyzed systems for their data collection capabilities. Systems were classified into four tiers: app-based accessors, watchers, location harvesters, and listeners, based on the sensing data the systems collect. Our findings indicate that being surveilled inside your home is a realistic threat, particularly, as the majority of the surveyed in-home IoT systems are installed with cameras, microphones, and location trackers. Finally, we identify research directions and suggest some best practices to mitigate the threat of in-house surveillance.
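A minimal sketch of the kind of k-means clustering used to derive such tiers, assuming binary sensing-capability features mined from device specifications; the feature set and rows are illustrative, not the PrivacyNotIncluded data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical capability features per system:
# columns = [has_camera, has_microphone, has_location_tracking, app_only_access]
X = np.array([
    [0, 0, 0, 1],   # app-based accessor
    [1, 0, 0, 0],   # watcher
    [0, 0, 1, 0],   # location harvester
    [1, 1, 0, 0],   # watcher/listener with camera
    [0, 1, 0, 0],   # listener
    [0, 0, 1, 1],
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(StandardScaler().fit_transform(X))
print(labels)   # one cluster index per system; clusters are then interpreted as tiers
```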
Smart homes have become increasingly popular for IoT products and services, with many promises for improving the quality of life of individuals. Nevertheless, the heterogeneous, dynamic, and Internet-connected nature of this environment adds new concerns as private data becomes accessible, often without the householders' awareness. This accessibility, alongside the rising risks of data security and privacy breaches, makes smart home security a critical topic that deserves scrutiny. In this paper, we present an overview of the privacy and security challenges directed towards the smart home domain. We also identify constraints, evaluate solutions, and discuss a number of challenges and research issues where further investigation is required.
Smart homes promise to improve the quality of life of residents. However, they collect vast amounts of personal and sensitive data, making privacy protection critically important. We propose a framework, called PRASH, for modeling and analyzing the privacy risks of smart homes. It is composed of three modules: a system model, a threat model, and a set of privacy metrics, which together are used for calculating the privacy risk exposure of a smart home system. By representing a smart home through a formal specification, PRASH allows for early identification of threats, better planning for risk management scenarios, and mitigation of potential impacts caused by attacks before they compromise the lives of residents. To demonstrate the capabilities of PRASH, an executable version of the smart home system configuration was generated using the proposed formal specification, which was then analyzed to find potential attack paths while also mitigating the impacts of those attacks. Thereby, we add important contributions to the body of knowledge on mitigating threat agents that violate the privacy of users in their homes. Overall, the use of PRASH will help residents to preserve their right to privacy in the face of the emerging challenges affecting smart homes.
We present one of the first actual applications of Multi Agent-Based Simulation (MABS) to the field of software process simulation modelling (SPSM). Although there are some recent attempts to do this, we argue that these fail to take full advantage of the agency paradigm. Our model of the software development process integrates individual-level performance, cognition and artefact quality models in a common simulation framework. In addition, this framework allows the implementation of both MABS and System Dynamics (SD) simulators using the same basic models. As SD is the dominating approach within SPSM, we are able to make relevant and unique comparisons between it and MABS. This enabled us to uncover quite interesting properties of these approaches, e.g., that MABS reflects the problem domain more realistically than SD.
Experiences from different applications of agent technology aiming to make transport and energy systems more efficient are presented. The examples cover real-time applications on the operational level as well as support for long-term planning and decision-making on the strategic level. Some general reflections and insights from the work on these applications conclude the paper.
The recent years have witnessed an enormous growth of mobile services for energy management in buildings. However, these solutions are often proprietary, non-interoperable, and handle only a limited set of functions, such as lighting, ventilation, or heating. To address these issues, we have developed an open platform that provides an integrated energy management solution for buildings. It includes an ecosystem of mobile services and open APIs as well as protocols for the development of new services and products. Moreover, it has an adapter layer that enables the platform to interoperate with any building management system (BMS) or individual device. Thus, the platform makes it possible for third-party developers to produce mobile energy efficiency applications that will work independently of which BMS and devices are used in the building. To validate the platform, a number of services have been implemented and evaluated in existing buildings. This has been done in cooperation with energy companies and property owners, together with the residents and other users of the buildings. The platform, which we call Elis, has been made available as open source software under an MIT license.
The use of agreement technologies in the planning and execution of goods transports is analyzed. We have previously suggested an approach called Plug and Play Transport Chain Management (PnP TCM) that provides agent-based support for key tasks, such as finding the best sequence of transport services for a particular goods transport, monitoring the execution of the transport, and managing the interaction between the involved actors. In this paper we analyze five agreement technologies in the context of PnP TCM: semantics, norms, organizations, argumentation and negotiation, and trust. We conclude that all five technologies play a critical role in the realization of PnP TCM.
The aim of this work is to develop a new type of service for predicting and communicating urban activity. This service provides short-term predictions (hours to days), which can be used as a basis for different types of resource allocation and planning, e.g., concerning public transport, personnel, or marketing. The core of the service consists of a forecasting engine that, based on a prediction model, processes data on different levels of detail and from various providers. This paper explores the requirements and features of the forecasting engine. We conclude that agent-based modeling seems to be the most promising approach to meet these requirements. Finally, some examples of potential applications are described along with analyses of the scientific and engineering issues that need to be addressed.
We investigate the opportunities and challenges of the fourth wave of digitalization, also referred to as the Internet of Things (IoT), with respect to public transport and how it can support the sustainable development of society. Environmental, economic, and social perspectives are considered through analysis of the existing literature and explorative studies. We conclude that there are great opportunities for transport operators and planners, as well as for travelers. We describe and analyze a number of concrete opportunities for each of these actors. However, in order to realize these opportunities, there are also a number of challenges that need to be addressed. These include technical challenges, such as data collection issues, interoperability, scalability, and information security, and non-technical challenges, such as business models, usability, privacy issues, and deployment.
A novel approach to efficiently plan and execute effective transport solutions is presented. It provides agent-based support for key tasks, such as finding the best sequence of transport services for a particular goods transport, monitoring the execution of the transport, and managing the interaction between the involved actors. The approach is based on the FREIGHTWISE framework, in which a minimal set of information packages is defined. The purpose is to capture all the information that needs to be communicated between the actors involved in a transport, such as transport users, transport providers, and infrastructure managers, during the complete process from planning to termination. The approach is inspired by the concepts of virtual enterprises and breeding environments. We analyse the requirements of such an approach and describe a multi-agent system architecture meeting these requirements.
Understanding and managing complex systems has become one of the biggest challenges for research, policy, and industry. Modeling and simulation of complex systems promises to enable us to understand how a human nervous system and brain not just maintain the activities of a metabolism, but enable the production of intelligent behavior, how huge ecosystems adapt to changes, or what actually influences climatic changes. Man-made systems, too, are getting more complex and difficult, or even impossible, to grasp. Therefore we need methods and tools that can help us in, for example, estimating how different infrastructure investments will affect the transport system and understanding the behavior of large Internet-based systems in different situations. This type of system is becoming the focus of research and sustainable management now that the necessary techniques, tools, and computational resources are available. This chapter discusses modeling and simulation of such complex systems. We will start by discussing what characterizes complex systems.
The Internet of Things has become a central and exciting research area encompassing many fields in information and communication technologies and adjacent domains. IoT systems involve interactions with heterogeneous, distributed, and intelligent things, both from the digital and physical worlds including the human in the loop. Thanks to the increasingly wide spectrum of applications and cheap availability of both network connectivity and devices, a number of different stakeholders from industry, academia, society and government are part of the IoT ecosystem.
A set of important criteria to consider when evaluating potential road user charging systems (RUCS) is identified. These criteria are grouped into five categories: charging precision, system costs & societal benefits, flexibility & modifiability, operational aspects, and security & privacy. The criteria are then used in a comparative analysis of five RUCS candidates for heavy goods vehicles. Two of the solutions are position-based systems and one is based on tachographs. The two remaining solutions are based on fuel taxes. For each of the solutions we estimate how well it fulfils each of the criteria. One way of making general comparisons of the approaches is to give each of the criteria a specific weight corresponding to how important it is. We show that these weights heavily influence the outcome of the comparison, as illustrated below. We conclude by pointing out a number of important issues that need attention in the process of developing RUCS.
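To make the weight-sensitivity point concrete, the following sketch scores the five candidate systems under two made-up weightings; all fulfilment scores and weights are illustrative assumptions, not the study's estimates.

```python
import numpy as np

criteria = ["charging precision", "costs & benefits", "flexibility & modifiability",
            "operational aspects", "security & privacy"]
# Rows: candidates; columns: hypothetical fulfilment scores (0-10) per criterion.
scores = np.array([
    [9, 5, 8, 6, 5],   # position-based system A
    [8, 6, 7, 6, 5],   # position-based system B
    [6, 7, 5, 7, 7],   # tachograph-based system
    [3, 9, 3, 8, 9],   # fuel-tax-based system 1
    [3, 9, 3, 8, 9],   # fuel-tax-based system 2
])

for name, w in [("precision-focused weights", [0.4, 0.15, 0.15, 0.15, 0.15]),
                ("cost-focused weights",      [0.1, 0.5, 0.1, 0.15, 0.15])]:
    totals = scores @ np.array(w)
    print(name, "->", totals.round(2), "best candidate index:", int(totals.argmax()))
```

With the first weighting a position-based system comes out on top, while the second favours the fuel-tax solutions, illustrating how strongly the chosen weights drive the ranking.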