Malmö University Publications
Publications (10 of 22)
Fabijan, A., Dmitriev, P., Olsson Holmström, H. & Bosch, J. (2020). The Online Controlled Experiment Lifecycle. IEEE Software, 37(2), 60-67.
The Online Controlled Experiment Lifecycle
2020 (English). In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 37, no. 2, p. 60-67. Article in journal (Refereed). Published.
Abstract [en]

Online Controlled Experiments (OCEs) enable an accurate understanding of customer value and generate millions of dollars of additional revenue at Microsoft. Unlike other techniques for learning from customers, OCEs establish an accurate and causal relationship between a change and the impact observed. Although previous research describes technical and statistical dimensions, the key phases of online experimentation are not widely known, their impact and importance are obscure, and how to establish OCEs in an organization is underexplored. In this paper, using a longitudinal in-depth case study, we address this gap by (1) presenting the Experiment Lifecycle, and (2) demonstrating its profound impact with four example experiments. We show that OCEs help optimize infrastructure needs and aid in project planning and measuring team efforts, in addition to their primary goal of accurately identifying what customers value. We conclude that product development should fully integrate the Experiment Lifecycle to benefit from OCEs.
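The causal logic the abstract refers to can be made concrete with a minimal sketch (not from the paper; the salt, metric, and numbers below are hypothetical): users are hashed into control or treatment, and because assignment is randomized, a statistically significant difference in a metric can be attributed to the change itself rather than to confounders.

```python
import hashlib
import random
from scipy import stats

def assign_variant(user_id: str, salt: str = "exp-demo") -> str:
    """Deterministically hash a user into control or treatment (50/50 split)."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# Simulate a per-user metric (e.g. sessions per user) with a small true
# effect for treatment users; all numbers are hypothetical.
random.seed(0)
groups = {"control": [], "treatment": []}
for uid in range(10_000):
    variant = assign_variant(str(uid))
    lift = 0.1 if variant == "treatment" else 0.0
    groups[variant].append(random.gauss(10.0 + lift, 2.0))

# Because assignment was randomized, a significant difference in means can
# be read causally: the change, not a confounder, moved the metric.
t_stat, p_value = stats.ttest_ind(groups["treatment"], groups["control"])
delta = (sum(groups["treatment"]) / len(groups["treatment"])
         - sum(groups["control"]) / len(groups["control"]))
print(f"delta = {delta:+.3f}, p = {p_value:.4f}")
```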

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Measurement, Companies, Software, Computer science, Product development, Media, Planning, Data-driven development, A/B tests, Online Controlled Experiments, experiment lifecycle
National Category
Software Engineering
Identifiers
urn:nbn:se:mau:diva-2309 (URN), 10.1109/MS.2018.2875842 (DOI), 000520152900011 (ISI), 2-s2.0-85055195833 (Scopus ID), 28040 (Local ID), 28040 (Archive number), 28040 (OAI)
Available from: 2020-02-27. Created: 2020-02-27. Last updated: 2024-04-04. Bibliographically approved.
Chrobak, M., Dürr, C., Fabijan, A. & Nilsson, B. J. (2019). Online Clique Clustering. Algorithmica, 82(4), 938-965
Online Clique Clustering
2019 (English). In: Algorithmica, ISSN 0178-4617, E-ISSN 1432-0541, Vol. 82, no. 4, p. 938-965. Article in journal (Refereed). Published.
Abstract [en]

Clique clustering is the problem of partitioning the vertices of a graph into disjoint clusters, where each cluster forms a clique in the graph, while optimizing some objective function. In online clustering, the input graph is given one vertex at a time, and any vertices that have previously been clustered together are not allowed to be separated. The goal is to maintain a clustering with an objective value close to the optimal solution. For the variant where we want to maximize the number of edges in the clusters, we propose an online algorithm based on the doubling technique. It has an asymptotic competitive ratio at most 15.646 and a strict competitive ratio at most 22.641. We also show that no deterministic algorithm can have an asymptotic competitive ratio better than 6. For the variant where we want to minimize the number of edges between clusters, we show that the deterministic competitive ratio of the problem is n−ω(1), where n is the number of vertices in the graph.
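To make the online constraint concrete, here is a naive greedy baseline in Python. This is explicitly not the paper's doubling algorithm and carries no competitive-ratio guarantee; it only illustrates the rules of the game: each arriving vertex joins the largest existing cluster it is fully connected to, and clusters, once merged, are never split.

```python
from typing import Dict, List, Set

def online_clique_clustering(stream):
    """Naive greedy baseline (NOT the paper's doubling algorithm): each arriving
    vertex joins the cluster that gains the most internal edges while all
    clusters remain cliques; once clustered together, vertices never separate."""
    adj: Dict[int, Set[int]] = {}
    clusters: List[Set[int]] = []
    for v, neighbors in stream:  # stream of (vertex, edges to earlier vertices)
        adj[v] = set(neighbors)
        best, best_gain = None, 0
        for c in clusters:
            # v may join c only if it is adjacent to every vertex in c, so
            # that c stays a clique; the gain is |c| new internal edges.
            if c <= adj[v] and len(c) > best_gain:
                best, best_gain = c, len(c)
        if best is not None:
            best.add(v)
        else:
            clusters.append({v})
    return clusters

# Example: a triangle 0-1-2 followed by an isolated vertex 3.
print(online_clique_clustering([(0, []), (1, [0]), (2, [0, 1]), (3, [])]))
# -> [{0, 1, 2}, {3}]
```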

Place, publisher, year, edition, pages
Springer, 2019
National Category
Computer Sciences
Identifiers
urn:nbn:se:mau:diva-49617 (URN), 10.1007/s00453-019-00625-1 (DOI), 001027952200010 (ISI), 2-s2.0-85073948334 (Scopus ID)
Funder
Malmö University
Available from: 2022-01-24. Created: 2022-01-24. Last updated: 2024-04-04. Bibliographically approved.
Fabijan, A., Dmitriev, P., Olsson, H. H., Bosch, J., Vermeer, L. & Lewis, D. (2019). Three Key Checklists and Remedies for Trustworthy Analysis of Online Controlled Experiments at Scale. In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP 2019). Paper presented at IEEE/ACM 41st International Conference on Software Engineering, 25-31 May 2019, Montreal, QC, Canada (pp. 1-10). IEEE.
Three Key Checklists and Remedies for Trustworthy Analysis of Online Controlled Experiments at Scale
2019 (English). In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP 2019), IEEE, 2019, p. 1-10. Conference paper, Published paper (Refereed).
Abstract [en]

Online Controlled Experiments (OCEs) are transforming the decision-making process of data-driven companies into an experimental laboratory. Despite their great power in identifying what customers actually value, experimentation is very sensitive to data loss, skipped checks, wrong designs, and many other 'hiccups' in the analysis process. For this reason, experiment analysis has traditionally been done by experienced data analysts and scientists who closely monitored experiments throughout their lifecycle. Depending solely on scarce experts, however, is neither scalable nor bulletproof. To democratize experimentation, analysis should be streamlined and meticulously performed by engineers, managers, or others responsible for the development of a product. In this paper, based on the synthesized experience of companies that run thousands of OCEs per year, we examined how experts inspect online experiments. We reveal that most of the experiment analysis happens before OCEs are even started, and we summarize the key analysis steps in three checklists. The value of the checklists is threefold. First, they can increase the accuracy of the experiment setup and decision-making process. Second, checklists can enable novice data scientists and software engineers to become more autonomous in setting up and analyzing experiments. Finally, they can serve as a base for developing trustworthy platforms and tools for OCE setup and analysis.
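As an illustration of the kind of pre-decision check such checklists codify, the sketch below implements a Sample Ratio Mismatch (SRM) test, a standard trustworthiness check in the OCE literature; the counts and the alpha threshold are hypothetical, and the exact checklist items are in the paper itself.

```python
from scipy import stats

def srm_check(control_users: int, treatment_users: int,
              expected_ratio: float = 0.5, alpha: float = 0.001) -> bool:
    """Sample Ratio Mismatch test: a chi-square goodness-of-fit test of the
    observed user split against the configured split. A tiny p-value means
    randomization is likely broken and the experiment results are untrustworthy."""
    total = control_users + treatment_users
    expected = [total * (1 - expected_ratio), total * expected_ratio]
    chi2, p = stats.chisquare([control_users, treatment_users], f_exp=expected)
    return p >= alpha  # True -> the split looks healthy

# Hypothetical counts for a 50/50 experiment that collected 100,000 users.
print(srm_check(50_800, 49_200))  # False: a 50.8/49.2 split is a strong SRM signal
```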

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Online Controlled Experiments, A/B testing, Experiment Checklists
National Category
Software Engineering
Identifiers
urn:nbn:se:mau:diva-39375 (URN), 10.1109/ICSE-SEIP.2019.00009 (DOI), 000503218300001 (ISI), 2-s2.0-85072103557 (Scopus ID)
Conference
IEEE/ACM 41st International Conference on Software Engineering, 25-31 May 2019, Montreal, QC, Canada
Available from: 2021-01-19. Created: 2021-01-19. Last updated: 2024-04-04. Bibliographically approved.
Mattos Issa, D., Dmitriev, P., Fabijan, A., Bosch, J. & Olsson Holmström, H. (2018). An Activity and Metric Model for Online Controlled Experiments. In: PROFES 2018: Product-Focused Software Process Improvement. Paper presented at International Conference on Product-Focused Software Process Improvement, Wolfsburg, Germany (November 28-30) (pp. 182-198). Springer.
An Activity and Metric Model for Online Controlled Experiments
2018 (English). In: PROFES 2018: Product-Focused Software Process Improvement, Springer, 2018, p. 182-198. Conference paper, Published paper (Refereed).
Abstract [en]

Accurate prioritization of efforts in product and services development is critical to the success of every company. Online controlled experiments, also known as A/B tests, enable software companies to establish causal relationships between changes in their systems and movements in their metrics. By experimenting, product development can be directed towards identifying and delivering value. Previous research stresses the need for data-driven development and experimentation. However, existing models describe the experimentation process at a level of granularity that is neither detailed enough nor scalable enough to support running more, and more diverse, experiments in an online setting. Based on a case study of multiple products running online controlled experiments at Microsoft, we provide an experimentation framework composed of two detailed experimentation models focused on two main aspects: the experimentation activities and the experimentation metrics. This work intends to provide guidelines to companies and practitioners on how to set up and organize experimentation activities for running trustworthy online controlled experiments.
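A minimal sketch of what a machine-readable metric model could look like follows; the role names and fields are illustrative assumptions for this sketch, not the categories defined in the paper.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class MetricRole(Enum):
    # Illustrative roles only; see the paper for its actual metric model.
    SUCCESS = "success"          # what the experiment tries to move
    GUARDRAIL = "guardrail"      # must not regress (e.g. latency, crashes)
    DATA_QUALITY = "quality"     # validates the experiment itself (e.g. SRM)

@dataclass
class ExperimentMetric:
    name: str
    role: MetricRole
    compute: Callable[[List[dict]], float]  # aggregates raw telemetry events

# Hypothetical metric definitions over per-user telemetry events.
metrics = [
    ExperimentMetric("distinct_sessions", MetricRole.SUCCESS,
                     lambda events: float(len({e["session_id"] for e in events}))),
    ExperimentMetric("error_rate", MetricRole.GUARDRAIL,
                     lambda events: sum(e["is_error"] for e in events) / max(len(events), 1)),
]
```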

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 11271
Keywords
Data-driven development, A/B tests, Online controlled experiments
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-12493 (URN), 10.1007/978-3-030-03673-7_14 (DOI), 000766909900014 (ISI), 2-s2.0-85057221259 (Scopus ID), 28009 (Local ID), 28009 (Archive number), 28009 (OAI)
Conference
International Conference on Product-Focused Software Process Improvement, Wolfsburg, Germany (November 28-30)
Available from: 2020-02-29. Created: 2020-02-29. Last updated: 2024-04-04. Bibliographically approved.
Fabijan, A. (2018). Data-Driven Software Development at Large Scale: from Ad-Hoc Data Collection to Trustworthy Experimentation. (Doctoral dissertation). Malmö University, Faculty of Technology and Society.
Data-Driven Software Development at Large Scale: from Ad-Hoc Data Collection to Trustworthy Experimentation
2018 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Accurately learning what customers value is critical for the success of every company. Despite the extensive research on identifying customer preferences, only a handful of software companies succeed in becoming truly data-driven at scale. Benefiting from novel approaches such as experimentation, in addition to traditional feedback collection, is challenging, yet tremendously impactful when performed correctly. In this thesis, we explore how software companies evolve from data collectors with ad-hoc benefits to trustworthy data-driven decision makers at scale. We base our work on 3.5 years of longitudinal multiple-case study research with companies working both in the embedded systems domain (e.g. engineering connected vehicles, surveillance systems, etc.) and in the online domain (e.g. developing search engines, mobile applications, etc.). The contribution of this thesis is three-fold. First, we present how software companies use data to learn from customers. Second, we show how to adopt and evolve controlled experimentation to become more accurate in learning what customers value. Finally, we provide detailed guidelines that companies can use to improve their experimentation capabilities. With our work, we aim to empower software companies to become truly data-driven at scale through trustworthy experimentation. Ultimately, this should lead to better software products and services.

Place, publisher, year, edition, pages
Malmö University, Faculty of Technology and Society, 2018. p. 357
Series
Studies in Computer Science ; 6
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-7768 (URN), 10.24834/2043/24873 (DOI), 24873 (Local ID), 9789171049186 (ISBN), 9789171049193 (ISBN), 24873 (Archive number), 24873 (OAI)
Public defence
2018-06-15, NI:B0E07, Nordenskiöldsgatan 1, 13:00 (English)
Note

In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Malmö University's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.

Available from: 2020-02-28. Created: 2020-02-28. Last updated: 2024-04-04. Bibliographically approved.
Fabijan, A., Dmitriev, P., Olsson, H. H. & Bosch, J. (2018). Effective Online Controlled Experiment Analysis at Large Scale. In: Proceedings of the EUROMICRO Conference. Paper presented at 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Prague, Czech Republic (29-31 Aug. 2018) (pp. 64-67). IEEE.
Effective Online Controlled Experiment Analysis at Large Scale
2018 (English). In: Proceedings of the EUROMICRO Conference, IEEE, 2018, p. 64-67. Conference paper, Published paper (Refereed).
Abstract [en]

Online Controlled Experiments (OCEs) are the norm in data-driven software companies because of the benefits they provide for building and deploying software. Product teams experiment to accurately learn whether the changes they make to their products (e.g. adding new features) cause any impact (e.g. customers use them more frequently). Experiments also help reduce the risk of deploying software by minimizing the magnitude and duration of harm caused by software bugs, allowing software to be shipped more frequently. To make informed decisions in product development, experiment analysis needs to be granular, covering a large number of metrics over heterogeneous devices and audiences. Discovering experiment insights by hand, however, can be cumbersome. In this paper, based on case study research at a large-scale software development company with a long tradition of experimentation, we (1) describe the standard process of experiment analysis, and (2) introduce an artifact to improve the effectiveness and comprehensiveness of this process.
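A granular, segment-by-segment readout of the kind described here might look like the following sketch. The field names and the choice of a plain t-test are assumptions for illustration; a real platform would also correct for multiple comparisons across the many segment-metric cells.

```python
from collections import defaultdict
from scipy import stats

def segmented_readout(rows):
    """One treatment-vs-control comparison per segment (e.g. device type).
    `rows` are dicts like {"variant": "treatment", "segment": "mobile",
    "value": 1.3}; the field names are illustrative."""
    cells = defaultdict(lambda: {"control": [], "treatment": []})
    for r in rows:
        cells[r["segment"]][r["variant"]].append(r["value"])
    report = {}
    for segment, groups in cells.items():
        t_stat, p = stats.ttest_ind(groups["treatment"], groups["control"])
        delta = (sum(groups["treatment"]) / len(groups["treatment"])
                 - sum(groups["control"]) / len(groups["control"]))
        report[segment] = {"delta": delta, "p": p}
    return report  # cells with small p are surfaced for human review
```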

Place, publisher, year, edition, pages
IEEE, 2018
Series
Proceedings of the Euromicro Conference, ISSN 1089-6503
Keywords
Online Controlled Experiments, A/B testing, Guided Experiment Analysis
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-12777 (URN), 10.1109/SEAA.2018.00020 (DOI), 000450238900011 (ISI), 2-s2.0-85057181553 (Scopus ID), 27274 (Local ID), 27274 (Archive number), 27274 (OAI)
Conference
44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Prague, Czech Republic (29-31 Aug. 2018)
Available from: 2020-02-29. Created: 2020-02-29. Last updated: 2024-04-04. Bibliographically approved.
Fabijan, A., Dmitriev, P., McFarland, C., Vermeer, L., Olsson Holmström, H. & Bosch, J. (2018). Experimentation growth: Evolving trustworthy A/B testing capabilities in online software companies. Journal of Software: Evolution and Process, 30(12), Article ID e2113.
Experimentation growth: Evolving trustworthy A/B testing capabilities in online software companies
2018 (English). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 30, no. 12, article id e2113. Article in journal (Refereed). Published.
Abstract [en]

Companies need to know how much value their ideas deliver to customers. One of the most powerful ways to measure this accurately is by conducting online controlled experiments (OCEs). To run experiments, however, companies need to develop strong experimentation practices as well as align their organization and culture to experimentation. The main objective of this paper is to demonstrate how to run OCEs at large scale using the experience of companies that succeeded in scaling. Based on case study research at Microsoft, Booking.com, Skyscanner, and Intuit, we present our main contribution: the Experimentation Growth Model. This four-stage model addresses the seven critical aspects of experimentation and can help companies transform their organizations into learning laboratories where new ideas can be tested with scientific accuracy. Ultimately, this should lead to better products and services.

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
Keywords
A/B testing, case study, experimentation growth model, online controlled experimentation
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-2625 (URN), 10.1002/smr.2113 (DOI), 000453027700002 (ISI), 2-s2.0-85057183657 (Scopus ID), 28039 (Local ID), 28039 (Archive number), 28039 (OAI)
Available from: 2020-02-27. Created: 2020-02-27. Last updated: 2024-04-04. Bibliographically approved.
Fabijan, A., Dmitriev, P., Olsson, H. H. & Bosch, J. (2018). Online Controlled Experimentation at Scale: An Empirical Survey on the Current State of A/B Testing. In: Proceedings of the EUROMICRO Conference. Paper presented at 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Prague, Czech Republic (29-31 Aug. 2018) (pp. 68-72). IEEE.
Online Controlled Experimentation at Scale: An Empirical Survey on the Current State of A/B Testing
2018 (English). In: Proceedings of the EUROMICRO Conference, IEEE, 2018, p. 68-72. Conference paper, Published paper (Refereed).
Abstract [en]

Online Controlled Experiments (OCEs, aka A/B tests) are one of the most powerful methods for measuring how much value new features and changes deployed to software products bring to users. Companies like Microsoft, Amazon, and Booking.com report the ability to conduct thousands of OCEs every year. However, the capabilities of the remainder of the online software industry remain unknown. The main objective of this paper is to reveal the current state of A/B testing maturity in the software industry, based on a maturity model from our previous research. We base our findings on 44 responses to an online empirical survey. The main contribution of this paper is the current state of experimentation maturity, as operationalized by the ExG model, for a convenience sample of companies doing online controlled experiments. Our findings show that, among other things, companies typically develop in-house experimentation platforms, that these platforms are of various levels of maturity, and that designing key metrics - Overall Evaluation Criteria - remains the key challenge for successful experimentation.

Place, publisher, year, edition, pages
IEEE, 2018
Series
Proceedings of the Euromicro Conference, ISSN 1089-6503
Keywords
controlled experimentation, A/B testing, empirical survey, Experimentation Growth Model
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-12624 (URN), 10.1109/SEAA.2018.00021 (DOI), 000450238900012 (ISI), 2-s2.0-85057169416 (Scopus ID), 27273 (Local ID), 27273 (Archive number), 27273 (OAI)
Conference
44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Prague, Czech Republic (29-31 Aug. 2018)
Available from: 2020-02-29. Created: 2020-02-29. Last updated: 2024-04-04. Bibliographically approved.
Gupta, S., Ulanova, L., Bhardwaj, S., Dmitriev, P., Raff, P. & Fabijan, A. (2018). The Anatomy of a Large-Scale Experimentation Platform. In: 2018 IEEE International Conference on Software Architecture (ICSA). Paper presented at IEEE International Conference on Software Architecture (ICSA), Seattle, WA, USA (2018). IEEE.
The Anatomy of a Large-Scale Experimentation Platform
2018 (English). In: 2018 IEEE International Conference on Software Architecture (ICSA), IEEE, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

Online controlled experiments (e.g., A/B tests) are an integral part of successful data-driven companies. At Microsoft, supporting experimentation poses a unique challenge due to the wide variety of products being developed, along with the fact that experimentation capabilities had to be added to existing, mature products with codebases that go back decades. This paper describes the Microsoft ExP Platform (ExP for short) which enables trustworthy A/B experimentation at scale for products across Microsoft, from web properties (such as bing.com) to mobile apps to device drivers within the Windows operating system. The two core tenets of the platform are trustworthiness (an experiment is meaningful only if its results can be trusted) and scalability (we aspire to expose every single change in any product through an A/B experiment). Currently, over ten thousand experiments are run annually. In this paper, we describe the four core components of an A/B experimentation system - the experimentation portal, the experiment execution service, the log processing service, and the analysis service - and we explain the reasoning behind the design choices made. These four components work together to provide a system where ideas can turn into experiments within minutes and those experiments can provide initial trustworthy results within hours.
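The four components named in the abstract suggest a clean separation of concerns; the interface sketch below is one reading of that architecture, with all method names invented for illustration (they are not the Microsoft ExP APIs).

```python
from abc import ABC, abstractmethod
from typing import Iterable, List

class ExperimentationPortal(ABC):
    """Where experiment owners author, configure and manage experiments."""
    @abstractmethod
    def create_experiment(self, config: dict) -> str: ...

class ExperimentExecutionService(ABC):
    """Serves variant assignments to products at request time."""
    @abstractmethod
    def variant_for(self, experiment_id: str, user_id: str) -> str: ...

class LogProcessingService(ABC):
    """Joins raw product telemetry with assignment logs into analyzable data."""
    @abstractmethod
    def cook(self, raw_logs: Iterable[dict]) -> List[dict]: ...

class AnalysisService(ABC):
    """Computes metrics and statistics and produces the experiment scorecard."""
    @abstractmethod
    def scorecard(self, experiment_id: str) -> dict: ...
```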

Place, publisher, year, edition, pages
IEEE, 2018
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-12386 (URN), 10.1109/ICSA.2018.00009 (DOI), 000492762900001 (ISI), 2-s2.0-85051109145 (Scopus ID), 28037 (Local ID), 28037 (Archive number), 28037 (OAI)
Conference
IEEE International Conference on Software Architecture (ICSA), Seattle, WA, USA (2018)
Available from: 2020-02-29. Created: 2020-02-29. Last updated: 2024-04-04. Bibliographically approved.
Fabijan, A., Olsson Holmström, H. & Bosch, J. (2017). Differentiating Feature Realization in Software Product Development. In: Product-Focused Software Process Improvement: PROFES 2017. Paper presented at Product-Focused Software Process Improvement (PROFES), Innsbruck, Austria (29 November - 01 December) (pp. 221-236). Springer.
Differentiating Feature Realization in Software Product Development
2017 (English). In: Product-Focused Software Process Improvement: PROFES 2017, Springer, 2017, p. 221-236. Conference paper, Published paper (Refereed).
Abstract [en]

Software is no longer only supporting mechanical and electrical products. Today, it is becoming the main competitive advantage and an enabler of innovation. Not all software, however, has an equal impact on customers. Companies still struggle to differentiate between features that are regularly used, features that exist mainly so the product can be sold, features that differentiate and add value for customers, and features that are regarded as commodity. Goal: The aim of this paper is to (1) identify the different types of software features that we can find in software products today, and (2) recommend how to prioritize the development activities for each of them. Method: In this paper, we conduct a case study with five large-scale software-intensive companies. Results: Our main result is a model in which we differentiate between four fundamentally different types of features (i.e. 'Checkbox', 'Flow', 'Duty' and 'Wow'). Conclusions: Our model helps companies in (1) differentiating between the feature types, and (2) selecting an optimal methodology for their development (e.g. 'Output-Driven' vs. 'Outcome-Driven').
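The four feature types can be captured in a small taxonomy, sketched below. The type names come from the abstract, the one-line glosses are paraphrases, and the example methodology mapping is a hypothetical illustration rather than the paper's actual recommendation.

```python
from enum import Enum

class FeatureType(Enum):
    # Type names are from the abstract; the glosses are paraphrases.
    CHECKBOX = "checkbox"  # present mainly so the product can be sold
    FLOW = "flow"          # regularly used as part of the core workflow
    DUTY = "duty"          # required to exist, commodity-like
    WOW = "wow"            # differentiating, adds new customer value

class Methodology(Enum):
    OUTPUT_DRIVEN = "output-driven"    # ship the feature, verify it exists
    OUTCOME_DRIVEN = "outcome-driven"  # iterate against measured customer value

# Hypothetical mapping for illustration only; the paper's model is the
# authoritative guide for which methodology fits which feature type.
EXAMPLE_MAPPING = {
    FeatureType.CHECKBOX: Methodology.OUTPUT_DRIVEN,
    FeatureType.DUTY: Methodology.OUTPUT_DRIVEN,
    FeatureType.FLOW: Methodology.OUTCOME_DRIVEN,
    FeatureType.WOW: Methodology.OUTCOME_DRIVEN,
}
```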

Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 10611
Keywords
Data, Feedback, Outcome-driven development, data-driven development, Goal-oriented development
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-12579 (URN), 10.1007/978-3-319-69926-4_16 (DOI), 000439967400016 (ISI), 2-s2.0-85034596851 (Scopus ID), 24152 (Local ID), 24152 (Archive number), 24152 (OAI)
Conference
Product-Focused Software Process Improvement (PROFES), Innsbruck, Austria (29 November - 01 December)
Available from: 2020-02-29. Created: 2020-02-29. Last updated: 2024-04-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-4908-2708
