Malmö University Publications
Three Key Checklists and Remedies for Trustworthy Analysis of Online Controlled Experiments at Scale
Malmö University, Faculty of Technology and Society (TS), Department of Computer Science and Media Technology (DVMT).
Malmö University, Faculty of Technology and Society (TS), Department of Computer Science and Media Technology (DVMT). ORCID iD: 0000-0002-7700-1816
2019 (English). In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP 2019), IEEE, 2019, p. 1-10. Conference paper, published paper (refereed).
Abstract [en]

Online Controlled Experiments (OCEs) are transforming the decision-making process of data-driven companies into an experimental laboratory. Despite its great power in identifying what customers actually value, experimentation is highly sensitive to data loss, skipped checks, wrong designs, and many other 'hiccups' in the analysis process. For this reason, experiment analysis has traditionally been done by experienced data analysts and scientists who closely monitor experiments throughout their lifecycle. Depending solely on scarce experts, however, is neither scalable nor bulletproof. To democratize experimentation, analysis should be streamlined and meticulously performed by engineers, managers, or others responsible for the development of a product. In this paper, based on the synthesized experience of companies that run thousands of OCEs per year, we examine how experts inspect online experiments. We reveal that most of the experiment analysis happens before OCEs are even started, and we summarize the key analysis steps in three checklists. The value of the checklists is threefold. First, they can increase the accuracy of experiment setup and of the decision-making process. Second, checklists can enable novice data scientists and software engineers to become more autonomous in setting up and analyzing experiments. Finally, they can serve as a basis for developing trustworthy platforms and tools for OCE setup and analysis.
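One concrete example of the kind of automatable trustworthiness check such checklists codify is the sample ratio mismatch (SRM) test, which compares the observed traffic split against the configured one. The sketch below is illustrative only and is not taken from the paper's checklists; the function name, alpha threshold, and example counts are assumptions.

```python
# Illustrative sketch of a sample ratio mismatch (SRM) check, a standard
# trustworthiness test in OCE analysis. Not taken from the paper; the
# function name, alpha threshold, and example counts are assumptions.
from scipy.stats import chisquare

def srm_check(control_users: int, treatment_users: int,
              expected_ratio: float = 0.5, alpha: float = 0.001):
    """Chi-squared test of the observed split against the configured split.

    A significant result means users were assigned in a ratio that deviates
    from the experiment's design, so downstream metrics should not be
    trusted until the cause (e.g. data loss, a redirect bug) is found.
    """
    total = control_users + treatment_users
    expected = [total * expected_ratio, total * (1 - expected_ratio)]
    _, p_value = chisquare([control_users, treatment_users], f_exp=expected)
    return p_value < alpha, p_value  # (SRM detected?, p-value)

# A nominally 50/50 experiment that collected 50,000 vs 51,500 users:
# the imbalance is far too large to be chance, so the check fires.
detected, p = srm_check(50_000, 51_500)
print(f"SRM detected: {detected} (p = {p:.2e})")
```

In large experimentation platforms, checks of this kind typically run automatically on every experiment before any metric movement is interpreted, which is one way analysis can be streamlined for non-experts.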

Place, publisher, year, edition, pages
IEEE, 2019. p. 1-10
Keywords [en]
Online Controlled Experiments, A/B testing, Experiment Checklists
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:mau:diva-39375
DOI: 10.1109/ICSE-SEIP.2019.00009
ISI: 000503218300001
OAI: oai:DiVA.org:mau-39375
DiVA, id: diva2:1519607
Conference
IEEE/ACM 41st International Conference on Software Engineering, 25-31 May 2019, Montreal, QC, Canada
Available from: 2021-01-19. Created: 2021-01-19. Last updated: 2022-04-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Fabijan, Aleksander; Olsson, Helena Holmström
