Considering rigor and relevance when evaluating test driven development: A systematic review
Department of Computer Science, Lund University. ORCID iD: 0000-0001-9376-9844
Unicon Inc, AZ, USA.
School of Computing, Blekinge Institute of Technology.
2014 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 56, no 4, p. 375-394. Article in journal (Refereed). Published.
Abstract [en]

Context

Test driven development (TDD) has been extensively researched and compared to traditional approaches (test last development, TLD). Existing literature reviews show varying results for TDD.

Objective

This study investigates how the conclusions of existing literature reviews change when two study quality dimensions are taken into account, namely rigor and relevance.

Method

In this study, a systematic literature review was conducted, and the results of the identified primary studies were analyzed with respect to rigor and relevance scores using the assessment rubric proposed by Ivarsson and Gorschek (2011). Rigor and relevance are rated on a scale, which is explained in this paper. Four categories of studies were defined based on high/low rigor and relevance.
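As a rough illustration of this categorization step (a minimal sketch, not the authors' actual implementation), the code below scores hypothetical primary studies on the rigor aspects (context, study design, validity) and relevance aspects (subjects, context, scale, research method) named in Ivarsson and Gorschek's rubric, sums them, and assigns each study to one of the four high/low categories. The Study fields and the cut-off values are assumptions made for illustration only.

```python
from dataclasses import dataclass

# Hypothetical scoring sketch: aspect names follow Ivarsson and Gorschek's
# rigor/relevance rubric, but the concrete cut-off values below are
# assumptions for illustration, not values taken from the reviewed paper.

@dataclass
class Study:
    name: str
    # Rigor aspects, each scored 0, 0.5, or 1 (weak / medium / strong description)
    context: float
    study_design: float
    validity: float
    # Relevance aspects, each scored 0 or 1 (contributes to industrial realism or not)
    subjects: int
    research_context: int
    scale: int
    research_method: int

def rigor(s: Study) -> float:
    # Sum of rigor aspect scores, range 0..3
    return s.context + s.study_design + s.validity

def relevance(s: Study) -> int:
    # Sum of relevance aspect scores, range 0..4
    return s.subjects + s.research_context + s.scale + s.research_method

def category(s: Study, rigor_cut: float = 2.0, relevance_cut: int = 3) -> str:
    """Place a study in one of the four high/low rigor-relevance categories.
    The cut-off values are illustrative assumptions."""
    hi_rig = rigor(s) >= rigor_cut
    hi_rel = relevance(s) >= relevance_cut
    return ("high rigor, high relevance" if hi_rig and hi_rel else
            "high rigor, low relevance" if hi_rig else
            "low rigor, high relevance" if hi_rel else
            "low rigor, low relevance")

if __name__ == "__main__":
    example = Study("industrial case study", 1, 1, 0.5, 1, 1, 1, 0)
    print(category(example))  # -> high rigor, high relevance
```

Grouping studies this way is what allows the results to be reported per category rather than over the pooled set of primary studies.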

Results

We found that studies in the four categories come to different conclusions. In particular, studies with high rigor and relevance scores show clear results for improvement in external quality, which seems to come at a loss of productivity. At the same time, high rigor and relevance studies investigate only a small set of variables. Other categories contain many studies showing no difference, hence biasing the results negatively for the overall set of primary studies. Given the classification, differences to previous literature reviews could be highlighted.

Conclusion

Strong indications are obtained that external quality is positively influenced, which has to be further substantiated by industry experiments and longitudinal case studies. Future studies in the high rigor and relevance category would contribute greatly by focusing on a wider set of outcome variables (e.g. internal code quality). We also conclude that considering rigor and relevance in TDD evaluation is important, given the differences in results between categories and in comparison to previous reviews.

Place, publisher, year, edition, pages
Elsevier, 2014. Vol. 56, no 4, p. 375-394
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:mau:diva-40816
DOI: 10.1016/j.infsof.2014.01.002
OAI: oai:DiVA.org:mau-40816
DiVA, id: diva2:1530304
Available from: 2021-02-22. Created: 2021-02-22. Last updated: 2021-02-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Munir, Hussan
