2014 (English). Conference paper, Published paper (Refereed)
Abstract [en]
Evaluation is an open problem in procedural content generation research. The field is now in a state where there is a glut of content generators, each serving different purposes and using a variety of techniques. It is difficult to understand, quantitatively or qualitatively, what makes one generator different from another in terms of its output. To remedy this, we have conducted a large-scale comparative evaluation of level generators for the Mario AI Benchmark, a research-friendly clone of the classic platform game Super Mario Bros. In all, we compare the output of seven different level generators from the literature, based on different algorithmic methods, plus the levels from the original Super Mario Bros game. To compare them, we have defined six expressivity metrics, of which two are novel contributions in this paper. These metrics are shown to provide interestingly different characterizations of the level generators. The results presented in this paper, and the accompanying source code, are meant to become a benchmark against which to test new level generators and expressivity metrics.
Place, publisher, year, edition, pages
Society for the Advancement of the Science of Digital Games, 2014
Keywords
Procedural Content Generation, Level Generators
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mau:diva-16786 (URN); 17062 (Local ID); 17062 (Archive number); 17062 (OAI)
Conference
Foundations of Digital Games 2014, Ft. Lauderdale, Florida, USA (2014)
Available from: 2020-03-30. Created: 2020-03-30. Last updated: 2022-06-27. Bibliographically approved.