Abstract

When testing client-side web applications, it is important to consider different web-browser environments. Different properties of these environments, such as web-browser types and underlying platforms, may cause a web application to exhibit different types of failures. As web applications evolve, they must be regression tested across these different environments. Because there are many environments to consider, this process can be expensive, resulting in delayed feedback about failures in applications. In this work, we propose six techniques for providing a developer with faster feedback on failures when regression testing web applications across different web-browser environments. Our techniques draw on methods used in test case prioritization; however, in our case we prioritize web-browser environments, based on information about recent and frequent failures. We evaluated our approach using four non-trivial and popular open-source web applications. Our results show that our techniques outperform two baseline methods, namely, no ordering and random ordering, in terms of cost-effectiveness. The improvement rates ranged from -12.24% to 39.05% over no ordering, and from -0.04% to 45.85% over random ordering.
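The six techniques themselves are described in the paper and the repository below. As a rough illustration of the underlying idea only, the following Python sketch orders browser environments by a score that combines how recently and how frequently each environment has exposed failures. This is a minimal sketch under assumed details: the names (EnvHistory, priority, alpha) and the weighted-score formula are hypothetical and are not taken from the paper or its implementation.

```python
# Illustrative sketch only; the paper's six techniques are not reproduced here.
# Idea: order web-browser environments so that those with recent and frequent
# past failures are regression tested first.

from dataclasses import dataclass, field

@dataclass
class EnvHistory:
    name: str  # e.g., "Chrome/Windows", "Firefox/Linux" (hypothetical labels)
    # Indices of past test sessions in which this environment exposed a failure.
    failing_sessions: list = field(default_factory=list)

def priority(env: EnvHistory, current_session: int, alpha: float = 0.5) -> float:
    """Hypothetical score: alpha weights failure recency against frequency."""
    if not env.failing_sessions:
        return 0.0
    # Frequency: fraction of past sessions in which this environment failed.
    frequency = len(env.failing_sessions) / current_session
    # Recency: 1.0 if it failed in the most recent session, decaying with age.
    recency = 1.0 / (current_session - max(env.failing_sessions))
    return alpha * recency + (1 - alpha) * frequency

def prioritize(envs: list, current_session: int) -> list:
    """Return environments ordered so likely-failing ones are tested first."""
    return sorted(envs, key=lambda e: priority(e, current_session), reverse=True)

if __name__ == "__main__":
    envs = [
        EnvHistory("Chrome/Windows", failing_sessions=[1, 4, 9]),
        EnvHistory("Firefox/Linux", failing_sessions=[2]),
        EnvHistory("Safari/macOS", failing_sessions=[8, 9]),
    ]
    for env in prioritize(envs, current_session=10):
        print(env.name)  # Chrome/Windows, Safari/macOS, Firefox/Linux
```

Under an ordering like this, environments that have failed recently and often are exercised first, so a developer sees likely failures sooner; the techniques in the paper differ in how they use the recent- and frequent-failure information.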

 

Implementation and data

The implementation and data for this study are available at https://bitbucket.org/kaist-webeng/webenv-prioritization/src/master/

 

Contact

  • Name: Jung-Hyun Kwon
  • E-mail: junghyun.kwon at kaist.ac.kr