DOI: 10.1145/2950290.2950361
Research Article · Public Access

An extensive study of static regression test selection in modern software evolution

Published: 01 November 2016

Abstract

Regression test selection (RTS) aims to reduce regression testing time by only re-running the tests affected by code changes. Prior research on RTS can be broadly split into dynamic and static techniques. A recently developed dynamic RTS technique called Ekstazi is gaining some adoption in practice, and its evaluation shows that selecting tests at a coarser, class-level granularity provides better results than selecting tests at a finer, method-level granularity. As dynamic RTS is gaining adoption, it is timely to also evaluate static RTS techniques, some of which were proposed over three decades ago but not extensively evaluated on modern software projects.

This paper presents the first extensive study that evaluates the performance benefits of static RTS techniques and their safety; a technique is safe if it selects to run all tests that may be affected by code changes. We implemented two static RTS techniques, one class-level and one method-level, and compare several variants of these techniques. We also compare these static RTS techniques against Ekstazi, a state-of-the-art, class-level, dynamic RTS technique. The experimental results on 985 revisions of 22 open-source projects show that the class-level static RTS technique is comparable to Ekstazi, with similar performance benefits, but at the risk of being unsafe sometimes. In contrast, the method-level static RTS technique performs rather poorly.
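To make the class-level idea concrete, the sketch below shows static RTS at class granularity in the spirit of the class firewall: statically build a class dependency graph, then select every test class that can transitively reach a changed class. This is a minimal illustration, not the paper's implementation; the adjacency-map input, the class names, and the selectTests/reaches helpers are assumptions of this sketch (in practice the dependency graph would be extracted from compiled bytecode, e.g., with a bytecode analysis library).

```java
import java.util.*;

/**
 * Minimal sketch of class-level static RTS: given a class dependency
 * graph (an edge A -> B means class A references class B) and the set
 * of changed classes, select every test class that can transitively
 * reach a changed class. Illustrative only; names and the plain
 * adjacency-map representation are assumptions of this sketch.
 */
public class ClassLevelRts {

    /** deps.get(c) = classes that c directly references. */
    public static Set<String> selectTests(Map<String, Set<String>> deps,
                                          Set<String> changedClasses,
                                          Set<String> testClasses) {
        Set<String> selected = new HashSet<>();
        for (String test : testClasses) {
            if (reaches(test, changedClasses, deps)) {
                selected.add(test);
            }
        }
        return selected;
    }

    /** Iterative depth-first search: does 'start' transitively reach any changed class? */
    private static boolean reaches(String start, Set<String> targets,
                                   Map<String, Set<String>> deps) {
        Deque<String> stack = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            String cur = stack.pop();
            if (targets.contains(cur)) return true;
            if (!visited.add(cur)) continue; // already explored
            for (String dep : deps.getOrDefault(cur, Collections.emptySet())) {
                stack.push(dep);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Hypothetical three-class example: FooTest depends on the changed
        // class only transitively, so it must still be selected.
        Map<String, Set<String>> deps = new HashMap<>();
        deps.put("FooTest", Set.of("Foo"));
        deps.put("Foo", Set.of("Util"));
        deps.put("BarTest", Set.of("Bar"));

        Set<String> changed = Set.of("Util");
        Set<String> tests = Set.of("FooTest", "BarTest");

        // Prints [FooTest]; BarTest is excluded because it cannot reach Util.
        System.out.println(selectTests(deps, changed, tests));
    }
}
```

A selection computed this way is only as safe as the static graph: any dependency the analysis misses (for example, one created through reflection) can cause an affected test to be skipped, which is the kind of unsafety the study measures.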




Published In

FSE 2016: Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 2016, 1156 pages
ISBN: 9781450342186
DOI: 10.1145/2950290

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. class firewall
    2. regression test selection
    3. static analysis

Conference

FSE'16

Acceptance Rates

Overall Acceptance Rate: 17 of 128 submissions, 13%

