DOI: 10.1145/3180155.3180198
research-article
Public Access

Hybrid regression test selection

Published: 27 May 2018

Abstract

Regression testing is crucial but can be extremely costly. Regression Test Selection (RTS) aims to reduce regression testing cost by selecting and running only the tests that may be affected by code changes. To date, various RTS techniques analyzing at different granularities (e.g., at the basic-block, method, and file levels) have been proposed. RTS techniques working at finer granularities may be more precise in selecting tests, while techniques working at coarser granularities may have lower overhead. According to a recent study, RTS at the file level (FRTS) can have less overall testing time than a finer-grained technique at the method level, and represents the state of the art in RTS. In this paper, we present the first hybrid RTS approach, HyRTS, which analyzes at multiple granularities to combine the strengths of traditional RTS techniques at different granularities. We implemented the basic HyRTS technique by combining method- and file-granularity RTS. The experimental results on 2707 revisions of 32 projects, totaling over 124 million LoC, demonstrate that HyRTS significantly outperforms the state-of-the-art FRTS in terms of selected test ratio and offline testing time. We also studied the impact of each type of method-level change, and further designed two new HyRTS variants based on the study results. Our additional experiments show that transforming instance method additions/deletions into file-level changes produces an even more effective HyRTS variant that significantly outperforms FRTS in both offline and online testing time.
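To make the multi-granularity idea in the abstract concrete, the following is a minimal, self-contained Java sketch, not the HyRTS implementation: the names selectTests, fileDeps, methodDeps, and the "File.java#method" identifier convention are hypothetical. The sketch selects a test only if it depends on a changed file and, when method-level information is available for that file, only if it also depends on a changed method; changes designated as file-level only (e.g., added or deleted files) select every dependent test directly.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A sketch of hybrid regression test selection under the assumptions above.
public class HybridRtsSketch {

    // test name -> files it depends on (hypothetical input data).
    static Map<String, Set<String>> fileDeps = new HashMap<>();
    // test name -> methods it depends on, as "File.java#method" (hypothetical).
    static Map<String, Set<String>> methodDeps = new HashMap<>();

    static Set<String> selectTests(Set<String> changedFiles,
                                   Set<String> changedMethods,
                                   Set<String> fileLevelOnlyChanges) {
        Set<String> selected = new HashSet<>();
        for (String test : fileDeps.keySet()) {
            for (String file : fileDeps.get(test)) {
                if (!changedFiles.contains(file)) {
                    continue; // file untouched: cannot affect this test
                }
                // Coarse-grained change (e.g., an added/deleted file, or a change
                // deliberately treated at file level): select the test directly.
                if (fileLevelOnlyChanges.contains(file)) {
                    selected.add(test);
                    break;
                }
                // Otherwise use method-level precision: select the test only if
                // it depends on one of the changed methods inside this file.
                Set<String> deps = methodDeps.getOrDefault(test, Set.of());
                boolean hit = false;
                for (String m : changedMethods) {
                    if (m.startsWith(file + "#") && deps.contains(m)) {
                        hit = true;
                        break;
                    }
                }
                if (hit) {
                    selected.add(test);
                    break;
                }
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        // Hypothetical dependency data for two tests.
        fileDeps.put("FooTest", Set.of("Foo.java"));
        fileDeps.put("BarTest", Set.of("Bar.java"));
        methodDeps.put("FooTest", Set.of("Foo.java#compute"));
        methodDeps.put("BarTest", Set.of("Bar.java#render"));

        // Foo.java changed only in a method FooTest reaches; Bar.java changed
        // only in a method BarTest never reaches, so only FooTest is selected.
        Set<String> selected = selectTests(
                Set.of("Foo.java", "Bar.java"),
                Set.of("Foo.java#compute", "Bar.java#helper"),
                Set.of());
        System.out.println(selected); // [FooTest]
    }
}

The sketch mirrors the trade-off discussed in the abstract: file-level checks are cheap but coarse, while the method-level fallback regains precision for files whose method dependencies are tracked.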




Published In

ICSE '18: Proceedings of the 40th International Conference on Software Engineering
May 2018
1307 pages
ISBN:9781450356381
DOI:10.1145/3180155
  • Conference Chair: Michel Chaudron
  • General Chair: Ivica Crnkovic
  • Program Chairs: Marsha Chechik, Mark Harman
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article


Conference

ICSE '18

Acceptance Rates

Overall Acceptance Rate 276 of 1,856 submissions, 15%



