Enhancing Software Testing Using AI and Graph Similarity

Category of Work

Conference

Title of Conference/Lecture Series

16th International Conference on Information and Communication Systems (ICICS)

Abstract

Software testing plays a vital role in the development lifecycle, helping to prevent failures and improve software quality. Despite its importance, the testing phase is often resource-intensive, involving numerous test cases that can become redundant or overlapping over time, leading to increased complexity and prolonged testing durations. To address these inefficiencies, this paper proposes a novel approach that integrates graph similarity analysis with generative AI and deep learning to optimize test suites. By leveraging call graphs derived from test cases, the method identifies redundant and closely related test scenarios. A machine learning model predicts similarity scores between these call graphs, facilitating the classification and prioritization of test cases. Lower similarity scores correspond to test cases with more unique code coverage and are thus assigned higher priority. This prioritization enables test engineers to focus on a more diverse and effective subset of test cases, ensuring thorough code coverage while improving efficiency. The proposed framework ultimately reduces redundancy, lowers testing costs, and upholds high standards of software quality, offering a systematic solution for determining the optimal level of testing required to meet study objectives. While the current study experimentally validates the use of graph similarity metrics for test case prioritization, the application of generative AI models is proposed as a future extension.
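To illustrate the prioritization idea described above, the sketch below ranks test cases by the pairwise similarity of their call graphs so that the least-overlapping tests come first. It is a minimal illustration, not the paper's implementation: Jaccard similarity over edge sets stands in for the learned similarity model, and the test names and call edges are purely hypothetical.

```python
# Minimal sketch: call-graph similarity-based test prioritization.
# Assumptions: Jaccard similarity over edge sets replaces the paper's learned
# similarity model; all test names and call edges below are illustrative only.
import networkx as nx


def jaccard_similarity(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    """Jaccard similarity between the edge sets of two call graphs."""
    e1, e2 = set(g1.edges()), set(g2.edges())
    if not e1 and not e2:
        return 1.0
    return len(e1 & e2) / len(e1 | e2)


def prioritize(call_graphs: dict[str, nx.DiGraph]) -> list[str]:
    """Rank test cases; lower average similarity means more unique coverage."""
    names = list(call_graphs)
    avg_sim = {}
    for name in names:
        others = [jaccard_similarity(call_graphs[name], call_graphs[o])
                  for o in names if o != name]
        avg_sim[name] = sum(others) / len(others) if others else 0.0
    # Lower similarity -> higher priority, so sort ascending.
    return sorted(names, key=avg_sim.get)


if __name__ == "__main__":
    graphs = {
        "test_login": nx.DiGraph([("test_login", "auth"), ("auth", "db")]),
        "test_logout": nx.DiGraph([("test_logout", "auth"), ("auth", "db")]),
        "test_report": nx.DiGraph([("test_report", "pdf"), ("pdf", "io")]),
    }
    print(prioritize(graphs))  # test_report first: least overlap with the others
```

In this toy run, test_report exercises a call path the other two tests do not touch, so it receives the highest priority, mirroring the paper's rule that lower graph similarity implies more unique code coverage.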

DOI

https://doi.org/10.1109/ICICS65354.2025.11073112

Presentation Date

7-2025
