Accepted Paper at DeepTest Workshop at ICSE 2026
March 10, 2026
We are happy to announce that our paper “Beyond Accuracy: Characterizing Code Comprehension Capabilities in (Large) Language Models” has been accepted and will appear at the Seventh International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest 2026), co-located with ICSE 2026.
Abstract:
Large Language Models (LLMs) are increasingly integrated into software engineering workflows, yet current benchmarks provide only coarse performance summaries that obscure the diverse capabilities and limitations of these models. This paper investigates whether LLMs’ code-comprehension performance aligns with traditional human-centric software metrics or instead reflects distinct, non-human regularities. We introduce a diagnostic framework that reframes code understanding as a binary input-output consistency task, enabling the evaluation of classification and generative models. Using a large-scale dataset, we correlate model performance with traditional, human-centric complexity metrics, such as lexical size, control-flow complexity, and abstract syntax tree structure. Our analyses reveal minimal correlation between human-defined metrics and LLM success (AUROC 0.63), while shadow models achieve substantially higher predictive performance (AUROC 0.86), capturing complex, partially predictable patterns beyond traditional software measures. These findings suggest that LLM comprehension reflects model-specific regularities only partially accessible through either human-designed or learned features, emphasizing the need for benchmark methodologies that move beyond aggregate accuracy and toward instance-level diagnostics, while acknowledging fundamental limits in predicting correct outcomes.
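The core measurement behind the reported numbers, how well a predictor separates instances a model gets right from those it gets wrong, is an AUROC score. As a minimal sketch (the metric values, the failure labels, and the `auroc` helper are illustrative assumptions, not the paper's data or code), correlating a single human-centric complexity metric with model failure might look like this:

```python
# Toy illustration (not the paper's dataset): score how well one
# human-centric complexity metric predicts LLM failure on a
# code-comprehension instance, using AUROC.

def auroc(scores, labels):
    """AUROC = P(score of a random positive > score of a random negative),
    counting ties as 0.5 (equivalent to a normalized Mann-Whitney U)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical instances: (cyclomatic complexity, 1 if the model failed).
instances = [(2, 0), (4, 0), (5, 0), (6, 1), (7, 0), (9, 1), (10, 1), (12, 1)]
scores = [c for c, _ in instances]
labels = [y for _, y in instances]

print(round(auroc(scores, labels), 4))  # → 0.9375
```

An AUROC of 0.5 means the metric is no better than chance at separating successes from failures; the gap the paper reports (0.63 for human-defined metrics versus 0.86 for learned shadow models) is measured on this same scale.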
