Preprint / Version 1

AxiaBench: Verifying Effect-uncovering Methodologies in Artificial Intelligence for Sciences

Abstract

Uncovering the effects of a cause, i.e., of an object or thing, plays a fundamental role in the modern social and natural sciences and in engineering. While effect-uncovering methodologies such as design of experiments (DoE), randomized controlled trials (RCTs), do-calculus, structural causal models (SCMs), quasi-experiments, and observational designs have shaped modern scientific and engineering practice, there has been little systematic inquiry into their validity and efficiency. In this article, we introduce AxiaBench, a formal methodology and framework for verifying effect-uncovering methodologies. AxiaBench enables the first large-scale, quantitative comparison of nine widely used effect-uncovering methodologies across six representative application domains in Artificial Intelligence for Sciences. Our verification reveals a fundamental limitation: existing methodologies are valid only in certain scenarios, and under the validity requirement only a few achieve acceptable efficiency.

Posted

2026-04-30