Prof. Tim Menzies (UNSW, Computer Science, 1995) is a Full Professor of Computer Science at NC State University and the director of the Irrational Research Lab. His work focuses on software engineering for AI, specifically building data-driven, explainable, and intelligent software systems. As an ACM, IEEE, and ASE Fellow, and co-creator of the PROMISE repository, he helped establish modern empirical software engineering by demonstrating that small, interpretable AI models can often outperform larger, complex ones.
Dr. Menzies has published over 300 papers with more than 24,000 citations (h-index=74) and has advised 24 Ph.D. students. He serves as the Editor-in-Chief of the Automated Software Engineering journal and, from 2010 to 2026, as an Associate Editor for IEEE TSE.
His research is supported by $19M in competitive grants from both government and industry. Recent highlights include over $2.5M from the NSF and $1M from the NSA to advance the science of trustworthy and compact AI. For more information, visit timm.fyi.
Current Research
AI, for Less
I strongly advocate for "AI, for Less". My research demonstrates that you do not always need massive computational power; simpler, optimized models can yield exceptional results while remaining fair, transparent, and trustworthy. We emphasize reproducibility and simplicity over "black box" deep learning.
- How Low Can You Go? The Data-Light SE Challenge [pdf] (Foundations of SE (FSE'26), 2026)
- From Verification to Herding: Exploiting Sparsity [pdf] (VERIFAI-2026 Workshop, 2026)
- The Case for Compact AI [pdf] (Communications of the ACM 68 (8), 2025)
AI for SE & SE for AI
AI software is still software, so faults in SE mean faults in AI. SE teams often race to deliver AI-based solutions without first checking for bias, optimality, or explainability. I apply decades of Software Engineering wisdom to address these problems in AI, using analytics for defect prediction and effort estimation.
- From Brittle to Robust: Improving LLM Annotations [pdf] (Empirical Software Eng., 2026)
- Beyond the Prompt: Assessing Domain Knowledge [pdf] (Mining Software Repositories, 2026)
- SE Journals in 2036: Looking Back at the Future [pdf] (ICSE Future of SE, 2026)
- Can LLMs Improve SE Active Learning via Warm-Starts? [pdf] (ACM Transactions on SE, 2026)
- MOOT: a Repository of Many MOO Tasks [pdf] (Mining Software Repositories, 2026)
Trust & Causality
Ensuring that AI systems in high-stakes environments are explainable and trustworthy. We use causal graphs and rigorous optimization to ensure AI can be trusted by end-users and institutions.
- AI in State Courts: Navigating Innovation & Ethics [pdf] (IEEE Software 42 (4), 2025)
- Shaky Structures: The Wobbly World of Causal Graphs [pdf] (Empirical Software Eng., 2025)