Mess to insight. Days to seconds. Bucks to pennies.
I earned my Ph.D. in Computer Science from UNSW (1995) and am now a Full Professor at NC State University, where I direct the Irrational Research Lab. My work focuses on software engineering for AI, specifically building data-driven, explainable, and intelligent software systems. I am an ACM, IEEE, and ASE Fellow, and co-creator of the PROMISE repository, helping establish modern empirical software engineering by showing that small, interpretable AI models can often outperform larger, more complex ones.
I have published over 300 papers with more than 24,000 citations (h-index = 74) and have advised 24 Ph.D. students. I serve as the Editor-in-Chief of the Automated Software Engineering journal and, from 2010 to 2026, as an Associate Editor for IEEE TSE.
My research is supported by $19M in competitive grants from government (NASA) and industry (LexisNexis, Microsoft, Meta). Recent highlights include over $2.5M from the NSF and $1M from the NSA to advance the science of trustworthy and compact AI. For more information, visit timm.fyi.
Current Research
AI, for Less
I strongly advocate for "AI, for Less". My research demonstrates that you do not always need massive computational power; simpler, optimized models can yield exceptional results while remaining fair, transparent, and trustworthy. We emphasize reproducibility and simplicity over "black box" deep learning.
- How Low Can You Go? The Data-Light SE Challenge [pdf] (Foundations of SE (FSE'26), 2026)
- From Verification to Herding: Exploiting Sparsity [pdf] (VERIFAI-2026 Workshop, 2026)
- The Case for Compact AI [pdf] (Communications of the ACM 68 (8), 2025)
AI for SE & SE for AI
AI software is still software, so faults in SE mean faults in AI. SE teams often race to deliver AI-based solutions without first checking for bias, optimality, or explainability. I apply decades of Software Engineering wisdom to address these problems in AI, utilizing analytics for defect prediction and effort estimation.
- From Brittle to Robust: Improving LLM Annotations [pdf] (Empirical Software Eng., 2026)
- Beyond the Prompt: Assessing Domain Knowledge [pdf] (Mining Software Repositories, 2026)
- SE Journals in 2036: Looking Back at the Future [pdf] (ICSE Future of SE, 2026)
- Can LLMs Improve SE Active Learning via Warm-Starts? [pdf] (ACM Transactions on SE, 2026)
- MOOT: a Repository of Many MOO Tasks [pdf] (Mining Software Repositories, 2026)
Trust, Causality
AI systems deployed in high-stakes environments must be explainable and trustworthy. We use causal graphs and rigorous optimization to build AI that end-users and institutions can trust.
- AI in State Courts: Navigating Innovation & Ethics [pdf] (IEEE Software 42 (4), 2025)
- Shaky Structures: The Wobbly World of Causal Graphs [pdf] (Empirical Software Eng., 2025)