Small models. Sharp answers.

We want you

Join our semester-long reading group — the easiest way into the lab. Open to NC State students and industrial partners working on real-world AI/ML systems.

We study software engineering for AI — specifically, the uncomfortable finding that smaller, interpretable models repeatedly match or beat larger ones on real SE tasks. Rigorously empirical. Proudly contrarian.

Directed by Tim Menzies, part of the Department of Computer Science at NC State University.

Current Projects

The Front Door · Reading Group

Every semester we run an open research reading group. Sessions mix recent SE-AI papers with live updates on active lab projects. No prerequisites beyond curiosity.

MOOT — Many Optimization Tasks

A curated benchmark of 120+ multi-objective optimization tasks from real software engineering and systems research. It replaces toy benchmarks with real high-dimensional problems — software configuration, cloud tuning, project health, hyperparameter optimization, process modelling — so optimization algorithms can be compared fairly and reproducibly.
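To make the benchmark's shape concrete, here is a minimal sketch of scoring rows in a MOOT-style task. It assumes the convention that goal column names end in "+" (maximize) or "-" (minimize) while other columns are decisions; the CSV below is made-up illustration data, not a real MOOT task.

```python
# Sketch: rank rows of a MOOT-style table by "distance to heaven".
# Assumed convention: goal columns end in "+" (maximize) or "-" (minimize).
# The data below is invented for illustration only.
import csv, io, math

CSV = """\
cores,ram,Throughput+,Latency-
2,4,100,30
4,8,180,22
8,16,240,25
"""

rows = list(csv.DictReader(io.StringIO(CSV)))
goals = [c for c in rows[0] if c[-1] in "+-"]

# Min/max of each goal, used to normalize values to 0..1.
lo = {g: min(float(r[g]) for r in rows) for g in goals}
hi = {g: max(float(r[g]) for r in rows) for g in goals}

def d2h(row):
    """Distance to 'heaven': 1 for maximize goals, 0 for minimize goals."""
    total = 0.0
    for g in goals:
        norm = (float(row[g]) - lo[g]) / (hi[g] - lo[g] + 1e-32)
        heaven = 1 if g.endswith("+") else 0
        total += (heaven - norm) ** 2
    return math.sqrt(total / len(goals))

best = min(rows, key=d2h)  # lower distance to heaven = better trade-off
```

A single scalar like distance-to-heaven is only one way to compare multi-objective candidates, but it keeps baseline optimizers easy to implement and audit.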

Explanation — causality, stability, trust

Lead: Amirali Rayegan. Can causal methods make software analytics more stable, interpretable, and trustworthy than correlation-based ones? We're probing where causal reasoning buys you real robustness — and where it's just fragile structure wearing a better label.

Optimization — simple beats complex

Lead: Kishan Kumar Ganguly. Our thesis: simple, sample-efficient optimizers routinely beat complex ones on SE problems, because SE data collapses to a few buckets (the "BINGO" effect). The payoff is optimization that's cheaper, faster, and easier to audit.
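The intuition behind sample efficiency can be shown with a back-of-the-envelope simulation. This is not the lab's optimizer, just the underlying probability argument on synthetic scores: if you label n random candidates, the chance that your best lands in the top q fraction of all configurations is 1 - (1-q)^n.

```python
# Illustration only: why a handful of labels can go a long way.
# With a budget of 30 random evaluations, the best one found lands in
# the top 5% of all configs with probability 1 - (1-0.05)^30, about 79%.
# Scores here are synthetic (uniform random, lower = better).
import random

random.seed(1)
N_CONFIGS, BUDGET, TOP = 10_000, 30, 0.05
scores = [random.random() for _ in range(N_CONFIGS)]
cutoff = sorted(scores)[int(TOP * N_CONFIGS)]  # top-5% threshold

TRIALS = 2000
hits = 0
for _ in range(TRIALS):
    best = min(random.choice(scores) for _ in range(BUDGET))
    hits += best <= cutoff

rate = hits / TRIALS
print(f"best-of-{BUDGET} lands in top {TOP:.0%} in {rate:.0%} of trials")
```

Real SE data is friendlier still than this uniform-random sketch: if it collapses into a few buckets (the "BINGO" effect), even fewer labels are needed to hit a good one.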

Agentic Systems — LLMs as fast typists

Lead: Srinath Srinivasan. How far can we push lightweight, LLM-powered agents on SE tasks without drowning them in compute? Theme: keep the LLM the fast typist; keep the human the one who knows what's worth typing.

People

NC State · Computer Science

Tim Menzies · Director · Full Professor
AI, for Less. Empirical SE. EIC, ASE journal.
Amirali Rayegan · Ph.D. Candidate
Causal & explainable software analytics.
Kishan Kumar Ganguly · Ph.D. Student · 2027
Sample-efficient optimization for SE.
Srinath Srinivasan · Ph.D. Student · 2028
Agentic LLM systems for SE.

International Collaborators

City Univ. of Hong Kong
Software analytics, effort estimation, empirical SE.
Univ. of Birmingham · IDEAS Lab
Self-adaptive systems, configuration performance, search-based SE.
Chalmers · EIC, EMSE
Software testing, requirements, AI in development, behavioural SE.
Notre Dame · Freimann Professor, CSE Chair
Drones & CPS, safety assurance, requirements traceability.

We build the small models that beat the big ones.
We publish the ones that don't.

IRL: research, in real life.