
MARCH 2026

slac.stanford.edu Engineering AI Productivity Report

A focused summary of AI adoption, productivity lift, and code quality for the slac.stanford.edu engineering team.


The slac.stanford.edu engineering team reports 72.2% AI adoption, 1.07× productivity lift, and 17.3% code quality across recent work.

These metrics track how AI integrates into delivery pipelines, how throughput changes when assistance is used, and the health of AI-supported code review outcomes.

What this report measures

We analyze commits and diffs to estimate AI adoption, productivity lift, and code quality for your engineering organization.
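For intuition, adoption can be approximated by scanning commit history for AI attribution trailers. This is a minimal sketch, not the Exceeds pipeline; the trailer patterns and the use of `git log` here are assumptions for illustration only.

```python
import re
import subprocess

# Hypothetical trailer patterns that mark AI involvement; the actual
# signals Exceeds extracts from commits and diffs are richer than this.
AI_TRAILERS = re.compile(
    r"Co-authored-by:.*(Copilot|Claude|ChatGPT)|Assisted-by:",
    re.IGNORECASE,
)

def ai_adoption_rate(repo_path: str) -> float:
    """Share of commits whose message carries an AI trailer."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    messages = [m for m in log.split("\x00") if m.strip()]
    hits = sum(1 for m in messages if AI_TRAILERS.search(m))
    return hits / len(messages) if messages else 0.0

print(f"AI adoption: {ai_adoption_rate('.'):.1%}")
```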

How to interpret these metrics

Use these signals to understand how AI assistance fits into day-to-day development, where enablement efforts drive throughput, and how review practices keep quality steady.

AI Adoption Rate

HIGH

72.2%

AI assistance is present in 72.2% of recent commits for slac.stanford.edu.

AI Productivity Lift

LOW

1.07×

AI-enabled workflows deliver an estimated 7% lift in throughput.
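For the arithmetic behind that figure: lift is the ratio of AI-assisted throughput to the pre-AI baseline, so 1.07× means roughly 7% more output per unit time. A minimal sketch with illustrative numbers:

```python
# Hypothetical weekly throughput (merged changes) for the same team,
# measured before and after AI enablement; numbers are illustrative.
baseline_per_week = 42.0
with_ai_per_week = 44.9

lift = with_ai_per_week / baseline_per_week  # ~1.07x
print(f"lift: {lift:.2f}x -> {lift - 1:.0%} throughput gain")
```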

AI Code Quality

LOW

17.3%

Review insights show 17.3% overall code health on AI-supported changes.

How is the slac.stanford.edu team performing with AI?

The slac.stanford.edu engineering team reports 72.2% AI adoption, translating into a 1.07× productivity lift at 17.3% code quality. Adoption is clearly embedded in day-to-day delivery, but the modest lift and below-median quality score suggest review guardrails have not yet caught up.

Manager Questions Answered

Real questions engineering leaders ask about AI productivity, with live benchmarks and company-specific data.

What's a good company AI adoption rate?

slac.stanford.edu is at 72.2%, which is 28.3pp above the community median (43.8%).

72.2%

↑28.3pp above 43.8% Community Median

Spot squads sitting below the median and pair them with high-adoption champions to share workflows.

Does AI actually make developers faster?

slac.stanford.edu operates at 1.07×, which is 0.06× below the community median (1.13×).

1.07×

↓0.06× below 1.13× Community Median

Instrument reviewer assignment and AI summaries to trim the slowest merge steps and edge past the median.

How does AI affect code quality?

slac.stanford.edu holds AI-assisted quality at 17.3%, which is 6.0pp below the community median (23.3%).

17.3%

↓6.0pp below 23.3% Community Median

Add structured AI code review rubrics and require human sign-off for critical surfaces.
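One lightweight way to enforce the sign-off rule is a merge gate that flags AI-assisted changes touching critical paths. A sketch under stated assumptions: the changed-file list comes from your CI, the AI flag from your own attribution signal, and the path patterns are hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical surfaces that always require a human reviewer.
CRITICAL_PATHS = ["src/auth/*", "src/billing/*", "migrations/*"]

def needs_human_signoff(changed_files: list[str], ai_assisted: bool) -> bool:
    """True when an AI-assisted change touches any critical surface."""
    return ai_assisted and any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in CRITICAL_PATHS
    )

if needs_human_signoff(["src/auth/session.py"], ai_assisted=True):
    print("Hold merge until a human reviewer approves.")
```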

How evenly is AI use distributed across our team?

45.8% of AI commits come from the most active contributors.

45.8%

Pair top AI practitioners with adjacent squads and capture their prompts/playbooks for reuse.
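To reproduce a concentration figure like this from your own data: a minimal sketch, assuming per-author AI commit counts and defining "most active" as the top 20% of contributors (the definition Exceeds uses is not specified here).

```python
# Hypothetical per-author AI-assisted commit counts.
ai_commits = {"alice": 120, "bob": 95, "carol": 40, "dan": 22,
              "erin": 15, "frank": 10, "grace": 8, "heidi": 5}

counts = sorted(ai_commits.values(), reverse=True)
top_n = max(1, round(len(counts) * 0.20))  # top 20% most active
share = sum(counts[:top_n]) / sum(counts)
print(f"top {top_n} contributors produce {share:.1%} of AI commits")
```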

How can I prove AI ROI to executives?

To prove ROI, slac.stanford.edu needs steadier adoption, measurable lift, and consistent quality. The ingredients are forming but not yet executive-grade.

Start with a lighthouse project, measure cycle improvements end-to-end, and harden quality guardrails.
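For the measurement step, median PR cycle time before and after enablement is a simple end-to-end signal. A minimal sketch, assuming you can export opened/merged timestamps from your own tooling; the timestamps below are illustrative.

```python
from datetime import datetime
from statistics import median

def median_cycle_hours(prs: list[tuple[str, str]]) -> float:
    """Median hours from PR opened to PR merged."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return median(
        (datetime.strptime(merged, fmt) - datetime.strptime(opened, fmt))
        .total_seconds() / 3600
        for opened, merged in prs
    )

before = [("2025-09-01T09:00:00", "2025-09-03T15:00:00")]  # pre-AI baseline
after = [("2025-11-01T09:00:00", "2025-11-03T09:00:00")]   # lighthouse project
print(f"cycle-time lift: {median_cycle_hours(before) / median_cycle_hours(after):.2f}x")
```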

See how your full organization compares

Unlock personalized insights across all your repositories, teams, and contributors.

Securely connect Exceeds with your codebase to get commit-level insights on AI adoption and performance.

How Your Company Ranks

See how top engineering organizations compare across AI adoption, productivity lift, and code quality.

AI Adoption

% of commits with AI assistance

Companies in this quartile:

quantstack.net (87.6%)
student.su (87.6%)
dglover.co (21.5%)
monade.li (21.5%)

Top 25% of teams adopt AI in 65-75% of their commits.

Productivity Lift

Cycle-time improvement vs baseline

Companies in this quartile:

acad.pucrs.br (1.12×)
mcornholio.ru (1.12×)
hassan.host (1.01×)
musicaloft.com (1.01×)

Top performers sustain 1.5× cycle-time improvements over six months when embedding AI into workflows.

Code Quality

Post-merge code health on AI-assisted changes

Companies in this quartile:

gzgz.dev (20.0%)
gwu.edu (20.0%)
draad.nl (-82634.9%)
inria.fr (-2424.6%)

Top 25% maintain quality above 92% while expanding AI usage, pairing automation with rigorous guardrails.

Rankings based on aggregated Exceeds AI dataset of 1.2M commits across open-source and enterprise engineering teams (Q4 2025).

Top contributors

Top contributors combine high AI adoption and quality output. Encourage internal sharing of best practices.

Contributor         Commits   AI Usage   Productivity Lift   Code Quality
Agnès Ferté         27        89.6%      1.97×               20.0%
Jeremy McCormick    205       86.3%      1.82×               20.0%
Micha Okun          8         20.0%      1.80×               20.0%
Michael Kelsey      69        92.0%      1.56×               20.0%
Andy Salnikov       62        88.4%      1.42×               20.0%

Encourage knowledge transfer from top AI users to others through internal mentoring or recorded "AI coding walkthroughs." Balanced adoption across the team typically improves overall performance by 12-15%.

Cross-Organization Network

Shared Repositories: 33

jgthayer: lsst-it/lsst-control
villarrealas: lsst/lsst-texmf
adambolton: lsst/lsst-texmf
fajpunk: lsst-sqre/phalanx, lsst/tutorial-notebooks
tcjennings: lsst-sqre/phalanx, lsst/analysis_tools, +2 more
JeremyMcCormick: lsst-sqre/phalanx, lsst/sdm_schemas, +2 more

Activity: 715 commits

Your Network: 34 people

adambolton, cafslac, chrisvam, YifanC, fajpunk, ddamiani, sdehe, gadorlhiac, eigerx

Why these metrics matter for engineering managers

Faster delivery: 1.4× lift → predictable roadmaps

Safer velocity: 93% quality → lower rollback risk

Equitable gains: broad AI adoption → less dependency on heroes

Governance: AI-depth monitoring → audit-ready

Exceeds AI turns these insights into daily coaching and automatic alerts, helping managers balance speed with sustainability.

See the truth of AI impact

Adoption + lift + quality in one view


Know where to act first

Repo- and role-level "lift potential"


Prove ROI

Export executive snapshots and benchmarks
