How agentic is your software delivery model?

A 3-minute executive self-assessment

This short self-assessment is designed for executives and engineering leaders who want a fast, structured view of their organization’s current maturity in AI-native software delivery. It is not a technical quiz. It is a practical snapshot of how AI is actually used in your delivery model today.

Tip: Don’t answer based on your AI ambitions. Answer based on how your organization actually works today.

10 questions

1. How is AI used by your software teams today?

2. When an AI tool generates code, what usually happens next?

3. Which statement best describes your delivery model?

4. How comfortable would your organization be with an agent opening a pull request overnight?

5. What best describes your testing and verification posture?

6. When AI produces the wrong thing, what is the usual response?

7. Who owns the design of human + AI workflows in engineering?

8. Which statement best reflects your current governance model?

9. How do you think about value from AI in software delivery?

10. Over the next 12 months, what is your real ambition?

Your maturity profile

10–16 — Tooling Curious

AI is present, but mostly as an individual productivity aid. The delivery system itself has not changed yet.

Typical pattern

  • experimentation is real
  • practices are uneven
  • trust is low
  • value is local, not systemic

Next move
Focus on one bounded workflow with strong verification.

17–24 — Local Acceleration

Teams are getting value from AI, but mostly in fragmented ways. You are faster in places, but the operating model is still catching up.

Typical pattern

  • several teams use AI regularly
  • adoption is practical, but informal
  • governance is inconsistent
  • bottlenecks move rather than disappear

Next move
Shift from tool adoption to workflow design.

25–32 — Managed Agentic Delivery

AI is starting to participate in structured delivery workflows, with meaningful human oversight and stronger verification.

Typical pattern

  • bounded agent workflows exist
  • teams trust automation more
  • review and escalation patterns are emerging
  • parts of the SDLC are being redesigned

Next move
Industrialize the model: autonomy levels, harness patterns, shared metrics.

33–40 — AI-Native Delivery System

You are no longer treating AI as a side tool. You are actively redesigning software delivery around governed human-agent collaboration.

Typical pattern

  • AI participates in recurring workflows
  • validation is strong
  • governance is intentional
  • the operating model is becoming genuinely agentic

Next move
Focus on scale, consistency, and economics.
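If it helps to wire this page into a tool or spreadsheet, the four bands above reduce to a simple lookup. A minimal sketch, assuming each of the 10 questions scores 1–4 points so totals run 10–40 (the page implies but does not state this scoring):

```python
def profile_for(score: int) -> str:
    """Map a 10-40 self-assessment total to its maturity profile band."""
    if not 10 <= score <= 40:
        raise ValueError("expected a total between 10 and 40")
    if score <= 16:
        return "Tooling Curious"
    if score <= 24:
        return "Local Acceleration"
    if score <= 32:
        return "Managed Agentic Delivery"
    return "AI-Native Delivery System"
```

For example, a leader whose answers total 28 lands in Managed Agentic Delivery.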

Want the deeper version?

This self-assessment is only a first snapshot. The full Agentic SDLC diagnostic goes deeper into delivery workflows, governance, verification, operating-model design, and organizational readiness.

Recommended next step: ask 3–5 leaders across engineering, product, and delivery to complete this page separately, compare the answers, and identify where your assumptions diverge.

That discussion is often more valuable than the score itself.

Contact me for the full diagnostic (for instance, via DM on LinkedIn).

Released under the MIT License.