---
title: "Adopt AI without burning your team"
description: "Practical guidance for CTOs to adopt AI tools without hurting quality or morale, covering team dynamics, standards, and safe ways to start."
createdAt: 1769477952547
updatedAt: 1769477952547
authors: ["david"]
category: "AI"
editors: ["velimir"]
draft: false
abstract: "AI adoption is not an IDE preference. It changes how engineering work gets done. This piece helps CTOs set expectations, protect quality, and reduce resistance. It starts with low-risk use cases, shows how to build social proof, and keeps juniors learning while the team ships faster. The goal is sustainable velocity without burnout or morale damage."
image: "/images/agi-is-nowhere-near-llms-do-not-reason.png"
---

If you're a CTO trying (and failing) to get your engineering team to adopt AI, this article is for you.

If you have spent any time on X or LinkedIn, at conferences, or talking to peers, you have heard that every team is shipping at lightning speed with Claude Code, OpenCode, or Cursor.

In practice, many aren't. Most aren't, honestly. Some teams are dabbling in ways that just aren't productive.

If that's your team, I feel your concern: velocity gaps compound. Teams that figure this out early ship faster and take on projects others won't even think to start.

But beware: AI adoption creates a split in your team. Some engineers are eager adopters. Others resist the change, quietly or quite loudly. The reasons are consistent across orgs and teams: fear of quality loss, more cleanup work, and a sense that the coding craft is eroding.

From the CTO chair, this whole situation is uncomfortable. You can see the upside, but also the morale risk if you push too hard.

## Contents

## This is not a tooling change

AI-assisted development changes the shape of engineering work.

Teams spend less time on raw implementation and more time on:

- specifications and intent-setting
- system design & architecture
- code review & QA

That shift is real and every engineer feels it. Some love it; others, not so much. Don't ignore the divide, because it will backfire.

Set expectations for the timeline. Tools like Claude Code / OpenCode are less than a year old, and many senior engineers formed their instincts in different environments, IDEs, and processes. Being skeptical about such a monumental change makes total sense.

The biggest mistake that you can make here is to treat this like an IDE preference instead of a structural change in how software gets built.

## You can hold the line and still lead with empathy

As a CTO, you set direction. AI adoption should not be a committee vote, but direction without empathy will turn this into compliance theater.

Start by acknowledging the tradeoffs openly. More code review and QA will be required. Bad prompting will produce bad code faster than a single dev could ever write it, and AI has changed what the craft looks like.

If quality is the concern, address it systemically. Invest in standards, quality assurance, and review practices. Build internal tooling that defines what "good" means for your codebase, as in the sketch below. Be really clear that prompting, scoping, and evaluation are all new skills that have to be learned.
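
For a concrete starting point, here is a minimal sketch of that kind of internal tooling, assuming a TypeScript repo whose CI can run Node scripts. Everything specific in it (the `src` directory, the file-size ceiling, the banned patterns) is a hypothetical placeholder for your own standards.

```ts
// ci/quality-gate.ts
// Minimal sketch of a codified quality gate. The rules below (file-size
// ceiling, banned patterns) are hypothetical placeholders; replace them
// with whatever "good" means in your codebase.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const MAX_FILE_LINES = 400; // hypothetical ceiling
const BANNED_PATTERNS = [/console\.log\(/, /@ts-ignore/]; // hypothetical rules

// Recursively yield every .ts file under a directory.
function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) yield* walk(path);
    else if (path.endsWith(".ts")) yield path;
  }
}

let failures = 0;
for (const file of walk("src")) { // assumes source lives under src/
  const lines = readFileSync(file, "utf8").split("\n");
  if (lines.length > MAX_FILE_LINES) {
    console.error(`${file}: ${lines.length} lines (max ${MAX_FILE_LINES})`);
    failures++;
  }
  lines.forEach((line, i) => {
    for (const pattern of BANNED_PATTERNS) {
      if (pattern.test(line)) {
        console.error(`${file}:${i + 1}: banned pattern ${pattern}`);
        failures++;
      }
    }
  });
}

// A non-zero exit fails the CI job, so the same bar applies to human-written
// and AI-generated code alike.
process.exit(failures === 0 ? 0 : 1);
```

Wire something like this in as a required CI check and "good" stops being a matter of opinion in review threads, for humans and AI agents alike.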

## Remove pain before you sell upside

Most resistance you will find is defensive. Engineers avoid pain more than they chase novelty.

You should start by encouraging AI use in the parts of the job people already dislike:

- documentation
- changelogs
- repetitive refactors
- boilerplate tests

Give clear permission to use AI here. Call it cheating if you want. The risk of introducing AI in these four areas is low, and most engineers will be open to it.

## Reframe the value

Once the pain and frustration drop to an acceptable level, talk about what changes with AI adoption.

You don't delegate reasoning to AI; it clears the way so you can do more problem-solving. Exploration becomes cheap. You can try multiple architectures and approaches, iterate, and prototype without writing the code by hand. Doing multiple PoCs for a big change has become the norm.

For engineers who care about impact, faster shipping means more cycles of building things that help users. That motivates engineers across seniority levels.

## Be explicit about juniors

AI adoption impacts junior engineers the most. AI can accelerate learning or short-circuit it.

Juniors still need to learn the fundamentals. Be really clear that AI should not replace their thought process; it shifts where most of the effort is spent.

Set expectations clearly: use AI as an assistant, a tool. There should always be a human who reasons about the code AI produces. Review their work closely and ask them to explain outputs. Make learning their explicit goal.

## Use social proof

Mandates rarely create adoption. Pick a curious senior engineer. Give them access to different AI models and protected time on a low-stakes project. Let them share what worked and what didn't. One internal success story does more than any policy doc or any all-hands meeting.

Adoption spreads best through proximity.

## Create space to try

Truthfully, most teams are already overloaded. Telling engineers to "experiment with AI" doesn't produce results. If you want this to stick, clear time: a full day, or a disposable project. It's expensive, but stagnation costs more.

## When resistance turns into refusal

Sometimes, even with time and support, someone will refuse to engage. Have a direct conversation about fit. The efficiency gains are real and compounding. Refusing AI tooling is closer to rejecting version control than to disliking a formatter.

Retraining and learning new tools have always been part of the software engineering job. Indefinite refusal has steep costs.

## The craft is still there

Across teams, enthusiasm levels vary. What's consistent is the shift in work: more intent and judgment.

**Clarity and taste matter more now than ever before.**

AI is an expertise force multiplier, and poor inputs still yield poor outputs, just faster.

As a CTO, your goal shouldn't be to turn everyone into an AI evangelist. Set direction, protect quality, and create the conditions for your team to discover the benefits for themselves. They don't need to love it overnight. They need to see that the work they care about isn't being erased, just rebalanced.