On AI-Readying Engineering Organizations
Published on Mar 10, 2026, filed under management, development, ai.
AI is here and AI is here to stay. While we generally need criticism to train and use AI more responsibly and sustainably, we as engineering leaders also specifically face the challenge of AI-readying our organizations (on top of using the AI climate to set up our engineers for long-term success).
Here’s the problem—the increasing quality and proliferation of AI creates a productivity imperative:
AI tooling raises the expectations for engineering productivity.
At the same time, AI adoption and spending reduce traditional engineering roles and change the nature of attractive work opportunities.
Contents
- The Immediate Challenges
- The Elephant in the Room
- The More Obvious Answers
- The Less Obvious Stumbling Blocks
- Being Proactive About the Future
The Immediate Challenges
AI quality and proliferation carry multiple sub-challenges:
For organizations, if engineers don’t use AI tools and don’t use them effectively, productivity per engineer declines relative to the market.
For engineers, without training and tools to use AI effectively, career prospects diminish. (Organizations that don’t adopt AI may exist for the time being, but their number is going to shrink, and ultimately, they’re going to be less attractive as employers. *)
AI tooling is a moving target. The market is likely to take more time to solidify around mature solutions and best practices, requiring ongoing experimentation with multiple tools.
AI output quality still needs attention, starting with low-quality output that appears higher-quality. While engineering teams may not face typical slop, we must expect and prepare for some productivity losses, too (though the expectation remains that productivity increases overall).
This leads us to the question of how we can strengthen our organizations—how we can ready them for AI.
The Elephant in the Room
Before we sketch the main options: Some argue that AI adoption carries real costs of its own, for example technical debt, skill atrophy, or over-reliance on low-quality output.
I believe we need to move through these concerns in order to move forward (which, again, still comes with an appeal for making AI more responsible and sustainable):
The overall idea is that use of AI can be learned and optimized.
As a learning process, it will also come with mistakes that have a cost, as all learning does.
Only by open-minded adoption can we prepare well and get the most out of the shift that AI brings to our field.
As such, we will learn to detect and address AI-induced tech debt, we will identify which new skills to learn and which old skills to keep (or let go), and we will see how the situation around low-quality output actually develops.
The More Obvious Answers
Where our field stands at the moment, the challenges allow us to draw some early conclusions:
Engineers need to use AI tooling. Thriving without AI in an AI-assisted world is difficult. Since we cannot and should not force adoption, our task is communicating the importance of AI proficiency.† Knowledge sharing and skill expectations can help in this area.
Engineers need to try different AI tools. While the market is in motion, both individuals and organizations benefit from evaluating different tools. This requires coordination—not everyone needs to test all tools, yet everyone should be encouraged to test different solutions and share their experience. Planned AI evaluations, perhaps with rotations, are an option here.
Engineers need to learn to make effective use of AI. This will likely only work through practice and iteration, which is why we must do more than merely suggest using AI. Commonly available engineering metrics and AI literacy programs can help address this need.
Engineers and organizations need to identify methods and tests to assess AI tooling accuracy and efficacy. We need additional, tailored metrics to determine how well we use AI, how tools compare, and whether we produce better outcomes. This may well be the biggest open issue.
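To make the last point a little more concrete, here is a minimal sketch of what tracking and comparing tool trials could look like. All tool names, metrics, and figures are hypothetical illustrations, not a prescription; each organization would pick signals that fit its own workflows:

```python
from dataclasses import dataclass

@dataclass
class ToolTrial:
    """One engineer's trial period with an AI tool (all fields hypothetical)."""
    tool: str
    suggestions_accepted: int
    suggestions_shown: int
    prs_merged: int
    prs_reverted: int

def acceptance_rate(t: ToolTrial) -> float:
    """Share of AI suggestions the engineer actually kept."""
    return t.suggestions_accepted / t.suggestions_shown if t.suggestions_shown else 0.0

def rework_rate(t: ToolTrial) -> float:
    """Share of merged PRs that later had to be reverted."""
    return t.prs_reverted / t.prs_merged if t.prs_merged else 0.0

def compare(trials: list[ToolTrial]) -> dict[str, dict[str, float]]:
    """Aggregate per-tool averages so trial results can be compared side by side."""
    by_tool: dict[str, list[ToolTrial]] = {}
    for t in trials:
        by_tool.setdefault(t.tool, []).append(t)
    return {
        tool: {
            "acceptance_rate": sum(acceptance_rate(t) for t in ts) / len(ts),
            "rework_rate": sum(rework_rate(t) for t in ts) / len(ts),
        }
        for tool, ts in by_tool.items()
    }

# Example with made-up numbers for two fictional tools:
report = compare([
    ToolTrial("tool-a", 40, 100, 10, 1),
    ToolTrial("tool-b", 70, 100, 10, 3),
])
```

A high acceptance rate paired with a high rework rate, for instance, would suggest output that looks good but doesn't hold up, which is exactly the "low-quality output that appears higher-quality" problem described above.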
The Less Obvious Stumbling Blocks
However, these steps don't just require additional budget; they aren't easy, either:
Some engineers may not want to use AI. Others will resist trying different solutions. Yet others will need help organizing and maintaining their AI trials. Effectiveness may be in the eye of the beholder, or tied to an organization that itself is growing and learning. And not everyone needs to do all of this, all the time—AI adoption must be managed, and shielded against being inadvertently or intentionally sabotaged.
Being Proactive About the Future
From my perspective, there’s no way around prioritizing AI adoption in our organizations. AI helps us build better products faster, and smart use of it will be a determining factor for our careers going forward.
The path forward requires both urgency and care—we must move to adopt AI, while being deliberate about how we do so. Readying our organizations for AI goes beyond mandating tool use: It means building organizational muscle for continuous adaptation.
As outlined, we start building this muscle when we embrace using AI tooling, keep trying different AI tools, and monitor and ensure effective use of AI. As engineering leaders, we do this by driving adoption, coordinating evaluation, and managing for effective use.
* Organizations that reject AI may be interesting for AI-critical engineers, but these organizations and engineers end up in an increasingly precarious position the more the rest of the field adopts AI. Rejection of AI will not make the respective companies and their engineers more productive and therefore more competitive, let alone AI-literate. Note that none of this reflects my own preferences: Like you, I love engineering as a craft, and yet that doesn't allow me to ignore the reality out there.
†I believe this communication works through genuine care for our peers’ careers. However, people may be skeptical and critical, or not concerned about their career. That’s fine—raising the topic a few times seems appropriate, but the decision is with each individual.
About Me
I’m Jens (long: Jens Oliver Meiert), and I’m an engineering lead, guerrilla philosopher, and indie publisher. I’ve worked as a technical lead and engineering manager for companies you use every day (like Google) and companies you’ve never heard of, I’m an occasional contributor to web standards (like HTML, CSS, WCAG), and I write and review books for O’Reilly and Frontend Dogma.
I love trying things, not only in web development and engineering management, but also with respect to politics and philosophy. Here on meiert.com I talk about some of my experiences and perspectives. (Please share feedback: Interpret charitably, but do be critical.)
