Adapting to AI: Reflections on Productivity

2025 was the most productive year of my career. I worked on the three largest projects I have ever taken on: migrating all services to a new cloud platform, developing a new edge computing platform, and building a distributed database. Completing just one of these projects would have made for a career-defining year, but circumstances dictated that all three be completed simultaneously. In addition, my team inherited three more critical projects, two of which had to be delivered by contractual deadlines for our largest customer to date, or we would face liquidated damages. On top of this, I somehow still found time to invest in Rust tooling to improve security and reliability, give three conference talks,[1] and contribute to an industry working group.[2]

Staggered by the amount of work I did in 2025, I asked myself: How much of this “productivity” was from AI?

Some of it was from working more hours than I’ve ever worked, unsustainably so.[3] Some of it came from industry experience that allowed me to pick solutions without having to spend a lot of time doing research or experimentation. A huge part of it was from being surrounded by amazing people who led enormous parts of these projects.

In 2025, did AI generate a lot of my code or my written communication, like design documents? No. AI made prototyping easier, accelerated routine tasks, and helped me evaluate engineering trade-offs more efficiently, but I still authored the majority of my work.

In 2025, what AI did more than anything was act as an exceptional pairing partner who was available at any hour of the day, had access to a huge corpus of knowledge, and never got tired. AI kept me going when I otherwise would have been stuck; when I was exhausted, one more prompt would lead to an exhilarating breakthrough, and I’d keep going.

With the release of the latest AI models near the end of 2025, software engineering as we knew it changed forever. In 2026, I expect AI will produce almost all of the code I write. I also expect to be regularly reevaluating how I approach my work to keep up with what is now possible.

Over the course of a few essays, starting with this one, I will detail how I’m experiencing and thinking about this historic shift.

Most People Have Yet to Adapt

The majority of people in the software industry have yet to realize the full potential of AI models and tools. Many continue to work in the ways they are accustomed to. They use AI for some tasks, and they are aware it is going to change their job dramatically; they just aren’t sure how fast, or how much they will need to adapt.

Even for people who are eager to adopt AI, it is hard to get your head around exponential growth. To understand the full impact, you need to keep trying the latest models, on broader and more difficult problems, including the full life-cycle of software engineering—product management, testing, deployment, operations, maintenance, compliance—not just programming.[4] If your organization hasn’t figured out some of the many challenges, like safety, security, and privacy, you may not even have access to all the latest tools or be able to use them to their full potential.

Personally, I was heads-down at the end of 2025 trying to complete all the projects I mentioned above. I didn’t have time to experiment and adapt my ways of working as much as I would have liked. I’m fortunate to be surrounded by a few people on the bleeding edge of AI, and they accelerated my learning. The majority of these people are investing their own time and money to use these tools outside of work; that’s why they are so adamant about what is possible. They also have a tendency to try AI tools on much more ambitious problems than other people do, like completing an entire project, start to finish, rather than just using the AI to research trade-offs or relieve them of toilsome work. Having a few AI champions in your organization who can distill and share knowledge is invaluable for navigating this transformation.[5]

I Can’t Keep Up

Our ways of working are changing so fast and so radically that I can’t keep up with the rate of change. So many people are trying new things. No single person can keep track of all of it, and there certainly aren’t enough hours in the day to try everything. There is a trade-off between experimenting with ever-changing ways of working that might be even more productive and just getting things done with a familiar set of skills and tools. I expect we will all be reevaluating this trade-off for some time as the models and tools continue to change.

Another reason I can’t keep up is that AI lets me try out so many more of my ideas, often in parallel. Inevitably, trying one of these ideas compounds into even more ideas. My mind racing with possibilities, it becomes hard to stop.

Someone once remarked that where I work is a hotdog-eating contest with unlimited hotdogs.[6] His point was that we needed to focus on eating the most important hotdogs, while also realizing that, because the hotdogs are unlimited, we needed to self-regulate and not get overwhelmed.[7] AI makes it easier to have a bigger and bigger appetite, and because AI never gets tired, it makes self-regulation even harder.

I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There’s a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.
—Andrej Karpathy

Where Will We Find Flow?

Flow is a concept developed by Mihaly Csikszentmihalyi. In his book Flow: The Psychology of Optimal Experience, he describes flow as a state where individuals are fully immersed in an activity, leading to deep enjoyment and creativity. Key characteristics of flow include clear goals, concentration, immediate feedback, a sense of control, a loss of self-consciousness, and an altered sense of time. Flow is that feeling of being so deeply into writing or debugging code that hours can pass and you forget about being hungry or having to go to the bathroom. This enjoyable psychological experience is what attracts many people to programming. Programming is particularly amenable to flow since so much of the problem solving and goal seeking can be done individually, and the feedback is often immediate.[8]

If agents are going to write most or all of the code and tests, and handle most or all of the deployment and operations, where will we find flow?[9] I find it hard to believe that supervising a set of agents is going to lead to an optimal flow experience: it leaves us more passive, it doesn’t stretch our abilities in the same way, and it requires far less concentration.[10] Will we find flow elsewhere? Solving problems and delivering value will always be rewarding, but I wonder if the optimal flow experience offered by programming has, for the most part, disappeared forever, and many of us will simply find less enjoyment at work.

Context switching is antithetical to the optimal experience of flow, and AI tends to encourage context switching. I feel my attention is more scattered recently, and I find it harder to focus—writing this essay was significantly more difficult than normal—so AI is impacting my ability to focus in general, not just at work or when programming. Partly this is because everyone is producing more work, more quickly: there is more to read, more to review, more to learn, more to manage. It is also a product of the agents doing most of the work, with only periodic prompting, so it is tempting, even necessary, to run many agents in parallel, all working on different projects, to maximize productivity. To think deeply, it is more important than ever to be intentional with our focus.

I find LLMs to be more valuable in the small than in the large. So like, again, this kind of, I might, you know, hats off to people who want to spend their lives acting as a middle management for robots. But like, that’s not necessarily for me.
—Bryan Cantrill

Unconscious Anxiety: Where is My Place?

I have seen a number of people walk away from their software engineering jobs in the past year. Most of these people are middle-aged and financially secure, and they have been in intense work environments for a decade or more. It is reasonable to assume it is just time for a change, or a break, but I think there is more going on.

These people are some of the most capable and experienced software engineers I’ve ever met. They have knowledge and skills that are in incredibly high demand at the moment, especially in the age of AI, like optimizing, securing, and operating critical infrastructure at scale. They are intrinsically motivated, curious, and creative, and for years they have demonstrated their ability to learn and adapt as the industry has changed. But something is different now. With how rapidly things are changing, there is an anxiety about where their place is and where their value lies. It disrupts their sense of purpose. The rate at which AI can produce work grates against their better judgment, which is grounded in understanding how a system works from first principles and in evaluating the quality of the work. It becomes disorienting.[11]

I don’t believe this experience is terribly conscious. Like many difficult experiences, a lot remains unconscious, even if certain rationalizations convince us otherwise. For someone who forged a decades-long career carefully crafting code, configuration, tests, and documentation, the debates over Vim or Emacs, IDEs, syntax, test-driven development, code comments, tabs or spaces, naming conventions, and the rest suddenly seem moot. Many of these people are having fun building with these AI tools, including me, but if AI is going to author, test, deploy, monitor, and maintain all of the code, the unconscious is asking: Where is my place? How and what will I contribute? Which of my skills will remain valued? I think younger people are feeling a similar discomfort, but being earlier in their careers, and less financially secure, they are forced to adapt as they learn what it means to be a productive professional.

There are a lot of take-charge people with very strong egos who are still quite unconscious. I say this is the rule rather than the exception. You can be a leader, you can run things like a clock and know how to manage others. But if you don’t have the time or interest to introspect, to question yourself, you can’t claim to be conscious.
—Daryl Sharp, Getting to Know You: The Inside Out of Relationship

Conclusion

I want to say that software development will eventually settle into some new and very effective patterns of working, but this feels years away given the magnitude of this shift and how the capabilities of AI, and the ways of interacting with it, continue to expand exponentially. These tools are only going to get better—this is as bad as they will ever be. What it means to be productive in this industry has changed forever and continues to change rapidly.[12]


  1. See my talks It’s Not as Simple as “Use A Memory-Safe Language” and Rust Is Not as Safe as You Think It Is: Improving Safety and Reliability in Rust, and the related blog articles: 1) Making Unsafe Rust a Little Safer: Tools for Verifying Unsafe Code, Including Libraries in C and C++, 2) Making Even Safe Rust a Little Safer: Model Checking Safe and Unsafe Code, and 3) Making Unsafe Rust a Little Safer: Find Memory Errors in Production with GWP-ASan. See also my talk Predicting the Future of Distributed Systems and my blog article with the same title. ↩︎

  2. I’m a member of Edge Monsters, a working group focused on sharing expertise in edge computing. ↩︎

  3. I worked over 4000 hours in 2025, but I was having a lot of fun on these very rewarding, once-in-a-lifetime projects. I may have also been running from my midlife feelings. ↩︎

  4. See the essay Trusting LLMs with Root Access for a thought experiment about AI and operations. ↩︎

  5. My takeaway from working with these people: to appreciate the full potential of AI, you need to 1) use the latest models, 2) give the AI a way to autonomously verify the correctness of its work so you can get out of its way and let it work without you, and 3) give it harder and harder problems in order to see the improvements in the latest models. ↩︎

  6. This quote is from Drew Baglino. ↩︎

  7. A couple of years ago, James Hamilton remarked to me that he is so excited to keep working because there are so many interesting problems to solve. I asked him how he approaches major initiatives in a large organization, especially when people need convincing, or he needs alignment across teams. He said he always likes to have many projects on the go so that if he can land just a few of them in a given year, he is happy. ↩︎

  8. Feedback loops are a lot more expensive when coordination is needed among people, or when testing is difficult and slow. This tends to reduce agency and arrest any optimal flow experience. ↩︎

  9. Will the agents experience flow? ↩︎

  10. Claude Code has a "Learning" output style: "Claude will not only share 'Insights' while coding, but also ask you to contribute small, strategic pieces of code yourself. Claude Code will add TODO(human) markers in your code for you to implement." I think this output style should be called "Scraps" rather than "Learning", because I cynically envision the agents condescendingly handing the humans a few scraps to keep their skills "sharp", all the while just slowing down productivity by burning these human tokens. (A hypothetical sketch of such a marker appears after these notes.) ↩︎

  11. Two complex problems AI solved quickly that really impressed me and changed my thinking about what is possible: 1) a performance problem in an OPC UA server that only occurred at scale—the AI wrote a test to reproduce the issue, then tried different library versions until it isolated and resolved the issue; and 2) a production incident impacting customers, where the AI identified a DNS change affecting a database cluster by analyzing application logs—logs that humans had already looked at without finding the issue. ↩︎

  12. I did not use AI to write or edit this essay. I just wrote it old-school, and I tried to find some flow. ↩︎
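
To make the TODO(human) markers in note 10 concrete, here is a minimal, hypothetical sketch in Rust. It is mine, not from Claude Code's documentation; the function and its logic are invented for illustration. The idea is that the agent authors the scaffolding and hands the human one small, strategic piece.

    // Hypothetical illustration: the agent writes the scaffolding and
    // leaves one small, strategic piece for the human to implement.
    fn normalize_scores(scores: &[f64]) -> Vec<f64> {
        // TODO(human): decide how empty input and a non-positive maximum
        // should be handled before this ships.
        let max = scores.iter().cloned().fold(f64::MIN, f64::max);
        scores.iter().map(|s| s / max).collect()
    }

    fn main() {
        // Prints [0.25, 0.5, 1.0]: each score divided by the maximum.
        println!("{:?}", normalize_scores(&[1.0, 2.0, 4.0]));
    }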