AI From the Trenches

AUTHOR: Gautam Sampathkumar
PUBLISHED: Feb 17, 2026
TAGS: AI Engineering · Mental Models · Software Development · Product Engineering · Systems Design

A hands-on view of the current state of AI in software development.

Some mornings, I sit down, ask Claude Code to implement part of a feature, look at the result, and realize how much software development has changed in a very short time. Tasks that would have taken days a couple of years ago now often take hours. Sometimes less.

At the same time, I’m regularly reminded that while we’re getting very close to something that looks like autonomous engineering, we’re not quite there yet.

Over the last couple of years, AI-assisted coding has evolved rapidly. We started with basic autocomplete and small code snippets. Then came boilerplate generation. Then full methods. Then full scripts. Today, in many cases, models can plan, implement, and test fairly complex features on their own.

It often feels like working with a senior engineer who never gets tired and can move extremely fast.

But speed is not the same as autonomy. In practice, there is still friction. There is still supervision. In my judgement, we are close, perhaps ninety or ninety-five percent of the way towards truly autonomous software development, but that last stretch is not going to be straightforward.

> Driving the Tool Versus Letting It Run

One of the biggest differences I see between people who get real leverage from AI and those who struggle is how they relate to the tool.

Some developers actively drive it. They think through the problem, break it down, give precise instructions, inspect the output carefully, and correct mistakes early. In this mode, AI becomes an amplifier. Productivity increases are substantial.

Others use AI more passively. They write a loose prompt, wait for the output, and accept most of it at face value. This style, often called “vibe coding,” usually produces fragile systems. The code may look fine initially, but it tends to hide bugs, incorrect assumptions, and awkward abstractions that surface later.

When things go wrong, it is tempting to blame the model. More often, the real issue is lack of direction. AI works best when it is being actively steered.

> Why Architecture Still Requires Human Judgment

One area where AI remains somewhat weak is system design.

It can implement what you describe. It is much less reliable at anticipating what will break.

One underrated skill in this new environment is the ability to have a real design and architecture discussion with the AI before writing any code. Reasoning through trade-offs, exploring alternatives, and brainstorming approaches often produces better outcomes than jumping straight into implementation. In many cases, this phase takes longer than the coding itself, and it is usually time well spent.

Consider a data ingestion and analytics pipeline. You may be pulling information from historical archives, real-time streams, WebSockets, and REST APIs. Each source has different latency, reliability, and formatting. Some arrive late or out-of-order. Some arrive twice. Some fail silently.

You have to decide how backfills interact with live data, which source takes precedence, how normalization works, how errors are surfaced, and whether partial failures are visible. If these questions are not answered carefully, problems accumulate quietly.

Once you scale, complexity increases further. Multiple machines, parallel workers, retries, catch-up mechanisms, and backpressure all introduce new failure modes.
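Two of the failure modes above, duplicates and out-of-order arrival, can be handled with a small buffering layer. The sketch below is a minimal illustration, not a production design: it deduplicates by event id and uses a watermark (newest timestamp minus an allowed lag) to decide when buffered events are safe to emit in order. The class and field names are mine, chosen for the example.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    ts: float                                  # event timestamp; only field used for ordering
    event_id: str = field(compare=False)
    payload: dict = field(compare=False, default_factory=dict)

class OrderedDeduper:
    """Buffers events, drops duplicates by id, and emits events in
    timestamp order once they fall behind the watermark."""

    def __init__(self, allowed_lag: float):
        self.allowed_lag = allowed_lag         # how long we wait for stragglers
        self.seen: set[str] = set()
        self.buffer: list[Event] = []          # min-heap keyed on timestamp
        self.max_ts = float("-inf")

    def ingest(self, event: Event) -> list[Event]:
        if event.event_id in self.seen:
            return []                          # duplicate: drop silently
        self.seen.add(event.event_id)
        heapq.heappush(self.buffer, event)
        self.max_ts = max(self.max_ts, event.ts)
        # Watermark: anything older than (newest ts - allowed lag) is
        # considered final and can be emitted in order.
        watermark = self.max_ts - self.allowed_lag
        ready = []
        while self.buffer and self.buffer[0].ts <= watermark:
            ready.append(heapq.heappop(self.buffer))
        return ready
```

Even this toy version forces the design questions from above into the open: how large a lag to tolerate, what to do with events that arrive after the watermark has passed, and how long to retain seen ids.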

Experienced system designers tend to think about these issues instinctively. AI can help implement solutions, but it does not reliably reason about these risks on its own. For now, architectural judgment remains a human responsibility.

> Processes That Improve Outcomes

Multi-stage AI review: A practical habit that has improved my results is forcing models to review their own work in stages.

After implementing a major feature, I usually run three passes. First, I ask the model to check the implementation against the original requirements and identify anything missing. Second, I ask it to look specifically for logical and edge-case bugs. Third, I ask it to perform a deep, line-by-line audit and fix whatever remains.
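The three passes can be sketched as a simple loop. This is an illustrative skeleton, not the author's exact workflow: `call_model` stands in for whatever API or CLI invokes your coding model, and the prompt wording is a paraphrase of the passes described above.

```python
# Ordered review passes, paraphrasing the three-stage habit described above.
REVIEW_PASSES = [
    "Compare the implementation against the original requirements "
    "and list anything missing or only partially implemented.",
    "Look specifically for logical errors and unhandled edge cases.",
    "Perform a deep, line-by-line audit and fix whatever remains.",
]

def multi_pass_review(call_model, requirements: str, code: str) -> str:
    """Runs each review pass in order, feeding the (possibly revised)
    code from one pass into the next."""
    for instruction in REVIEW_PASSES:
        prompt = (
            f"Requirements:\n{requirements}\n\n"
            f"Current code:\n{code}\n\n"
            f"Task: {instruction}"
        )
        code = call_model(prompt)  # each pass may return revised code
    return code
```

The key property is that the passes are sequential and narrowing: each one reviews the output of the previous pass rather than the original draft.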

Prompt AI to reference OSS: Another useful practice is asking the model to study strong open-source implementations and pattern its solution after them, especially for API integrations. Many edge cases have already been discovered by others. Reusing that knowledge is often more effective than reinventing it.

> Why Fully Asynchronous Development Is Still Limited

The current thinking on the frontier of AI-driven development is that we should be able to write a specification, let AI work overnight, and wake up to a finished feature.

In practice, this only works under fairly narrow conditions.

You need very strong specifications, comprehensive tests, and clear acceptance criteria. Writing those well is difficult. It also works best in stable, well-understood domains.

In more experimental products, assumptions are frequently wrong. Systems behave differently than expected. Latency matters more than planned. APIs change. Integration details turn out to be subtle, undocumented behaviors surface, and small mismatches compound into larger problems.

Current models are still weak at challenging flawed premises on their own. They tend to accept assumptions and build on them. True asynchronous development will likely require multiple agents that implement, verify, and challenge each other.

We are close, but not fully there. That final gap is harder than it looks.

> Patience as a Moat

Many people worry about where their skills will matter if AI can do so much of the work. In practice, AI does not eliminate difficulty. It shifts where difficulty appears.

Instead of fighting syntax and boilerplate, you spend more time dealing with hidden regressions, subtle misinterpretations, and unexpected edge cases. Undoing AI-generated mistakes can sometimes take longer than writing code manually.

On difficult problems, this happens frequently.

If you are willing to work through these periods, it becomes a real moat. Many people lose momentum at this stage and give up.

If you are high-agency but impatient, and expect AI to quickly build a full-fledged product, it is worth recalibrating those expectations. It can get you there, but it won't be fast.

Persistence and patience now play a much larger role in long-term success.

> Why Narrow Products Still Win

AI has made it easy to build something. It has not made it easy to build something good.

The strongest products I see today tend to be narrow and deep. They focus on one problem and solve it exceptionally well. They reflect deep understanding of user workflows, domain constraints, and failure modes, as well as strong product judgment and taste.

Generic tools are easy to generate. High-quality, domain-specific systems are not.

> Keeping Up Without Burning Out

The pace of new tools and models is intense. Every week there seems to be another “AGI is here” proclamation. It is easy to feel permanently behind.

I try to approach this deliberately. I filter aggressively and avoid chasing every new release. Every week or two, I experiment with something new and see whether it fits my workflow. Some experiments fail. A few lead to meaningful improvements.

On the rare occasions when there is a genuine breakthrough, I pause normal development for a while to upgrade my stack. Each time I have done this, the leverage has been 4-5x. I have never regretted the time spent here.

> My Current Setup

At the moment, my stack is fairly minimal. I use Lovable for UI experimentation, Claude Code for most development work, Cursor occasionally, and the command line for orchestration.

I spend less time reading raw code than I used to. Most changes happen through prompting, inspection, and iteration. It is a different way of working, but it has proven effective.

> In Closing

If you are a product person, learn engineering. If you are an engineer, learn product development. The ability to operate across both is becoming the defining skill.

For entrepreneurs, builders, and high-agency people, there has never been a better time to build.

The people who benefit most are those who can still think clearly, design systems, exercise judgment, and stay patient when things get messy.

We are in a transitional period. It is uneven, noisy, and often overhyped. But for people willing to engage seriously with these tools, the leverage is real.

> About Tatv

Tatv is a collection of essays on markets, systems, and execution in the age of crypto and AI. The focus is on structure over narrative, process over prediction, and building tools that operate within markets rather than merely commenting on them.

If this piece resonated, you can follow our work:

- Tatv essays and frameworks: https://tatv.ai

- Airavat — execution and trading systems: https://airavat.xyz

- Founder on X: https://x.com/gautam_airavat