
A great day with the Knoware team at Les Sorbiers
May 16, 2026

From SDLC to AI-DLC
What we learned using AI coding agents on real software projects
On 24 April 2026, we held an internal Knoware seminar for our software engineers on a topic that is quickly becoming part of day-to-day software delivery: the use of AI coding agents on real projects. The goal was not to present AI as a magic wand, nor to suggest that software can now be “vibe coded” into production. The goal was more practical: share concrete lessons from the field, based on real cases where AI agents were used to accelerate analysis, documentation, development, testing and migration work.
The seminar was built around several project examples, deliberately discussed internally with enough technical detail for developers to learn from them. For external communication, the names of customers and projects are not relevant. What matters are the patterns we observed.
In one case, we had to replace an old algorithm implemented in a legacy technology with a modern Java implementation. The existing logic was poorly documented, complex, and business-critical. Instead of asking an AI agent to “rewrite the code”, we first used it to help us understand the legacy logic, generate structured documentation, produce design diagrams, propose a phased implementation plan, and create detailed task lists. Only after this specification work did we move to code generation. The result was not just faster development; it was a more disciplined development process. The new implementation was delivered on time, showed very strong performance improvements, and remained under human control throughout the process.
In another case, we tested AI agents on a migration from a legacy JavaScript front-end framework to Angular. This kind of work is typically difficult to start because it combines old technology, architectural uncertainty and a lot of repetitive refactoring. Here again, the AI agent was useful, but not by replacing engineering judgement. The useful part was its ability to help generate a migration plan, reason about coexistence between old and new screens, and produce modern front-end code faster than a developer unfamiliar with the target stack could have done alone. It did not work perfectly on the first attempt. It required review, correction, stopping at the right moment, and making architectural decisions ourselves.
A third case focused on adding a small but real feature to an existing application using AI assistance inside a traditional IDE. The work was split into steps: first ask for a plan, review the plan, ask the AI to check for gaps, execute part of the plan, review the changes, test the application, and then fix the problems found. This is a very different way of working from simply prompting “add feature X”. It is closer to pair programming with a very fast junior developer who has read a lot, never gets tired, but still needs direction, boundaries and review.
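The stepwise workflow above can be sketched in a few lines of Python. This is a hypothetical illustration, not the tooling we actually used: `ask_agent` is a stub standing in for whatever coding agent or IDE assistant is in play, so the control flow (plan, self-check, small steps, test, fix) is the point rather than the model behind it.

```python
# Hypothetical sketch of the plan/review/execute/test loop described above.
# `ask_agent` is a stand-in for a real coding-agent call; it is stubbed here.

def ask_agent(prompt: str) -> str:
    """Stub standing in for a real coding-agent call."""
    return f"[agent response to: {prompt}]"

def add_feature_stepwise(feature: str, run_tests) -> list[str]:
    log = []
    # 1. Ask for a plan first, never "add feature X" in one shot.
    plan = ask_agent(f"Propose a step-by-step plan to implement: {feature}")
    log.append(plan)
    # 2. A human reviews the plan, then asks the agent to self-check it.
    gaps = ask_agent(f"Check this plan for gaps or risky assumptions:\n{plan}")
    log.append(gaps)
    # 3. Execute one step at a time, testing after each change.
    for step in ["step 1", "step 2"]:  # steps would come from the reviewed plan
        change = ask_agent(f"Implement only this step: {step}")
        log.append(change)
        if not run_tests():
            fix = ask_agent(f"Tests failed after: {step}. Propose a fix.")
            log.append(fix)
    return log

# Usage: with tests that pass, the log holds plan, gap check, and two changes.
log = add_feature_stepwise("add feature X", run_tests=lambda: True)
```

The useful property of this shape is that every agent output lands at a point where a human can inspect it before the next step starts.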
From SDLC to AI-DLC
These experiments are pushing us to evolve our internal software delivery practices. We are not abandoning the Software Development Life Cycle. We are extending it into what we increasingly call an AI-DLC: an AI-assisted Development Life Cycle.
In a classical SDLC, we move through requirements, design, development, testing, release and maintenance. In an AI-DLC, these steps remain, but AI agents can assist in each of them:
- understanding legacy systems;
- generating or improving technical documentation;
- creating design alternatives;
- producing diagrams and task breakdowns;
- generating code;
- creating unit tests and UI tests;
- running builds and tests through tools;
- helping with vulnerability fixes and refactoring;
- documenting implementation decisions.
The important word is assist. AI does not remove the need for engineering discipline. It increases the need for it.
The more powerful the tool becomes, the more dangerous it is to use it casually. An AI agent can generate a lot of plausible code very quickly. Some of it will be good. Some of it will be subtly wrong. In one of our internal examples, the AI misunderstood a comment in an old script and turned that misunderstanding into a false assumption in the generated specification. Because the specification looked clean and convincing, the risk was not obvious immediately. The issue was only found by confronting the result with real test data.
That is exactly why humans must stay in the loop.
We are not trying to “vibe code” applications
There is a big difference between using AI professionally and “vibe coding”.
Vibe coding is the idea that one can describe an application loosely, let the AI generate large parts of it, and keep prompting until something seems to work. That may be acceptable for prototypes, demos or experiments. It is not an acceptable model for complex, long-lived, secure and maintainable business applications.
Our approach is different. We use AI coding agents in a controlled engineering process. That means:
- specifications before code;
- small iterations instead of uncontrolled large changes;
- code review by engineers;
- tests and regression tests;
- architecture ownership by humans;
- security and maintainability checks;
- traceability of important design decisions;
- refusal to accept AI output just because it “looks right”.
Good AI-assisted development is not less disciplined than traditional development. It is more disciplined, because the speed of generation makes weak processes fail faster.
Learning to use AI agents is now an engineering skill
One of the strongest lessons from the seminar is that using AI coding agents well must be learned.
It is easy to be impressed by the first demo. It is also easy to fail after that. AI agents can create the illusion of progress: files change, code appears, tests may even pass, but the design can drift, edge cases can be missed, and maintainability can suffer.
Getting good results requires new habits:
- knowing how to provide context;
- asking for a plan before implementation;
- forcing the agent to work in small steps;
- giving it access to the right documentation when the technology is niche or legacy;
- checking assumptions in generated specifications;
- understanding which model or agent is better suited for which kind of task;
- deciding when to stop the agent and take over manually.
In one internal example, an AI agent struggled to fix a legacy UI bug because the problem was not only in one framework, but in the unusual combination of several technologies. The model tried solutions that were valid for one technology in isolation, but not for the actual stack. The lesson was clear: when the context is rare, the agent needs better grounding, for example through documentation or retrieval-augmented generation. Otherwise, it may confidently solve the wrong problem.
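The grounding idea can be shown with a deliberately naive sketch. This is not the retrieval setup we used; the scoring below is simple keyword overlap, whereas a real pipeline would use embeddings and a vector store. The shape is the same, though: retrieve the relevant documentation first, then put it in front of the question so the agent reasons about the actual stack instead of the most common one.

```python
# Minimal sketch of grounding a prompt with retrieved documentation snippets.
# Scoring is naive word overlap; real systems would use embedding similarity.

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank snippets by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(question: str, docs: list[str]) -> str:
    """Prepend the retrieved context so the agent answers from it."""
    context = "\n".join(retrieve(question, docs))
    return f"Use only this documentation:\n{context}\n\nQuestion: {question}"

# Hypothetical documentation snippets for an unusual legacy stack.
docs = [
    "Widget v2 grids require explicit column sizing in legacy mode.",
    "The payroll module exports CSV reports nightly.",
    "Legacy mode disables automatic layout recalculation on resize.",
]
prompt = grounded_prompt("Why does the legacy grid layout break on resize?", docs)
```

Without the retrieval step, an agent sees only the question and falls back on what is common for each framework in isolation, which is exactly the failure mode described above.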
This is not a failure of AI. It is a reminder that AI-assisted engineering is still engineering.
Software engineers will not disappear
The conclusion from our experience is not that software engineers will become less important. It is the opposite.
The role of the software engineer is changing from writing every line manually to directing, validating and integrating AI-generated work. Engineers will spend more time on architecture, specifications, test strategy, security, code review, integration, maintainability and understanding the business problem.
This matches what we see in the broader market. Thoughtworks warns against exaggerated productivity claims and estimates that real delivery gains from coding assistants may be closer to 10–15% in many contexts, while still being significant and cost-effective (see https://www.thoughtworks.com/insights/blog/generative-ai/how-faster-coding-assistants-software-delivery).
Our own experience sits somewhere between the hype and the skepticism. We do not believe that AI automatically doubles the productivity of a software team. We also do not believe that the effect is marginal. On the right tasks, with the right engineering discipline, a productivity gain around 30% is a realistic ambition. Sometimes it will be less. Sometimes, on very specific tasks such as documentation generation, test creation or repetitive migration work, it can be much more.
But the gain is not free. It requires investment in tools, training, practices and review.
AI is no magic, but it is becoming part of serious software engineering
The most important message from our internal seminar was simple: AI coding agents are becoming real engineering tools. They are useful today, not only for experiments, but for complex software applications. However, they must be used with professional discipline.
For Knoware, the move from SDLC to AI-DLC is not about replacing developers. It is about helping good developers deliver better software faster, while keeping control over quality, security and maintainability.
That means we will continue to experiment, measure, learn and adapt. We will use AI where it brings value. We will keep humans in the loop where judgement matters. And we will keep treating software engineering as what it is: a disciplined craft, now supported by a new generation of very powerful tools.

