The Agentic Engineering Loop
AI coding assistants like Cursor, Windsurf, and GitHub Copilot are revolutionizing software development — but not always in ways developers anticipated. While these tools successfully accelerate code production, they’re creating an unexpected shift in developer responsibilities.
Today’s developers increasingly find themselves testing AI-generated code rather than writing code themselves. This creates a troubling dynamic: developers remain accountable for code they didn’t write, yet must dedicate a growing portion of their workday to testing it. As AI generates code at ever-increasing speeds, human developers and testers are becoming the bottleneck in the development process.
AI will soon do 80% of the work; humans are becoming the exception handlers.
The 80/20 Future of Development
Soon, only 20% of coding will be done by exceptional programmers — those rare individuals who can understand, review, refactor, and create novel implementations with an expert eye for robustness, maintainability, and efficiency in real-world conditions. These are the engineers who will continue to develop code that matters.
Those aspiring developers who lack exceptional skills will find their opportunities diminishing as AI generates code better, faster, and far cheaper. Non-technical management will have fewer people to manage, and architects who simply recommend well-known design patterns will become obsolete. Only truly exceptional programmers and architects who create novel approaches and can effectively review machine-generated work will remain essential.
The 80/20 Future of Testing
Similarly, only 20% of testing activity will be performed by humans. Even now, human test teams struggle to deliver adequate testing coverage. With AI generating more code faster than ever, manual approaches to testing will quickly become unsustainable.
The good news is that most testing is repetitive and follows well-established patterns: boundary values, negative testing, happy paths, security checks, and performance evaluations are either well-quantified or obvious. Today’s testers rarely have time for truly innovative and creative testing because they’re overwhelmed by a near-infinite amount of routine verification work. AI Testing Agents will give them back the time for the work that is genuinely impactful and appreciated.
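To make the routine concrete, here is a minimal sketch of the kind of boundary-value and negative testing an AI Testing Agent can enumerate mechanically. The `withdraw` function and its rules are hypothetical, invented purely for illustration:

```python
# Boundary-value and negative tests for a hypothetical withdraw()
# function: the kind of mechanical enumeration that dominates routine
# test work and is well within reach of an AI Testing Agent.
import pytest


def withdraw(balance: float, amount: float) -> float:
    """Illustrative implementation under test; returns the new balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


@pytest.mark.parametrize("balance,amount,expected", [
    (100.0, 1.0, 99.0),    # happy path
    (100.0, 100.0, 0.0),   # boundary: withdraw the exact balance
])
def test_valid_withdrawals(balance, amount, expected):
    assert withdraw(balance, amount) == expected


@pytest.mark.parametrize("balance,amount", [
    (100.0, 0.0),      # boundary: zero amount
    (100.0, -5.0),     # negative test: negative amount
    (100.0, 100.01),   # boundary: one cent over the balance
])
def test_invalid_withdrawals(balance, amount):
    with pytest.raises(ValueError):
        withdraw(balance, amount)
```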
The Rise of AI-to-AI Collaboration
The game-changer is the emergence of direct AI-to-AI collaboration. Rather than humans serving as intermediaries, AI Testing Agents can now communicate directly with AI Coding Agents. When a test fails, the Testing Agent provides specific feedback to the AI Coding Agent, which then generates targeted fixes. This autonomous feedback loop dramatically accelerates development cycles by removing both developers and testers from routine debugging and verification tasks.
This machine-to-machine collaboration creates a self-improving system where code quality improves with each iteration — all without human intervention until a genuinely complex issue arises.
The New Agentic Engineering Flow
This direct communication between AI Coding and AI Testing Agents creates a new “Agentic Engineering Flow” in which:
- AI Coding Agents generate initial implementations based on requirements
- AI Testing Agents automatically verify functionality and identify issues
- The two agent types communicate directly to resolve routine problems
- Exceptional human engineers focus exclusively on novel implementations, architectural decisions, edge cases, and aligning technology with business objectives
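Sketched in code, the flow is a simple orchestration loop. This is a minimal sketch, not a real implementation: the `implement`, `verify`, and `fix` callables stand in for AI services, and the five-round escalation threshold is an arbitrary assumption:

```python
# A minimal, hypothetical sketch of the Agentic Engineering Flow.
# The three callables stand in for real AI services; max_rounds is
# an assumed escalation threshold, not a prescribed value.
from dataclasses import dataclass
from typing import Callable


@dataclass
class TestReport:
    passed: bool
    feedback: str = ""  # failure details the Coding Agent can act on


def agentic_loop(
    requirements: str,
    implement: Callable[[str], str],      # AI Coding Agent: requirements -> code
    verify: Callable[[str], TestReport],  # AI Testing Agent: code -> report
    fix: Callable[[str, str], str],       # AI Coding Agent: (code, feedback) -> code
    max_rounds: int = 5,
) -> str:
    code = implement(requirements)             # 1. initial implementation
    for _ in range(max_rounds):
        report = verify(code)                  # 2. automated verification
        if report.passed:
            return code                        # 3. routine problems resolved agent-to-agent
        code = fix(code, report.feedback)
    raise RuntimeError("escalate to a human engineer")  # 4. a genuinely complex issue
```

The essential property is that a human enters the loop only when the final escalation fires.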
In this brave new world, exceptional developers and testers do what they do best — handle the interesting, non-obvious, and most creative work that requires a larger and richer context than can be passed into, or learned by, a machine.
The Future Is Already Here
This new Agentic Engineering Loop, where AI Testing Agents work directly with AI Coding Agents, isn’t theoretical: it’s here today. Teams of AI Testing Agents stand ready to test your applications, with basic testing services even available for free.
Here is a walkthrough of the AI Testing Agents from testers.ai integrated into Cursor via MCP on macOS.
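The integration mechanism itself is easy to picture. Below is a generic sketch, using the open-source MCP Python SDK, of how an MCP server can expose a testing tool for Cursor’s agent to call. The server name, the `run_tests` tool, and the pytest command are illustrative assumptions, not testers.ai’s actual code:

```python
# Generic sketch of an MCP server exposing a testing tool to Cursor.
# Uses the open-source MCP Python SDK (pip install "mcp"); the tool
# below is hypothetical and not testers.ai's actual implementation.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("testing-agent")  # name shown in Cursor's MCP settings


@mcp.tool()
def run_tests(path: str = ".") -> str:
    """Run the project's test suite and return the raw output so the
    coding agent can read the failures and generate targeted fixes."""
    result = subprocess.run(
        ["python", "-m", "pytest", path, "-q", "--tb=short"],
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr


if __name__ == "__main__":
    mcp.run()  # stdio transport by default; Cursor launches the server
```

Once a server like this is registered in Cursor’s MCP configuration, the coding agent can invoke the testing tool after each edit, closing the loop described above.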
Get early access to the AI Testing Agents, and let them work with Cursor, Windsurf, and Copilot while you grab a coffee.
Sign up for Early Access to save yourself from routine testing, and help shape this new Agentic Engineering Loop!
Jason Arbon
CEO @ testers.ai