Escaping the boilerplate trap: how AI is helping Nine engineers focus on problem-solving
Written by Juliane Djidi (Product Manager, Streaming), Nandhini Venkataraman (Engineering Manager, Streaming), and Jim Wild (Senior Engineering Manager, 9Now)
Engineers: Andres Vaquero, Anmol Walia, Dat Tran, Rubenee Somasundaram
Every engineer at Nine knows the scenario: a beautiful design is handed over, and the next few days are spent on the repetitive, low-value work of translating that design into code. We write Application Programming Interface (API) specifications, set up server entry points, create configuration files, and build the initial UI skeleton—all before solving the first real business problem.
This tedious, boilerplate-heavy process is exactly what AI should be disrupting. Our team recently ran an End-to-End AI module trial to see whether modern AI tools could help us bypass this friction and let engineers focus on solving complex problems rather than managing syntax. The immediate speed gain is undeniable. But it raises a critical question: what happens when the AI’s output isn’t up to your standards, and the generated code contains issues that shouldn’t be there?
Scratching the surface: acknowledging the complexity
A modern end-to-end project isn’t a single application; it’s a massive, interconnected system involving dozens of components, teams, and processes. It would be disingenuous to claim we’ve found a “solution” to this complexity. Instead, our goal was to identify points of maximum friction and apply AI to gain velocity across the entire value chain.
After a few weeks of exploration, we can honestly say we have only scratched the surface. The findings below highlight what we’ve learned so far, what we think our next steps should be, and the gaps we still need to explore. This is ongoing work: an ever-changing, moving target.
The great AI tool trial: bypassing the low-value work
We tried various AI-powered tools to accelerate specific stages of the Software Development Life Cycle (SDLC). Here’s an honest look at what delivers real speed and what still requires a human driver.
1. UI design: translating vision to code in minutes
The problem is that engineers spend too much time translating designs into boilerplate code rather than solving problems. AI addresses this by rapidly generating the initial UI, API scaffolding, and tests, using context engineering to align the output with Nine’s standards.
But this was only a starting point for us, and we quickly leaned on the findings from the Design Module team instead.
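To make the idea concrete, here is a minimal sketch of the kind of UI skeleton these tools hand back from a design. The component, props, and class names are hypothetical and not part of Nine’s actual design system; treat it as an illustration of the boilerplate being automated, not generated output from our trial.

```tsx
// Hypothetical example of AI-generated UI scaffolding from a design handover.
// All names here are illustrative, not Nine's actual design system.
import React from "react";

type VideoCardProps = {
  title: string;
  durationSeconds: number;
  thumbnailUrl: string;
  onPlay: () => void;
};

export function VideoCard({ title, durationSeconds, thumbnailUrl, onPlay }: VideoCardProps) {
  // Format the duration as m:ss for display.
  const minutes = Math.floor(durationSeconds / 60);
  const seconds = durationSeconds % 60;

  return (
    <article className="video-card">
      <img className="video-card__thumbnail" src={thumbnailUrl} alt={title} />
      <h3 className="video-card__title">{title}</h3>
      <span className="video-card__duration">
        {minutes}:{seconds.toString().padStart(2, "0")}
      </span>
      <button className="video-card__play" onClick={onPlay}>
        Play
      </button>
    </article>
  );
}
```

None of this is intellectually demanding, which is precisely the point: it is the kind of translation work we want handed off.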
2. API development: scaffolding a production-ready backend in one hour
In API development, engineers spend days on setup, specifications, and configurations rather than on core domain logic. GitHub Copilot solved this by eliminating the “blank page” problem, generating a fully functional Node.js API foundation in one hour, allowing engineers to immediately focus on optimisation and problem-solving. The new challenge is governing the quality and adherence to Nine’s specific standards in the generated output.
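As a rough illustration of what such a foundation looks like, here is a minimal Express-based sketch in TypeScript. The routes and response shapes are illustrative assumptions, not the service Copilot actually generated for us; the value is that this layer exists within the hour, so attention goes straight to the domain logic.

```ts
// Minimal sketch of a Node.js API foundation of the kind Copilot scaffolds.
// Route paths and payload shapes are illustrative assumptions.
import express from "express";

const app = express();
app.use(express.json());

// Health check: deployment tooling typically probes this first.
app.get("/healthcheck", (_req, res) => {
  res.json({ status: "ok" });
});

// Illustrative resource route; the real domain logic that replaces this
// placeholder is where engineers spend their reclaimed time.
app.get("/api/v1/videos/:id", (req, res) => {
  res.json({ id: req.params.id, title: "placeholder" });
});

const port = Number(process.env.PORT ?? 3000);
app.listen(port, () => {
  console.log(`API listening on port ${port}`);
});
```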
3. End-to-end testing: fast, but “happy path” focused
Setting up end-to-end tests used to be slow because we had to manually build test tools and write tests that work across different web browsers. The Playwright Model Context Protocol (MCP) Server fixed this. It quickly created a stable testing setup and working tests for a key task in about an hour, saving time and supporting all major browsers.
However, a new problem emerged: the AI-generated tests usually cover only the “happy path”, meaning what happens when everything goes right. For complicated websites, we often need to run and refine the tests several times before they become reliable.
This means AI is great for starting the test setup, but we still need human experts in quality assurance (QA). We must actively give the AI negative or unusual scenarios (edge cases) to test, ensuring we truly check everything, not just the successful outcomes.
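The sketch below illustrates the distinction, assuming a hypothetical player page and selectors. The first test is the kind of happy path the AI produces readily; the second is the sort of edge case a QA engineer has to ask for explicitly.

```ts
// Illustrative Playwright tests. URLs, selectors, and copy are hypothetical.
import { test, expect } from "@playwright/test";

// The kind of test AI generates unprompted: everything goes right.
test("user can play a video (happy path)", async ({ page }) => {
  await page.goto("https://example.com/watch/some-show");
  await page.getByRole("button", { name: "Play" }).click();
  await expect(page.locator("video")).toBeVisible();
});

// The kind of scenario a human must explicitly feed to the AI.
test("player surfaces an error for an unavailable video (edge case)", async ({ page }) => {
  await page.goto("https://example.com/watch/geo-blocked-show");
  await expect(page.getByText("This video is unavailable")).toBeVisible();
});
```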

Bringing it together: the shift from prompt to context
The common thread across all tools is this: AI gives us speed, but it lacks context about Nine’s standards.
This is the most significant gap in the end-to-end lifecycle. We don’t want to spend time correcting an AI that suggests an anti-pattern or ignores our internal design system.
The solution isn’t writing longer, more explicit prompts – it’s building a system that provides the context for us.
The role of the Context Definition Framework (CDF)
The CDF is our framework for setting guardrails and baselines for our standards. It is an early-stage idea that shifts our focus from prompt engineering to context engineering.
- What it should be: A structured, reusable asset where we define our coding standards, best practices, security anti-patterns, and required output formats.
- How it works: We are creating a Minimum Viable Demonstration (MVD) that stores our standards (e.g., the “React Component Standard”) as free-form Markdown in the repository, using a “Context as Code” approach built on VSCode Prompt Files. This creates a single source of truth with a dual purpose: readable documentation for humans and structured context for the AI. It injects our standards directly into the developer’s workflow, grounding the AI in the specific task at hand while leaving the developer in full control of the context supplied with any given prompt (see the sketch after this list).
- The measurable value: We prove it works by comparing a “Baseline” output (AI unguided) to a “Framework” output (AI guided by our context), and we make continuous improvements to achieve better outcomes by testing against the set of boundaries and rules defined by each context definition.
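To illustrate the “Context as Code” idea, here is a hedged sketch of what one context definition might look like as a Markdown prompt file. The file path and the rules themselves are illustrative, not Nine’s actual “React Component Standard”.

```markdown
<!-- Hypothetical context definition, e.g. .github/prompts/react-component-standard.prompt.md -->
# React Component Standard

When generating React components:

- Write components as typed function components in TypeScript; do not use class components.
- Export one component per file, named to match the file name.
- Style via the design system's tokens; never hard-code colours or spacing values.
- Co-locate a unit test alongside every new component.
```

Because the file lives in the repository, it is versioned and reviewed like any other code, which is what makes the single-source-of-truth claim enforceable.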
By structuring our context, we take control of the AI’s output, ensuring the code it generates meets our standards, needs, and requirements.
What’s next?
The exploration clearly demonstrated that AI tools are powerful accelerators. They do the tedious, repetitive work of generating initial scaffolding, boilerplate code, and basic tests. This allows our engineers to finally focus on problem-solving, deep architectural decisions, and custom domain logic.
Our next steps will focus on:
- Moving CDF into deep integration: The long-term vision is to scale the Context Library across the business and integrate it into our tools and workflows.
- Continuous refinement: We’ll use the “Baseline vs. Framework” test as a continuous feedback loop. If the AI fails a rule, we update the Context Definition in the next version, proving that our framework is a living system for continuous, measurable improvement.
The future of the Software Development Life Cycle at Nine is not about replacing developers, but about amplifying their ability to innovate and deliver value faster. We are moving beyond augmentation. We are engineering the future of the developer experience. Join us at the forefront of this shift.