The traditional agency workflow for prototyping follows a predictable path: receive the brief, spend a week on design exploration, spend two weeks building, then iterate for another week based on client feedback. A branded prototype lands on the client's desk three to four weeks after kickoff.
That timeline has been the industry standard for years. Clients budget for it. Agencies staff around it. Nobody questioned it, because until recently there was no realistic alternative.
AI-assisted coding has changed the constraint. The slow part of building a prototype is no longer the implementation — it's ensuring the output actually reflects the client's brand rather than looking like every other AI-generated interface. The bottleneck shifted from "can we build it?" to "does it look like their product?"
Agencies that have adapted their workflow around this shift are delivering branded prototypes in a fraction of the original timeline. Here's how that workflow actually works, what it gets right, and where the limitations are.
The Core Problem for Agencies
For a product company, you build one design system and use it forever. For an agency, every new client means a new visual identity. Client A is a luxury fintech startup. Client B is a healthcare SaaS platform. Client C is a B2B marketplace. Each one needs components that feel like their brand — their colors, their spacing philosophy, their visual weight.
In the traditional workflow, this means a designer spends 5-10 hours creating mood boards, choosing palettes, establishing spacing rules, and documenting the visual direction before a single component gets built. That work is necessary, but most of it comes down to a handful of concrete decisions about style axes: how rounded the corners are, how heavy the borders are, how dense the spacing is, and what shadow style the cards use.
The new workflow automates the discovery phase. If the client already has a website or existing product (and most do), you can extract their current design decisions programmatically rather than reverse-engineering them by eye.
The Extraction Step (and What It Actually Does)
MatchKit's URL extraction tool scans a website's CSS — stylesheets, inline styles, Tailwind utility classes — and maps what it finds to the 11 design axes that define a visual style. It does this through CSS analysis, not screenshots or visual AI.
What it reliably extracts:
- Colors: Primary, secondary, and accent colors based on frequency and saturation clustering
- Border radius patterns: Whether the site favors sharp corners, subtle rounding, or pill shapes
- Shadow styles: Whether elements use soft shadows, hard offsets, or no shadows at all
- Spacing density: Whether component padding is compact, standard, or spacious
- Border weight: Hairline, thin, or medium borders
- Font personality: Whether the loaded fonts are geometric (Inter, Roboto), rounded (Nunito, Quicksand), or humanist (Open Sans, Lato)
- Font weight bias: Whether the site leans toward light, regular, medium, or bold weights
- Motion speed: Whether transitions are snappy or smooth
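The color clustering mentioned in the list above can be approximated with a small sketch. This is an illustrative assumption, not MatchKit's actual algorithm; the `channelSpread` and `pickAccent` helpers are hypothetical names. The idea: count color occurrences, skip near-grays, and return the most frequent saturated color.

```typescript
// Hypothetical sketch of accent-color selection. Real clustering would also
// merge near-identical shades; this version only counts exact hex matches.
function channelSpread(hex: string): number {
  // A rough saturation proxy: the gap between the strongest and weakest
  // RGB channel. Grays have a spread near zero.
  const r = parseInt(hex.slice(1, 3), 16);
  const g = parseInt(hex.slice(3, 5), 16);
  const b = parseInt(hex.slice(5, 7), 16);
  return Math.max(r, g, b) - Math.min(r, g, b);
}

function pickAccent(colors: string[]): string | null {
  const counts = new Map<string, number>();
  for (const c of colors) {
    if (channelSpread(c) < 30) continue; // skip grays and near-grays
    counts.set(c, (counts.get(c) ?? 0) + 1);
  }
  let best: string | null = null;
  let bestCount = 0;
  for (const [c, n] of counts) {
    if (n > bestCount) {
      best = c;
      bestCount = n;
    }
  }
  return best;
}
```

Fed a list of colors pulled from a stylesheet, `pickAccent` ignores the whites and grays that dominate most sites and surfaces the brand color by frequency.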
What it does not extract is the specific font stack to use in your output. MatchKit classifies the font personality (geometric vs. rounded vs. humanist) and uses that to select appropriate fonts from the preset's type system. If the client's site uses Poppins, MatchKit detects "geometric personality" and configures the system accordingly — but it doesn't transplant Poppins into the generated design system. The same applies to exact spacing pixel values: it detects the density pattern, not a pixel-perfect scale.
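A font-personality classifier along these lines is straightforward to sketch. The category lists and the `classifyFontPersonality` helper below are illustrative assumptions, not MatchKit's implementation:

```typescript
// Hypothetical font-personality classification: match the first concrete
// family in a CSS font stack against known category lists.
type FontPersonality = "geometric" | "rounded" | "humanist" | "unknown";

const PERSONALITY_MAP: Record<Exclude<FontPersonality, "unknown">, string[]> = {
  geometric: ["inter", "roboto", "poppins", "montserrat"],
  rounded: ["nunito", "quicksand", "comfortaa"],
  humanist: ["open sans", "lato", "source sans pro"],
};

function classifyFontPersonality(fontStack: string): FontPersonality {
  // Take the first concrete family, e.g. '"Poppins", sans-serif' -> "poppins".
  const first = fontStack.split(",")[0].trim().replace(/["']/g, "").toLowerCase();
  for (const [personality, fonts] of Object.entries(PERSONALITY_MAP)) {
    if (fonts.includes(first)) return personality as FontPersonality;
  }
  return "unknown";
}
```

This is why Poppins yields "geometric personality" rather than Poppins itself: the classification feeds the preset's type system, and the original font never needs to be transplanted.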
The output is a set of axis positions with confidence scores:
```
Accent Color: #1e40af (clustered from 14 blue occurrences)
Radius: lg (confidence: 0.72) — median 10px from 23 samples
Shadow: subtle (confidence: 0.58) — avg blur 6px
Font Personality: geometric (confidence: 0.70) — matched "Inter"
Spacing: default (confidence: 0.55) — median padding 16px
Border Weight: thin (confidence: 0.65)
Color Temperature: cool (confidence: 0.62)
```
Confidence scores matter. A score below 0.5 means the extraction found limited signal — you should manually verify that axis. A score above 0.7 means there was strong, consistent evidence in the CSS.
This step takes about two minutes. You get back a structured starting point, not a finished design system.
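As a rough illustration of how an axis position and its confidence score could be derived, here is a hypothetical sketch for the radius axis. The bucket boundaries and the consistency-based confidence are assumptions for illustration, not MatchKit's actual method:

```typescript
// Hypothetical radius-axis extraction: bucket the median sampled
// border-radius into a token, and score confidence by how many samples
// agree with that bucket.
type RadiusToken = "none" | "sm" | "md" | "lg" | "full";

function bucketRadius(px: number): RadiusToken {
  if (px === 0) return "none";
  if (px <= 4) return "sm";
  if (px <= 8) return "md";
  if (px <= 16) return "lg";
  return "full";
}

function extractRadiusAxis(samplesPx: number[]): { token: RadiusToken; confidence: number } {
  const sorted = [...samplesPx].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  const token = bucketRadius(median);
  // Confidence: fraction of samples falling in the same bucket as the median.
  const agreeing = samplesPx.filter((px) => bucketRadius(px) === token).length;
  return { token, confidence: agreeing / samplesPx.length };
}
```

Under this scheme, a site whose cards, buttons, and inputs all use 8-12px radii would score high confidence on "lg", while a site mixing sharp tables with pill buttons would score low and warrant manual review.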
Configuring the Design System (Minutes 5-15)
You take the extracted values into the MatchKit configurator, which pre-fills the 11 design decisions based on what was found. This is where client collaboration happens.
Walk the client through the decisions: "The extraction found you're using rounded-lg corners and a blue accent. Does that match your intent, or should we adjust?" Most of the time, the extracted values are close enough. Sometimes the client wants to shift direction — "We're rebranding, make it warmer" or "We want something sharper than what we have now."
The configurator shows a live component preview as you adjust each axis. The client can see how buttons, cards, inputs, and data tables will look with each combination. This front-loads the design review. Instead of building screens and then iterating on the visual direction, you're aligning on the system before any screens exist.
Ten minutes of collaborative decision-making replaces what used to be multiple rounds of design revision.
Installing and Building (Minutes 15 to Hour 2)
The configurator generates a complete package: CSS variables, a component library styled to the configured tokens, and a rules file (SKILL.md for Claude Code, .cursorrules for Cursor) that encodes the design system's constraints.
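To make the output concrete, here is a hypothetical sketch of how configured axes might become CSS custom properties. The token names and pixel values are illustrative assumptions; the package MatchKit actually generates will differ:

```typescript
// Hypothetical token generation: configured axis positions map to CSS
// custom properties that the component library consumes.
interface AxisConfig {
  accentColor: string;
  radius: "none" | "sm" | "md" | "lg" | "full";
  spacing: "compact" | "default" | "spacious";
}

const RADIUS_PX = { none: "0", sm: "4px", md: "8px", lg: "12px", full: "9999px" };
const SPACING_PX = { compact: "12px", default: "16px", spacious: "24px" };

function toCssVariables(config: AxisConfig): string {
  return [
    ":root {",
    `  --accent: ${config.accentColor};`,
    `  --radius: ${RADIUS_PX[config.radius]};`,
    `  --space-card: ${SPACING_PX[config.spacing]};`,
    "}",
  ].join("\n");
}
```

The point of indirection through variables is that every component references the same tokens, so one configuration change propagates everywhere.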
You drop these into the project repository. Your AI coding tool reads the rules file and now understands the design system — not just the colors, but the compositional rules: how much padding a card gets, what shadow style to use, how to size typography, when to use which button variant.
From here, you brief the AI on what to build. "Build the dashboard homepage. Use the design system. Follow the spacing scale. Use the accent color for interactive elements." The AI generates screens that are constrained by the design system, so the output is consistent with the established brand direction rather than defaulting to generic styling.
The constraint is what makes this work. Without a configured design system, AI-generated interfaces look like every other AI-generated interface. With one, the output is specific enough that a client recognizes it as theirs.
You iterate as needed — "more whitespace between sections," "darken the sidebar" — and the design system keeps the changes consistent. Two hours in, you have a branded prototype that would previously have taken three to four weeks.
Per-Project Economics
The math changes substantially.
Traditional workflow:
- Design exploration (mood board, palette, typography, spacing): 8-12 hours
- Build (implement design, create components): 16-20 hours
- Client iteration (feedback rounds, refinement): 8-12 hours
- Total: 32-44 billable hours
At $150/hour: $4,800-$6,600 per prototype.
Extraction-assisted workflow:
- Extract and review brand tokens: 30 minutes
- Configure design system with client: 30 minutes to 1 hour
- Install and project setup: 30 minutes
- AI-assisted build including iteration: 1.5-2 hours
- Total: 3-4 billable hours
At $150/hour: $450-$600 per prototype.
How you use that gap is a business decision. You can pass the savings to the client and compete on speed, compress your timelines and increase throughput, or maintain your existing pricing and capture the margin. Most agencies end up doing a mix: slightly lower prices, much faster delivery, and significantly higher per-hour profitability.
For a 20-project-per-year agency, the throughput gain is roughly 600-800 hours of freed capacity annually.
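The capacity arithmetic above is simple enough to check directly. The hour ranges and project count come from the estimates in this section; the `freedHours` helper is purely illustrative:

```typescript
// Freed capacity = (traditional hours - assisted hours) per project,
// scaled by yearly project count.
function freedHours(traditionalHours: number, assistedHours: number, projectsPerYear: number): number {
  return (traditionalHours - assistedHours) * projectsPerYear;
}

// Conservative and upper-end cases for a 20-project agency:
const low = freedHours(32, 4, 20);  // worst case for savings
const high = freedHours(44, 3, 20); // best case for savings
// The range brackets the roughly 600-800 hours cited above.
```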
Managing Multiple Client Systems
The practical question for agencies running five concurrent projects: how do you keep client design systems from bleeding into each other?
Each client project gets its own matchkit.json configuration file in its project directory. That file contains the client's specific axis positions, accent color, and any overrides. When your AI tool reads the rules file, it reads the context for that project only. There's no cross-contamination as long as each project directory has its own design system files.
For agencies that want standardized processes, you can create base configurations as templates — a "fintech starter" or "healthcare SaaS starter" — and customize per client. Fifteen minutes of adjustment per new project rather than building from scratch.
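A base template plus per-client overrides can be modeled as a simple merge. The matchkit.json shape below is an assumed illustration, not the actual schema:

```typescript
// Hypothetical per-client config derivation: a shared starter template is
// merged with client-specific overrides, and the template itself is never
// mutated, so there is no bleed between client projects.
interface MatchkitConfig {
  preset: string;
  accentColor: string;
  radius: string;
  spacing: string;
}

const fintechStarter: MatchkitConfig = {
  preset: "fintech",
  accentColor: "#1e40af",
  radius: "md",
  spacing: "compact",
};

function forClient(base: MatchkitConfig, overrides: Partial<MatchkitConfig>): MatchkitConfig {
  // Spread semantics: overrides win, everything else comes from the template.
  return { ...base, ...overrides };
}

const clientA = forClient(fintechStarter, { accentColor: "#7c3aed", radius: "lg" });
```

Each derived object would live as its own matchkit.json in the client's project directory, which is what keeps concurrent projects isolated.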
MatchKit's Team plan (€29/month, 5 seats) supports this with shared organization configs and locked axes. Your team saves a client configuration once, and every team member pulls that configuration into their local environment.
Where This Workflow Falls Short
It's worth being honest about the limitations.
The extraction works best on sites with substantial CSS — marketing pages, dashboards, web apps with real stylesheets. Single-page apps that render everything via JavaScript with minimal CSS may not yield useful results. Sites behind authentication can't be scanned. Very minimal sites (a single landing page with three elements) may not provide enough signal for confident axis mapping.
The "two hours to prototype" figure assumes you're building standard dashboard-style interfaces. If the client needs complex custom visualizations, unusual interaction patterns, or highly specialized layouts, the AI-assisted build phase takes longer. The design system still helps — it handles the 80% of standard UI — but the remaining 20% still requires manual work.
And the design system doesn't replace design thinking. It automates the mechanical parts — token extraction, component styling, consistency enforcement — but someone still needs to make judgment calls about information architecture, user flows, and which components belong on which screens. The agency's value shifts from "we can make buttons look good" to "we know what to build and how to organize it."
Frequently Asked Questions
Q: How do agencies handle different design systems per client?
Each client project gets its own matchkit.json configuration file with unique design tokens, presets, and brand colors. The configurator saves configurations per project. When switching between client projects, the AI reads the project-specific SKILL.md and tokens. There's no cross-contamination between client styles as long as each project directory has its own design system files.
Q: How much time does the AI coding workflow save per agency project?
The savings depend on project complexity, but for standard dashboard prototypes: traditional workflow runs 32-44 hours, extraction-assisted workflow runs 3-4 hours. At typical agency rates, that's several thousand dollars of recaptured capacity per project. The savings compound across projects — a 20-project agency recovers 600-800 hours annually.
Q: Can clients review and approve the design system before building?
Yes. The configurator shows a live preview of components as you adjust design decisions. Share the configurator URL with the client, walk them through the axes, and get approval on the visual direction before writing a single line of code. This front-loads the design review instead of iterating after screens are built.
Getting Started
MatchKit's Team plan (€29/month, 5 seats) is designed for agencies running multiple concurrent projects. It includes shared organization configurations, multiple API keys for different tools, and URL extraction for brand token recognition.
The workflow is straightforward: extract, configure, install, build. The hardest part is the first project — after that, the process becomes muscle memory.