Using Chrome DevTools MCP to close the visual feedback loop
Our agents ship front-end changes all day.
The build passes, the types check, but nobody saw the page.
We gave the agent a browser and told it to look before committing.

An agent can implement a feature in minutes. But "implemented" and "works" are different things. The agent edits the right files, the types check, the build passes. None of that tells you whether the button actually appears where it should, whether the tree expands when you click it, or whether the page looks right after a refactor.
The manual feedback loop looks like this: the agent implements, the developer switches to the browser, clicks around, describes what's wrong back in the chat, and waits for the next attempt. Every check is a developer context switch. The agent is idle, waiting for human eyes.
Direct browser access for coding agents.
Chrome DevTools MCP is an MCP server by Google that gives coding agents direct access to a live Chrome browser via the Chrome DevTools Protocol. It can navigate pages, take screenshots and accessibility snapshots, click elements, fill forms, read console output, inspect network requests, run Lighthouse audits, and record performance traces.
This turns the feedback loop from a back-and-forth conversation into something the agent handles on its own. The developer writes the prompt and reviews the result; everything in between happens without switching windows: the agent implements, opens the page, checks, and fixes by itself.
Connect Claude Code to your running Chrome session.
The following setup reuses your existing Chrome instance, including cookies and login state. That makes it useful for testing authenticated flows without extra configuration.
Open chrome://inspect/#remote-debugging in Chrome and toggle on "Allow remote debugging for this browser instance".
Install the MCP server globally in Claude Code:
```shell
claude mcp add chrome-devtools --scope user -- npx -y chrome-devtools-mcp@latest --autoConnect
```

Then add this to ~/.claude/settings.json to allow MCP tool calls without per-call approval prompts:

```json
{
  "permissions": {
    "allow": ["mcp__chrome-devtools__*"]
  }
}
```

Chrome must be running before starting Claude Code. The --autoConnect flag connects to the existing instance but won't launch one.
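Before handing the agent a task, it can help to confirm Chrome is actually reachable. A minimal sketch (Node 18+ for built-in fetch; port 9222 is Chrome's default remote-debugging port and an assumption here, as is whether your setup exposes the HTTP endpoint at all):

```typescript
// sanity-check.ts — probe the Chrome DevTools Protocol HTTP endpoint.
// /json/version is a standard CDP endpoint that returns browser metadata
// (version string, WebSocket debugger URL) when remote debugging is on.
async function checkDevTools(port = 9222): Promise<void> {
  const res = await fetch(`http://localhost:${port}/json/version`);
  if (!res.ok) throw new Error(`DevTools endpoint returned ${res.status}`);
  const info = await res.json();
  console.log(`Connected to ${info.Browser}`);
}

checkDevTools().catch((err) => {
  console.error('Chrome is not reachable. Is remote debugging enabled?', err);
  process.exit(1);
});
```

If this prints a version string, the MCP server should have a browser to attach to.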
The browser is now part of the agent's toolkit.
The workflow for a typical front-end change starts with the agent implementing the feature, then verifying its own work through the browser before writing a test that codifies the interaction.
The agent doesn't write the test from imagination. It writes the code, opens the page, and interacts with it. It takes a snapshot, sees the tree rendered with the right nodes, clicks a node, confirms the detail panel updated. If something is wrong, it goes back and fixes the code. Once the UI behaves correctly, it translates that exact interaction into a Playwright test.
The test it writes isn't speculative. It's a codification of an interaction that just succeeded.
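As a sketch of what that codification might look like: a Playwright test for the tree interaction described above. The route, the node name, and the `#detail-panel` selector are all hypothetical stand-ins for whatever the agent actually observed in the browser.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical example: codifies the interaction the agent just verified
// live. All URLs and selectors below are placeholders.
test('clicking a tree node updates the detail panel', async ({ page }) => {
  await page.goto('http://localhost:3000/tree');

  // The tree renders with its root node visible.
  const node = page.getByRole('treeitem', { name: 'Root node' });
  await expect(node).toBeVisible();

  // Clicking a node should populate the detail panel.
  await node.click();
  await expect(page.locator('#detail-panel')).toContainText('Root node');
});
```

Because the agent already performed these exact steps through the MCP browser, each assertion corresponds to something it saw succeed, not something it guessed.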
Debugging console errors on a live page:
"List all console error messages on this page. For each, read the stack trace and find the source file in this codebase. Then fix the bugs."
Performance profiling:
"Start a performance trace with reload on http://localhost:3000. Analyze the insights, check LCP breakdown and document latency. Suggest concrete code changes."
Testing responsive design while logged in:
"I'm logged into the admin dashboard. Screenshot the current viewport. Emulate iPhone 12 (390x844, 3x, touch). Screenshot again. Compare what breaks."
Tell us what you're building. We'll tell you how we'd approach it, what it takes, and how fast we can move.
We'll tell you honestly if we're the right fit. And if we're not, we'll point you to someone who is.