Update from the Agentic AI Dev Team
- Nick Beaugeard
- May 29
- 2 min read

Making Agentic AI Work for Web UX: A Brutal Reality Check
We're deep into the weeds of building an Agentic AI that can handle web user experience (UX) tasks—think analysing interfaces, navigating design systems, and optimising flows. It sounds futuristic, but here's the unvarnished truth: it's bloody hard.
The Problem: AI Can’t See What You See
At the core of the issue is this: our AI can't truly "see" the web the way a human does. It doesn't grasp visual hierarchy, spacing, or the subtle cues that make a design intuitive. It processes the DOM and maybe some screenshots, but it lacks genuine visual perception. That makes tasks like evaluating UX or suggesting design improvements a massive challenge.
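To make that concrete: in practice, the agent's "eyes" are whatever the browser automation layer hands it. The sketch below is a minimal, illustrative example using Playwright (one common choice, not necessarily what we ship); the function name and URL are ours, not part of any real product API. It captures the serialized DOM and a screenshot, which is roughly the level of input the model has to reason from.

```typescript
// Minimal sketch: the kind of raw input an agent gets from a page.
// Assumes Playwright is installed (npm install playwright).
import { chromium } from 'playwright';

async function capturePageInputs(pageUrl: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(pageUrl, { waitUntil: 'networkidle' });

  // The serialized DOM: structure and text, but no sense of visual hierarchy.
  const html = await page.content();

  // A screenshot: pixels the model may or may not interpret reliably.
  const screenshot = await page.screenshot({ fullPage: true });

  await browser.close();
  return { html, screenshot };
}

// Illustrative usage:
// const inputs = await capturePageInputs('https://example.com');
```

Nothing in that output tells the model which element dominates the viewport, what the spacing feels like, or where the eye lands first, and that gap is exactly the problem.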
Prompt Engineering: A Game of Trial and Error
Crafting effective prompts is more art than science. A slight tweak in wording can lead to vastly different outcomes, and there's no guarantee of consistency. It's a constant cycle of:
- Designing a prompt
- Testing the output
- Analysing the results
- Refining the prompt
This iterative process is time-consuming and often frustrating, but it's the only way forward.
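For a rough picture of what that cycle looks like in code, here is a sketch of the loop. The `callModel` and `scoreOutput` functions are hypothetical stand-ins for an LLM call and an evaluation step (our real harness is more involved), but the shape of the process is the point: generate, score, refine, repeat.

```typescript
// Sketch of the prompt-refinement cycle. callModel and scoreOutput are
// hypothetical stand-ins for a model call and an evaluation rubric.
type PromptResult = { prompt: string; output: string; score: number };

async function refinePrompt(
  basePrompt: string,
  variants: string[],                       // candidate rewordings to try
  callModel: (prompt: string) => Promise<string>,
  scoreOutput: (output: string) => number,  // higher is better
): Promise<PromptResult> {
  let best: PromptResult = { prompt: basePrompt, output: '', score: -Infinity };

  for (const variant of [basePrompt, ...variants]) {
    // 1. Design: the variant is the prompt under test.
    // 2. Test: run it through the model.
    const output = await callModel(variant);

    // 3. Analyse: score the output against whatever rubric we care about.
    const score = scoreOutput(output);

    // 4. Refine: keep the best-performing wording for the next round.
    if (score > best.score) {
      best = { prompt: variant, output, score };
    }
  }
  return best;
}
```

The hard part, of course, is `scoreOutput`: deciding what "good" looks like for a UX judgement is itself a moving target.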
The Silver Lining: Incremental Progress
Despite the challenges, we're making headway. Each iteration brings us closer to an AI that can understand and improve web UX. It's not about giant leaps but steady, incremental progress. We're learning what works, what doesn't, and how to guide the AI more effectively.
Looking Ahead
We're not there yet, but the path is becoming clearer. With continued effort and refinement, we're optimistic about developing an Agentic AI that can genuinely enhance web user experiences. It's a tough journey, but one worth undertaking.
Stay tuned for more updates from the trenches.