2024
Visualizing AI Gateway Configurations — Making LLM Deployment Effortless
DELIVERABLES
UX Strategy
User Flows
Wireframes
User Experience
User Interface
TIMELINE
3 months
Designing an Intuitive Interface That Turns Complex AI Configurations Into Clicks, Not Code
As organizations accelerate their deployment of large language models (LLMs), platform complexity and security demands rise sharply. Leveraging the F5 AI Gateway, which routes, secures, and observes generative-AI traffic across clouds and data centers, my project introduced a visual configuration editor designed specifically for customers unfamiliar with LLM configuration syntax: one that surfaces configuration options in an intuitive UI, translates them into YAML/JSON for the Gateway, and ensures safe, policy-compliant deployment.
The Complexity Barrier
Non-technical users struggle with LLM configuration
F5 AI Gateway is designed to route, protect and observe generative-AI traffic (i.e., prompts and responses flowing between clients and large-language-model back-ends).
It inspects inbound prompts and outbound responses, blocks threats such as prompt injection, model theft, and PII leaks, enforces policies, and optimizes traffic (rate limiting, semantic caching, routing) to handle high volumes of LLM interactions at enterprise scale.
Pain Points
Enterprises adopting F5 AI Gateway face a steep learning curve.
Non-technical Users
Many of the customers I worked with were non-technical users or business teams who were nonetheless expected to configure the AI Gateway.
LLM Configuration
The AI Gateway (routes, policies, backends, processors) is configured entirely via raw code (YAML/JSON).
Mismatch
Strong product capabilities + non-technical user base
Core Challenge
Unless we simplify the configuration, powerful tooling will remain under-used or misconfigured.
DEFINING SCOPE
Deep Dive into YAML and LLM Flow
Translating code logic into a shared mental model
Before designing anything, I needed to understand how the AI Gateway actually thinks; to build something usable, I first had to speak the language of the system.
I spent the first phase learning the YAML configuration structure of the F5 AI Gateway — exploring how routes, processors, policies, and backends interacted in real deployments.
I manually built sample configs, deploying small test cases to trace how each parameter shaped the LLM workflow:
- How prompts were routed to different model backends
- How processors filtered or transformed requests
- How response policies handled token limits, latency, and security rules
To make sense of this complexity, I manually reconstructed the relationships and built a concept diagram that visualized the configuration flow:
Route → Policy → Profile → Processor → Service
This diagram (shown here) became the foundation for my later UI design. It transformed lines of YAML into a clear system map that everyone — designers, engineers, and PMs — could understand and discuss.
By deeply learning the YAML schema and visualizing it step by step, I turned the configuration file into a living workflow: one that could later be abstracted into an interactive visual editor where users “see” how their LLM pipelines connect, rather than read hundreds of lines of code.
Raw YAML configuration for LLM routes, processors, and policies
Concept flow diagram showing logical dependencies and execution order
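To make the mapping concrete, the sketch below shows how those five layers can reference one another in a single configuration file. It is a minimal, hypothetical example using simplified, assumed field names, not the exact F5 AI Gateway schema:

```yaml
# Illustrative only: simplified, assumed field names,
# not the shipped F5 AI Gateway schema.
routes:
  - path: /v1/chat_completions       # Route: entry point for AI traffic
    policy: enterprise-policy        # hands matching requests to a Policy

policies:
  - name: enterprise-policy
    profiles:
      - name: chat-profile           # Policy selects a Profile

profiles:
  - name: chat-profile
    inputProcessors:
      - prompt-injection-guard       # Processor: runs before the model call
    services:
      - openai-gpt4                  # Service: the model back-end

processors:
  - name: prompt-injection-guard
    type: prompt-injection

services:
  - name: openai-gpt4
    endpoint: https://api.openai.com/v1
```

The drill-in sections later in this case study zoom into each of these blocks.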
Design Process
Visualizing the AI Workflow
Direction 1: Individual LLM Route
Once I mapped how each YAML element connected — from Route to Policy, Profile, Processor, and finally Service — the next challenge was expressing that same logic visually. Each configuration type carried its own behavior, dependencies, and parameters. My goal was to translate these layers into a coherent system of components that could communicate hierarchy and relationships at a glance, without overwhelming users.
I started by building a consistent visual grammar. Routes anchored the top as entry points, while policies and profiles formed the logical middle layers, representing the rules and collections that guided model behavior. Processors became the operational building blocks, each labeled by type, and services grounded the flow at the bottom as final destinations or model back-ends. Through shape, color, and hierarchy, I created a visual language that let users see the logic unfold — transforming a dense YAML structure into something that felt more like a map than a form.
Direction 2: Simplified LLM Map
Next, I focused on encoding relationships. Connections between entities became directional arrows that revealed how data moved through the system. Subtle cues in line weight, spacing, and indentation conveyed nesting depth, while small labels such as “Input,” “Selector,” and “Executor” helped users connect the visual flow with the underlying terminology used by engineers. While simplifying the YAML syntax, I made sure not to lose fidelity. Each visual node remained transparent — hovering over an element revealed its corresponding YAML snippet and parameters. This balance of simplicity and technical truth helped non-technical users trust the interface, while engineers could still verify accuracy.
Final Direction: AI Route Map with Drill-in Configurations
The final simplification distilled these layers into a global map where each node represented a higher-level component. This approach preserved system fidelity while allowing users to zoom out, scan complex configurations, and still trust that every visual relationship matched the underlying YAML.
By evolving from exhaustive detail to structured abstraction, the visualization transformed from a static diagram into a scalable mental model — a way to see complexity, not be buried by it.
THE DESIGN
F5 AI Gateway UI
Key Features:
Dual-View Editor — Combines a visual flow map and live YAML editor, allowing users to switch seamlessly between graphical and code views.
Real-Time Code Generation — Every change in the visual map instantly updates the YAML configuration, ensuring accuracy and transparency.
Inline Validation — Schema-aware checks surface errors before deployment, reducing misconfiguration risks (see the sketch after this list).
Scalable System Map — The simplified flow view supports hundreds of services, giving users a clear overview without visual overload.
Search & Filtering — Users can locate routes, processors, or services instantly within large configurations.
Contextual Detail Panels — Hover and click interactions reveal configuration parameters, dependencies, and endpoint details without leaving the main view.
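To show what a schema-aware check can catch, here is a hypothetical JSON Schema fragment (written in YAML syntax) for a route object. The product's real schema is internal, so every field name below is an assumption:

```yaml
# Hypothetical validation schema for a single route entry.
type: object
required: [path, policy]        # a route must declare its path and policy
properties:
  path:
    type: string
    pattern: "^/"               # flags paths that do not start with "/"
  policy:
    type: string                # must name a defined policy
  timeoutSeconds:
    type: integer
    minimum: 1                  # rejects zero or negative timeouts
additionalProperties: false     # unknown keys surface as inline errors
```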
Route Configuration
Route — entry endpoint for AI traffic
A “Route” defines a unique URI path exposed by the AI Gateway that serves as the entry point for incoming requests. It maps the request to a specific “Policy” which then dictates how traffic will be processed, routed and secured. Each route configuration typically includes the endpoint path, the schema (e.g., v1/chat_completions), a reference to the policy to apply, and optional parameters such as timeout constraints.
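As a drill-in, here is what such a route entry might look like, covering the fields described above (path, schema, policy reference, timeout). The key names are illustrative assumptions, not the shipped schema:

```yaml
routes:
  - path: /v1/chat_completions    # unique URI exposed by the Gateway
    schema: v1/chat_completions   # request/response schema for this endpoint
    policy: enterprise-policy     # Policy applied to matching traffic
    timeoutSeconds: 30            # optional timeout constraint
```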
Policy Configuration
Policy — rule set that directs model traffic and access
A “Policy” defines the logic that determines which “Profile” (and thus which processing chain) a given request should follow. Each policy includes authentication settings (such as JWT validation) and a list of profiles with selectors that map requests based on headers, tokens or other criteria.
It essentially acts as the “switchboard” — once a request hits a “Route”, the policy evaluates how to handle it and which downstream profile, processors, and service should be invoked.
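A sketch of that switchboard logic, again with assumed field names: the policy authenticates the request, then uses selectors to route it to a profile:

```yaml
policies:
  - name: enterprise-policy
    authentication:
      jwt:                           # e.g., JWT validation
        issuer: https://auth.example.com
    profiles:
      - name: chat-profile
        selector:
          headers:
            x-team: data-science     # matching requests follow chat-profile
      - name: default-profile        # fallback when no selector matches
```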
Profile Configuration
Profile — defines the request and response pipeline
A Profile configures how prompts and responses are handled end to end. It connects input processors (which inspect or transform prompts) to an LLM Service, then attaches output processors that evaluate the response. Each Profile maps to one or more Services and can apply model-specific rules or selectors. In the visual editor, the Profile serves as the container for the entire workflow.
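In the same illustrative style (field names assumed), a profile wires input processors, a service, and output processors into one pipeline:

```yaml
profiles:
  - name: chat-profile
    inputProcessors:          # inspect or transform the prompt
      - pii-redactor
      - prompt-injection-guard
    services:
      - openai-gpt4           # LLM back-end that receives the prompt
    outputProcessors:         # evaluate the model's response
      - response-validator
```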
Processor Configuration
Processor — modular stage for data transformation
Processors are reusable logic units that inspect, modify, or filter data as it moves through the Gateway. They can run before an LLM call (to detect prompt injection or redact PII) or after it (to validate or summarize responses). Processors can be stacked in sequence and customized with parameters — making them the building blocks of secure, policy-compliant AI pipelines.
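Two hypothetical processor definitions, one for each side of the model call; the types and parameters are illustrative assumptions:

```yaml
processors:
  - name: pii-redactor
    type: pii                   # input side: redact personally identifiable info
    params:
      action: redact
  - name: response-validator
    type: response-validation   # output side: check the model's answer
    params:
      maxTokens: 4096
```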
Service Configuration
Service — connection to LLM back-ends
A Service defines the target model or API endpoint that processes the prompt — for example, OpenAI GPT-4, Azure OpenAI, or Anthropic Claude. It includes credentials, endpoint URLs, and performance parameters such as rate limits or caching. In the visual map, Services sit at the bottom of the flow, representing the final stage where data leaves the Gateway for model inference.
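And a hypothetical service definition grounding the flow; the endpoint and parameter names are assumptions:

```yaml
services:
  - name: openai-gpt4
    endpoint: https://api.openai.com/v1   # model API endpoint
    model: gpt-4
    credentialsSecret: openai-api-key     # reference to stored credentials
    rateLimit:
      requestsPerMinute: 600              # performance guardrail
    cache:
      semantic: true                      # semantic caching for repeated prompts
```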
NEXT Phase
Expanding configurability and interaction
The next phase of the editor focuses on adding form controls as an alternative way to configure the Gateway. While the current visual and YAML views support different user types, form-based input will bridge the gap, giving users a guided, field-by-field way to edit configuration parameters with built-in validation and auto-suggested values.
Looking ahead, the goal is to make the visual map draggable and editable, allowing users to create or modify configuration flows directly on the canvas. This evolution will turn the static visualization into a fully interactive workspace — where every node and connection can be configured, validated, and deployed in place.
Together, these enhancements move the tool closer to a truly bidirectional design experience: code that informs visuals, and visuals that write code.
COLLABORATION
Building alignment across disciplines
This project was highly cross-functional from the start. I partnered closely with solution architects to understand the YAML schema, system dependencies, and performance constraints behind F5 AI Gateway. Working side-by-side with frontend engineers, I translated the concept diagrams into a React-based editor with live schema validation and code synchronization.
Beyond product and engineering, I collaborated with the Go-to-Market and Customer Success teams to ensure the tool aligned with real client needs — simplifying the configuration experience for enterprise customers deploying LLM workflows at scale.
This tight loop between design, architecture, and market execution ensured the solution wasn’t just functional, but launch-ready, scalable, and deeply grounded in customer value.
LAUNCH & IMPACT
Driving adoption and trust across enterprise users
F5 AI Gateway launched in June 2025.
Accelerated Enterprise Adoption — Simplified onboarding for teams managing hundreds of AI routes, reducing setup time and YAML dependency.
Increased Product Adoption — Became a differentiator for F5 AI Gateway, helping acquire multiple new enterprise clients in the AI security and observability space.
Reduced Configuration Errors — Inline validation and visual guidance minimized misconfigurations and deployment risks.
Cross-Product Influence — Established a scalable design framework now reused across other F5 configuration and policy tools.
Enhanced User Confidence — Turned a technical process into a transparent, trustworthy experience that empowered non-technical users.
TAKEAWAYS
Lessons Learned
Deep technical immersion builds design credibility — Learning the YAML schema firsthand helped me design with precision and win trust from engineering.
Scalability matters as much as usability — What works for one flow can collapse under enterprise-scale complexity; designing for both is key.
Abstraction isn’t simplification — The goal is to make complexity visible, not disappear — clarity must coexist with accuracy.
Shared mental models accelerate collaboration — The concept map bridged teams, aligning designers, engineers, and PMs around one visual language.
Progressive simplification drives adoption — Iterating from detailed flow to scalable system view proved that less visual noise leads to higher comprehension and confidence.