Generative User Interfaces

May 6, 2025

Way back in December 2023, Google posted a video that still feels ahead of its time to me.

In the demo, the user asks for animal-themed birthday party ideas for his daughter. The chatbot replies with an interactive widget offering ideas ranging from Under the Sea to Unicorns. Nothing too crazy on the surface.

But as the demo continues and shows how this works under the hood, you can see that the widget was not a "canned" component. Rather, the application is going through a series of steps that essentially mirror the classic software development process—it identifies the user's problem, defines requirements for a solution, generates the UI, and links it to structured data.

Today, a year and a half later, I think this idea still has a lot of territory left to uncover, and I wanted to explore it on my own. If I can use an LLM to create a user interface based on plain language requirements, and I can also use an LLM to generate those requirements, couldn't I just connect those two steps together and generate a UI on the fly?

To see this idea in action, I picked a simple concept—build a component that collects product reviews. But instead of a generic review form, we'll use an LLM to create a custom form on the fly based on the product description. The form for a car should ask about the gas mileage, but the form for a sweater should ask about the fit.

I wrote up a fairly simple prompt describing my idea. The Replit Agent had a working version in one shot, and a few iterations later it was very presentable.
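Under the hood, the core step is a single LLM call that turns a product description into a structured list of form fields. Here's a minimal sketch of that step, assuming the OpenAI Node SDK; the prompt wording and the `FormField` schema are illustrative, not the project's actual code:

```typescript
import OpenAI from "openai";

// The shape we ask the model to return: a flat list of form fields.
// Ids, labels, and types here are illustrative, not the project's actual schema.
interface FormField {
  id: string;                           // machine-friendly key, e.g. "gas_mileage"
  label: string;                        // question shown to the user
  type: "rating" | "text" | "boolean";  // which widget to render
}

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateReviewForm(productDescription: string): Promise<FormField[]> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // any JSON-capable model works here
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You design product review forms. Given a product description, return JSON " +
          'shaped like {"fields": [{"id": "...", "label": "...", "type": "rating|text|boolean"}]} ' +
          "with four to six questions specific to that product.",
      },
      { role: "user", content: productDescription },
    ],
  });

  const parsed = JSON.parse(response.choices[0].message.content ?? "{}");
  return (parsed.fields ?? []) as FormField[];
}

// The sweater form should come back asking about fit; the car form about mileage.
generateReviewForm("A hand-knit wool sweater").then(console.log);
```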

An animated GIF showing a product review form for a DeLorean sports car being generated in real time.

Here, you can see the form to review a "DeLorean Sports Car" generated in real time. There are some general questions, followed by car-specific topics like acceleration, handling, and comfort.

The form for "coffee maker" asks about brew quality, ease of cleaning, and if there is a timer function. The form for "sofa" asks about comfort, durability, and style.

An even more interesting version of this approach could incorporate some aspect of the user's context into the form—for example, if we know the user has owned the car for three years, we could also ask about maintenance history.
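As a rough illustration, that context could simply be folded into the same generation prompt. The sketch below builds on the `generateReviewForm` function from earlier; the `UserContext` shape is hypothetical:

```typescript
// Hypothetical extension of the generation sketch above: fold what we know about
// the user into the same prompt. The UserContext fields are invented for illustration.
interface UserContext {
  ownedForMonths?: number;  // e.g. 36 if they have owned the car for three years
  pastReviewCount?: number; // how many reviews they have already written
}

async function generateContextualForm(
  productDescription: string,
  user: UserContext,
): Promise<FormField[]> {
  const notes: string[] = [];
  if (user.ownedForMonths != null) {
    notes.push(`The reviewer has owned this product for ${user.ownedForMonths} months.`);
  }
  if (user.pastReviewCount != null) {
    notes.push(`They have written ${user.pastReviewCount} previous reviews.`);
  }
  // Reuse generateReviewForm from the earlier sketch; the extra context nudges the
  // model toward questions like maintenance history for long-time owners.
  return generateReviewForm([productDescription, ...notes].join("\n"));
}
```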

I'm curious how this pattern might evolve beyond simple forms into more complex interfaces, or even entire applications. The greatest strength of chatbots is their open-ended, adaptable nature, but that same openness can make it harder for users to discover what's possible and how to get the output they need. Dynamic UI generation could give us the best of both worlds: interfaces that are simple and discoverable, but that expand in complexity when and where the user needs it.

Of course, there are challenges. Generating interfaces that are consistent, accessible, and truly intuitive requires more than just connecting a few LLM components. In particular, this approach raises questions around security, and around keeping generated interfaces from feeling chaotic or unpredictable to users.

But the potential is enormous. Generative UI may fundamentally change how we think about software interfaces—moving from manually designed components to dynamically generated experiences that adapt to each user's unique context and needs.

The source code for this project is available on GitHub. I'd love to hear your thoughts about other ways this pattern could be applied.