I think the more critical thing is an open UI protocol.

Let me explain: imagine generative AI gets good enough that we can just generate a UI on the fly. Taken to its extreme, every user could have a personal set of UI components catered to their preferences (dark mode, blue color scheme, bigger fonts, etc.). Then, instead of every business designing its own UI for its website, it would just send over the information, and the UI would be compiled for each user based on their own personal set of blocks.

We would very quickly need some sort of standard protocol to make this work. I think that would be a far more efficient world, because companies could focus on the content rather than on tweaking design, and every user would have a lot more control over their own experience.
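
A minimal sketch of what such a protocol could look like (all type and field names here are invented for illustration): the business ships semantic content only, and each user's personal block set turns it into markup.

    // Hypothetical wire format: the business sends semantics, not pixels.
    type ContentNode =
      | { kind: "heading"; text: string }
      | { kind: "paragraph"; text: string }
      | { kind: "action"; label: string; href: string };

    type Renderer = (node: ContentNode) => string;

    // One user's personal block set: dark mode, big fonts, blue accents.
    // A different user plugs in different blocks for the same payload.
    const myBlocks: Renderer = (node) => {
      switch (node.kind) {
        case "heading":   return `<h1 class="dark big-font">${node.text}</h1>`;
        case "paragraph": return `<p class="dark">${node.text}</p>`;
        case "action":    return `<a class="blue" href="${node.href}">${node.label}</a>`;
      }
    };

    // What a business would "send over": content only, zero styling.
    const page: ContentNode[] = [
      { kind: "heading", text: "Order #1234" },
      { kind: "paragraph", text: "Your package ships tomorrow." },
      { kind: "action", label: "Track it", href: "/track" },
    ];

    document.body.innerHTML = page.map(myBlocks).join("");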

Of course, a lot of companies may want to control the experience themselves, so maybe it's not one way or the other. But a good chunk of websites could use this pattern, and in time it may actually become an advantage, as users come to expect you to follow the UI protocol.



Saw a talk from a dev at Amazon along these lines recently.

The general concept is called “server-driven UI” (SDUI), and they talked about experimenting with a completely AI/LLM-powered frontend. It has too many problems today for practical use (an LLM-built FE sucks at accessibility, not to mention the overall cost!), so they instead tried a half measure.

Their FE team makes a series of generic components (“primitives”) and the AI then picks among them to “build” the FE on demand. That’s the “control the experience” thing you’re getting at.

They then (hand wave) allowed the LLM access to a customer data DB.

This experiment never shipped, but it would let customers search things like “what movies will I like?” and get a cogent FE despite no engineer having built that feature specifically.
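
Roughly, the primitive-picking pattern looks like this (a sketch with invented names, not Amazon's actual system): the model only ever emits JSON referencing vetted components, so accessibility and styling stay under the FE team's control.

    // The FE team owns a small registry of vetted, accessible primitives.
    type Primitive =
      | { type: "Text"; body: string }
      | { type: "Carousel"; title: string; itemIds: string[] }
      | { type: "Button"; label: string; action: string };

    // The LLM never emits markup, only primitive descriptors.
    // For "what movies will I like?" it might return:
    const llmResponse: Primitive[] = [
      { type: "Text", body: "Based on your watch history, try these:" },
      { type: "Carousel", title: "Picks for you", itemIds: ["m101", "m202"] },
    ];

    // The client renders only known types, so the model can't ship
    // arbitrary markup no matter what it hallucinates.
    function render(p: Primitive): string {
      switch (p.type) {
        case "Text":     return `<p>${p.body}</p>`;
        case "Carousel": return `<section aria-label="${p.title}">${p.itemIds.join(", ")}</section>`;
        case "Button":   return `<button>${p.label}</button>`;
      }
    }

    console.log(llmResponse.map(render).join("\n"));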


Wouldn't the ability to style the existing HTML-native elements and user stylesheets handle most of this ask? It seems that the former is a major goal of this initiative.


Very happy to see someone else thought of this too.

I see the endgame as one in which services just expose documentation for their APIs, and the AI figures out, based on your request, what to call and how to present the results according to your preset preferences.

The responsibility for discoverability would also shift from the UI/UX designer to the AI.
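
A rough sketch of that flow (endpoint names and doc format invented; a real system would need some shared schema): the AI picks an endpoint from the published docs, and presentation is handed to the user's own preferences rather than a hand-built UI.

    // A service publishes machine-readable docs for its endpoints.
    const apiDocs = [
      { name: "searchMovies", url: "https://api.example.com/v1/movies?q=", doc: "Full-text movie search" },
      { name: "getRecs",      url: "https://api.example.com/v1/recs?user=", doc: "Personalized recommendations" },
    ];

    // Stand-in for the AI's doc-matching step; a real assistant would
    // reason over the `doc` strings with an LLM, not keyword-match.
    function pickEndpoint(request: string) {
      return /like|recommend/i.test(request) ? apiDocs[1] : apiDocs[0];
    }

    // Discoverability now lives in this selection step: the user never
    // sees the API, only results rendered by their own block set.
    async function ask(request: string, arg: string) {
      const ep = pickEndpoint(request);
      const res = await fetch(ep.url + encodeURIComponent(arg));
      return res.json(); // hand off to the user's personal blocks to render
    }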

The potential obstacle here is that a lot of companies make their money from the UI/UX layered on top of the service itself, e.g. by adding dark patterns and visual cues, collecting usage-pattern data, and showing you ads.



