Why can’t we tell AI what we want?
Jun 2025
AI can generate text, images, and code, but everything looks the same. Why is there no personal style?
The obvious answer is that the models aren’t creative. They follow patterns in their training data. So when you ask a tool like Lovable for a “professional dashboard,” you’ll get the most typical dashboard it has seen.
But here’s what’s interesting: if the model just follows probabilities, then the sameness isn’t really an AI problem. It’s a prompting problem. We’re not giving AI enough information to deviate from the statistical average.
Most people don’t know how to express their intent clearly, and current AI interfaces don’t help. When I ask Lovable to generate a dashboard to track my evening runs, I get something that looks like every other dashboard. This isn’t necessarily bad, but what if I had something specific in mind? The problem is, I probably don’t know what I have in mind until I see what I don’t want.
Let’s break this down. Most AI interfaces today rely on free-form chat, which seems natural but creates a hidden problem: it assumes we already know exactly what we want, and that we can express it perfectly. In reality, we rarely do. That open-endedness demands self-awareness and a precise articulation of intent. For everything we don’t mention, the AI fills in the blanks with the most probable options.
So how can we design better human-AI co-creation systems? We can look at human-human co-creation to find the answer. When humans work together, we have natural mechanisms for clarifying intent. If I tell a designer to “make it more elegant,” they’ll ask me follow-up questions or show me some options. The conversation itself helps us discover what we actually want. Current AI interfaces skip this step.
You might think iteration solves this: get a first version, see what’s wrong, then revise. But in practice it breaks down: when you prompt for changes, the AI often changes things you didn’t want changed. You ask it to move a button, and suddenly your fonts are different. People work around this by padding their prompts: “Make no other changes besides what I just described.” But that cannot be the right solution. And a more sophisticated chat history won’t help either: we might prefer the sidebar style from two versions earlier, but reloading that version reverts every other change, too.
It’s also tempting to think that better models will eventually solve this problem. But I’m not so sure. If we don’t understand what an AI is capable of, we don’t know how to ask for what we want. And if we can’t clearly express what we want, the tool has no choice but to fall back on the patterns it has already seen.
What would an interface that actually supported this look like? I think it would feel less like commanding and more like thinking out loud with a capable partner. The UI itself would become part of the creative process, not just a way to issue instructions.
Throughout history, the best tools have been extensions of human thinking, not replacements for it. The pencil doesn’t know what to draw, but it makes the act of drawing feel like pure thought. Maybe the best AI interfaces won’t be about better prompting. They’ll be about helping humans discover their own intentions through the process of creation itself. But if that’s true, what would the best UI for co-creating with AI actually look like? Interesting question. I should think about that.