Slop didn't start with AI
I've been sitting with this tweet for a few days now. Andre Landgraf's original point is sharp, and I think it extends well beyond code into design, copy, illustration, and pretty much everything else made on a screen.
The assumption underneath a lot of AI criticism is that human-made work has some inherent quality guarantee. That before AI, the default was craft. And now that AI is involved, the default is slop. I don't think that holds up when you actually look at the volume and quality of work that was being produced before any of this existed.
The record was already full of slop
Twenty years in design and development gives you a long view. And here's what that view actually looks like: an enormous amount of human-made work has always been low-effort, derivative, and forgettable.
Think about the template abuse era. Designers copying ThemeForest templates wholesale, swapping logos and calling it a brand. Stock photo websites full of identical layouts, the same hero section with a smiling team in a glass office. Client work rushed out the door because the brief was vague, nobody pushed back, and the deadline was yesterday. Spec work that was really just recycling whatever won the last award show.
None of that required AI. Humans did that entirely on their own.
The underlying point is that slop is the output of low intent, not of any particular tool. When someone doesn't care about the work, doesn't have strong taste, or is just trying to hit a number, the result reflects that. It was true with Photoshop. It was true with WordPress themes. It was true with every stock illustration pack ever made.
Why we don't see human slop as clearly
So why does the "AI = slop" assumption feel so intuitive? I think there are three honest reasons worth naming.
The first is romanticism about craft. We've built up a cultural story that struggle equals value — that if something was hard to make, it must be worth something. Hand-coded means care. Drawn from scratch means soul. There's something real in that story, but it's not universally true. Plenty of hand-crafted things are mediocre. Plenty of template-built things are thoughtful. The effort doesn't guarantee the outcome.
The second is survivorship bias. We mostly see the human work that made it: the award-winning campaigns, the praised portfolios, the sites that got featured. The vast sea of forgettable work just quietly disappears. It doesn't get shared or celebrated, so we don't have a vivid picture of how much of it existed. AI-generated slop, by contrast, is very visible right now because people are actively pointing it out.
The third is something more uncomfortable: fear. A lot of the "AI makes slop" narrative is a defensive posture. If AI only produces low-quality work, then the skills people have spent years building are still safe. I get that. The anxiety is real. But building the argument on a shaky premise doesn't do anyone any favors in the long run.
The tool doesn't determine quality, the person does
Here's my actual take on AI output quality right now: it raises the floor, but reaching the ceiling still requires someone with judgment behind it.
And more than that, the quality of what you get out of AI is a direct reflection of the quality of what you've put into the system around it. Not just the prompt in the moment, but the context you've built over time. When I'm working in Claude Code, the outputs are meaningfully better because of the groundwork: a CLAUDE.md that defines how the project works, skills files that encode specific patterns and preferences, a system shaped by considered decisions upstream.
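To make that groundwork concrete, here's a rough sketch of the kind of thing a CLAUDE.md can hold. This is a hypothetical fragment, not my actual file, and the specifics (stack, conventions) are invented for illustration:

```markdown
# CLAUDE.md — hypothetical project context

## How this project works
- Next.js app; components live in `src/components`, one file per component.
- Styling is Tailwind utility classes only; no inline styles.

## Patterns and preferences
- Prefer small, composable functions over clever abstractions.
- Write UI copy in a plain, direct voice; no marketing filler.

## Decisions already made
- Accessibility is non-negotiable: semantic HTML, labeled inputs.
```

None of this is sophisticated. It's just intent, written down upstream, so the system doesn't have to guess.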
That's not really different from any other creative or technical workflow. A good design brief produces better work than a vague one. A well-structured content strategy produces better writing than "just write something." Garbage in, garbage out has always been true. AI just makes the gap more visible, faster.
All that to say: if you hand AI a weak input with no context and no taste applied, you'll get weak output. If you hand a junior designer a weak brief with no direction, you'll get weak output. The tool changes the speed and the scale, but the underlying dynamic is the same.
What actually matters
I'm not saying AI can't be used to produce slop at scale. It absolutely can, and the volume of low-effort AI content is genuinely a problem worth paying attention to. But that problem isn't new and it isn't unique to AI. It's a people problem dressed up as a technology problem.
What I keep coming back to is this: quality has always been downstream of intent. The people who cared about the work — who had taste, who pushed back on bad briefs, who iterated past the first acceptable version — produced good work regardless of what tools they were using. That's still true.
The question worth asking isn't "was this made by a human or an AI?" It's "did someone with genuine intent and good judgment make the decisions that shaped this?" Sometimes that person is writing every line by hand. Sometimes they're building careful context around an AI system and directing its output with a clear eye.
The tool has changed. The thing that makes work good hasn't.
❖
P.S. If this post made you think — if it felt useful, considered, not like slop — here's the honest irony: Claude helped me write every word of it. But not before I defined the idea, formed the opinion, and brought the context and intent entirely from my own experience. For years, perfectionism kept me from putting pen to paper at all. Claude didn't give me the thought. It helped me examine it, expand on it, and finally get it out of my head. Which is, I think, exactly the point.

