
TL;DR
Adrian Krebs scored 500 Show HN landing pages against 15 AI design patterns. 21% were heavy slop, 46% mild, 33% clean. Here is the pattern list, the method, and why it matters even when you are the one shipping.
If you have been browsing Show HN for the past six months you have felt this without being able to name it. The pages look fine. They are coherent. They are polished. And they all somehow look like the same page.
Adrian Krebs gave the feeling a name and a measurement. His post, "Scoring Show HN submissions for AI design patterns," sits near the top of HN tonight at 277 points and 205 comments. He ran the 500 latest Show HN landing pages through Playwright, scored each one against fifteen DOM and CSS patterns that designers he talked to described as tells, and binned the results.
The numbers: 21 percent of the 500 pages scored slop-heavy, 46 percent mild slop, and 33 percent clean. Two-thirds of what you see on Show HN right now has a visual fingerprint that says "generated by a chat interface without an opinion." That is a lot. It is also why the Show HN stream has started to feel samey. The generator is the same. The defaults leak through.
Krebs grouped the tells into four buckets. This is the full list, because if you are shipping with Claude Code or Cursor right now, this is the checklist you should be running your own landing page against.
Inter is a wonderful typeface. It has also become the Helvetica of the LLM era. Every generated landing page defaults to it unless you specifically ask for something else. If you want to stand out, start by not using Inter.
Dark mode with purple accents is the default aesthetic the LLMs reach for when you do not specify one. It feels "modern" in a way that is so universal it has become invisible. The contrast issue is the biggest functional problem - generated dark themes routinely ship body text that fails WCAG AA.
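The WCAG AA failure is checkable with nothing but arithmetic. Here is a minimal sketch of the standard WCAG 2.x relative-luminance contrast check (the formula is from the spec; the sample colors are my own illustration of a typical generated dark theme, not values from Krebs' data):

```python
# WCAG 2.x contrast check. AA requires a 4.5:1 ratio for normal body text.

def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG formula."""
    v = c / 255.0
    return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> bool:
    return contrast_ratio(fg, bg) >= 4.5

# A typical generated-dark-theme pairing: mid-grey text on near-black.
print(passes_aa((120, 120, 120), (17, 17, 24)))  # → False (fails AA)
```

Pull the foreground and background colors out of `getComputedStyle` and this becomes one of the deterministic checks.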
The colored-left-border card is the most specific tell in the list. A designer Krebs quoted said "colored left borders are almost as reliable a sign of AI-generated design as em-dashes for text." Once you notice it you cannot stop noticing it.
The two dominant CSS fingerprints are shadcn/ui defaults and glassmorphism. shadcn in particular is a library that is explicitly designed to be copy-pasted by AI agents, which means every AI-generated landing page without stylistic intervention converges on the shadcn visual. Glassmorphism is the frosted-glass card treatment that had a moment in 2022 and has been the LLM default ever since.
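Both of these tells, and the colored left border above, live in computed styles, so they reduce to small predicates over a style dictionary. A hypothetical sketch in that spirit (Krebs has not published his exact rules, so these detectors and thresholds are guesses):

```python
# Hypothetical detectors over a computed-style dict, e.g. one scraped from
# a page via Playwright and getComputedStyle. Not Krebs' actual rubric.

def looks_glassmorphic(style: dict) -> bool:
    """Frosted-glass tell: a blur backdrop-filter plus a translucent fill."""
    backdrop = style.get("backdrop-filter", "none")
    bg = style.get("background-color", "")
    return "blur" in backdrop and "rgba" in bg

def has_colored_left_border(style: dict) -> bool:
    """Colored-left-border card: a visible left border whose color differs
    from the top border, i.e. an accent stripe rather than a uniform outline."""
    if style.get("border-left-width", "0px") in ("", "0px"):
        return False
    return style.get("border-left-color") != style.get("border-top-color")
```

The point is that neither check needs a model to run; they are string and equality tests on values the browser already computed.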
The part of Krebs' write-up I want to highlight is the scoring method. It is cheap, reproducible, and something you could run against your own site this afternoon.
The deterministic design is the important part. Letting an LLM grade AI slop by eye would introduce the exact bias you are trying to measure. Deterministic checks against computed styles take the LLM out of the scoring loop. Krebs reports a 5-10 percent false-positive rate on manual QA, which is tolerable for bucketing.
If you want to adapt this for your own internal use, the checklist is small enough to implement in a few hours. Write fifteen functions that each answer "does this page trigger this pattern." Run them against your homepage, your pricing page, your docs. Bucket the result. Decide where you want to be.
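The adaptation described above can be sketched in a few lines. Everything here is illustrative: the two example predicates, the hue band, and the bucket cut-offs are my assumptions, since Krebs does not publish his exact bins. In a real run you would populate the `page` dict from Playwright's computed-style output.

```python
# Minimal sketch of the "fifteen functions, then bucket" loop.
from typing import Callable

# Each check answers: "does this page's extracted style data trigger
# this pattern?" `page` is a plain dict here for illustration.
Check = Callable[[dict], bool]

def uses_inter(page: dict) -> bool:
    return "Inter" in page.get("body-font-family", "")

def purple_accent(page: dict) -> bool:
    # Assumed "lavender band" of hues; tune to taste.
    return page.get("accent-hue", -1) in range(250, 290)

CHECKS: list[Check] = [uses_inter, purple_accent]  # ...thirteen more in practice

def slop_score(page: dict) -> int:
    return sum(check(page) for check in CHECKS)

def bucket(score: int, total: int = 15) -> str:
    # Cut-offs are assumptions, not Krebs' published bins.
    if score >= total * 0.4:
        return "slop-heavy"
    if score >= total * 0.15:
        return "mild slop"
    return "clean"
```

Running `bucket(slop_score(page))` over your homepage, pricing page, and docs gives you the same three-way read Krebs applied to Show HN.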
A few counter-arguments to head off before they come up.
"But my landing page works." Yes. That is Krebs' read too. He explicitly says AI design slop is not bad, just uninspired. Validating a business was never about fancy design. The pre-LLM equivalent was everyone using Bootstrap. The practical failure mode is not that slop pages do not convert, it is that they stop standing out in a sea of identical slop pages. Differentiation gets more expensive, not less, as the defaults improve.
"I care about shipping, not design." Then ship ugly on purpose rather than ship slop by accident. An ugly page with a clear point of view is more memorable than a generic page with no point of view. If you are resource-constrained, the cheapest way to stand out is to pick a single strong opinion (a loud color, a bold type choice, an uncommon layout) and commit to it. A slop page is the expensive option, because it uses up design budget without giving you distinctive assets at the end.
"This is just taste-gatekeeping." It is and it isn't. The patterns on Krebs' list are measurable. They are the output of a generator with known biases. Noticing them and making deliberate choices against them is not gatekeeping, it is taste calibration in an era where the default aesthetic is being mass-produced. You can still choose shadcn and a purple accent. Just do it because you want to, not because that is what the model gave you.
Krebs does not go deep on what separates the clean third from the slop-heavy fifth, but the pattern is consistent across sites I have audited with the same checklist. Clean sites do three things.
They pick a color palette that is not the LLM default. Warm earth tones, or high-contrast black-and-a-single-bright, or a Gumroad-ish cream-and-pink, or a Stripe-ish grey-and-blue. Anything with a point of view. Explicitly not the default lavender.
They pick a type system that is not Inter. Geist, Haas Grotesk, Untitled Sans, Söhne, Inktrap, Migra, anything else. Pair it with a body font that is not also Inter. The contrast wakes the page up.
They use one strong layout primitive and repeat it. Not seven feature cards with seven different icon treatments. Not three stat banners and four step sequences and a sidebar with emojis. One primitive, repeated until it becomes the site's visual signature. This is the single highest-leverage discipline on the list.
Krebs teased open-sourcing the scoring code, saying "let me know if there is interest." This is worth asking for. A small CLI that runs a Playwright scoring pass over any URL and returns a slop score would be a useful piece of infrastructure. It belongs next to Lighthouse in the pre-launch checklist.
If he ships it, great. If he does not, it is a weekend project for someone else to build. The primitives exist. The scoring rubric is public. The market is every single developer who just shipped a landing page this week with Cursor and is wondering if the reason their launch tweet fell flat is that they accidentally shipped slop.
Put differently: you can now measure the visual output of your AI stack against a fifteen-item checklist. The measurement is cheap. The fix is mostly just making deliberate choices. That is a better loop than hoping your design instincts have survived a year of chat-interface defaults.
The full essay with screenshots is at adriankrebs.ch/blog/design-slop. It takes ten minutes and it will permanently change how you read a Show HN stream.