Hacker News | codegladiator's comments

Why do LLM-generated websites feel so "LLM generated"?

It's like a Bootstrap CSS theme just dropped. People are still putting "minimum effort" into their vibe-code/engineering projects but slapping a domain on top. Is this to save token cost?


To be honest, if it were my software I'd probably give it a "Prof. Dr." style page and call it a day, then get called out on Hacker News with "haven't you heard of CSS? It's 2025, you actually want to entice people to use your software, don't you?" or similar.

Apparently "Prof. Dr. style" is the name for the aesthetic of pages with no custom styling of any kind, common at universities in the 1990s: https://contemporary-home-computing.org/prof-dr-style/


They feel like that because the people building them are not generally designers, and they don't care about novelty or even functionality as long as it looks pleasing to the eye. Most of them probably include "make it look pretty" etc. in the prompt, and LLMs will naturally converge on a common idea of what is "pretty"; apparently that is purple gradients on everything. And if you don't have taste, you can't tell any better, because you're doing a job you don't fundamentally understand.


This skill demonstrates how to tell an agent to make a non-generic website [1].

These are the money lines:

    NEVER use generic AI-generated aesthetics like overused font families
    (Inter, Roboto, Arial, system fonts), cliched color schemes
    (particularly purple gradients on white backgrounds), predictable
    layouts and component patterns, and cookie-cutter design that lacks
    context-specific character.
    
    Interpret creatively and make unexpected choices that feel genuinely
    designed for the context. No design should be the same. Vary between
    light and dark themes, different fonts, different aesthetics. NEVER
    converge on common choices (Space Grotesk, for example) across
    generations.


[1]: https://github.com/anthropics/claude-code/blob/main/plugins/...


Because their goal isn’t to build a website, but to promote and share their product. Why would anyone invest more time than necessary in a tangential part of the project?


Fair enough. I saw this less as a promotion of the product and more as showing off an experimental side project. But if they really want to promote the product, the LLM design isn't helping to inspire any confidence. A blog post would have sufficed.


It's because LLM tools have design guidelines as a part of the system prompt which makes everything look the same unless you explicitly tell it otherwise.

To give an example that annoys me to no end, Google's Antigravity insists on making everything "anthropomorphic", which gets interpreted as overly rounded corners, way too much padding everywhere, and sometimes even text gradients (a huge no-no in design). So, unless you instruct it otherwise, every webpage it creates looks like a lame attempt at this: https://m3.material.io/


I'd hazard a guess that it's based on what the LLM can "design" without actually being able to see it or have taste, while still reliably looking fine to humans.


And now Perplexity has emailed all of those "free" users to add a "card" (they won't charge now) to continue with its free Pro offer. Apart from Airtel, Perplexity ran a lot of college-based programs where students were basically referring each other for money.


They also offer 12 months free when subscribing through PayPal. There's almost zero chance I remain a customer after those 12 months are over, since I find ChatGPT way more valuable.


They don't need to stay. They just need you as a user long enough to sell off to some dumb investor, or to another company that will find dumb investors.


vlc


Well, the "software" folks started it. I met a "full stack engineer" the other day; that word used to have some meaning as well.


"Full stack" grinds my gears, too. Do they really work from sand to human factors?


I hate sand


Is this a LaTeX quine?


llm generated comment


I am sure your comment is almost always wrong whenever you use it.


https://github.com/josdejong/jsonrepair

Might be useful (I am not the author).
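To show the idea behind that kind of repair, here is a naive sketch (not the jsonrepair library's actual implementation) that patches two defects common in LLM JSON output, single-quoted strings and trailing commas:

```javascript
// Naive sketch of JSON repair -- NOT the jsonrepair library's actual code.
// Handles two common LLM-output defects: single quotes and trailing commas.
function naiveJsonRepair(text) {
  return text
    .replace(/'/g, '"')              // single -> double quotes (naive: breaks on apostrophes inside strings)
    .replace(/,\s*([}\]])/g, '$1');  // drop trailing commas before } or ]
}

const fixed = naiveJsonRepair("{'a': 1, 'b': [1, 2,],}");
console.log(JSON.parse(fixed)); // now parses cleanly
```

The real library handles far more cases (unquoted keys, concatenated strings, truncated output), which is why you'd use it rather than regexes like these.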


How did the Chrome Web Store team approve the use of eval / new Function in a Chrome extension? Isn't that against their ToS?

  Execute JavaScript code in the context of the current page


Not having looked at the extension, I would assume they use the chrome.scripting API in MV3.

https://developer.chrome.com/docs/extensions/reference/api/s...

https://developer.chrome.com/blog/crx-scripting-api


No, this can't be used for remote code. Only existing local code.


Thanks for clarifying. It looks like I needed to refresh my memory of the browser APIs.

Reading further, this API only works remotely for CSS via chrome.scripting.insertCSS. For JS, however, the chrome.scripting.executeScript JS needs to be packaged locally with the extension, as you said.
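The distinction is visible in the call shapes chrome.scripting.executeScript accepts: a file packaged in the extension, or a function serialized from the extension's own source, but never a code string. A sketch (the API only exists inside an extension, so it's stubbed here just to show the shapes; the real call returns a Promise of injection results):

```javascript
// chrome.scripting only exists inside an extension; this stub mimics the two
// call shapes executeScript accepts. Note neither form takes a code *string*,
// which is what keeps remote code out of MV3 extensions.
const chromeStub = {
  scripting: {
    executeScript: ({ target, files, func, args }) => {
      if (files) return `injected local file(s): ${files.join(', ')}`;
      if (func) return func(...(args || [])); // function from the extension's own source
    },
  },
};

// Form 1: a file packaged in the extension (reviewable by the Web Store).
const r1 = chromeStub.scripting.executeScript({
  target: { tabId: 1 }, files: ['content.js'],
});

// Form 2: a function serialized at call time from local code.
const r2 = chromeStub.scripting.executeScript({
  target: { tabId: 1 }, func: (a, b) => a + b, args: [2, 3],
});
```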

It seems the advanced method is to use chrome.userScripts, which allows for arbitrary script injection, but requires the user be in Dev Mode and have an extra flag enabled for permission. This API enables extensions like TamperMonkey.

Since the Claude extension doesn't seem to require this extra permission flag, I'm curious what method they're using in this case. Browser extensions are de facto visible-source, so it should be possible to figure out with a little review.


Doesn’t basically every Chrome extension execute JavaScript in the context of the page?


That's the JavaScript included in the extension's .crx. This is about code retrieved over an API being executed (so the code being run can't have been reviewed by the Chrome Web Store team).


I don't think they mean locally executing JS code generated server-side.


It's a "tool call" definition in their code named 'execute_javascript', which takes a "code" parameter and executes it. The code here is provided by the LLM, which is not sitting locally, so that code is not present "in the plugin binary" at the time the Chrome Web Store team is reviewing it.
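A hypothetical sketch of what such a tool definition might look like (this is NOT Anthropic's actual extension code; the names and shape are made up to illustrate the flow). The point is that the "code" argument arrives from the model at runtime, so reviewers of the packaged extension never see it:

```javascript
// Hypothetical sketch of an 'execute_javascript' tool definition -- not the
// actual extension code. The "code" value is supplied by the model at
// runtime, so it cannot appear in the reviewed package.
const tools = {
  execute_javascript: {
    description: 'Execute JavaScript code in the context of the current page',
    handler: ({ code }) => {
      // A real extension would inject this into the page; here we just
      // evaluate it to illustrate why review can't cover it.
      return new Function(`return (${code})`)();
    },
  },
};

// Simulate the model requesting a tool call with model-supplied code:
const result = tools.execute_javascript.handler({ code: '1 + 2' });
console.log(result); // 3
```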


I'd be very curious to know how they managed to deal with this, then. There's always the option of embedding quickjs-vm within the add-on (as a WASM module), but that would not allow the executed code to access the document.


It seems like they are using the debugger.
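That would fit: the chrome.debugger API lets an extension send DevTools Protocol commands like Runtime.evaluate, which runs arbitrary expression strings in the page without a userScripts permission (at the cost of Chrome's "is being debugged" banner). The API only exists inside an extension, so this sketch stubs it just to show the real call shapes:

```javascript
// chrome.debugger only exists inside an extension; this stub mimics its real
// call shapes (attach / sendCommand with CDP's Runtime.evaluate) to show how
// an arbitrary expression string could be run in a page via the debugger.
const chromeStub = {
  debugger: {
    attach: (target, requiredVersion, callback) => callback(),
    sendCommand: (target, method, params, callback) => {
      // Stub: evaluate locally instead of over the DevTools protocol.
      if (method === 'Runtime.evaluate') {
        callback({ result: { value: eval(params.expression) } });
      }
    },
  },
};

let evaluated;
chromeStub.debugger.attach({ tabId: 1 }, '1.3', () => {
  chromeStub.debugger.sendCommand(
    { tabId: 1 }, 'Runtime.evaluate',
    { expression: '6 * 7' },
    (res) => { evaluated = res.result.value; });
});
console.log(evaluated); // 42
```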


> With Opus 4.5, Claude Code feels like having a god-level engineer beside you. Opinionated but friendly. Zero ego.

Who keeps forgetting variable names and function calling conventions it used 4 seconds ago, while using 136 GB of RAM for the CLI, causing you to frequently force quit the whole terminal. It's not even human level.


And then hallucinating APIs that don't exist, breaking all the unit tests and giving up, saying they're an "implementation detail", and over-engineering a horrific class that makes Enterprise FizzBuzz look reasonable.


Except a god-level engineer wouldn't write unit tests that pass but don't actually test anything because it mocked the responses instead of testing the _actual_ responses, so your app is still broken despite tests passing and "victory!" claims by the "engineer".

Just one example of many personal experiences.

It is helpful, and very very fast at looking things up, sifting through logs and documentation to figure out a bug, writing ad-hoc scripts, researching solutions; but definitely junior-level when it comes to reasoning, you really have to keep your thinking cap on and guide it.


I've been running claude code on a 13 year old potato and it's never used 136GB of RAM - possibly because I only have 8GB.


It's VRAM or something; it makes the OS completely busy even though I have only 32 GB of RAM. Task Manager shows 100+ GB, forcing me to terminate.


Is that VRAM on your GPU? I don't think Claude Code uses that.


Not on the GPU; I think it's just paged memory. You are right, Claude Code isn't running the model locally. Today I've had to kill it 5 times so far.

edit: https://ibb.co/Fbn8Q3pb

that's the 6th


Why do you think it's Claude and not iTerm?


I've been using iTerm for 10 years and didn't update recently. Claude Code is the only new factor in my setup. I can visibly predict, while using Claude Code, when it's about to happen (when the conversation goes above 200 messages and then uses sub-agents, leading to somehow infinite re-rendering of the message timeline; they seemingly use an HTML-to-terminal rendering thing because ...). So yeah, maybe you are right that iTerm is not able to handle that re-rendering, or maybe the monitor is broken.


I use xterm, and the visual glitch doesn't crash anything, so maybe try that? I suspect though maybe you're using much longer sessions than I do, with the talk of sub agents and all.

I've mostly just been using it for single features and then often just quitting it until I have the next dumb idea to try out.


Context is garbage in, garbage out.


My entire codebase is in a certain style that's very easy to infer from just looking around in the same file, yet Claude Code routinely makes up its own preferences and doesn't respect the style even given an instruction in CLAUDE.md. Claude Code brings its own garbage even when there's plenty of my own garbage to glean from. That's not what GIGO is supposed to be.


There is a reasonable argument that your question is at least NP, and plausibly NP-hard or harder depending on how you formalize the verification oracle.

