Hacker News | new | past | comments | ask | show | jobs | submit | prymitive's comments

Sounds like somebody needs good numbers for IPO

They own Eufy, which sells cameras whose main selling point is “no subscription needed”, but they are very unreliable and full of ads (which isn’t advertised as much as the lack of a subscription). They also go big on labelling a lot of simple features as AI when in reality it’s something as simple as “detect a person in a photo”. I have Eufy cameras and they’re complete garbage; sadly the competition is also mostly garbage. Bold unsustainable claims are at the core of their business, it’s not just thumbnails.

That’s my daily experience too. There are a few more behaviours that really annoy me, like:

- it breaks my code, tests start to fail and it instantly says “these are all pre-existing failures” and moves on like nothing happened
- or it wants to run a command, I click the “nope” button and it just outputs “the user didn’t approve my command, I need to try again”, and I need to click “nope” 10 more times or yell at it to stop
- and the absolute best is when, instead of just editing 20 lines one after another, it decides to use a script to save 3 nanoseconds, and it always results in some hot mess of botched edits that it then wants to revert by running git reset --hard and starting from zero. I’ve learned that it usually saves me time if I never let it run scripts.

> it breaks my code, tests start to fail and it instantly says “these are all pre existing failures” and moves on like nothing happened

Reminds us of the most important button the "AI" has over the similarly bad human employee:

'X'

Until, of course, we pass responsibility for that button to an "AI".


The other day Codex on Mac gained the ability to control the UI. Will it close itself if instructed though? Maybe test that and make a benchmark. Closebench.

My point was more: will it stop the user closing it?

As ASAP As Possible


As asap as possible or you can say rip in peace to yourself


Sounds like a job for an agentic tool that can produce human like sentences on interval …


> they're doing as well as professionals do without oversight on production environments

The difference is that if a human does it there usually is some accountability: you’ll be asked how it happened and expected to learn from it. And if you do it again your social score goes down, nobody will trust you and you’ll be considered a liability. If a CLI tool does it the outcome is different: you might stop using the tool, or you might blame yourself for not giving the tool enough context. And if it does it again you might just shrug it off with “well of course, it’s just a tool”.


Accountability according to reputation is exactly what is happening for AI providers. All these articles about Claude destroying systems make people trust Claude less, and maybe even “fire” Claude by choosing another AI provider with better safeguards or low privileges built in.


There’s also entitlement from just using it: my org uses your software and there’s what we consider a bug, so you MUST fix it as asap as possible, and in the future don’t release buggy software because it costs us time and money.


My biggest annoyance with GitLab is that the UI is one huge block of text on a white background with absolutely no distinction between user content (like commit messages) and the interface itself, plus you have action buttons buried in the middle of the page. And everything loads asynchronously, causing blocks to dance around when the one above loads …


> My biggest annoyance with girls

Pardon?


There are still a few things missing from all models: taste, shame and ambition. Yes, they can write code, but they have no idea what needs that code solves, what a good UX looks like and what not to ship. Not to mention that they all eventually go down rabbit holes of imaginary problems that cannot be solved (because they’re not real), where they will spend eternity unless a human says stop, right now.


While I agree, this is also true of many engineers I’ve met.


They have a severe lack of wisdom, as well.


Adding more LSP features to the jinja linter for saltstack that I wrote, so you can see all the bugs in your templates from VSCode (rather than waiting for CI) and do things like “rename this jinja variable everywhere it’s being used”.
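
A rename-everywhere feature like that boils down to finding every usage of a jinja variable and producing text edits for each one. A minimal stdlib-only sketch of the core idea (the function name and the exact delimiter handling are hypothetical, not the actual linter's code):

```python
import re

def rename_jinja_variable(template: str, old: str, new: str) -> str:
    """Rename a jinja variable everywhere it appears inside
    {{ ... }} expressions and {% ... %} statements, leaving
    plain template text untouched."""
    # Match only whole identifiers so renaming "user" doesn't touch "username".
    ident = re.compile(rf"\b{re.escape(old)}\b")

    def rename_inside(match: re.Match) -> str:
        return ident.sub(new, match.group(0))

    # Apply the rename only within jinja delimiters; re.S lets
    # expressions span multiple lines.
    return re.sub(r"({{.*?}}|{%.*?%})", rename_inside, template, flags=re.S)

src = "{% set user = 1 %}\nHello {{ user }}, dear user!"
print(rename_jinja_variable(src, "user", "owner"))
# -> {% set owner = 1 %}
#    Hello {{ owner }}, dear user!
```

A real LSP rename handler would return `WorkspaceEdit`s with ranges instead of rewriting the string, but the scoping problem (only touch identifiers inside jinja delimiters) is the same.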

