Hacker News | maplethorpe's comments

Are there any companies that aren't AI-pilled at this point?

Odoo, Belgium, cloud ERP. Not very AI-pilled, even if AI is considered and used in some places.

Odoo suffers from other issues though. Not sure if this is still the case, but the mix of inline Python 2 Flask + XML was basically tech debt-as-a-service.

Also the very ugly death they gave OpenERP/Odoo on-premise.


If you make a hundred dollars in 0.1 seconds, you could say your annualized revenue is $100 / 0.1 × 60 × 60 × 24 × 365 ≈ $31.5 billion.

That said, most people would use a monthly or quarterly period to estimate ARR. I'm not sure what Anthropic used. Probably monthly.
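A quick sketch of that extrapolation (the function name is my own, not anything Anthropic publishes), showing why a tiny sample window produces an absurd annualized figure while a monthly window is far less sensitive to spikes:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def annualized(revenue: float, window_seconds: float) -> float:
    """Extrapolate revenue earned during a sample window to a full year."""
    return revenue / window_seconds * SECONDS_PER_YEAR

# $100 earned in 0.1 seconds extrapolates to ~$31.5 billion/year
print(f"${annualized(100, 0.1):,.0f}")

# The same rate sampled over a month smooths out lucky spikes:
print(f"${annualized(1_000_000, 60 * 60 * 24 * 30):,.0f}")
```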


Isn't the technology that enabled the deception noteworthy? Presumably this person wouldn't have been able to do this before AI.

Hypothetically, if a hacking tool was released that let non-technical people hack into sensitive databases, and then a journalist wrote the headline "local man hacks IRS", without any mention of the tool, wouldn't that be a bit irresponsible, to purposely leave that information out?


> Presumably this person wouldn't have been able to do this before AI.

Photoshop? I don't think you need much skill.


To make a shooped image good enough to fool the police into thinking they're looking at a completely real picture, you'd think it would take a reasonable amount of skill. If nothing else you need an exact match picture in terms of lighting and perspective.

I guess people here are too young to remember things like the WTC plane guy. Half the people online thought it was genuine, while he did it for the lulz in a few minutes. Nobody cared about inconsistent lighting and perspective. Same way most people don't care about the obvious hallmarks of diffusion model generated pictures today.

I'm not too young. I can't remember if I thought it was real at the time, but if I did, I give myself a pass since I was probably viewing it on a 15 inch CRT at 1024x768.

Because we're talking about the ease of Photoshopping a wolf into a scene, I think it's also worth pointing out that floating objects are a lot easier to work with than grounded objects, since cast shadows and bounce lighting are less of an issue. Having said that, it would still require some basic skill to achieve the WTC image which I think you're discounting. You'd need a working knowledge of layers, masks, and the lasso tool, which already would have placed it out of reach for most people at the time. Online resources were much more scarce, so I wouldn't be surprised if this guy was a hobbyist photographer or graphic designer. It definitely wouldn't have been achievable in a few minutes for the average person, and doing the same thing with a wolf would have been far more difficult, and well outside the realm of possibility for anyone who wasn't an expert.


The picture in the article also doesn't look very high res. So it's actually the exact same circumstances as WTC guy, except the police actually cared enough to act on the picture but not enough to verify it. You could take all these arguments here and apply them 1:1 to photoshop in the late 90s / early 2000s. Back then it was also easier than ever before to manipulate images and non-experts could suddenly do what only professional forgers could before. AI has merely slightly lowered the bar further to the point that even people who have trouble turning on a PC can do it now.

> AI has merely slightly lowered the bar

I guess we're not going to agree on just how far that bar has fallen. Learning Photoshop as a teen got me my first job. The only reason I had one at all was because most people couldn't do a very convincing job of it. Now even my mom, a person who struggles to open her email, can do a better photoshop than me.


A person who had a Photoshop licence, had played around with layers and colour balance before and was sufficiently motivated to make it look convincing to spend a bit of time tidying it up, sure they could. But I'm not sure that necessarily applies to random people making funny memes of the wolf in their neighbourhood...

Creating a photorealistic mashup in Photoshop, without AI, takes a lot of skill. Just getting the shadows looking correct takes enough skill in itself, and that's only part of it.

Have you used Photoshop before? You come across as commenting on something you don't understand.


People have lied to the authorities without AI.

I believe they've stated that it would be too dangerous to release.

Just like OpenAI said GPT 2 was too dangerous to release?

There was just an article on this phenomenon today: https://hackernews.hn/item?id=47890235


They released a system card talking about how powerful it was. I don't think OpenAI did that with GPT 2.

I mean, that's just part of the marketing too. OpenAI would've absolutely added a system card, they just weren't invented back in the GPT 2 era.

too uneconomical to run*

I think the unfortunate fact is that most jobs in the world do not require accuracy, so an inaccurate result has a negligible impact over an accurate one.

I used to feel job safety in the knowledge that AI labs weren't likely to solve the hallucination problem. Then it dawned on me that they don't need to — they just need to reduce our collective expectations.


I predict that this illusion of "(in)accurate enough" will last long enough to trigger a cascading avalanche of failures across all fields of human endeavour, and I'd be pretty cautious to bet on quick recovery or even survival of this civilization after that.

Isn't that entirely analogous to our evolved and lived experiences?

We've never had to act with surgical precision except in matters of math/science/engineering.

Like how you fill your coffee cup to a level that's probably ±50 ml each morning.


No. For most of human evolution, we were hunter-gatherers. Imagine trying to hunt game with the accuracy of LLMs. You'll starve. Picking edible fruits from plants also requires precision, both in terms of the hand/eye coordination of actually picking it as well as in terms of knowing what's edible and what's poisonous.

When you fill up your coffee cup in the morning, I sure hope you aim accurately and don't pour half of it all over your desk. And don't even get me started on the process of making coffee that isn't completely unpalatable.


> Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?

It depends what your goals are. All of my side projects were started because I wanted to learn something. Using a "skip to the end" button wouldn't really make sense for me.


The difference between people who want to learn things versus people who just want a finished product is going to be a big dividing line in the post AI world

Learn what, though? Is knowing CSS at all relevant to making a site all about, say, every type of cheese? If I have, say, 6 hours to build that site, does learning about CSS make the site better, or does learning about the history of rennet make the site better? The assumption that people who use AI to skip learning CSS spend the saved time being drooling morons is unfounded. The AI is a fountain of knowledge (that you have to double-check). That people choose not to learn about topics they don't find interesting because they'd rather learn about topics they do find interesting doesn't automatically make them dumber than you.

> If I have, say, 6 hours to build that site

Then chances are it’ll be subpar either way. Every type of cheese, in six hours? The CSS isn’t the bottleneck there, it’s information hierarchy and the information itself. You can’t possibly learn about the history of cheeses and summarise it and organise it for a website in that amount of time. Writing the website code isn’t the lengthy part.

> That people choose to not to learn about topics they don't find interesting because they'd rather learn about topics they do find interesting, doesn't automatically make them dumber than you.

Why so rough? I don’t see any judgement of character or intelligence in the comment you’re replying to.


It's also a nice opportunity to learn while getting something out!

The person’s goals might not include spending a lot of time on CSS. A person who does everything from scratch may find themselves learning what flexbox is or why their z-index isn’t working instead.

Going off of the two screenshots in the OP, neither of those were about frontend.

So if the choice is spending time designing a more human frontend or spending more time on the core product, I don’t fault people for choosing the latter.

Now if the core product also stinks, that’s a different issue.


I know on its face this looks like stealing, but this would likely fall under fair use, since the original work is being transformed.

Not a lawyer here, of course, but does "transformed" cover making a functional copy? Artistically, "transformed" means something is related, but different. In the case of software, this transformation is to code that actually does the same thing as the original. Is that "transformed"? I apologize if that comes off as pugnacious - I'm trying to learn, not poke holes in your argument, but I couldn't figure out a better way to phrase it and still retain the question.

Functionality isn't covered by copyright; it needs a patent. You could even have identical files, if it's the simplest way to do something: https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....

I've seen anti-AI comments here disappear within minutes of posting. I'm honestly surprised to see one at the top of this thread.

What causes comments to disappear? Is that what flagging does?


You probably see that because many are low effort Reddit level comments. I’ve seen lots of long AI skeptic threads and people talking about the likely negatives of AI.

showdead=no in user settings hides flagged & moderator-killed posts

I tried setting showdead=yes but two comments I remember seeing earlier today (as replies to one of my comments) are still gone. Does anyone know what else might have happened to them?

Maybe the posters deleted the comments themselves?

I often post comments on HN, just to delete them 5 minutes later when I realize I don’t care to deal with the replies I’ll eventually get.

You have to be quick because if someone does reply, you can no longer delete your message.


One benefit of this forum is that it purposely omits notifications, precisely to save us from the temptation to "deal with" replies.

And I very much appreciate that feature, and hope it never changes.

However when I make comments here, I do it with the intention of reading what people have to say in response.

If I am making a comment with the intention to ignore the responses to it, then that’s a good signal for myself that what I am writing is likely not an appropriate comment for HN, and then delete it.


I didn't realise messages even had a delete button. I'm going to reply here so I can check.

edit: you're right, there's a delete button.


I see properly argued positions, even if very anti-AI, hang around, but cheap tribalist takes usually get downvoted pretty quickly.

Cheap pro-AI comments don't get flagged though. You can repeat the same talking points forever:

- "Artists have always been exploited" (patently false since at least 1950; it was a symbiosis with the industry).

- "Humans have always done $X".

- "You are a Luddite."

- "This is inevitable."


Personally I’d downvote these if not further substantiated. Flags are reserved for outright rage bait or personal insults for me.

At least I hope; can’t say I always follow “up/downvote doesn’t indicate (dis)agreement but rather contribution to the discussion” perfectly.


> I and the people I work with are using agents to learn new topics so fast.

I'm a person who loves learning but I don't really understand this claim. My brain quickly reaches a saturation point when learning new topics. I need to leave and come back multiple times until I begin to understand, but this seems to me to be a normal part of the process. It's the struggle that forms the connections in my brain.

Being spoon-fed information isn't the same as learning, to me. Are you also using AI to test you on your new knowledge? Does it administer these tests periodically? Or are you just reviewing notes and saying to yourself "I know this now"?

How are you ensuring you've learned anything at all?


Reminds me of the book "Make It Stick: The Science of Successful Learning" and its comparison of spaced repetition and cramming.

Cramming often feels more satisfying, more like you're learning, but actually leads to worse retention. Spaced repetition that includes the struggle of recalling something just at the edge of being forgotten, on the other hand, feels worse but leads to much higher retention.
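The spacing idea can be illustrated with a toy scheduler (this is my own minimal Leitner-style sketch, not an algorithm from the book): each successful recall doubles the review interval, pushing the next review toward the edge of forgetting, while a failed recall resets it.

```python
from dataclasses import dataclass

@dataclass
class Card:
    prompt: str
    interval_days: int = 1  # days until the next review

def review(card: Card, recalled: bool) -> Card:
    """Update the review interval after one recall attempt."""
    if recalled:
        card.interval_days *= 2  # spacing grows as the memory strengthens
    else:
        card.interval_days = 1   # failed recall: start the ladder over
    return card

card = Card("What year was the WTC plane hoax image made?")
for outcome in [True, True, True, False, True]:
    review(card, outcome)
print(card.interval_days)  # 2: back near the start after the failed recall
```

Real systems like SM-2 use graded recall quality and an ease factor instead of simple doubling, but the shape is the same: intervals stretch with success and collapse on failure.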


> Being spoon-fed information isn't the same as learning, to me

It's like it distills it for you. I feel like you're thinking of an example like trying to learn operating systems by reading Wikipedia articles (i.e. it gives you a high level summary but nothing more).

The way I see it, code says a lot, but it takes time to scroll through it and cmd+click back and forth. But if you just ask the AI "where's x thing happening around this file" it will just point you right to it. So I feel like less cognitive energy is spent dealing with the syntactic quirks of code and more is spent on the essential algorithmic task.

I don't really like using it to summarize natural language written by one author or group, like a paper for example, that just feels like laziness to me.


AI has helped me rediscover my love of coding. It helps me write my emails for me, puts together my shopping list, and gives me advice on how to structure my day. AI tells me what to do. I don't have to fear my choices anymore, because AI makes the choices for me.

Sam and Dario were in the closet making AIs and I saw one of the AIs and the AI looked at me.

The AI looked at you?!

When Claude had an outage I forgot how to walk up stairs and couldn’t look it up so I waited for someone to come and get me
