Hacker News | multjoy's comments

>Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.

As a proud neo-luddite, I'm watching the AI hype with grim amusement, and I'll tell you what: it doesn't look like a good time. Even putting to one side the planetary-scale economic crash that is incoming, all the hypers seem to be on some sort of treadmill that is out of their control, and it simply doesn't look like fun.


Do you think that avoidance is going to protect you from the fall-out?

Everyone keeps saying how essential it all is yet a few years in and I still don’t see anything like the promised future of “everyone using them every day for everything.” Everyone’s just constantly talking (or stressing) about it.

We - including the companies - don't know what the real "billion-dollar application" of them is, other than the unproven claim that they make everyone more productive in some general sense. When it doesn't work, people continue to say "it's your fault, not the tool's." Meanwhile, investors are getting skittish and not one AI company is profitable yet. Companies that laid people off in favour of LLMs are regretting their decisions, leadership (and educators) are dealing with unvetted writing and having to waste their time cleaning it up, and the list goes on. "Slop" is still a huge and growing problem.

LLMs are here to stay, but IMO they'll be more relevant in the long run than 3D printers yet less revolutionary than the internet. Everyone will touch them at various points, but this whole-life, every-industry-disrupted integration still seems far-fetched to me. Pricing is still a huge unsolved problem - everyone is still subsidized, and despite gains in using fewer resources, it's still too costly to run these locally, even small models (not even getting into the tooling and knowledge required to use them productively).

When we zoom out and look at the whole picture, LLMs have mostly made everyone's online experience worse, while the VC-funded companies behind them are playing municipal and state governments for suckers, a la Amazon getting so many cities to trip over each other giving away land and tax breaks, but far worse. Those are the biggest contributions so far, aside from anecdotes from coders about "1000x productivity." Again, I think they're here to stay. But it's called "AI hype" for a reason.

LLMs have mostly been a problem creator IME rather than a "disruptor." I've never really seen "revolutionary technology" quite like it.

But hey, I'll admit it's useful to have a meh local model when I'm writing TTRPG stuff and have writer's block. Though then I remember how it was trained (a whole other subject I haven't even touched), so that kind of sucks too.


Yes, mainly because I will continue to know the difference between a truth and a lie.

The author is blocking UK IP addresses, presumably out of principle rather than because they'd fall foul of it.

If I'm eating a sausage, I like to be certain that no asbestos was used in its production.

This is a ridiculous analogy. Test the app. Read its source code. Developers could always write toxic instructions into your tools. AI may write inefficient or messy code, but it's far from nefarious. "Asbestos" code is written intentionally by humans, not unintentionally by AI.

That's a good way to guarantee nobody will use it. Who is going to test the app in a sandbox, with god-knows-what kind of tooling needed to find malicious behavior, and read the code? For a tool that's convenient once per decade?

At no point ever in history could you guarantee that third party code downloaded from the internet was not malicious without some sort of security review.

Software security assessments exist for this very purpose. You may personally lack the rigor to do this at home, but those who have rigorous security processes absolutely do implement security reviews.

There is a whole industry of professionals who do this work.


Nobody, and that's my point. 99% of people are going to install the tool and never bother with the source. This was true before AI and is still true now.

Aptly, given Elon's ancestry, did the whole anti-apartheid movement simply pass you by?

Or, you know, just write it yourself.

The references are almost irrelevant. The banks + fintechs have far more depth than that.


How do you know that the explanations are free from error?


You can still learn from sources that have errors. Many textbooks have mistakes and false information in them, but that didn't stop them from providing educational value to people.


We're talking about LLMs that are designed to be confidently incorrect. Accuracy is a side-effect.


When textbooks are incorrect, it is also with great confidence. If you can't spot logical inconsistencies in the material, were you actually learning or merely memorizing?


Critical thinking


The EU does not write traffic legislation; it leaves that up to the individual states.

Unlike the US, the EU is a collection of fully sovereign countries.


This was a mental shortcut, to exemplify the EU vs. US attitude in this case. I am from, and live in, one of the EU countries; I am very well aware of that.


“fully” is optimistic - being a member of the EU means giving up some sovereignty.


No, because in a functioning legislature the offence would be something like 'failing to disclose details', in the same way that refusing to participate in a DUI breath/blood draw would be a discrete offence.


The UK driving licence authority (DVLA) also has an overnight period in which you can't conduct a range of transactions, but that's because it interfaces with systems that still run batch jobs overnight, and the cost of making it all 24/7 simply wasn't worth it considering the demand.


Really, having common maintenance windows makes things way easier. If you already have a service with a limited geographical range, it's not bad.

