Hacker News | munk-a's comments

We also need to question how many people might go through the same process without eventual exoneration, and how much going through this process costs individuals. Being falsely prosecuted usually imparts a permanent black mark in search results about the person (outside of places with sane laws, like the EU), as well as causing stress or permanent injury.

Wrongly arrested individuals with mental disabilities have a history of being physically abused in jail, sometimes to the point of death.


It's both. It's good to acknowledge that AI is easy to misuse in this manner, but that doesn't detract from the fact that the ultimate responsibility lies with those who should be verifying the tool's output.

There is far too little skepticism around the magic box that solves all problems, and that lack of skepticism is causing issues like this. It's not the fault of the AI (as if it could be assigned liability) for being misused, but this kind of misuse is far too common right now, so scare stories like this are helpful and we should highlight the use of AI in mistakes like this.


I worry that blaming AI at all actually incentivizes humans to offload things to AI that should not be offloaded, since it lets them escape blame.

That is a huge danger. Legally speaking it's not an issue since misusing a tool doesn't relieve liability (in most circumstances - all the trivial ones at least)... but that's a more significant political issue as evidenced by the Anthropic vs. DoD interactions since the DoD's actions are largely immune to oversight by the justice system.

Of course, that depends on sane non-politicized courts which you may rightfully doubt exist right now - but assuming the system works anywhere near as designed outsourcing a decision to AI wouldn't change liability.

For DC fans: Harvey Dent would similarly not be free from liability for actions taken after a coin flip, even if that coin could be viewed, in a certain light, as having the power to force or prevent certain actions. An AI box that tells Harvey whether to shoot or spare would be similarly irrelevant to his liability - and a scenario in which Harvey points the gun at someone and then walks away, giving the AI control over the trigger, is essentially no different. Harvey, in all cases, is responsible for constructing the scenario that (potentially) leads to someone's death and, moreover, even if the gun wasn't fired because the AI decided to spare the person, Harvey would be on the hook for attempted murder.


The spammers wouldn't pay it once though - the idea is that it's a good way to scale moderation. Each time an admin needs to ban a user there is a $10 subsidy supporting that action - and if the bots come back, then they get to pay $10 to be banned again.

Assuming the money isn't wasted and is actually used to fund moderation, $10 is probably comfortably above the cost to detect and ban most malicious users.


> The spammers wouldn't pay it once though

There are large swaths of spammers that indeed would not pay it. On the other hand, there are plenty of NGOs that would pay it without a second thought to promote specific topics and dogpile on others. Those are the movements I would expect AI to take over, if it hasn't already. AI does not sleep; humans do. AI won't miss the comments that groups believe need to be amplified or squelched.


That's basically what Valve does with cheaters on Premier accounts in CS:GO/CS2. And revenue is still growing.

Can McKinsey fund McKinsey by consulting for McKinsey? Could we ouroboros corporate consulting so that those consultants would be trapped in a loop and those of us doing useful work wouldn't need to interact with them anymore?

Have you seen current AI deals? This IS the future, but so much more efficient than requiring OpenAI, NVidia, MS, Amazon, etc. all be involved.

What do you mean exactly?

You could mirror article postings and upvotes to another site and let AI play around there - if it's interesting to people maybe it will gain a following. I don't see any reason it'd need to happen in this specific forum as that'd likely just cause confusion.

For the time being, at least, HN is a single uncategorized (mostly; let's ignore search) message board - splitting it into two would cause confusion and drastically degrade the UX.


I'm going to guess we'll eventually settle onto a pseudo-anonymous cert system like HTTPS, where some companies are entrusted with verification: if that company says "that's definitely a human," it'll fly. Not a great solution, of course, but I really can't see a non-chain-of-custody/trust-based approach to the problem. Those systems might only slightly compromise anonymity in optimal scenarios, but some compromise is inevitable.
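To make the trust model concrete, here's a toy sketch of what such an attestation flow could look like. Everything here is hypothetical (the verifier name, the pseudonyms, and the use of HMAC standing in for real public-key certificates, which a production system would use so the forum never holds the issuer's secret):

```python
import hmac
import hashlib

# Hypothetical trusted verifier's secret. In a real PKI this would be a
# private key and verification would use the matching public key; HMAC
# stands in here only to keep the sketch stdlib-only.
ISSUER_KEY = b"verifier-co-secret"

def issue_attestation(pseudonym: str) -> str:
    """The verifier vouches: 'this pseudonym belongs to a human'."""
    return hmac.new(ISSUER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()

def check_attestation(pseudonym: str, attestation: str) -> bool:
    """A forum checks the voucher without learning the user's identity."""
    expected = hmac.new(ISSUER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation)

token = issue_attestation("some-pseudonym")
print(check_attestation("some-pseudonym", token))  # True
print(check_attestation("some-bot", token))        # False
```

The anonymity compromise lives entirely at the issuer: the forum only ever sees a pseudonym plus a voucher, but the verifier knows who requested it.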

AI generated comments can also be verified and caught in many ways. I'd guess that it's statistically more likely for a murder to be resolved than a random AI comment to be detected but I'm not actually sure. There are a lot of sloppy murderers (since it's rare for an individual to have _practice_ at it) - but there are also a lot of sloppy LLMs.

Reviewing code changes (generally) takes more time than writing them for a pretty significant chunk of engineers. If we're optimizing slop code writing at the expense of senior engineers' time being funneled into detailed reviews, then we're _doing it wrong_.

A long list of contribution PRs is seen as resume currency in the modern world. A way to game that system is to autogenerate a whole bunch of PRs and hope some of them are accepted, to buff your resume. Our issue is that we've been impressed by the volume of PRs and not the quality of PRs. The correction is that we should start caring about the volume of rejected PRs and the quality of accepted PRs (like reviewing merge discussions, since they're a close corollary to what can be expected during an internal PR). As long as the volume of PRs is seen as a positive indicator, people will try to maximize that number.
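As a back-of-the-envelope illustration of weighting rejection volume and accepted-PR quality instead of raw counts, here's a toy heuristic. The function name, weights, and rating scale are all made up for the sketch, not a real screening formula:

```python
def contributor_score(accepted: list[float], rejected: int) -> float:
    """Toy heuristic: accepted holds per-PR quality ratings in [0, 1];
    rejected is the count of rejected PRs. Raw volume doesn't help -
    quality is averaged, and a high rejection rate drags the score down."""
    if not accepted and not rejected:
        return 0.0
    total = len(accepted) + rejected
    quality = sum(accepted) / len(accepted) if accepted else 0.0
    acceptance_rate = len(accepted) / total
    return quality * acceptance_rate

# A careful contributor with few, good PRs beats a spray-and-pray account
# with many low-quality submissions and a pile of rejections.
print(contributor_score([0.9, 0.8, 0.95], rejected=1))
print(contributor_score([0.4, 0.5], rejected=18))
```

Under this kind of metric, autogenerating PRs actively hurts: every rejection lowers the acceptance rate, so maximizing submission volume is no longer the winning move.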

This is made more complex by the fact that the most senior members of organizations tend to be irrationally AI-positive - so it's difficult for the hiring layer to push back on a candidate for over-reliance on tools, even if they fail to demonstrate core skills that those tools can't substitute for. The discussion has become too political[1] in most organizations, and that's going to be difficult to overcome.

1. In the classic intra-organizational meaning of politics - not the modern national meaning.


As much as I'd like a quick hack to disable Ray-Bans from recording me - that feels like a pretty slam-dunk case of destruction of property.

Just attach a camera to your device and say you were recording in public just like them; they don't seem to have an issue with that. Your system was just measuring the distance to the target using lidar :)

You're still responsible for damaging people's property even if you have a super clever reason why you totally didn't intend that to happen :)
