Hacker News | new | past | comments | ask | show | jobs | submit | throwaway314155's comments

There’s nothing wrong with your comment per se, but it’s almost as if you didn’t even read the comment you’re responding to.

Let me help you out with this comprehension issue. The point of my comment is that I disagree with the apparent premise of the comment I replied to, which is that "AI" is some generic investigative tool that we can neatly snip out of the picture to blame this incident on human factors at the individual level ("the professional human-in-the-loop who shirked all responsibility"). Said comment also implies that people are fixating on the AI aspect of this issue while ignoring the human factors, which IMO is a strawman. To me, the existence of AI in its current incarnations and the ways in which law enforcement will inevitably abuse it are, together, inseparably, the problem. AI (in the most general sense) opens up entire new dimensions for potential abuse.

As a concrete example:

> And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.

Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. So saying that it has "nothing to do with AI" is totally ridiculous.


> Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all.

How do you arrive at that conclusion? Because it happened, and it wasn't an AI overseeing (the lack of) due process. The police identifying suspects is part of their job. So are arrest warrants and all the rest of it. I honestly don't see what AI had to do with anything here. All I see is a gaping systemic issue that could have happened regardless of AI if the wrong person got the wrong idea or had a personal vendetta.

Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list. We blame the systemic practices and legal apparatus that permitted it all to happen in the first place.

You might as well blame the SUV manufacturer because without vehicles the police wouldn't have been able to drive over to make the arrest, right?


> How do you arrive at that conclusion?

Because it's beyond obvious? How would this woman have ended up in jail if she hadn't been misidentified by the facial recognition software in use by the Fargo police? She lives 3 states over; would be a hell of a coincidence if some other avenue of investigation led them to her.

> I honestly don't see what AI had to do with anything here.

You honestly don't see what facial recognition software had to do with a woman being misidentified by facial recognition software?

> Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list.

I actually am completely willing to blame any entity that supplies ICE with the names of people it can reasonably assume will be targeted for "enforcement action" due to said entity representing said names as being legitimate targets for said enforcement action, without taking reasonable care to ensure said representation is correct in each and every case.

What you don't seem to understand is that these abuses of law enforcement authority are predicated on at least an appearance of legitimacy, which can be provided by (e.g.) an app with (presumably) a very official looking logo that agents can point at somebody to get a 'CITIZEN' or 'NOT CITIZEN' classification. It is upon this kind of basis that they perform illegal arrests. All parties—the app vendor and ICE, as well as the people who are meant to be overseeing ICE and providing accountability—are complicit enablers in these crimes. To absolve the vendors who provide the software knowing full well what it will be used for, what its limitations are, and how unlikely it is that ICE personnel will understand those limitations and work around them to keep everything legal, is totally absurd.


It isn't obvious, no. If I drop a hammer on my foot and break my toe I can't then blame the hardware store or the manufacturer. If the store didn't carry hammers I wouldn't have been able to purchase it, I think to myself. Then I couldn't possibly have dropped it on my foot, thus my toe wouldn't be broken right now. It's a specious line of reasoning.

It doesn't matter in the slightest by what means she was selected to "win" this particular lottery. The tool rolling the dice isn't to blame. Tools (and people!) will occasionally return spurious results. Any system needs to be set up to deal with that.

So no, I honestly don't see what facial recognition software has to do with gross negligence and process failure on the part of multiple government agencies.

> without taking reasonable care to ensure said representation is correct in each and every case.

Only if that was part of the contract. Was the product delivered according to specification or not?

What if ICE used FOSS tools to put together the list themselves? Are the tools still to blame? That would obviously be absurd.

The only way the provider (never the tool) could be at fault would be something such as willful negligence or knowingly and intentionally attempting to manipulate the user's actions to some end.

What you don't seem to understand is that human negligence can't be foisted off on tools. Of course an abuser will try to play his actions off as legitimate. That isn't the fault of the tool, it's the fault of the abuser. It isn't up to an app to determine the legitimacy of LEO agent actions. Neither is it the responsibility of an arbitrary, fungible government contractor to oversee ICE.

I think you're confusing the morality of participating in a broader ecosystem with moral culpability for the process failure associated with a specific event. You can advance a reasonable argument that AI companies that choose to do business with ICE are making an at least moderately immoral decision. However that doesn't place them at fault for the specific process failures of any particular event that happens.


If you don't agree that facial recognition software is involved in a case of a woman being misidentified by facial recognition software then there is no point in me spending any more time/effort in conversation with you. Goodbye.

You seem to be intentionally ignoring the point I made. I never disputed that facial recognition software was used (i.e., involved).

The facial recognition tool didn't arrest her. It holds no authority, has no will of its own, and does not possess a corporeal form with which to enact change in the world. The only parties that could possibly be at fault here are various government agents who clearly acted with negligence, failing to uphold their duty to the law and the people.

If you're unable to rebut my point then perhaps you should consider that you might be in the wrong? If you're unwilling to entertain such a possibility then I wonder why you're posting here to begin with. What is your goal?


> I never disputed that facial recognition software was used

You, yesterday:

> I honestly don't see what AI had to do with anything here.

???

> You seem to be intentionally ignoring the point I made.

I completely understand your point. You are saying that if a mentally ill high schooler manages to acquire a gun and kills 20 people at their school, we should a) punish the shooter, and b) understand the gun as a neutral object that simply popped into existence and was misused, rather than a machine whose design purpose is to kill humans, and whose manufacturer(s) (and other organizations who profit from the easy availability of guns) are actively engaged in a broad effort to preserve the status quo which allowed a mentally ill high schooler to acquire a gun and massacre 20 of their classmates/teachers.

I think it's a terrible opinion, and I vehemently disagree with it. But if you are willing to engage in the sort of rhetorical contortions highlighted at the top of this comment, there is no point in expressing my disagreements to you, because you will evidently say literally anything in response. I may as well have a debate about toilet tank design with `cat /dev/urandom`.

> If you're unable to rebut my point then perhaps you should consider that you might be in the wrong?

Try looking in the mirror, buddy. Sheesh.


Like I said, there wasn’t anything wrong with your comment. It just didn’t seem to directly address the parent comment. This does, thanks.

Seems like a direct response to me.

>> How is this the fault of AI?

> This particular "AI bogeyman" isn't just AI; it's cops with AI

You can’t separate the thing from how it will be used. It’s like arguing that cars on their own aren’t particularly dangerous, but the point of buying a car is to use it thus risking the general public.


But you can in fact argue exactly that. If (arbitrary example) pedestrians are being killed due to poor road engineering practices it isn't reasonable to point at cars and say "see those are the root problem" when in fact it's due to a willful lack of sidewalks or marked crossings or whatever. Being adjacent to something bad doesn't equate to being the root cause.

History shows the timeline of dependence here. Before the introduction of cars, “poor road engineering practices” wouldn’t result in those deaths. So clearly it’s cars that are necessitating sidewalks, etc.

Same deal here: if something “becomes a problem” because of the introduction of AI, then AI is the root cause of the resulting issues. Many people are tempted to argue that flawed humans simply can't implement the perfect system, be it Anarchy, Communism, recycling programs, or whatever, but treating systems as things that must operate in the real world is productive where complaining about humans isn't.


> Before the introduction of cars, “poor road engineering practices” wouldn’t result in those deaths.

Death by adverse horse encounter was very common before the 1920s. Not sure how many of those deaths can be blamed on poor quality road engineering. But putting a bunch of humans, carts, and excitable half-ton animals in the same crowded streets seems like poor engineering practice.


“Very common” here is a gross exaggeration compared to cars.

After vast improvements in safety, ~1.3% of American deaths still come from automobile accidents. Horses were never close to that; meanwhile, back in 1970 cars were around twice as likely to kill you.


This article states higher per-capita horse deaths in 1900 New York City than automobile deaths in 2023. That stat does not account for the significant disease caused by all that manure mixing with water supplies. It's unclear whether automobile pollution is overall worse from a public health standpoint than mountains of horse poo.

https://horse-canada.com/horses-and-history/the-poo-conferen...


Well, I thought it was obvious that I was referring to roads constructed relatively recently. If cars necessitate sidewalks and the city chooses to cut costs by not putting those in, that isn't the fault of automobile designers or manufacturers or dealers or private owners or whoever.

To your example, technology changes and that necessitates infrastructure changing. That doesn't mean that fault for mishaps in the meantime can be attributed to the new technology. A user operating the new technology in an obviously unsafe manner is solely at fault for his own negligence.


The safest street designs still result in automobile fatalities. You can at best mitigate the issue with better street designs but not address the underlying issue.

Failing to acknowledge cars as the root cause may be comforting, but it blinds you to viable solutions.

Indoor shopping malls, for example, solve many of the issues with cars by forcing people to move around on foot in a little island surrounded by a sea of very low density parking. They aren't perfect solutions, but they still saved a lot of lives and time.

Saying people are misusing a new technology is just another way of saying that technology is flawed. This doesn’t mean you can’t utilize it, but pretending flaws don’t exist has no value.


You didn’t find this part naive?

> But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.


Frankly, I find that less 'naive' than I do 'dangerously possible'.

Autonomous weaponry is one of the few ways that a fascist state could reasonably maintain violent control over a large and hostile populace.

I guarantee Trump would rather have perfectly obedient killbots than critically thinking soldiers, or even just the 5 murderous assholes required to oversee tasking for 1000 semi-autonomous police drones.

The least plausible part is the private sector, which just doesn't work that way.


It all runs on commands like imsg that Claude would be excellent at running given a suitable CLAUDE.md. Scheduled tasks are literally just cron, no problem for Claude.
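As a sketch of what "scheduled tasks are literally just cron" could look like in practice (the script path, the prompt, and the `imsg` usage described in the project's CLAUDE.md are all hypothetical; `claude -p` is Claude Code's non-interactive prompt mode):

```shell
# Hypothetical crontab entry: every weekday at 9am, run Claude Code
# non-interactively in a project directory whose CLAUDE.md explains
# how to send messages with the imsg CLI.
0 9 * * 1-5  cd "$HOME/tasks" && claude -p "Send today's reminders via imsg" >> "$HOME/tasks/cron.log" 2>&1
```

This is a config fragment, not a full setup; in reality you'd also want the `claude` binary on cron's PATH and credentials available to the non-interactive session.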

This article is bullshit. You can't get a full model from training on just one artist's work. A pretrained model is required. The pretrained model was likely one which was indeed trained on the works of others without consent.

What's more, their reasoning for abandoning the company was to build out another company with a suspiciously similar idea...


No one saw it coming.

I have bipolar disorder. The more frustrating aspects of coding have historically affected me tenfold (sometimes to the point of severe mania). Using Claude Code has been more like an accessibility tool in that regard. I no longer have to do the frustrating bits. Or at the very least, that aspect of the job is thoroughly diminished. And yes - coding is "fun again".

I think coding can be an endurance sport sometimes. There are a lot of points at which you have to bang your head against a wall for hours or days to figure out the smallest issue. Having an agent do that frustrating part definitely lowers the endurance needed to stay productive on a project.

Can’t fathom why people would downvote this.

> Then they solved this by introducing GPT-5 which was more like a router that put all these models under the hood so you only had to prompt to GPT-5, and it would route to the best suitable model.

Was this ever explicitly confirmed by OpenAI? I've only ever seen it in the form of a rumor.


It's not a rumor; you can just test it.

Ask the router "What model are you". It will yap on and on about being a GPT-5.3 model (Non-thinking models of OpenAI are insufferable yappers that don't know when to shut up).

Ask it now "What model are you. Think carefully". It concisely replies "GPT-5.4 Thinking".

https://openai.com/index/introducing-gpt-5/

> GPT‑5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model (GPT‑5 thinking) for harder problems, and a real‑time router that quickly decides which to use based on conversation type, complexity, tool needs, and your explicit intent (for example, if you say “think hard about this” in the prompt)


Thanks.

Can someone please explain plainly what this means and what happened, and why it is the source of so much controversy?

I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) recollection of what this is all about.


Here's my take-

Anthropic has some contracts with the US government. They want some additional terms put on their next contract (terms that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with OpenAI or Google" but goes to daddy Trump and also puts out (likely illegal) orders barring Federal workers from using any Anthropic products at all. OpenAI swoops in and takes the contract, then tells everyone they have the same terms but just played nicer to get the contract. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.


That has more to do with the shortcomings of 3d printing.


Are you saying vibed code doesn't have shortcomings?


I think some or maybe even many of those shortcomings will apply to software, too. Making actual good software is not as trivial as writing “make me an app”, much as making an actual good spoon is not as trivial as throwing an STL at a printer and calling it a day.


They likely won’t have a full pretrain for a while. Just like everyone else, the name of the game now is to finetune existing models.

