Hacker News

> It’s quite a stretch to jump straight to innocent people being jailed because they’re using an LLM to summarise audio.

98% of cases end in plea bargains, especially for lower-level offences. These cases are decided in an assembly-line fashion based on summaries and reports. Only once in a blue moon will someone at the DA's office sit down and listen through the audio.

The DA will use the summary, and a summary of the case, to pressure someone poor and not well educated into taking a plea deal. Their public defender will do the same.

And the innocent person will often be so frightened out of their mind that they will say yes.

It happens every day.

    > Pleas can allow police and government misconduct to go unchecked, because mistakes and misbehavior often only emerge after defense attorneys gain access to witness interviews and other materials, with which they can test the strength of a government case before trial.
https://www.npr.org/2023/02/22/1158356619/plea-bargains-crim...

    > Eyster believed that Sweatt was innocent of the drug charges against her. “This is a hardworking woman who lived in a heavily policed community for 10 years,” she told me. “If she were a drug dealer, she would have already been evicted. She doesn’t have a history of drug use.” But the idea of taking this case to trial was a nonstarter. The best path forward, Eyster decided, was to humanize Sweatt to the prosecutor—hence those time sheets—and then try to negotiate a plea bargain. In exchange for a guilty plea, the prosecutor might not recommend a prison sentence.

    > The strategy worked. The prosecutor reduced the charge from a felony to a Class A misdemeanor and offered Sweatt a six-month suspended sentence (meaning she wouldn’t have to serve any of it) with no probation. Her paraphernalia charge was dismissed, and her conviction would result in a fine and fees that totaled $1,396.15.
https://www.theatlantic.com/magazine/archive/2017/09/innocen...


Implicit in your argument is that the AI summary will be more biased than a report written by the arresting officers. I am not sure that is the case: it could reduce bias by producing a more neutral summary, which would run counter to your argument.


I think the people who keep harping on bias are trying to convey a narrow point. My point is broader: probabilistic models are probabilistic.

I am simplifying the behavior of these systems, but my argument is twofold. First, just because an output has a high probability of being correct doesn't mean that any particular output is correct. Second, "low probability" events that are acceptable in a limited use case are disastrous in broader use.

For example, if I were being generous, I'd say that GPT-4 makes an error 0.1% of the time, i.e. 99.9% of the time it doesn't make an error. That is an extremely charitable assumption: I use this model daily, and in my limited sample the error rate has been higher.

If you are dealing with 10 cases, a 0.1% error rate is immaterial. If you are dealing with 100,000 cases, that's 100 cases where an error was introduced.
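The scaling argument above is just arithmetic, but it's worth making explicit. A minimal sketch, where the 0.1% figure is the generous assumption from above (not a measured rate) and cases are treated as independent:

```python
# Back-of-the-envelope: how a "negligible" per-case error rate scales.
# The 0.001 rate is an assumed, charitable figure, not a benchmark result.

def expected_errors(error_rate: float, n_cases: int) -> float:
    """Expected number of cases containing a model error."""
    return error_rate * n_cases

def prob_at_least_one_error(error_rate: float, n_cases: int) -> float:
    """Probability that at least one of n independent cases has an error."""
    return 1.0 - (1.0 - error_rate) ** n_cases

print(expected_errors(0.001, 10))        # roughly 0.01 -> immaterial
print(expected_errors(0.001, 100_000))   # roughly 100 flawed cases
print(prob_at_least_one_error(0.001, 100_000))  # essentially certain
```

At 10 cases an error is unlikely to appear at all; at 100,000 cases, about 100 flawed summaries are expected and at least one error is a near-certainty.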

The true error rate is likely to be higher, for example, https://www.ncbi.nlm.nih.gov/corecgi/tileshop/tileshop.fcgi?...

Is it acceptable for a few hundred to a few thousand innocent people to face legal action because an overgrown next-token prediction model made a mistake?

I love GPT-4. I love its promise, but I don't think it should be implemented in safety-critical situations.


I am also worried about officers "gaming" the system to bolster whatever stat their superiors want them to optimize, to the detriment of those being "policed".

We've all seen how prompting can change GPT output, and this is what worries me the most.

Imagine users discovering that opening every recorded interaction with a certain sentence acts as a pre-prompt that subtly biases the GPT output, e.g. making judges or prosecutors less sympathetic to defendants when the police department is optimizing for higher conviction rates. (That is, doing the opposite of what the public defender quoted above did by "humanizing" the drug-case defendant in the plea-deal example.)

A lot of reports will be transcribed, people will find ways to optimize the output to their gain, and we've all seen how easy it is to bias LLM output by prompting.



