Hacker News

> For example, it’s being used to screen job applicants even though we have proven that AI models still suffer from things like racial bias.

Can't we just say racism is illegal, and if a company uses an AI to be racist, they get fined the same way they would if they were racist the old-fashioned way?




Look up the Dutch childcare benefits scandal, where the Dutch tax authority (their 'IRS') used machine learning to identify fraud. The system turned out to be severely racially biased, and it uprooted thousands of families, leaving them with years of financial struggles and legal battles.

See https://en.m.wikipedia.org/wiki/Dutch_childcare_benefits_sca...


Or the Post Office Horizon scandal, although that software wasn't called an "AI", AFAIR.


"Just", no.

Fence at the top of the cliff (making sure the AI is unbiased, and fixable when it turns out it isn't) vs. ambulance at the bottom (letting people sue if they think the machine is wrong).


When you make specific methods illegal, it tends to lead to loopholes. If you instead make the result illegal, nobody can get around it by using a slightly different system design.


Also true.

But which way around does that apply? Racism is one concrete example of what AI may incorrectly automate, but it's not the only bias AI can have: any correlation in the training data will be learned, whether or not it reflects causation. The goal of these laws is to require the machines to be accurate, unbiased, up to date, respectful of privacy, etc., with "not racist" as merely one example of why that matters.

(Also, the existing laws on racial equality were not removed by GDPR; to extend the previous metaphor, a better fence doesn't mean you can fire the ambulance service.)



