
humans can be held accountable for their decisions though


This point is really about incentives: since we know that our fellow humans are held accountable, we know they have the incentive to make high-quality decisions that will not impact the world negatively. Therefore, it's easier to trust them, even when they go with their gut.

We can't project the same incentive structure onto software companies. They operate at a different scale than humans, and may be better able to absorb a hit to their reputation than an individual can. Their incentives are usually financial rather than social.

And for the models themselves, they have no incentives, unless you count "reduce their error rate" or "be interesting enough for researchers to continue to research them". We barely know why they work. Our basis for trust is rather tenuous.


In this case the analogy would be that software companies should be held accountable for the decisions their AI makes?


The article makes the case that if an interpretable model (one we can explain) isn't used, then the user of the black box model should bear the burden of proving that no interpretable model exists that does the job, and some level of responsibility for trying to develop one.


Only because we all agree that a human is a 'person', i.e. a thing that can take blame.


Fascinating. I'd never thought of defining a 'person' this way.


You might have never been in a VP+ position in a company then. A lot of the work in these positions is about avoiding consequences or shifting blame for actions that are either necessary but look bad, or that had to be taken with far too little information. And not necessarily just one's own actions.


I was definitely thinking about corporate blame-shifting when I wrote that, and actually nearly said "legal 'person'" instead.


So make these AIs corporations: instant personhood.



