Hacker News | past | comments | ask | show | jobs | submit | femi-lab's comments

They almost surely anticipated that this would happen at some point (though perhaps not so soon). They would look like major assholes for dragging some postdoc or whatever through the courts to make a point; it would not be good for the brand at all.

But it does give them cover for whatever people end up doing with it - they can claim they did all they could to support research while promoting safety.


> They would look like major assholes for dragging some postdoc or whatever through the courts to make a point

Oracle wouldn't care. The lawnmower doesn't give a shit about you.


Oracle already looks like that, so they have no PR to lose.

In fact, their current reputation will take a hit if they don't take it to court. Kind of like the mob: you have to maintain a certain reputation to keep your fiefdom in line.


If you want that, switch to Satellite view; then you have even more detail.


I feel like the bulk of your premise here revolves around never understanding. But let's say that instead of AI, you just imagine some guy Bob being responsible for whatever you're worried about.

Bob is a reasonably intelligent guy, but he's not perfect. He makes an attempt to provide satisfactory explanations of his decisions, he tries to learn and improve, and he might secretly be manipulating the system, either for entertainment or out of malice.

If you want to build a useful system around Bob, you have to consider all these limitations - you can't read his mind, you can't predict how he will make decisions in all situations, and you don't have a guarantee that he will give you a satisfactory explanation, or that any explanations are truthful or correct.

There's nothing dystopian about this situation - this is the world we live in. You just need to implement some resource constraints, ensure certain checks-and-balances for certain classes of behaviors, implement certain metrics to ensure behavior is according to your expectations, revise metrics and controls whenever you encounter problems, and don't give Bob unfettered control over nukes.

What you described isn't dystopia-land material unless you ignore some very elementary controls that our society already has in place to prevent individuals from destabilizing it.
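The controls described above (resource constraints, an allowlist of permitted behaviors, and metrics you can review) can be sketched as a thin wrapper around an untrusted agent. This is purely illustrative: `GuardedAgent`, the action names, and the callable-agent interface are all hypothetical, not any real API.

```python
# A minimal sketch of treating "Bob" as untrusted: cap his resource usage,
# restrict him to pre-approved classes of actions, and keep an audit trail
# so his behavior can be checked against expectations after the fact.
# All names here are made up for illustration.

class GuardedAgent:
    def __init__(self, agent, allowed_actions, max_calls):
        self.agent = agent                  # the untrusted decision-maker
        self.allowed = set(allowed_actions) # checks-and-balances: allowlist
        self.max_calls = max_calls          # resource constraint
        self.calls = 0
        self.audit_log = []                 # metrics: record what Bob did

    def request(self, action, payload):
        # Resource constraint: cap how often Bob can act.
        if self.calls >= self.max_calls:
            raise RuntimeError("rate limit exceeded")
        # Checks-and-balances: only pre-approved classes of behavior.
        if action not in self.allowed:
            raise PermissionError(f"action {action!r} not permitted")
        self.calls += 1
        result = self.agent(action, payload)
        # Audit trail: reviewable record for revising metrics and controls.
        self.audit_log.append((action, payload, result))
        return result
```

You never read Bob's mind or trust his explanations; you just bound what he can do and inspect the log when something looks off, and "launch_nukes" simply isn't on the allowlist.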


That’s a very effective restatement of their premise. I guess the next question is: what happens if this Bob is also many times faster and more efficient than the Bobs we’re used to? In the same way that something works in development, but under production load new categories of problems emerge.

