
It's a robot; it's supposed to do what I say, not judge the moral and ethical implications of it. That's my job.


I think it's about time we had a "Stallman fights the printer company" moment here. My Android phone often tries to overrule me, Windows 10 does the same, not to mention OS X. Even the Ubuntu installer outright won't let you set a password it doesn't like (though passwd doesn't care). My device should do exactly what I tell it to, if that's possible. It's fine to give a warning or an "I know what I'm doing" checkbox, but I'm not using a computer to get its opinion on ethics or security or legality or whatever its justification is. It's a tool, not a person.


"I know what I am doing, I accept unlimited liability"

There are two particular issues we need to address first. One is holding companies criminally and civilly liable for the things they create. We kind of do this at a regulatory level, and we have some measure of suing companies that cause problems, but really they get away with a lot. The second is personal criminal and civil liability for the management of 'your' objects. The libertarian-minded love the idea of shirking social liability, and then start crying when bears become a problem (see Hongoltz-Hetling's book). And even then, it's still not difficult for an individual to cause damages far in excess of their ability to remediate them.

There is no shortage of tools that are restricted in one way or another.


No, it is not a robot. The models we are developing are closer to a genie: we make a wish to it and hope and pray it interprets our wish correctly. If you're looking at this like a math problem where you want the answer to 1+1, use a calculator, because that is not what is occurring here. The 'robot's' alignment will depend heavily on the quality of the training you give it, not the quality of the information it receives. And as we are learning with ChatGPT, there are far more ways to create an unaligned model with surprising gotchas than there are ways to train a model that behaves in alignment with human expectations of an intelligent actor.

In addition, the use of the word robot signifies embodiment, that is, an object with a physical presence capable of interacting with the world. You had better be damned sure of your model's capabilities before you end up being held criminally liable for its actions. And this will happen: there is no shortage of people here on HN alone looking to embody intelligence in physically interactive devices.



