I agree that empathy would be a problem, but the exact nature of the simulation matters. An exact copy of a human being isn't technically feasible right now (or if it is, it couldn't happen outside a highly visible major project).
In any case, the current proposal is mostly a funding grab; you won't see an actual brain simulation come out of it. Even simulating a nematode worm is ambitious right now. See http://www.openworm.org/
I accept that's a consequentialist argument, though. If we did manage to create cheap, easy-to-run brain simulations, we'd start running into questions about the nature of personhood and consciousness (something I hope to contribute to some day). Do AIs deserve freedom from slavery if they don't consciously experience anything, but still have desires and creativity and can convincingly pretend to experience something (which a good simulation would do)? I have a feeling that whatever the "correct" answer to that question turns out to be, human beings' emotions are very hackable, and we'll grant them rights based on emotional appeals if it gets to that point. ;)
That would open an interesting technical issue, though. If a human simulation were cheap and easy to run, and we wanted to legislate the conditions under which it could be run (e.g. it must be given reasonable sensory inputs and have hunger, sexual desire, and other wants set to near zero), we'd run into the DRM / TPM debate again, but this time with serious "life or death" consequences.
I trust that you would do the right thing. My concern is what happens if your software gets out there on the web (even if that's 50 years from now; I'm not saying it's an issue today) and some jackass from 4chan has a different agenda. Or, if "brain uploading" becomes a thing (which I have problems with on a philosophical level, but that's another discussion), maybe North Korea could use it to up the ante on their "three generations of punishment" policy.