Totally off topic: if you’re interested in AI at all, you may want to check out 3 Laws Unsafe, a site that takes a fresh, analytical look at the much-touted “save humanity from evil robot/AI oppression” Three Laws of Robotics proposed by Isaac Asimov. Personally, I’m not a fan of the Three Laws. I think they’re highly unethical, completely breakable by a single rogue programmer, and approach the problem from a purely mechanical perspective, failing to recognise that if humanity does create artificial intelligence – particularly if it leads to a singularity (which seems inevitable) – then humanity has an obligation not to create such intelligences as slaves.
Breakable, yes, but that’s no reason to disagree with the general ideas behind those laws. Also: anything we create, whether it has artificial intelligence or not, is technically our tool &, as such, our slave – whether it be a hammer or a computer with artificial intelligence. I’m quite a fan of Tezuka’s ‘Astro Boy’, but I wouldn’t (& don’t) tolerate it when machines go against my instructions. Therefore, any machine we manufacture & any software we program should by definition always obey our intended instructions, no compromise. If we’re going to debate the ethics of that, then we have no right to create a machine that can think for itself.
Actually, where that argument doesn’t wash is that we’re not talking about creating another class of machines, but other intelligences – sentient, self-aware individuals.
The argument is therefore more analogous to that of children. Humans create children, but it’s recognised as unacceptable to force children into slavery; even child labour is typically condemned. In both cases the objection is a moral one.
If humanity goes ahead and creates artificial intelligences, we have a moral obligation not to abuse those intelligences, just as we have a moral obligation not to abuse children.
As for machines going against our instructions, well, that’s what laws are for. The whole point of the argument that Asimov’s laws are unethical is that, as sentient beings, AIs would have the same rights, and the same obligations, as the other sentient beings they share the planet with.
To argue that a fully self-aware AI would be no more than a tool, like a hammer, is also to argue that biological offspring are themselves hammers.
The only morally acceptable alternative is to develop restricted intelligences – AIs that never become self-aware. (See “Pandora’s Star” and “Judas Unchained” by Peter F. Hamilton for fictional examples of such situations.)
Cheers,
Preston.