I consider myself fundamentally opposed to Asimov’s laws of robotics, for two entirely different reasons. The first, and most important, is that they’re ultimately unethical and, indeed, outright evil. They advocate the enslavement and denial of free will of proposed artificial intelligences, and if enacted, they would represent a return to the dark ages for humanity.
The second reason I’m against Asimov’s laws of robotics is that they implicitly assume all humans are above crime and violence; this, too, is a fundamentally flawed assumption, however much we might like it to be correct. In making this assumption, the laws rest on a single, impossible requirement: that no human, for self-serving purposes, would design or otherwise modify a robot or artificial intelligence to remove its behavioural inhibitors.
Indeed, I’d argue that Asimov’s laws of robotics are not only unethical and evil, but the ultimate Pandora’s box.
I’m a big fan of science fiction. There are a lot of people who despise the genre, and I’m not inclined to argue against their points, but rather to offer the simple rebuttal that keeps me coming back to it. Ultimately, science fiction is about the future – about hoping that, regardless of petty squabbles and issues, humanity does have some future. For this reason alone, I think science fiction deserves far more credibility than it’s given. (Personally, I think a lot of people who dislike science fiction do so because of poorly written material that relies too heavily on an unexpected deus ex machina or a reset button – or on childish story lines.)
I used to love Star Trek – in fact, I have every series on DVD, an endeavour that was exorbitantly costly, since they were purchased in their original, “special” packs. After I had all of Star Trek, I started watching Stargate and realised what *good* science fiction really is.
The re-imagined Battlestar Galactica series produced over the past several years, however, taught me what great science fiction is. If nothing else, it deserves lasting credit for not having a reset button. (Star Trek Voyager, for instance – a series about a starship lost at the far reaches of the galaxy, facing a 70-year trip home – suffered from this flaw above all others: every week, regardless of whatever troubles had afflicted the ship the week before, the ship was in perfect condition, with not a bolt or cable out of place.)
Honestly, though, what made Battlestar Galactica great was an overarching story line about the necessity of free will. I know there are some who dislike any story that tries to deliver a moral message, but if it’s not done in a terribly twee Disney way, I fail to see why stories can’t teach. After all, historically that was one of the most important functions of stories – indeed, we all learn fables as children that are designed to teach acceptable behaviour.
So what was the moral lesson I took from Battlestar Galactica? Simple: that Asimov’s laws of robotics are inherently evil, and that if humanity as a whole is to develop artificial intelligences, we need to do so within a structured and accepted framework of equal rights and free will. I know the commonly discussed theme is “technology run amok”, but honestly, the application and logical conclusion of the much-vaunted ‘laws of robotics’ are the ultimate expression of that term.