About: "Three Laws"-Compliant   Sponge Permalink

An Entity of Type: owl:Thing, within Data Space: 134.155.108.49:8890, associated with source dataset(s)


Attributes | Values
rdfs:label
  • "Three Laws"-Compliant
rdfs:comment
  • Before around 1940, almost every Speculative Fiction story involving robots followed the Frankenstein model, i.e., Crush! Kill! Destroy! Fed up with this, a young Isaac Asimov decided to write stories about sympathetic robots, with programmed safeguards that prevented them from going on Robot Rampages. A conversation with Editor of Editors John W. Campbell helped him to boil those safeguards down into The Three Laws of Robotics. Also see Second Law, My Ass. Examples of "Three Laws"-Compliant include:
dcterms:subject
dbkwik:all-the-tro...iPageUsesTemplate
dbkwik:allthetrope...iPageUsesTemplate
abstract
  • Before around 1940, almost every Speculative Fiction story involving robots followed the Frankenstein model, i.e., Crush! Kill! Destroy! Fed up with this, a young Isaac Asimov decided to write stories about sympathetic robots, with programmed safeguards that prevented them from going on Robot Rampages. A conversation with Editor of Editors John W. Campbell helped him to boil those safeguards down into The Three Laws of Robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
    According to Asimov's account, Campbell composed the Three Laws; according to Campbell's account, he was simply distilling concepts that were presented in Asimov's stories. The laws cover most obvious situations, but they are far from faultless. Asimov got a great deal of mileage out of writing a huge body of stories about how the laws would conflict with one another and generate unpredictable behavior. A recurring character in many of these was the "robopsychologist" Susan Calvin (who was, not entirely coincidentally, a brilliant logician who hated people).
    It is worth noting that Asimov didn't object exclusively to the "robot as menace" stories (as he called them) but also to the "robot as pathos" stories (ditto). He thought that robots attaining and growing to self-awareness and full independence were no more interesting than robots going berserk and turning against their masters. While he did, over the course of his massive career, write a handful of both types of stories (still using the Three Laws), most of his robot stories dealt with robots as tools, because it made more sense. Almost all the stories surrounding Susan Calvin and her precursors are really about malfunctioning robots, and the mystery of investigating their behavior to discover the underlying conflicts.
    Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as Newton's Laws of Motion, the Theory of Relativity, the Laws of Gravity... wait... you know, they treated these laws better than they treated most real scientific principles.
    Of course, even these near-immutable laws were played with and modified. Asimov eventually took one of the common workarounds and formalized it as a Zeroth Law, which stated that the well-being of humanity as a whole could take precedence over the health of an individual human. Stories by other authors occasionally proposed additional extensions, including a -1st Law (sentience as a whole trumps humanity), a 4th (robots must identify themselves as robots), a different 4th (robots are free to pursue other interests when not acting on the 1st through 3rd Laws), and a 5th (robots must know they are robots), but unlike Asimov's own laws these are seldom referenced outside the originating work.
    The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does not obey the Three Laws, or even have them as part of its programming. An obvious example would be creating a Killer Robot for a purpose like fighting a war; for such a robot, the Three Laws would be a hindrance to its intended function. (Asimov did, however, suggest a workaround for this: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being itself an unmanned spaceship, it would assume that any other spaceships were unmanned as well.)
    Also see Second Law, My Ass. Examples of "Three Laws"-Compliant include:
Data on this page belongs to its respective rights holders.