
3 Laws of Robotics

Within Asimov's fiction, the Laws of Robotics are portrayed as something akin to a human religion and are referred to in the language of the Protestant Reformation: the set of laws that includes the Zeroth Law is known as the "Giskardian Reformation," as opposed to the "Calvinian Orthodoxy" of the original Three Laws. Zeroth-Law robots under the control of R. Daneel Olivaw are constantly struggling with "First Law" robots who deny the existence of the Zeroth Law and promote agendas different from Daneel's. [27] Some of these agendas are based on the first clause of the First Law ("A robot must not injure a human…"), which advocates strict non-interference in human politics so as not to cause harm unwittingly. Others are based on the second clause ("…or, through inaction, allow a human to come to harm") and argue that robots should openly become a dictatorial government to protect humans from any potential conflict or catastrophe.

The laws themselves are as follows: (1) a robot must not injure a human or, through inaction, allow a human to be injured; (2) a robot must obey orders given to it by humans, unless such orders conflict with the First Law; (3) a robot must protect its own existence as long as this protection does not conflict with the First or Second Law. Asimov later added another rule, known as the fourth or Zeroth Law, which precedes the others. It states that a robot must not harm humanity or, by inaction, allow humanity to come to harm.

The Three Laws form an organizing principle that unites Asimov's entire fictional world and can be seen across his many works. His stories often revolve around humanoid robots acting in ways that run up against these laws, highlighting the inherent conflict between humanity's understanding of morality and a humanoid android's interpretation of it. Asimov's work has inspired generations of people to imagine a world where humans and robots coexist.
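The strict precedence among the laws lends itself to a small illustration. The sketch below is a hypothetical model in Python (none of the names or structures come from Asimov's text): each law is a priority level, and a robot facing a conflict picks the action whose most important violated law is as unimportant as possible.

```python
# Hypothetical sketch: the Three Laws (plus the Zeroth Law) as an
# ordered precedence scheme. Lower-numbered laws outrank later ones.
LAWS = {
    0: "Do not harm humanity, or by inaction allow humanity to come to harm",
    1: "Do not injure a human, or by inaction allow a human to be injured",
    2: "Obey orders given by human beings",
    3: "Protect your own existence",
}

def choose(actions: dict[str, set[int]]) -> str:
    """Given candidate actions mapped to the set of law numbers each one
    would violate, pick the action whose most serious violated law is as
    unimportant as possible (4 means 'violates nothing')."""
    return max(actions, key=lambda a: min(actions[a], default=4))

# Example: an order (Second Law) whose execution would injure a human
# (First Law) must be refused, since the First Law outranks the Second.
actions = {
    "obey_order": {1},    # obeying would injure a human
    "refuse_order": {2},  # refusing only violates obedience
}
print(choose(actions))  # -> refuse_order
```

The point of the sketch is only that the hierarchy is lexicographic: a conflict is never weighed or averaged, a higher-ranked law simply wins.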
Now, 80 years after the Three Laws were first published, we are closer than ever to living in a world where humanoid androids exist and make our lives easier. As we continue to advance technologically, we must consider the Three Laws and their application to artificially intelligent beings. Over the past two decades, we have seen incredible advances in the field of AI.

AI will play an important role in our future. We already rely on AI-supported digital assistants accessible via our smartphones, and big companies like Amazon are in the early stages of rolling out fully autonomous vehicles to deliver packages. While modern AI can be programmed to perform various tasks once reserved for humans, we are not yet able to develop sentient AI beyond our control. Since this is the case, it is easy to program machines to obey our orders unconditionally. But we are steadily approaching a future in which machines may be able to think independently, which can lead to conflicts between humans and machines, especially when it comes to instilling human morality in an artificial being. For example, we program robots with safety protocols to prevent them from hurting the people around them. But what role does a robot play in protecting humanity from harm if it is smart enough to think for itself? The First Law of Robotics states that a robot must not injure a human; it also states that a robot must not, through inaction, allow a human to come to harm. What is an artificially intelligent being supposed to do when the only way to protect one person is to harm the person hurting them? In Asimov's fictional world, the Three Laws are supposed to make robots the perfect servants of humans. Yet Asimov intentionally wrote stories that expose the conflict between human morality and a robot's interpretation of it.

Unfortunately, these laws are not so easy for an AI to follow when applied in the real world. Such an AI must juggle its duty to serve humanity with the recognition that humanity is often its own worst enemy. Can an AI really protect humans and act at their whim when those same humans violate the laws the AI must work by? This is an uncomfortable question, and it becomes even more complicated once machine sentience enters the equation.

The original laws were amended and elaborated by Asimov and other authors. Asimov himself made slight changes to the first three in various books and short stories to further develop how robots interact with humans and with each other. In later novels, in which robots had taken responsibility for governing entire planets and human civilizations, Asimov added a fourth, or Zeroth, Law to precede the others. The Three Laws of Robotics are rules devised by science fiction author Isaac Asimov in an attempt to create an ethical system for humans and robots. They first appeared in his short story "Runaround" (1942) and later became very influential in the science fiction genre; they have since found relevance in discussions about technology, including robotics and AI. In the 1990s, Roger MacBride Allen wrote a trilogy set in Asimov's fictional universe.

Each title carries the prefix "Isaac Asimov's," because Asimov had approved Allen's outline before his death. These three books, Caliban, Inferno and Utopia, introduce a new set of Three Laws. The so-called New Laws are similar to Asimov's originals, with the following differences: the First Law is amended to remove the "inaction" clause, the same modification made in "Little Lost Robot"; the Second Law is amended to require cooperation instead of obedience; the Third Law is amended so that it is no longer overridden by the Second (i.e., a "New Law" robot cannot be ordered to destroy itself); finally, Allen adds a Fourth Law that directs the robot to "do what it wants," as long as this does not conflict with the first three laws. The philosophy behind these changes is that "New Law" robots should be partners rather than slaves of humanity, according to Fredda Leving, who designed these New Law robots.