Hollywood’s theory that machines with evil (邪恶的) minds will drive armies of killer robots is just silly. The real problem relates to the possibility that artificial intelligence (AI) may become extremely good at achieving something other than what we really want. In 1960 the well-known mathematician Norbert Wiener, who founded the field of cybernetics (控制论), put it this way: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot effectively interfere (干预), we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

A machine with a specific purpose has another quality, one that we usually associate with living things: a wish to preserve its own existence. For the machine, this quality is not inborn, nor is it something introduced by humans; it is a logical consequence of the simple fact that the machine cannot achieve its original purpose if it is dead. So if we send out a robot with the single instruction of fetching coffee, it will have a strong desire to secure success by disabling its own off switch or even killing anyone who might interfere with its task. If we are not careful, then, we could face a kind of global chess match against very determined, super intelligent machines whose objectives conflict with our own, with the real world as the chessboard.

The possibility of entering into and losing such a match should concentrate the minds of computer scientists. Some researchers argue that we can seal the machines inside a kind of firewall, using them to answer difficult questions but never allowing them to affect the real world. Unfortunately, that plan seems unlikely to work: we have yet to invent a firewall that is secure against ordinary humans, let alone super intelligent machines.

Solving the safety problem well enough to move forward in AI seems to be possible but not easy. There are probably decades in which to plan for the arrival of super intelligent machines. But the problem should not be dismissed out of hand, as it has been by some AI researchers. Some argue that humans and machines can coexist as long as they work in teams—yet that is not possible unless machines share the goals of humans. Others say we can just “switch them off,” as if super intelligent machines are too stupid to think of that possibility. Still others think that super intelligent AI will never happen. On September 11, 1933, the famous physicist Ernest Rutherford stated, with confidence, “Anyone who expects a source of power in the transformation of these atoms is talking moonshine.” However, on September 12, 1933, the physicist Leo Szilard invented the neutron-induced (中子诱导) nuclear chain reaction.

  1. Paragraph 1 mainly tells us that artificial intelligence may _.
    A. run out of human control
    B. satisfy humans’ real desires
    C. command armies of killer robots
    D. work faster than a mathematician
    Answer: A

  2. Machines with specific purposes are associated with living things partly because they might be able to _.
    A. prevent themselves from being destroyed
    B. achieve their original goals independently
    C. do anything successfully with given orders
    D. beat humans in international chess matches
    Answer: A

  3. According to some researchers, we can use firewalls to _.
    A. help super intelligent machines work better
    B. be secure against evil human beings
    C. keep machines from being harmed
    D. avoid robots’ affecting the world
    Answer: D

  4. What does the author think of the safety problem of super intelligent machines?
    A. It will disappear with the development of AI.
    B. It will get worse with human interference.
    C. It will be solved but with difficulty.
    D. It will stay for a decade.
    Answer: C