Looking over the year that has passed, it is a nice question whether human stupidity or artificial intelligence has done more to shape events. Perhaps it is the convergence of the two that we really need to fear.
Artificial intelligence is a term whose meaning constantly recedes. Computers, it turns out, can do things that only the cleverest humans once could. But at the same time they fail at tasks that even the stupidest humans accomplish without conscious difficulty.
At the moment the term is mostly used to refer to machine learning: the techniques that enable computer networks to discover patterns hidden in gigantic quantities of messy, real-world data. It’s something close to what parts of biological brains can do. Artificial intelligence in this sense is what enables self-driving cars, which have to be able to recognise and act appropriately towards their environment.
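The kind of pattern-finding described above can be sketched in a few lines. This is a toy illustration of my own, not anything from the article: a perceptron, one of the oldest learning algorithms, "discovering" the boundary between two noisy clusters of points — the noise standing in for messy, real-world data.

```python
import random

random.seed(0)

def make_point(label):
    # Points scattered around (1, 1) for label +1 and around (-1, -1)
    # for label -1, with Gaussian noise standing in for messy data.
    cx = 1.0 if label == 1 else -1.0
    return (cx + random.gauss(0, 0.4), cx + random.gauss(0, 0.4)), label

data = [make_point(1) for _ in range(50)] + [make_point(-1) for _ in range(50)]

# The pattern is never programmed in; the weights start at zero and are
# nudged toward it each time the current guess is wrong.
w = [0.0, 0.0]
b = 0.0
for _ in range(20):  # a few passes over the data
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
        if pred != label:
            w[0] += label * x
            w[1] += label * y
            b += label

accuracy = sum(
    (1 if w[0] * x + w[1] * y + b > 0 else -1) == label
    for (x, y), label in data
) / len(data)
```

Nothing here resembles the deep networks behind self-driving cars, of course, but the principle — a rule extracted from examples rather than written by a programmer — is the same.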
It is what lies behind the eerie skills of face-recognition programs and what makes it possible for personal assistants such as smart speakers in the home to pick out spoken requests and act on them. And, of course, it is what powers the giant advertising and marketing industries in their relentless attempts to map and exploit our cognitive and emotional vulnerabilities.
Changing the game
Last year saw some astonishing breakthroughs, whose consequences will become clearer and more important. The first was conceptual: Google’s DeepMind subsidiary, which had already shattered the expectations of what a computer could achieve in chess, built a machine that can teach itself the rules of games of that sort and then, after two or three days of concentrated learning, beat every human and every other computer player there has ever been.
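Self-teaching of this kind can be shown in miniature. The sketch below is emphatically not DeepMind's method — AlphaZero combines deep networks with tree search — but it captures the idea of learning from scratch by self-play: tabular learning on a tiny game of Nim, where two players alternately remove one or two stones from a pile and whoever takes the last stone wins. The agent plays both sides against itself and learns from the win/loss signal alone; no strategy is programmed in.

```python
import random

random.seed(0)

Q = {}  # (stones_left, move) -> estimated value for the player to move

def best_move(stones, eps):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < eps:  # occasionally explore at random
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def train(episodes=20000, alpha=0.1, eps=0.2):
    for _ in range(episodes):
        stones = 7
        history = []  # (state, move) for each ply of the game
        while stones > 0:
            m = best_move(stones, eps)
            history.append((stones, m))
            stones -= m
        # Whoever made the last move won.  Walk the game backwards,
        # crediting the winner's moves (+1) and debiting the loser's (-1).
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward

train()
# Greedy policy after training: from each pile size, the learned best move.
policy = {s: best_move(s, eps=0.0) for s in range(1, 8)}
```

Optimal play in this game is to leave your opponent a multiple of three stones, and the self-taught policy rediscovers exactly that — from 7 it takes 1, from 5 it takes 2 — without ever being told the rule.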
AlphaZero cannot, however, master just any game. It works only for games with “perfect information”, where all the relevant facts are known to all the players. There is nothing in principle hidden on a chessboard – the blunders are all there, waiting to be made, as one grandmaster observed – but it takes a remarkable and, as it turns out, inhuman intelligence to see what’s contained in that simple pattern.
Computers that can teach themselves from scratch, as AlphaZero does, are a significant milestone in the progress of intelligent life on this planet. And there is a rather unnerving sense in which this kind of artificial intelligence seems already alive.
Compared with conventional computer programs, it acts for reasons incomprehensible to the outside world. It can be trained, as a parrot can, by rewarding the desired behaviour; in fact, this describes the whole of its learning process. But it can’t be consciously designed in all its details, in the way that a passenger jet can be. If an airliner crashes, it is in theory possible to reconstruct all the little steps that led to the catastrophe and to understand why each one happened, and how each led to the next. Conventional computer programs can be debugged that way. This is true even when they interact in baroquely complicated ways. But neural networks, the kind of software used in almost everything we call AI, can’t even in principle be debugged that way. We know they work, and can by training encourage them to work better. But in their natural state it is quite impossible to reconstruct the process by which they reach their (largely correct) conclusions.
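Both halves of that claim — training by reward, and opacity afterwards — fit in a short sketch. Below is a minimal neural network of my own devising, trained by gradient descent (the "rewarding" of desired behaviour) to compute XOR, the classic function no single-layer rule can capture. Once trained it answers correctly, yet its learned weights are just lists of numbers: unlike a conventional program, nothing in them reads as a traceable rule such as "output 1 when the inputs differ".

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # last entry is bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    out = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + w_o[H])
    return h, out

def mean_sq_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

initial_error = mean_sq_error()

lr = 1.0
for _ in range(10000):
    for x, target in data:
        h, out = forward(x)
        # Nudge every weight to reduce the error on this example --
        # the whole of the learning process, as the text says.
        d_out = (out - target) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
            w_o[j] -= lr * d_out * h[j]
        w_o[H] -= lr * d_out

final_error = mean_sq_error()
# w_h and w_o now encode the behaviour, but inspecting them yields only
# opaque real numbers -- there is no airliner-style chain of steps to audit.
```

Scaled up from eleven weights to millions, this opacity is exactly the debugging problem the paragraph describes.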
Friend or foe?
It is possible to make them represent their reasoning in ways that humans can understand. In fact, in the EU and Britain it may in certain circumstances be illegal not to: the General Data Protection Regulation (GDPR) gives people the right to know on what grounds computer programs make decisions that affect their future, although this has not been tested in practice. This kind of safety check is not just a precaution against the propagation of bias and wrongful discrimination: it is also needed to make the partnership between humans and their newest tools productive.
One of the least controversial uses of machine learning is in the interpretation of medical data: for some kinds of cancers and other disorders computers are already better than humans at spotting the dangerous patterns in a scan. But it is possible to train them further, so that they also output a checklist of factors which, taken together, lead to their conclusions, and humans can learn from these.
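The idea of a machine-generated checklist can be made concrete with a simple interpretable model. The example below is entirely synthetic — the feature names are hypothetical and no real medical data or clinical model is involved: a logistic model over named features can report which factors pushed it toward its conclusion, and by how much, in a form a human can read and learn from.

```python
import math
import random

random.seed(2)

# Hypothetical feature names, for illustration only.
FEATURES = ["irregular_border", "asymmetry", "diameter_mm"]

def make_case():
    x = [random.random(), random.random(), random.uniform(0, 10)]
    # A made-up ground-truth rule, used only to generate toy labels.
    risk = 2.0 * x[0] + 1.5 * x[1] + 0.3 * x[2] - 2.5
    y = 1 if risk + random.gauss(0, 0.3) > 0 else 0
    return x, y

data = [make_case() for _ in range(2000)]

# Train a logistic model by stochastic gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y
        for i in range(3):
            w[i] -= lr * g * x[i]
        b -= lr * g

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)

def explain(x):
    # The "checklist": each factor's contribution to this decision,
    # largest first, readable by a human.
    return sorted(
        ((name, wi * xi) for name, wi, xi in zip(FEATURES, w, x)),
        key=lambda t: -t[1],
    )
```

Deep networks are not transparent in this way by default, which is why the paragraph speaks of training them *further* to emit such factor lists; this sketch only shows what the output of that extra step might look like.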
The second great development of the last year makes bad outcomes much more likely. This is the much wider availability of powerful software and hardware. Although vast quantities of data and computing power are needed to train most neural nets, once trained a net can run on very cheap and simple hardware. This is often called the democratisation of technology but it is really the anarchisation of it. Democracies have means of enforcing decisions; anarchies have no means even of making them. The spread of these powers to authoritarian governments on the one hand and criminal networks on the other poses a double challenge to liberal democracies. Technology grants us new and almost unimaginable powers but at the same time it takes away some powers, and perhaps some understanding too, that we thought we would always possess.