
卫报观点: 人工智能的未来

The Guardian view on the future of AI: great power, great irresponsibility



The vexed politics of our times has obscured the view ahead. Over the holidays we have been examining some big issues on the horizon. Today, in our final instalment, we look at the spread of artificial intelligence.




Looking over the year that has passed, it is a nice question whether human stupidity or artificial intelligence has done more to shape events. Perhaps it is the convergence of the two that we really need to fear.

回顾过去的一年,人类的愚蠢与人工智能究竟哪一个对时局的影响更大,是一个耐人寻味的问题。也许我们真正需要担心的,是两者的合流。


Artificial intelligence is a term whose meaning constantly recedes. Computers, it turns out, can do things that only the cleverest humans once could. But at the same time they fail at tasks that even the stupidest humans accomplish without conscious difficulty.

人工智能这个术语的含义一直在不断退缩。事实证明,计算机可以做到一些曾经只有最聪明的人才能做到的事情;但与此同时,即使是最愚笨的人不假思索就能完成的任务,它们也会做不好。


At the moment the term is mostly used to refer to machine learning: the techniques that enable computer networks to discover patterns hidden in gigantic quantities of messy, real-world data. It’s something close to what parts of biological brains can do. Artificial intelligence in this sense is what enables self-driving cars, which have to be able to recognise and act appropriately towards their environment. 

目前,这个术语主要用来指机器学习:一类让计算机网络能够从海量杂乱的真实世界数据中发现隐藏模式的技术。这与生物大脑某些部分的功能颇为接近。正是这个意义上的人工智能让自动驾驶汽车成为可能,因为它们必须能够识别周围环境并做出恰当的反应。

 

It is what lies behind the eerie skills of face-recognition programs and what makes it possible for personal assistants such as smart speakers in the home to pick out spoken requests and act on them. And, of course, it is what powers the giant advertising and marketing industries in their relentless attempts to map and exploit our cognitive and emotional vulnerabilities.

人脸识别程序那些令人不安的本领,背后靠的正是它;家中的智能音箱之类的个人助理之所以能听懂语音指令并加以执行,靠的也是它。当然,也正是它在驱动着庞大的广告和营销行业,使其得以不懈地描绘并利用我们在认知和情感上的弱点。



Changing the game

改变格局


Last year saw some astonishing breakthroughs, whose consequences will become clearer and more important. The first was conceptual: Google’s DeepMind subsidiary, which had already shattered the expectations of what a computer could achieve in chess, built a machine that can teach itself the rules of games of that sort and then, after two or three days of concentrated learning, beat every human and every other computer player there has ever been.

去年出现了一些惊人的突破,其影响将变得越来越清晰,也越来越重要。第一个突破是概念性的:谷歌旗下的DeepMind公司此前已经打破了人们对计算机在国际象棋上所能达到的水平的预期,如今又造出了一台能够自学这类棋类游戏规则的机器,它经过两三天的集中学习,便击败了有史以来所有的人类棋手和其他所有电脑棋手。


AlphaZero cannot master the rules of any game. It works only for games with “perfect information”, where all the relevant facts are known to all the players. There is nothing in principle hidden on a chessboard – the blunders are all there, waiting to be made, as one grandmaster observed – but it takes a remarkable, and, as it turns out, inhuman intelligence to see what’s contained in that simple pattern.

不过,并非任何游戏的规则AlphaZero都能掌握。它只适用于具有“完美信息”的游戏,即所有相关事实对所有玩家都公开的游戏。棋盘上原则上没有任何隐藏的东西,正如一位特级大师所言,所有的昏招都摆在那里,等着人去犯;但要看出这简单的盘面中究竟蕴含着什么,需要非凡的,而且事实证明是非人类的智能。


Computers that can teach themselves from scratch, as AlphaZero does, are a significant milestone in the progress of intelligent life on this planet. And there is a rather unnerving sense in which this kind of artificial intelligence seems already alive.

像AlphaZero这样能够从零开始自学的计算机,是这个星球上智能生命演进历程中的一个重要里程碑。而且,从某种相当令人不安的意义上说,这种人工智能似乎已经具有了生命。


Compared with conventional computer programs, it acts for reasons incomprehensible to the outside world. It can be trained, as a parrot can, by rewarding the desired behaviour; in fact, this describes the whole of its learning process. But it can’t be consciously designed in all its details, in the way that a passenger jet can be. If an airliner crashes, it is in theory possible to reconstruct all the little steps that led to the catastrophe and to understand why each one happened, and how each led to the next. Conventional computer programs can be debugged that way. This is true even when they interact in baroquely complicated ways. But neural networks, the kind of software used in almost everything we call AI, can’t even in principle be debugged that way. We know they work, and can by training encourage them to work better. But in their natural state it is quite impossible to reconstruct the process by which they reach their (largely correct) conclusions.

与传统的计算机程序相比,它行事的缘由是外界无法理解的。它可以像鹦鹉一样,通过奖励所期望的行为来加以训练;事实上,它的整个学习过程就是如此。但它无法像一架客机那样,在每一个细节上都经过有意识的设计。如果一架客机坠毁,理论上可以重建导致灾难的每一个小步骤,弄清每一步为什么会发生,以及每一步如何导致了下一步。传统的计算机程序可以用这种方式来调试,即使它们以极其繁复的方式相互作用也是如此。但神经网络,也就是我们称为AI的几乎一切东西所使用的那类软件,原则上就无法用这种方式调试。我们知道它们有效,并且可以通过训练促使它们工作得更好;但在其自然状态下,根本不可能重建它们得出(大体正确的)结论的过程。


Friend or foe?

是敌是友?


It is possible to make them represent their reasoning in ways that humans can understand. In fact, in the EU and Britain it may be illegal not to in certain circumstances: the General Data Protection Regulation (GDPR) gives people the right to know on what grounds computer programs make decisions that affect their future, although this has not been tested in practice. This kind of safety check is not just a precaution against the propagation of bias and wrongful discrimination: it’s also needed to make the partnership between humans and their newest tools productive.

让它们以人类能够理解的方式呈现自己的推理过程,是可以做到的。事实上,在欧盟和英国,某些情况下不这样做可能还是违法的:《通用数据保护条例》(GDPR)赋予人们权利,去了解计算机程序是依据什么做出影响其未来的决定的,尽管这一点尚未在实践中得到检验。这种安全检查不仅是防止偏见和不当歧视蔓延的预防措施,也是让人类与其最新工具之间的合作富有成效所必需的。


One of the least controversial uses of machine learning is in the interpretation of medical data: for some kinds of cancers and other disorders computers are already better than humans at spotting the dangerous patterns in a scan. But it is possible to train them further, so that they also output a checklist of factors which, taken together, lead to their conclusions, and humans can learn from these. 

机器学习争议最少的用途之一是解读医学数据:对于某些类型的癌症和其他疾病,计算机已经比人类更善于在扫描影像中发现危险的病变模式。但还可以对它们进行进一步训练,让它们同时输出一份因素清单,说明是哪些因素共同导致了它们的结论,而人类则可以从中学习。


Power struggle

权力之争


The second great development of the last year makes bad outcomes much more likely. This is the much wider availability of powerful software and hardware. Although vast quantities of data and computing power are needed to train most neural nets, once trained a net can run on very cheap and simple hardware. This is often called the democratisation of technology but it is really the anarchisation of it. Democracies have means of enforcing decisions; anarchies have no means even of making them. The spread of these powers to authoritarian governments on the one hand and criminal networks on the other poses a double challenge to liberal democracies. Technology grants us new and almost unimaginable powers but at the same time it takes away some powers, and perhaps some understanding too, that we thought we would always possess.

去年的第二项重大发展使糟糕的结果变得更有可能出现,那就是强大的软件和硬件可以被更广泛地获得。虽然训练大多数神经网络需要海量的数据和算力,但网络一旦训练完成,就可以在非常便宜、简单的硬件上运行。这常常被称为技术的民主化,但它实际上是技术的无政府化。民主政体有执行决定的手段;无政府状态甚至连做出决定的手段都没有。这些能力一方面扩散到威权政府手中,另一方面扩散到犯罪网络手中,对自由民主国家构成了双重挑战。技术赋予我们新的、几乎难以想象的力量,但与此同时,它也夺走了一些我们原以为会永远拥有的力量,也许还有一些理解力。


  翻译:刘铭

