Google: The Imminent Age of Artificial Intelligence

(Note: the photo above shows the subject of this interview, Demis Hassabis.)

In the previous installment, "Google's Great Journey, Part 4: Exploring Users' Information Needs," we described in detail how Google runs careful experiments and research to understand people's demand for information. In fact, all the products we discussed ultimately point toward artificial intelligence. So how far along is this imminent AI today? This time, we turn to the format of an interview...

In 2011, Demis Hassabis co-founded DeepMind, a UK-based startup exploring the frontiers of artificial intelligence. From that day on, the company gradually became one of the most coveted acquisition targets among the world's big technology companies. Finally, in June 2014, Hassabis and his co-founders Shane Legg and Mustafa Suleyman agreed to Google's purchase offer of $400 million. At the end of 2014, Hassabis sat down with Backchannel to discuss why his team chose to join Google, and why DeepMind matters so much to the artificial intelligence research Google is pursuing. The interview has been edited for length and clarity.

On the union of DeepMind and Google

Q: Google is an AI company, right? Is that what attracted you to join Google?

Hassabis: Yes. AI really is the core of what Google is. When I started my company, I thought back to Google's stated mission: to keep organizing the world's information, making it more useful and universally accessible. The way I interpret that is empowering people through knowledge and information. If you read it that way, you can see that the nature of the AI work we do fits that vision perfectly. Our research aims at systems that automatically convert unstructured information into knowledge that is more useful and actionable.

Q: Did Larry Page play a decisive role in your decision to join Google?

A: Yes, that was a really big factor. Larry and a few others are genuinely passionate about AI. I know several other big companies have realized the power of AI and want to do more with it, but I don't think they have the kind of passion that Larry and Google have. Even if Facebook comes to play a leading role in AI in the future, Mark Zuckerberg sees AI more as a tool than as part of some grander, more ambitious vision built on top of it. I really believe that the enthusiasm of Facebook and several other big tech companies for AI is far from matching Google's. Zuckerberg is interested in other technologies too: "connecting people" is his mission, and he likes things like the virtual reality technology of Oculus. AI is not the only field he cares about.

Q: What concrete boost will your technology get from joining Google?

A: A huge one. That is another reason we joined Google. Before the acquisition we already had strong backers and plenty of cash, but we had nothing like a proper computing and engineering infrastructure. That is exactly Google's strength; they have spent a decade perfecting it. Now, because we can run a million experiments in parallel, we can carry out our research extremely quickly and efficiently.

Q: The really substantial leap you are making is not digging into databases that have already been organized, but analyzing unorganized information, such as the documents and images flowing around the Internet, and finding better ways to use them. Is that right?

A: Exactly right. That is where one of the big technical advances of the next few years will come from. I also believe that if you want the most powerful AI, learning to make effective use of unstructured information is the only path. This is also called unsupervised learning: you just give the system data, and it works out by itself what to do with it, what structure it contains, and what the real insights are. That is the only kind of AI we are interested in.

Q: I hear that one of the people you work with at Google is Geoff Hinton, a pioneer of neural networks. Has his work been crucial to yours?

A: Of course. In 2006 he opened a new chapter for the whole field by introducing deep neural networks: deep learning. The other thing we consider equally important is reinforcement learning. What DeepMind has done is combine these two areas of research.
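As a rough illustration of the reinforcement-learning half of that combination, here is a minimal tabular Q-learning sketch. This is our own toy example, not DeepMind's code; their Atari agent replaces the lookup table with a deep network, which is exactly the combination Hassabis describes:

```python
import random

# Toy world: a corridor of states 0..4; reward 1 only on reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]          # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        # standard Q-learning update toward reward plus discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# Greedy policy per non-terminal state; once values have propagated,
# every state prefers stepping right (+1).
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)   # -> [1, 1, 1, 1]
```

The learner is never told the rules of the corridor; it discovers the "walk right" strategy purely from experienced rewards, which is the sense in which such agents learn for themselves.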

On DeepMind's research directions in AI

Q: Can you explain further what is distinctive about your research?

A: The company I founded is called DeepMind; as the name suggests, the emphasis is on deep learning. But we are equally interested in neuroscience.

I believe that the more we understand about the brain, the more likely we are to build a machine that approaches human intelligence.

You know, the really exciting thing about these learning algorithms is that they operate at a kind of meta level. We give the system the ability to keep learning and improving from its own experience, just as a person does while growing up; perhaps once it has grown far enough, it will figure out things that we humans would not even know how to program. That unknown lurking at the frontier is what makes this so exciting. Of course, the process requires top programmers and researchers to build a brain-like architecture and system capable of carrying out this learning.

In other words, we are pooling a great deal of human intelligence to build one system. The system is not tied to a single program: the software we develop will be able to play chess as well as board games like noughts and crosses or draughts, without being reprogrammed for each new game. The algorithm can carry experience from one domain into an entirely new one. Take us humans: if you already play poker, or some basic board games, and I hand you a new poker variant or a new kind of board game, you will pick it up quickly, because you have internalized some basic principles, such as a high card beating a low card, rules that carry across many card games.

Q: From your description, would such a program still be limited, say, merely one that can play many kinds of card games? Rather than a program that can handle any task?

A: Eventually it is something more general than that. Our research program is widening its domains bit by bit. And we already have a reference prototype: the human brain. We can imitate its circuits and structures, so we know all of this is possible.

Q: Tell us about the two companies, both out of Oxford University, that you just bought.

A: These Oxford guys are amazingly talented. One team (formally known as Dark Blue Labs) focuses on natural language understanding using deep neural networks, an approach different from traditional natural-language-processing (NLP) techniques. That project is led by Phil Blunsom. We hope eventually to embed language into our systems so they can actually converse. For now, of course, they are obviously prelinguistic; there is no language capability in them yet. The second team, Vision Factory, is led by Andrew Zisserman, a world-famous computer vision expert. The work of these two research teams will ultimately feed into one and the same engine.
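The deep-network approach to language that Blunsom's team pursues rests on word embeddings: each word becomes a vector, and words used similarly end up pointing in similar directions. A minimal sketch of the idea, with hand-made three-dimensional vectors (the numbers are invented for this example, not taken from any trained model):

```python
import math

# Hand-crafted toy "embeddings" -- invented numbers, purely illustrative.
vectors = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.75, 0.20],
    "banana": [0.10, 0.05, 0.90],
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))   # high: related words
print(cosine(vectors["king"], vectors["banana"]))  # low: unrelated words
```

In a real system the vectors are learned from large text corpora rather than written by hand, but the geometric intuition — meaning as direction in a vector space — is the same.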

Q: Which of Google's products will your team's research mainly be applied to?

A: We have only just joined Google, but we can already see tons of products that could use our technology. We are trying things in several directions around AI: exploring more accurate, personalized recommendations on YouTube; thinking about how to make Google Now more powerful, a true assistant at your side that understands what you are about to do and gives you the answers you want ahead of time; and we are also considering how AI might hook into self-driving cars.

Q: When will we see real results?

A: Within the next six months to a year you will see this technology applied to Google Plus, natural language processing and some recommendation features; by then some fairly representative, concrete and complete features and products will have appeared.

Q: What about video search? (Here, video search does not mean typing text to find videos in the traditional sense; the input shifts from words to actions.)

A: That is another big technical breakthrough. When you want video clips of people kicking a ball, or of people smoking, you would no longer type text; you would simply perform the corresponding action. The Vision team is working on exactly this kind of research: action recognition, not just the image recognition we had before.

Q: In the long run, what do you hope to do for Google?

A: What excites me most is the potential of AI itself. The fields AI touches are all-encompassing: medicine, climate, energy, even macroeconomics; any field that involves integrating massive amounts of information can use AI. Integrating and applying all that data is simply impossible without AI; no single brilliant scientist, and no top team of scientists, can manage it alone. We need machine learning and artificial intelligence to help us make research breakthroughs in these areas. My hope is that this vision will connect up with Google's efforts in the future.

On the current state of the AI field

Q: What did you think of the movie Her? (The film tells the story of the writer Theodore, who, after a heartbreaking long-term relationship ends, falls in love with the female voice of his computer's operating system. "Samantha" (Scarlett Johansson) has not only a slightly husky, seductive voice but is also witty and empathetic, and the lonely protagonist falls hopelessly for her.)

A: I thought the film was beautiful, and I loved it. It reflects humanity's hopeful imagination of future AI, and it takes an interesting stance on whether computers can have emotions and other human-like responses. But I don't find it very realistic. There is already AI on our phones today handling everyday tasks; an AI that powerful ought to be revolutionizing scientific research, rather than doing only what the film shows...

Q: You have run many successful experiments. How hard is it to integrate those results into one system that hundreds of millions of people can use?

A: I see it as a staged, step-by-step process. Research always begins with certain questions, and we look for answers through experiments; then we do some serious neuroscience, and finally we move to machine learning and start designing a system that can be used in practice. About three quarters of DeepMind's team are researchers, and the remaining quarter work on applications. That quarter is the bridge between the research done here and Google's existing products.

Q: You used to work in game development, with great success. But then you suddenly felt you had to study the human brain, and left the industry?

A: Yes, the entire center of gravity of my career shifted; I bet my whole future on an AI company. In fact, ever since my early teens I have believed that the most important role in the future would be played by artificial intelligence.

Q: But you were at the top of the games industry; you brought some of its best games to market and co-founded Elixir Studios. Was the thought at the time really just, "OK, time to start learning neuroscience"? Was it that simple? Tell me how you made the transition.

A: A better way to put it would be: "Let's see how far AI can go under the guise of games." Black & White was probably the pinnacle of the games I made, followed by Theme Park and others, but by 2004 to 2005, working in games, I felt I was pushing AI forward inside the tight constraints of a commercial environment. I could see that games were going to become simpler and more mobile, and everything that has happened since has borne out that prediction. So by my judgment, the chance of running a large AI project inside a game project was getting smaller and smaller. I started thinking about founding DeepMind back in 2004, but at that time I realized the external conditions for the research were not yet fully in place. Deep learning had not appeared yet, and computing power was not strong enough. Given that, I asked myself which field I should put my energy into, and concluded that rather than diving straight into AI, it would be better to immerse myself in neuroscience first; I hoped that detour would bring me a whole new set of ideas.

Q: Over all those years studying the brain, what was your biggest takeaway when you founded the AI company?

A: Far too many things. One of them is reinforcement learning. Neuroscience revealed some of the mechanisms behind reinforcement learning and demonstrated that this kind of AI is feasible. In the late 1990s, Peter Dayan and his colleagues ran experiments on monkeys showing that the monkeys' neural circuits really were performing reinforcement learning as they learned, so it is not a fantasy to conclude that such learning could be one component of an overall AI system. If you are groping in the dark, a result like that is a light shining from the distance, telling you that you are not mad, that all of this can work if you just push a little harder. The other thing is the hippocampus: my understanding of it deepened further, and in fact studying, exploring and making discoveries about it was genuinely thrilling.

Q: Why?

A: Deep learning mainly corresponds to the cortex, but the hippocampus is another critically important region, built completely differently from the cortex. Without it, human memory would be impossible. So I very much wanted to understand how it all works together; for instance, what information is transferred and consolidated between the cortex and the hippocampus while you sleep. The memories you accumulate during the day are replayed back to the rest of the brain at great speed, ordered by importance. All of this is hugely significant for AI research.
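In the original interview Hassabis notes that DeepMind borrowed exactly this replay idea for its Atari agent: transitions experienced during play are stored and replayed to the learner many times over. A minimal sketch of such an experience-replay buffer (the class name and sizes here are our own invention for illustration, not DeepMind's):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (state, action, reward, next_state) transitions for replay."""

    def __init__(self, capacity=10_000):
        # deque with maxlen: the oldest memories fall out first
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random minibatch breaks the temporal correlation
        # between consecutive moments of play.
        return random.sample(self.buffer, batch_size)

# Record 250 toy transitions into a buffer that holds only the last 100.
buf = ReplayBuffer(capacity=100)
for t in range(250):
    buf.add(t, t % 4, float(t % 2), t + 1)

print(len(buf.buffer))        # -> 100: capacity caps what is remembered
batch = buf.sample(32)        # one training minibatch, replayed at will
```

Each stored transition can be sampled hundreds of times, which is the artificial counterpart of a day's memories being replayed from hippocampus to cortex during sleep.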

Q: When you talk about "the algorithms of the brain," is that a metaphor, or does the brain literally run some algorithm?

A: I lean toward the literal reading. But we are not actually going to build a complete artificial hippocampus. What we stress is the functionality of intelligence. We frame it this way to emphasize the importance of the brain and to connect it to AI. Far too many people doing machine learning ignore the brain's importance.

Q: What is the hardest problem you face in your work?

A: The hardest is achieving what we call "transfer learning." Once you have mastered the skills of one domain, can you extract some principles from it and store them in a kind of library for future use, especially for the moment you encounter a new domain? We are already quite good at processing perceptual information and basing actions on it, but as soon as it comes to something as fuzzy as "concepts," no one to this day has managed that kind of transfer.

On the mysterious "AI ethics board"

Q: I hear that when you agreed to the Google acquisition, you set a condition: the company had to establish something like an AI ethics board. Can you tell us about that?

A: It was part of the acquisition agreement. It is a relatively independent advisory committee, no different from those in other fields.

Q: Why did you do that?

A: I believe AI is about to change the world; it is a breathtaking technology. But we must always remember that all technology is neutral: it can serve human well-being or bring disaster. So we have to make sure it is used responsibly. The other founders and I have been thinking about this for a very long time.

Q: Has the group done any work yet?

A: Not yet, of course; the point is to be prepared well before any problem arises. But we do have some matters of principle: for example, no technology coming out of DeepMind may be used for military or intelligence purposes.

Q: Do you believe such a committee can really constrain AI?

A: I believe that as the committee matures, it will naturally achieve what we intended. That is also why it was set up so early: the members have enough time to think through and explore every detail of this technology, taking in both the good and the bad. They come from computing, neuroscience and machine learning, all top scholars and professors.

Q: So the committee has now been formally established?

A: It has, but I cannot tell you who is on it.

Q: Why not?

A: Because it is all confidential. We think secrecy is very important at this stage, while the organization is still maturing. This is, after all, a technology that will shake the whole world.

Q: Will you eventually release the members' names?

A: Potentially. After all, in the field of AI, transparency is also very important, a link in the chain that deserves our close attention.

Source: http://tech2ipo.com/95605

Original: https://medium.com/backchannel/the-deep-mind-of-demis-hassabis-156112890d8a

The original English text follows:

From the day in 2011 that Demis Hassabis co-founded DeepMind—with funding by the likes of Elon Musk—the UK-based artificial intelligence startup became the most coveted target of major tech companies. In June 2014, Hassabis and his co-founders, Shane Legg and Mustafa Suleyman, agreed to Google’s purchase offer of $400 million. Late last year, Hassabis sat down with Backchannel to discuss why his team went with Google—and why DeepMind is uniquely poised to push the frontiers of AI. The interview has been edited for length and clarity.

[Steven Levy] Google is an AI company, right? Is that what attracted you to Google?

[Hassabis] Yes, right. It’s a core part of what Google is. When I first started here I thought about Google’s mission statement, which is to organize the world’s information and make it universally accessible and useful. And one way I interpret that is to think about empowering people through knowledge. If you rephrase it like that, the kind of AI we work on fits very naturally. The artificial general intelligence we work on here automatically converts unstructured information into useful, actionable knowledge.


Demis Hassabis. Photo: Souvid Datta/Backchannel
Were your interactions with Larry Page a big factor in your decision to sell to Google?

Yes, a really big factor. Larry specifically and other people were genuinely interested in AI as a cool thing. Many big companies realize the power of AI now and want to do some AI, but I don’t think they’re as passionate about it as we are or Google is.

So even though Facebook may have super intelligent leadership, Mark [Zuckerberg] might see AI as more of a tool than a mission in a larger sense?

Right, yes. That may change over time. I certainly believe AI is one of the most important things humanity can work on but he hasn’t got a deep rooted interest in it that someone like Larry has. He’s interested in other things— connecting people is his mission. And he’s interested in very cool things like Oculus and stuff like that. I used to do computer games and graphics and that stuff but it’s not as important to me as AI.

How big of a boost is it to use Google’s infrastructure?

It’s huge. That’s another big reason we teamed up with Google. We had tons of venture money and amazing backers, but to build the computer infrastructure and engineering infrastructure that Google had would have taken a decade. Now we can do our research much more quickly because we can run a million experiments in parallel.

The big leap you are making is not only to dig into things like structured databases but to analyze unstructured information — such as documents or images on the Internet — and be able to make use of them as well, right?

Exactly. That’s where the big gains are going to be in the next few years. I also think the only path to developing really powerful AI would be to use this unstructured information. It’s also called unsupervised learning— you just give it data and it learns by itself what to do with it, what the structure is, what the insights are. We are only interested in that kind of AI.

One of the people you work with at Google is Geoff Hinton, a pioneer of neural networks. Has his work been crucial to yours?

Sure. He had this big paper in 2006 that rejuvenated this whole area. And he introduced this idea of deep neural networks—Deep Learning. The other big thing that we have here is reinforcement learning, which we think is equally important. A lot of what Deep Mind has done so far is combining those two promising areas of research together in a really fundamental way. And that’s resulted in the Atari game player, which really is the first demonstration of an agent that goes from pixels to action, as we call it.

What was different about your approach to research here?

We called the company Deep Mind, obviously, because of the bet on deep learning. But we also were deeply interested in getting insights from neuroscience.

I imagine that the more we learn about the brain, the better we can create a machine approach to intelligence.

Yes. The exciting thing about these learning algorithms is they are kind of meta level. We’re imbuing it with the ability to learn for itself from experience, just like a human would do, and therefore it can do other stuff that maybe we don’t know how to program. It’s exciting to see that when it comes up with a new strategy in an Atari game that the programmers didn’t know about. Of course you need amazing programmers and researchers, like the ones we have here, to actually build the brain-like architecture that can do the learning.

In other words, we need massive human intelligence to build these systems but then we’ll —

… build the systems to master the more pedestrian or narrow tasks like playing chess. We won’t program a Go program. We’ll have a program that can play chess and Go and Crosses and Drafts and any of these board games, rather than reprogramming every time. That’s going to save an incredible amount of time. Also, we’re interested in algorithms that can use their learning from one domain and apply that knowledge to a new domain. As humans, if I show you some new board game or some new task or new card game, you don’t start from zero. If you know to play bridge and whist and whatever, I could invent a new card game for you, and you wouldn’t be starting from scratch—you would be bringing to bear this idea of suits and the knowledge that a higher card beats a lower card. This is all transferable information no matter what the card game is.


Would each program be limited — like one that plays lots of card games — or are you thinking of one massive system that learns how to do everything?

Eventually something more general. The idea for our research program is to slowly widen and widen those domains. We have a prototype of this — the human brain. We can tie our shoelaces, we can ride cycles and we can do physics with the same architecture. So we know this is possible.

Tell me about the two companies, both out of Oxford University, that you just bought.

These Oxford guys are amazingly talented groups of professors. One team [formerly Dark Blue Labs] will focus on natural language understanding, using deep neural networks to do that. So rather than the old kind of logic techniques for NLP, we’re using deep networks and word embeddings and so on. That’s led by Phil Blunsom. We’re interested in eventually having language embedded into our systems so we can actually converse. At the moment they are obviously prelinguistic—there is no language capability in there. So we’ll see all of those things marrying up. And the second group, Vision Factory, is led by Andrew Zisserman, a world famous computer vision guy.

But all of this research would all eventually be part of the same engine.

Yeah. Eventually all of those things become part of one bigger system.

What products at Google is your team looking to improve?

We still feel quite new to Google, but there’s tons of things we could apply parts of our technology to. We’re looking at various aspects of search. We’re looking at stuff like YouTube recommendations. We’re thinking about making Google Now better in terms of how well it understands you as an assistant and actually understands more about what you’re trying to do. We’re looking at self-driving cars and maybe helping out with that.

When will we see this happening?

In six months to a year’s time we’ll start seeing some aspects of what we’re doing embedded in Google Plus, natural language and maybe some recommendation systems.

How about video search?

That’s another big thing—do you want to type in actions like someone kicking a ball or smoking or something like that? The Vision group is working on those kinds of questions. Action recognition, not just image recognition.

What do you hope to do for Google in the long run?

I’m really excited about the potential for general AI. Things like AI-assisted science. In science, almost all the areas we would like to make more advances in—disease, climate, energy, you could even include macroeconomics— are all questions of massive information, almost ridiculous amounts. How can human scientists navigate and find the insights in all of that data? It’s very hard not just for a single scientist, but even a team of very smart scientists. We’re going to need machine learning and artificial intelligence to help us find insights and breakthroughs in those areas, so we actually really understand what these incredibly complex systems are doing. I hope we will be linking into various efforts at Google that are looking at these things, like Calico or Life Sciences.

What did you think of the movie Her?

I loved it aesthetically. It’s in some ways a positive take on what AI might become and it had interesting ideas about emotions and other things in computers. I do think it’s sort of unrealistic, in that there was this very powerful AI out there but it was stuck on your phone and just doing fairly everyday things. Whereas it should have been revolutionizing science and…there wasn’t any evidence of anything else going on in the world that was very different, right?

You’ve had successful experiments, but how difficult is it to build those into a system that hundreds of millions of people will use?

It’s a multi-step process. You start with the research question and find that answer. Then we do some major neuroscience and then we look at it in machine learning and we implement a practical system that can play Atari really well and then that’s ready to scale. Here at Deep Mind about three quarters of the team is research but one quarter is applied. That team is the interface between the research that gets done here and the rest of Google’s products.

You had a fantastic career in the gaming world and you left it because you felt you had to learn about the brain.

Yeah. Actually my whole career, including my games career, has been leading up to the AI company. Even in my early teens I decided that AI was going to be the most interesting to work on and the most important thing to work on.

But you were at the top of the game world — you worked on huge hits like Black and White and founded Elixir Studios — and you just thought, “OK, time to study neuroscience?”

It was more like, “Let’s see how far I can push AI under the guise of games.” So Black & White was probably the pinnacle of that, then it was Theme Park and Republic and these other things that we tried to write. And then around 2004–2005, I felt we’d pushed AI as far as it could go within the constraints of the very tight commercial environment of games. And I could see that games were going to go more towards simpler games and mobile — as they have done— and so actually there would be less chance to work on a big AI project within a game project. So then I started thinking about Deep Mind — this is 2004 — but I realized that we still didn’t have enough of the components to make fast progress. Deep Learning hadn’t appeared at that point. Computing power wasn’t powerful enough. So I looked at which field should I do my PhD in and thought it would be better to do it in neuroscience than in AI, because I wanted to learn about a whole new set of ideas and I already knew world-class AI people.

In your years of studying the brain, what was the biggest takeaway as you started an AI company?

Lots of things. One is reinforcement learning. Why do we believe that that’s an important core component? One thing we do here is look into neuroscience inspiration for new algorithms and also validation of existing algorithms. Well it turns out in the late ‘90s, Peter Dayan and colleagues were involved in an experiment using monkeys, which showed that their neurons were really doing reinforcement learning when they were learning about things. Therefore it’s not crazy to think that that could be a component of an overall AI system. When you’re in the dark moments of trying to get something working, it’s useful to have that additional information—to say, “We’re not mad, this will really work, we know this works—we just need to try harder.” And the other thing is the hippocampus. That’s the brain area I studied, and it’s the most fascinating.

Why?

Deep Learning is about essentially [mimicking the] cortex. But the hippocampus is another critical part of the brain and it’s built very differently, a much older structure. If you knock it out, you don’t have memories. So I was fascinated how this all works together. There’s consolidation [between the cortex and the hippocampus] at times like when you’re sleeping. Memories you’ve recorded during the day get replayed orders of magnitude faster back to the rest of the brain. We used this idea of memory replay in our Atari agent. We replayed trajectories of experiences that the agent had had during the training phase and it got the chance to see them hundreds and hundreds of times again, so it could get really good at that particular bit.

When you talk about the algorithms of the brain, is that strictly in the metaphoric sense or are you talking something more literal?

It’s more literal. But we’re not going build specifically an artificial hippocampus. You want to say, what are the principles of that? [We’re ultimately interested in the] functionality of intelligence, not specifically the exact details of the specific prototype that we have. But it’s a mistake also to ignore the brain, which a lot of machine learning people do. There are hugely important insights and general principles that you can use in your algorithms.

Because we don’t fully understand the brain, it seems difficult to take this approach all the way. Do you think there’s something that’s “wet” that you can’t do in silicon?

I looked at this very carefully for a while during my PhD and before that just to check where this line should be drawn. [Roger] Penrose has quantum consciousness [which postulates there are quantum effects in the mind that computers can’t emulate]. Beautiful story, right? You wish it’s sort of true, right? But it all collapses. There doesn’t seem to be any evidence. Very top biologists have looked carefully for quantum effects in the brain and there just didn’t seem to be any. As far as we know it’s just a classical computation device.


What’s the big problem you’re working on now?

The big thing is what we call transfer learning. You’ve mastered one domain of things, how do you abstract that into something that’s almost like a library of knowledge that you can now usefully apply in a new domain? That’s the key to general knowledge. At the moment, we are good at processing perceptual information and then picking an action based on that. But when it goes to the next level, the concept level, nobody has been able to do that.

So how do you go about doing that?

We have several promising projects on that which we’re not ready to announce yet.

One condition you set on the Google purchase was that the company set up some sort of AI ethics board. What was that about?

It was a part of the agreement of the acquisition. It’s an independent advisory committee like they have in other areas.

Why did you do that?

I think AI could be world changing, it’s an amazing technology. All technologies are inherently neutral but they can be used for good or bad so we have to make sure that it’s used responsibly. I and my cofounders have felt this for a long time. Another attraction about Google was that they felt as strongly about those things, too.

What has this group done?

Certainly there is nothing yet. The group is just being formed — I wanted it in place way ahead of the time that anything came up that would be an issue. One constraint we do have— that wasn’t part of a committee but part of the acquisition terms—is that no technology coming out of Deep Mind will be used for military or intelligence purposes.

Do you feel like a committee really could make an impact on controlling a technology once you bring it into the world?

I think if they are sufficiently educated, yes. That’s why they’re forming now, so they have enough time to really understand the technical details, the nuances of this. There are some top professors on this in computation, neuroscience and machine learning on this committee.

And the committee is in place now?

It’s formed yes, but I can’t tell you who is on it.

Why not?

Well, because it’s confidential. We think it’s important [that it stay out of public view] especially during this initial ramp-up phase where there is no tech— I mean we’re working on computing Pong, right? There are no issues here currently but in the next five or ten years maybe there will be. So really it’s just getting ahead of the game.

Will you eventually release the names?

Potentially. That’s something also to be discussed.

Transparency is important in this too.

Sure, sure. There are lots of interesting questions that have to be answered on a technical level about what these systems are capable of, what they might be able to do, and how are we going to control those things. At the end of the day they need goals set by the human programmers. Our research team here works on those theoretical aspects partly because we want to advance [the science], but also to make sure that these things are controllable and there’s always humans in the loop and so on.
