Why AI can’t solve unknown problems


Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

When will we have artificial general intelligence, the kind of AI that can mimic the human mind in all aspects? Experts are divided on the topic, and answers range anywhere between a few decades and never.

But what everyone agrees on is that current AI systems are a far cry from human intelligence. Humans can explore the world, discover unsolved problems, and think about their solutions. Meanwhile, the AI toolbox continues to grow with algorithms that can perform specific tasks but can’t generalize their capabilities beyond their narrow domains. We have programs that can beat world champions at StarCraft but can’t play a slightly different game at amateur level. We have artificial neural networks that can find signs of breast cancer in mammograms but can’t tell the difference between a cat and a dog. And we have sophisticated language models that can spin out thousands of seemingly coherent articles per hour but start to break when you ask them simple logical questions about the world.

In short, each of our AI techniques manages to replicate some aspects of what we know about human intelligence. But putting it all together and filling the gaps remains a major challenge. In his book Algorithms Are Not Enough, data scientist Herbert Roitblat provides an in-depth review of the different branches of AI and describes why each of them falls short of the dream of creating general intelligence.

The common shortcoming across all AI algorithms, Roitblat argues, is the need for predefined representations. Once we discover a problem and can represent it in a computable way, we can create AI algorithms that can solve it, often more efficiently than we can ourselves. It is, however, the undiscovered and unrepresentable problems that continue to elude us.

Representations in symbolic AI

“Algorithms Are Not Enough” by Herbert Roitblat (book cover)

Throughout the history of artificial intelligence, scientists have regularly invented new ways to leverage advances in computers to solve problems in ingenious ways. The earlier decades of AI focused on symbolic systems.

This branch of AI presumes that the human mind is based on the manipulation of symbols, and that any system that can compute symbols is intelligent. Symbolic AI requires human developers to meticulously specify the rules, facts, and structures that define the behavior of a computer program. Symbolic systems can perform remarkable feats, such as memorizing information, computing complex mathematical formulas at ultra-fast speeds, and emulating expert decision-making. Popular programming languages and most of the applications we use every day have their roots in the work done on symbolic AI.

But symbolic AI can only solve problems for which we can provide well-formed, step-by-step solutions. The problem is that most tasks humans and animals perform can’t be represented in clear-cut rules.
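The symbolic approach is easy to see in code. Below is a minimal sketch of a symbolic-AI-style “expert system” for the weather; every rule, threshold, and category is hand-authored by a programmer (and invented here purely for illustration), which is exactly the kind of well-formed, step-by-step specification the approach demands.

```python
# Symbolic-AI-style rain "expert system": every rule, threshold, and
# output category below is specified by a human programmer, not learned.
# The thresholds are made up for illustration.

def will_it_rain(humidity, pressure_hpa):
    # Explicit, human-authored if-then rules
    if humidity > 0.85 and pressure_hpa < 1000:
        return "rain"
    if humidity > 0.70:
        return "possible rain"
    return "no rain"

print(will_it_rain(0.9, 990))  # -> rain
```

The program works only because a human has already represented the problem as crisp rules; a task like recognizing a face cannot be decomposed this way.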

“The intellectual tasks, such as chess playing, chemical structure analysis, and calculus are relatively easy to perform with a computer. Much harder are the kinds of activities that even a one-year-old human or a rat could do,” Roitblat writes in Algorithms Are Not Enough.

This is what is known as “Moravec’s paradox,” named after the scientist Hans Moravec, according to whom, in comparison to humans, computers can perform high-level reasoning tasks with very little effort but struggle with simple skills that humans and animals acquire naturally.

“The human brain has evolved mechanisms over millions of years that allow us to perform basic sensorimotor functions. We catch balls, we recognize faces, we judge distances, all seemingly without effort,” Roitblat writes. “Intellectual activities, on the other hand, are a very recent development. We can perform these tasks with much effort and often a lot of training, but we should be suspicious if we think that these capacities are what makes intelligence, rather than that intelligence makes those capacities possible.”

Therefore, despite its remarkable reasoning capabilities, symbolic AI is strictly tied to representations provided by humans.

Representations in machine learning

Machine learning provides a different approach to AI. Instead of writing explicit rules, engineers “train” machine learning models through examples. “[Machine learning] systems could not only do what they had been specifically programmed to do but they could extend their capabilities to previously unseen events, at least those within a certain range,” Roitblat writes in Algorithms Are Not Enough.

The most popular form of machine learning is supervised learning, in which a model is trained on a set of input data (e.g., humidity and temperature) and the expected outcomes (e.g., probability of rain). The machine learning model uses this information to tune a set of parameters that map the inputs to the outputs. When presented with previously unseen input, a well-trained machine learning model can predict the outcome with remarkable accuracy. There’s no need for explicit if-then rules.
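Using the article’s rain example, supervised learning can be sketched in a few lines: a human supplies labeled (input, outcome) pairs, and training merely tunes the parameters of a fixed input-to-output mapping. All the numbers below are made up for illustration.

```python
# Supervised learning sketch: a linear model maps two inputs
# (humidity, temperature) to a target (rain probability), and
# training adjusts its parameters to reduce prediction error.
# The dataset is invented purely for illustration.

data = [((0.9, 0.4), 0.80),   # human-curated, labeled examples:
        ((0.2, 0.9), 0.10),   # ((humidity, temperature), rain probability)
        ((0.7, 0.5), 0.60),
        ((0.1, 0.8), 0.05)]

w = [0.0, 0.0]  # parameters the model learns
b = 0.0

def predict(x):
    return w[0] * x[0] + w[1] * x[1] + b

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

lr = 0.1
loss_before = mse()
for _ in range(500):               # gradient descent on squared error
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * 2 * err * x[0]
        w[1] -= lr * 2 * err * x[1]
        b    -= lr * 2 * err
loss_after = mse()                 # far lower than loss_before
```

No if-then rules appear anywhere; yet notice that the problem definition, the features, and the labels were all chosen by a human before learning began.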

But supervised machine learning is still built on representations provided by human intelligence, albeit looser ones than those of symbolic AI. This is how Roitblat describes supervised learning: “[M]achine learning involves a representation of the problem it is set to solve as three sets of numbers. One set of numbers represents the inputs that the system receives, one set of numbers represents the outputs that the system produces, and the third set of numbers represents the machine learning model.”

Therefore, while supervised machine learning is not tightly bound to rules like symbolic AI, it still requires strict representations created by human intelligence. Human operators must define a specific problem, curate a training dataset, and label the outcomes before they can create a machine learning model. Only when the problem has been strictly represented in this way can the model start tuning its parameters.

“The representation is chosen by the designer of the system,” Roitblat writes. “In many ways, the representation is the most crucial part of designing a machine learning system.”

One branch of machine learning that has risen in popularity in the past decade is deep learning, which is often compared to the human brain. At the heart of deep learning is the deep neural network, which stacks layer upon layer of simple computational units to create machine learning models that can perform very complicated tasks such as classifying images or transcribing audio.

Different layers of a neural network. Deep learning models can perform complicated tasks such as classifying images (source: http://www.deeplearningbook.org)

But again, deep learning is largely dependent on architecture and representation. Most deep learning models need labeled data, and there is no universal neural network architecture that can solve every possible problem. A machine learning engineer must first define the problem they want to solve, curate a large training dataset, and then figure out the deep learning architecture that can solve that problem. During training, the deep learning model will tune millions of parameters to map inputs to outputs. But it still needs machine learning engineers to decide the number and type of layers, learning rate, optimization function, loss function, and other unlearnable aspects of the neural network.
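Those “unlearnable aspects” can be made concrete. In the hypothetical fully connected network sketched below, everything in the config is fixed by an engineer before training ever starts; only the weights and biases counted at the bottom are tuned by learning.

```python
# Hypothetical architecture for illustration: every value in this
# config is chosen by a human designer and never learned.
config = {
    "layer_sizes": [784, 128, 64, 10],  # number and width of layers
    "activation": "relu",
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "loss": "cross_entropy",
}

def learned_parameter_count(layer_sizes):
    """Count the weights and biases that training will tune."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(learned_parameter_count(config["layer_sizes"]))  # -> 109386
```

Training adjusts those ~109k numbers automatically, but the shape of the space they live in, and the objective they are optimized against, were decided by a person.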

“Like much of machine intelligence, the real genius [of deep learning] comes from how the system is designed, not from any autonomous intelligence of its own. Clever representations, including clever architecture, make clever machine intelligence,” Roitblat writes. “Deep learning networks are often described as learning their own representations, but this is incorrect. The structure of the network determines what representations it can derive from its inputs. How it represents inputs and how it represents the problem-solving process are just as determined for a deep learning network as for any other machine learning system.”

Other branches of machine learning follow the same rule. Unsupervised learning, for example, does not require labeled examples. But it still requires a well-defined goal, such as anomaly detection in cybersecurity, customer segmentation in marketing, dimensionality reduction, or embedding representations.

Reinforcement learning, another popular branch of machine learning, closely resembles some aspects of human and animal intelligence. The AI agent doesn’t rely on labeled training examples. Instead, it is given an environment (e.g., a chess or Go board) and a set of actions it can perform (e.g., move pieces, place stones). At each step, the agent performs an action and receives feedback from its environment in the form of rewards and penalties. Through trial and error, the reinforcement learning agent discovers sequences of actions that yield more rewards.
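The loop just described (act, receive reward, adjust) can be sketched with tabular Q-learning on a tiny, hypothetical five-cell corridor; all environment details below are invented for illustration.

```python
import random

random.seed(0)

# Trial-and-error sketch: a Q-learning agent in a 5-cell corridor.
# Reaching the rightmost cell ends the episode with reward +1;
# every other transition gives reward 0.
N_STATES, ACTIONS = 5, (1, -1)       # actions: move right / move left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2    # step size, discount, exploration rate

for _ in range(200):                 # 200 episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # feedback from the environment updates the action-value estimate
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, moving right is valued higher than moving left in every cell.
```

With enough trials the agent discovers, without any labeled examples, that moving right leads to the reward; but notice how much the designer supplied up front: the state space, the action set, and the reward signal itself.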

Computer scientist Richard Sutton describes reinforcement learning as “the first computational theory of intelligence.” In recent years, it has become very popular for solving complicated problems such as mastering computer and board games and developing versatile robotic arms and hands.

Reinforcement learning can solve complicated problems such as playing board and video games and robotic manipulation

But reinforcement learning environments are usually very complex, and the number of possible actions an agent can perform is very large. Therefore, reinforcement learning agents need a lot of help from human intelligence to design the right rewards, simplify the problem, and choose the right architecture. For instance, OpenAI Five, the reinforcement learning system that mastered the online video game DotA 2, relied on its designers simplifying the rules of the game, such as reducing the number of playable characters.

“In any non-trivial system, it is impossible to examine all possible combinations of all possible actions that could lead to reward,” Roitblat writes. “As in other machine learning situations, heuristics are needed to simplify the problem into something more tractable, even if it cannot be guaranteed to produce the best possible answer.”

Here is how Roitblat sums up the shortcomings of current AI systems in Algorithms Are Not Enough: “Current approaches to artificial intelligence work because their designers have figured out how to structure and simplify problems so that existing computers and processes can solve them. To have a truly general intelligence, computers will need the capability to define and structure their own problems.”

Is AI research headed in the right direction?

“Every classifier (in fact every machine learning system) can be described in terms of a representation, a method for measuring its success, and a method of updating,” Roitblat told TechTalks over email. “Learning is finding a path (a sequence of updates) through a space of parameter values. At this point, though, we don’t have any method for generating those representations, goals, and optimizations.”

There are various efforts to address the challenges of current AI systems. One popular idea is to continue to scale deep learning. The general reasoning is that bigger neural networks will eventually crack the code of general intelligence. After all, the human brain has more than 100 trillion synapses. The biggest neural network to date, developed by researchers at Google, has one trillion parameters. And the evidence shows that adding more layers and parameters to neural networks yields incremental improvements, especially in language models such as GPT-3.

But big neural networks do not address the fundamental problems of general intelligence.

“These language models are significant achievements, but they are not general intelligence,” Roitblat says. “Essentially, they model the sequence of words in a language. They are plagiarists with a layer of abstraction. Give it a prompt and it will create a text that has the statistical properties of the pages it has read, but no relation to anything other than the language. It solves a specific problem, like all current artificial intelligence applications. It is just what it is advertised to be—a language model. That’s not nothing, but it is not general intelligence.”

Other directions of research try to add structural improvements to current AI systems.

Hybrid artificial intelligence, for example, brings symbolic AI and neural networks together to combine the reasoning power of the former and the pattern recognition capabilities of the latter. There are already several implementations of hybrid AI, also referred to as “neuro-symbolic systems,” which show that hybrid systems require less training data and are more stable at reasoning tasks than pure neural network approaches.

Herbert Roitblat, data scientist and author of “Algorithms Are Not Enough” (credit: Josiah Grandfield)

System 2 deep learning, another direction of research proposed by deep learning pioneer Yoshua Bengio, tries to take neural networks beyond statistical learning. System 2 deep learning aims to enable neural networks to learn “high-level representations” without the need for explicit embedding of symbolic intelligence.

Another research effort is self-supervised learning, proposed by Yann LeCun, another deep learning pioneer and the inventor of convolutional neural networks. Self-supervised learning aims to learn tasks without the need for labeled data, by exploring the world like a child would do.

“I think that all of these make for more powerful problem solvers (for path problems), but none of them addresses the question of how these solutions are structured or generated,” Roitblat says. “They all still involve navigating within a pre-structured space. None of them addresses the question of where this space comes from. I think that these are really important ideas, just that they don’t address the specific needs of moving from narrow to general intelligence.”

In Algorithms Are Not Enough, Roitblat provides ideas on what to look for to create AI systems that can actively seek out and solve problems they have not been designed for. We still have a lot to learn from ourselves and from how we apply our intelligence in the world.

“Intelligent people can recognize the existence of a problem, define its nature, and represent it,” Roitblat writes. “They can recognize where knowledge is lacking and work to obtain that knowledge. Although intelligent people benefit from structured instructions, they are also capable of seeking out their own sources of information.”

But observing intelligent behavior is easier than creating it, and, as Roitblat told me in our correspondence, “humans don’t always solve their problems in the way they say/think that they do.”

But as we continue to explore artificial and human intelligence, we will continue to move toward AGI, one step at a time.

“Artificial intelligence is a work in progress. Some tasks have advanced further than others. Some have a way to go. The flaws of artificial intelligence tend to be the flaws of its creator rather than inherent properties of computational decision making. I would expect them to improve over time,” Roitblat said.

3 COMMENTS

  1. Put it simply – someone randomly discovered an artificial emulation of ONE feature of the learning process and called it intelligence. That is not intelligent at all. It’s like calling a rich man smart and entrusting him with a civilization.

  2. We are increasingly seeing impressive technology for sensing and observing – and for handling association (“how are the variables related?” “how would seeing X change my belief in Y?”).
    We need to be able to intervene (“what would Y be if I do X?” “How can I make Y happen?”)
    And ultimately – to imagine and perform retrospection (“Was it X that caused Y?” “What if X had not happened?” “What if I had acted differently?”).

  3. I had a thought the other day. I was thinking of the famous conundrum, that while the cellular structure of neurons is identical throughout the Human brain, there are over 5,000 local structures in that brain. And it seemed to me that Watson [the famous Jeopardy contestant] was in that sense biomimetic; it is composed of many subroutines which are polled to find out which of them might be relevant in a particular instance.

    Now in this wave, we have focused on mimicking the ‘vision’ function, with success – and so we have pushed that same ‘simple’ neural network structure everywhere. I say to you now, this is not the way to AGI. The way to AGI will involve more neural mimicry and more neuroscience to build more forms of networks, verified and tested – ‘subroutines’ which may be polled, Watson-like, to build from the assemblage a result ‘more than the sum of its parts.’
