DeepMind's Gato and the Long and Uncertain Path to Artificial General Intelligence



  • Last month, DeepMind, a subsidiary of tech giant Alphabet, caused a stir in Silicon Valley when it announced Gato, perhaps the most versatile AI model in existence.
  • For some computing experts, it's evidence that the industry is on the cusp of a long-awaited and exciting milestone: artificial general intelligence (AGI).
  • This would be huge for humanity. Think of everything you could accomplish if you had a machine that could be physically adapted to suit any purpose.
  • But a group of critics and scientists have argued that something fundamental is missing from the grand plans to build Gato-like AI into full-fledged AGI machines.

Last month, DeepMind, a subsidiary of tech giant Alphabet, caused a stir in Silicon Valley when it announced Gato, perhaps the most versatile AI model in existence. Described as a "generalist agent," Gato can perform more than 600 different tasks. It can control a robot, caption images, identify objects in pictures, and more. It is arguably the most advanced AI system on the planet that isn't dedicated to a single job. And for some computing experts, it's evidence that the industry is on the cusp of a long-awaited and exciting milestone: artificial general intelligence.

Unlike regular AI, artificial general intelligence (AGI) wouldn't require massive data sets to learn a task. Whereas ordinary AI has to be pre-trained or programmed to solve a specific set of problems, a general intelligence could learn through intuition and experience.

In theory, an AGI could learn just about anything a human can, given the same access to information. Basically, if you put an AGI on a chip and then put that chip into a robot, the robot could learn to play tennis the same way you or I do: by swinging a racket and getting a feel for the game. That doesn't necessarily mean the robot would be sentient or capable of perception. It wouldn't have thoughts or emotions; it would just be able to learn to do new tasks without human help.

This would be huge for humanity. Think of everything you could accomplish if you had a machine with the intellectual capacity of a human and the loyalty of a trusted canine companion, a machine that could be physically adapted to suit any purpose. That's the promise of artificial general intelligence. It's C-3PO without the feelings, Lieutenant Commander Data without the curiosity, and Rosie the Robot without the personality. In the hands of the right developers, it could embody the idea of human-centered artificial intelligence.

But how close is the dream of artificial general intelligence? And does Gato really bring us any closer to it?

For a certain group of scientists and developers (I'll call them the "Scaling-Uber-Alles" crowd, adopting a term coined by world-renowned AI expert Gary Marcus), Gato and similar systems built on deep learning transformer models have already given us the blueprint for artificial general intelligence. Essentially, these transformers use massive databases and billions or trillions of adjustable parameters to predict what comes next in a sequence.
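
To make that "predict what comes next" idea concrete, here is a minimal sketch of next-token prediction using the openly available GPT-2 model via the Hugging Face transformers library. Gato's own weights aren't public, so GPT-2, the prompt, and the top-5 readout here are illustrative stand-ins, not DeepMind's code:

    # A minimal sketch of next-token prediction with a transformer.
    # GPT-2 stands in for Gato, whose weights are not publicly available;
    # this only illustrates the general mechanism the article describes.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The robot picked up the racket and"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # Logits have shape (batch=1, sequence_length, vocabulary_size).
        logits = model(**inputs).logits

    # The model's entire job: a probability distribution over the next token.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")

Everything the model "knows" is baked into those adjustable parameters during training on its dataset; at inference time it simply keeps sampling the next element of the sequence.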

The Scaling-Uber-Alles crowd, which includes such notable names as Ilya Sutskever of OpenAI and Alex Dimakis of the University of Texas at Austin, believes that transformers will inevitably lead to artificial general intelligence; all that remains is to make them bigger and faster. As Nando de Freitas, a member of the team that built Gato, recently tweeted: "It's all about scale now! The game is over! It's about making these models bigger, safer, compute-efficient, faster at sampling, smarter memory..." De Freitas and company understand that they'll have to create new algorithms and architectures to support that growth, but they also seem to believe that AGI will emerge on its own if we keep making models like Gato bigger.

Call me old-fashioned, but when a developer tells me their plan is to wait for AGI to magically emerge from the murk of big data like a mudfish from primordial soup, I tend to think they're getting a few steps ahead of themselves. Apparently, I'm not alone. A host of critics and scientists, including Marcus, have argued that something fundamental is missing from the grand plans to build Gato-like AI into full-fledged AGI machines.

I recently laid out my reasoning in a trio of articles for The Next Web's Neural vertical, where I'm an editor. In short, a key premise of AGI is that it must be able to obtain its own data. But deep learning models, such as transformers, are little more than machines designed to make inferences about databases that have already been supplied to them. They are librarians, and as such, they are only as good as their training libraries.

A general intelligence could, in theory, figure things out even with a tiny database. It would intuit the method for accomplishing its task based on nothing more than its ability to decide which external data is important and which isn't, much the way a human decides where to direct their attention.

Gato is awesome, and there's nothing quite like it. But, at its core, it is a clever package that arguably presents the illusion of general intelligence through the expert use of big data. Its gargantuan database, for example, likely contains datasets built on the entire contents of websites such as Reddit and Wikipedia. It's amazing that humans have managed to do so much with simple algorithms just by forcing them to parse more data.

In fact, Gato is such an impressive way to fake general intelligence that it makes me wonder whether we're barking up the wrong tree. Many of the tasks Gato can perform today were once thought to be things only an AGI could do. It seems that the more we accomplish with ordinary AI, the harder the challenge of building a generalist agent appears to be.

For these reasons, I'm skeptical that deep learning alone is the path to artificial general intelligence. I believe we'll need more than bigger databases and additional parameters to tweak. We'll need an entirely new conceptual approach to machine learning.

I do believe that humanity will eventually succeed in the quest to build artificial general intelligence. My best guess is that we'll come knocking on AGI's door sometime in the early-to-mid-21st century, and that, when we do, we'll find it looks quite different from what the scientists at DeepMind imagine.

But the beautiful thing about science is that you have to show your work, and right now, DeepMind is doing just that. It has every opportunity to prove me and the other naysayers wrong.

I really, really hope it succeeds.

Tristan Greene is a futurist who believes in the power of human-centered technology. He is currently the editor of The Next Web's futurism vertical, Neural.

This article was first published by Undark.