I recently finished reading Hubert Dreyfus’s (who apparently inspired the name of Professor Hubert Farnsworth in Futurama) What Computers Still Can’t Do, which for the first time gave me serious philosophical doubts about the feasibility of artificial general intelligence (AGI) on digital computers. I feel like I’ve heard every argument under the sun for why AGI is impossible, and I’ve never found a single one persuasive before. Most aren’t even very coherent! So, this was a first for me. (I should also give this book credit for having the coolest subtitle ever: A Critique of Artificial Reason.)

The first part of the book gives a history of AI from the ’50s through the ’70s, and in particular a history of its failures. Dreyfus characterizes that history as a series of initial successes followed by failures to scale. His critique is mostly leveled against what’s now called “Good Old-Fashioned AI,” or “GOFAI.”


What Is GOFAI?

GOFAI essentially consisted of logic-based systems built on explicitly encoded facts and hand-written rules. The paradigmatic example is Cyc, developed by Douglas Lenat, which was (and still is) an attempt to reach AGI by manually encoding commonsense knowledge into a large database of millions of discrete facts, such as “Bill Clinton is a former U.S. President,” “All trees are plants,” and “Paris is the capital of France” (yes, really). On top of this sits an inference engine, a set of rules used to derive new facts and answer questions.
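
To make that architecture concrete, here’s a minimal sketch of the facts-plus-inference-engine pattern. It is not Cyc’s actual representation language or engine, just a toy knowledge base with one hand-written rule, and it already hints at the scaling problem discussed below: every inference pass rescans the whole fact base.

```python
# A toy illustration of the GOFAI pattern: hand-encoded facts plus an
# inference engine that applies rules until no new facts can be derived.
# This is a simplified sketch, not Cyc's actual representation or engine.

facts = {
    ("is_a", "oak", "tree"),
    ("is_a", "tree", "plant"),
    ("capital_of", "Paris", "France"),
}

def forward_chain(facts):
    """Apply one hand-written rule (is_a is transitive) to a fixed point."""
    derived = set(facts)
    while True:
        # Each pass rescans every pair of facts, so the work grows
        # quickly as the knowledge base gets larger.
        new = {
            ("is_a", x, z)
            for (p1, x, y1) in derived if p1 == "is_a"
            for (p2, y2, z) in derived if p2 == "is_a" and y1 == y2
        }
        if new <= derived:
            return derived
        derived |= new

print(("is_a", "oak", "plant") in forward_chain(facts))  # True
```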

My favorite of Dreyfus’s arguments is in fact a criticism of Cyc in particular:

AI researchers have long recognized that the more a system knows about a particular state of affairs, the longer it takes to retrieve the relevant information, and this presents a general problem where scaling up is concerned. Conversely, the more a human being knows about a situation or individual, the easier it is to retrieve other relevant information. This suggests that human beings use forms of storage and retrieval quite different from the symbolic one representationalist philosophers and Lenat have assumed.

That is to say, such databases become harder to search the more they know, whereas when humans learn, they refine their intuition and their reasoning becomes faster. This seems to represent a fundamental disconnect between GOFAI and true intelligence. Dreyfus also quotes Lenat’s proposed solution:

The natural tendency of any search program is to slow down (often combinatorially explosively) as additional assertions are added and the search space therefore grows. . . . [T]he key to preserving effective intelligence of a growing program lies in judicious adding of meta-knowledge.

 

So, to fix the problem of slogging through too many facts, he proposes ... adding yet more facts to his database. Suffice it to say that I am not holding my breath on Cyc becoming general any time soon (or ever, for that matter). Dreyfus’s criticism of GOFAI was essentially spot on, as these methods have indeed failed to scale and researchers have largely abandoned them in the modern deep learning era.

 

What Is Artificial General Intelligence (AGI)? 

The latter half of the book moves on to the philosophical underpinnings of AGI. This is where the fun truly begins. There’s a lot of nonsense in here, in my opinion. Dreyfus goes on at some length about how true intelligence must be “embodied.” I’m not quite sure how the sensory inputs to a body in three-dimensional space form some privileged set of input-output channels compared to any other data streams available to an AI. This might be beside the point, however, since Dreyfus says even an AI inside a robot body would still fail to be truly “embodied.” I’m not really sure what he means by this word, and I’m not convinced it’s anything all that coherent.

He also repeatedly insists that humans are always already “in a situation,” whereas AIs are not and maybe never can be. He considers this point to be of central importance, but sadly I am again unable to determine exactly what he means by it. Fortunately, he does not use any Gödel-based arguments, and in fact explicitly rejects them, for which he has my undying gratitude.

Gödel and AGI

The mathematician Kurt Gödel showed, roughly, that for any consistent, sufficiently expressive formal mathematical system, there will be true statements expressible in the system that cannot be proven within it. But, the argument goes, humans are able to apprehend these truths, so the brain must be doing something more than merely computational.
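
For reference, here is one standard way the first incompleteness theorem is stated (the hypotheses are abbreviated; “interprets arithmetic” means roughly that the theory can express basic facts about the natural numbers):

```latex
% One common statement of the first incompleteness theorem:
% if T is consistent, effectively axiomatized, and interprets arithmetic,
% then there is a sentence G_T that T cannot prove, even though G_T is
% true of the standard natural numbers.
\text{If } T \text{ is consistent, effectively axiomatized, and interprets arithmetic, then }
\exists\, G_T :\; T \nvdash G_T \ \text{ and } \ \mathbb{N} \models G_T .
```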


The Argument Against AGI

But Dreyfus had one argument in particular that did throw me. It wasn’t even a very central argument, exactly, just one point in a larger section, split between the main text and a footnote. But it hooked into my brain and made me doubt the feasibility of AGI. So here’s the argument, or at least the version that formed in my head after reading Dreyfus.

Planets orbit stars according to certain differential equations, but they don’t have to internally compute those equations to do so. Soap bubbles take on the shape of minimum surface area (given their boundary) without having to internally minimize an integral. How do we know the brain is any different?

The brain can produce intelligent behavior, but how do we know it’s actually computing anything internally? What if, simply by dint of its nature, it acts in accordance with our mathematical models of intelligence in just the same way that it’s a bubble’s nature to take on a certain shape, without having to compute anything? In fact, we have to spend a lot of computing power to reproduce what a bubble (apparently) does automatically and effortlessly.
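
To see how lopsided this is in practice, here’s a toy sketch of what “reproducing the bubble” looks like on a computer. Under the standard small-slope approximation, a film spanning a wire frame minimizes its area by satisfying Laplace’s equation, so the code below just relaxes a grid of heights toward that solution; the grid size, boundary values and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Toy "soap film": approximate the minimal surface spanning a fixed square
# boundary by relaxing the interior toward a solution of Laplace's equation
# (a standard small-slope approximation to area minimization).
n = 50
h = np.zeros((n, n))
h[0, :] = 1.0          # hold one edge of the wire frame higher than the others
for _ in range(5000):  # thousands of sweeps just to approximate one film shape
    h[1:-1, 1:-1] = 0.25 * (
        h[:-2, 1:-1] + h[2:, 1:-1] + h[1:-1, :-2] + h[1:-1, 2:]
    )

print(h[n // 2, n // 2])  # interior height after relaxation
```

Thousands of arithmetic sweeps to approximate, on a coarse grid, what the film settles into essentially instantly.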

Okay, but does this matter? The brain is still a physical system, and any such system can ultimately be simulated by a Turing machine, right? Well, yes, but that doesn’t really tell us anything beyond the fact that we could simulate the physics of the brain. This would produce intelligent behavior if done finely enough, but doing it all the way down at the atomic level would be totally infeasible in real life. The brain contains on the order of a hundred billion neurons, plus a comparable number of other cells, and each cell contains something like a hundred trillion atoms, so we’d have to simulate a system of roughly ten trillion trillion atoms, way beyond any realistic amount of computing power.

This isn’t what we do in AI. We want to model intelligence at a higher level than the atomic one, so we create mathematical models of intelligent behavior and try to get our machines to compute them. In GOFAI, this was done explicitly by hand, but even in deep learning, the machine learns a function and then explicitly computes it to produce the behavior we want to see.
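
That last point is easy to see concretely: once trained, a network is just a fixed recipe of arithmetic that the machine explicitly carries out on every input. The sketch below uses arbitrary random weights and sizes, not a trained model, but the forward pass has exactly this character.

```python
import numpy as np

# A trained neural network is ultimately an explicit function: fixed matrices
# of weights and a recipe of arithmetic to apply to the input.
# Weights and sizes here are arbitrary stand-ins, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def forward(x):
    hidden = np.maximum(0.0, W1 @ x + b1)  # ReLU layer: explicit arithmetic
    return W2 @ hidden + b2                # output layer: more arithmetic

print(forward(np.array([0.1, -0.2, 0.3, 0.4])))  # three output values
```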

But if the brain isn’t performing any computation to do what it does, then we have no idea how hard computing intelligence might be! The fact that the brain produces intelligent behavior does not prove that it’s computationally feasible for a computer to do the same. For instance, we can’t even get exact solutions to the general n-body problem, yet physics seems to have no trouble “solving” it in real time.

What Is the N-Body Problem?

Given n bodies interacting gravitationally, the n-body problem is the problem of predicting the trajectory of each body from their initial positions and velocities.
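
In practice, “solving” the n-body problem on a computer means approximating it numerically, step by step, since no general closed-form solution exists for three or more bodies. Here’s a minimal sketch with naive pairwise forces and crude Euler steps; the masses, initial conditions and units (G = 1) are made up for illustration.

```python
import numpy as np

# Naive n-body integration: O(n^2) pairwise gravitational forces per step,
# advanced with a crude Euler update. Masses, positions, and the G = 1 unit
# system are arbitrary choices for illustration.
n, dt, steps = 3, 0.001, 10_000
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel = np.array([[0.0, 0.2], [0.0, -0.2], [0.2, 0.0]])

for _ in range(steps):
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                # small constant in the denominator avoids division by zero
                acc[i] += masses[j] * r / (np.linalg.norm(r) ** 3 + 1e-9)
    vel += acc * dt
    pos += vel * dt

print(pos)  # approximate positions after 10,000 small steps
```

Every step costs O(n²) force evaluations, and the answer is still only an approximation; the planets themselves, meanwhile, just move.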

Nature seems to be able to act in accordance with complex mathematical models without actually expending computing power to do so. So, it seems conceptually possible that the brain produces intelligent behavior without having to explicitly compute it, and that building machines to explicitly compute intelligent behavior could be an infeasible route to AGI.

On this model, evolution presumably selected for certain physical structures that naturally produce intelligent behavior. Of course, we could understand this structure well enough in principle to someday build new physical artifacts that, by their nature, act in accordance with even higher standards of intelligence. But that sounds much harder and isn’t anything like what we’re doing currently.

 

The AGI Hypothesis

Ultimately, Dreyfus’s argument is not that we know for a fact that intelligence isn’t computable. Rather, his point is simply that AI researchers seem to take it as axiomatic that it is. Thus, no matter what failures they encounter, they continue on with total optimism. They conclude only that new methods are necessary, never that the fundamental project is flawed.

I must say, he described my epistemic state quite well. I never viewed the failures of GOFAI as evidence that AI was impossible, just that different methods were necessary. I always relied on something like the Church-Turing Thesis, that the brain was a physical system, and so must ultimately be simulable. But Dreyfus thinks this argument doesn’t actually show what it needs to show, for the reasons outlined above.

What Is the Church-Turing Thesis?

The Church-Turing Thesis comes in many forms. At its most basic, it states that any mathematically computable function is computable by a Turing machine, an abstract model of computation introduced by Alan Turing that inspired modern-day computers. Another version, sometimes called the physical Church-Turing thesis, says that any physical system can be simulated by a Turing machine.
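
The model itself is simple enough to sketch: a tape, a read/write head and a finite table of rules. The toy machine below (my own made-up example, not anything from Turing’s paper) just flips the bits of its input and halts.

```python
# A tiny Turing machine: an unbounded tape (a dict), a head position, a state,
# and a finite rule table mapping (state, symbol) -> (new symbol, move, new state).
# This example machine simply inverts a binary string, then halts.
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),  # blank symbol: stop
}

def run(machine_input):
    tape = dict(enumerate(machine_input))
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(k for k in tape if tape[k] != "_"))

print(run("01011"))  # -> "10100"
```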

So, Dreyfus thinks the claim that intelligence is feasibly computable should ultimately be an empirical one; we should go out and look at how intelligence works and see if it aligns with our models of computation. If it doesn’t, then we should reject the hypothesis, not assume it in advance. And at the time, AI had experienced a series of failures, the so-called AI winters. Dreyfus took this as evidence against the hypothesis, and concluded that AI could potentially be forever beyond our grasp. On the other hand, with the success of modern deep learning, perhaps it’s time we view this hypothesis favorably again.


Is AGI Possible?

This argument was interesting to me because it was the first coherent image of the world I’d ever had where AGI might not be possible on a digital computer. Honestly, it sort of shook me. I’d never really doubted that in any concrete way before.

But to be honest, I’ve clearly confused the philosophical daylights out of myself. I’ve somehow ended up not even understanding how bubbles work! What does it mean for something to just “by its nature” “act in accordance with” a mathematical model? How do planets and soap bubbles produce their behavior “for free”? Do they really? And what do I mean by “computation” anyway?

When I first laid out this argument on Twitter, I got a lot of feedback. As I dug through everyone’s links and reflected on what they all said, I concluded that this argument doesn’t hold after all. And in the end, I don’t think it actually paints a coherent view of the world, either. The reason why comes down to getting to grips with what computation really is, and I’ll discuss it all in my next piece.
