209 Comments
Ben P (Oct 5, edited):

I'm only 9 minutes in, and I gotta get a rant out so I don't lose my shit. I work in an AI-adjacent field, and I have a general understanding of how the technology works. Please believe me when I say that there are not "two sides" to this debate. I'm one of many reasonably informed people who see the two sides in question (doomer and accelerationist) as being essentially on the same side, in that they believe the hype. There is another viewpoint, one that I think is far more rational, humble, and respectful of humanity: there are no thinking machines. There will probably never be thinking machines. At a minimum, there is *no good reason to believe* that there are or will be thinking machines. Most of what we attribute to the human mind (consciousness, understanding, perceiving, reasoning, intending, desiring... and whatever combination of those and others comprises "intelligence") is, from a physical and biological science standpoint, mysterious to us. But some computer scientists build a machine that does surprisingly well at mimicking human language and all of a sudden we're supposed to take goofy ass sci-fi nonsense like *thinking machines* seriously? No. If we don't understand how minds work, why on earth should we entertain the idea that humans are going to artificially engineer them? We don't know how to make life from non-life, right? But we're gonna jump ahead right to making a mysterious phenomenon only known to exist in complex life forms somehow "emerge" from a machine?

This shit is silly. There's no superintelligence on the horizon. There's no mass unemployment because of AI on the horizon. That's a fantasy from people who watched too much Star Trek and really want it to come true. What *is* on the horizon are a bunch of tech people selling things that they *claim* resemble intelligence, and for them to sell it they need us to believe it. If humanity is going to be harmed by AI, it's going to be because we treat it like it can do things it can't, and let it fuck stuff up out of our own naivety.

As far as I'm concerned, the doomers are helping Silicon Valley sell their snake oil. It's just that instead of saying superintelligent AI will make everything great, they're saying superintelligent AI will make everything horrible. Please consider the possibility that superintelligent AI is ridiculous nonsense from people who for the most part have no training or experience in actually studying intelligence. Journalists need to be talking to more developmental psychologists and linguists and philosophers of mind and neurologists and fewer computer geeks before they write about the future prospects for AI.

(P.S. since Katie brought up how it's all men in AI, I'll note that there are many women worth listening to in the world of AI skepticism: Emily Bender, Melanie Mitchell, Timnit Gebru, and Janelle Shane come to mind. I know of no high profile female thinkers in either the accelerationist or doomer spheres.)

Regulus:

You are far too dismissive of this technology. Of course, you may be right that artificial superintelligence is not on the horizon at all. Or maybe it's still a long way away, if we can even get there.

But to push back on your claim about it being nonsense pushed by people with no training or experience, consider that many experts in all the fields you mentioned (AI, philosophy, neuroscience, etc.) take the prospect of artificial superintelligence very seriously. There is a strong scientific consensus that intelligence is substrate-independent, that it doesn't require biological brain matter, since it has already been reproduced in silicon chips. I'm not saying a consensus proves it's true, but it can't simply be dismissed as ridiculous nonsense.

RMC (Oct 6, edited):

I'm a neuroscientist, with a previous background in AI. Ben is right. No one takes this seriously except a minority of people in the AI field itself, most or all of whom have a financial interest. The LLM is great, but it is what it is.

I'm not getting involved in another inane AI conversation. Not doing it. But I'll voice my support for Ben here.

On another note I'd love to know what the fuck this has to do with blocked and reported. I suppose you could argue it's the apogee of internet bullshit actually.

From a philosophical perspective, actually the interesting findings here are about the nature of *language*, not about the nature of our minds.

Below is a presentation by evolutionary theorist Carl Bergstrom and Jevin West

https://en.wikipedia.org/wiki/Carl_Bergstrom

https://ischool.uw.edu/people/faculty/profile/jevinw

which is being used in universities to help both students and professors understand what LLMs are, what dangers they pose, and how they can be most effectively used. It might be helpful if you are, for example, writing about this in the media. It's provocatively titled: "bullshit" here means "to speak authoritatively with no regard for the truth".

https://thebullshitmachines.com/

Ben P:

I appreciate the endorsement, and I totally understand not wanting to jump into this well-trodden territory on a message board that's supposed to be fun.

TWEEFIE MILLSPAUGH:

I love that this is taking place on a supposedly fun message board. It gives those of us who would never seek it out otherwise a fascinating surprise conversation and much food for thought.

(I'd love to believe Ben, but I'm in the "You're underestimating it" camp.)

Carry on. I'm going to make popcorn...

RMC:

Really did like Krakauer's interview on MLST. He's head of the Santa Fe Institute, where Melanie Mitchell works. https://www.youtube.com/watch?v=jXa8dHzgV8U Not sure about the headline they gave it.

RMC:

I mean don't get me wrong, our lives are already dominated by machine learning systems. That will continue. But it will continue because we spend our lives in front of the computer, bending our minds to become more like them, not because the computers are becoming like us.

Martin Blank:

Very much agree.

Not my field, though I mainly studied linguistics/logic/cognitive science in school.

It really is the Chinese Room Argument writ large, which I personally never found compelling.

Jackson:

Seems squarely in the "internet bullshit" category with a crazy tech bro twist.

I'd have expected Kurzweil to pop up on a cast if we were in a different time. :P

Lana Diesel:

I kept waiting for someone -- the hosts, the interviewees, anyone -- to utter the term "The Singularity." But nope, not once. How quickly we forget: that was *the* tech-dork millenarian horizon until like two years ago.

Ben P:

The basic idea is still being pushed, just without the pushers actually saying the word "singularity". Probably smart PR on their part, lest all this Very-Serious-Science-sounding talk about AIs learning how to improve themselves to make ever more powerful AIs be unmasked, Scooby-Doo style, and revealed to have been pseudo-religious weirdo futurism all along.

Bz Bz Bz:

Academics have giant egos, and sadly have a very knee-jerk reaction to outsiders encroaching on “their” turf.

Funny story: when a physicist proposed that a giant impact killed off the dinosaurs, paleontologists were scandalized at first - what do *they* know about dinosaurs?

RMC:

Whatever works for you.

Regulus:

Thanks for the input, but there are experts on all sides of this issue.

The financial interest of AI industry titans sure appears to cut toward full-steam ahead on AI development (risks-be-damned).

RMC:

Not in Oxford there aren't.

https://www.oxfordstudent.com/2024/04/20/oxford-shuts-down-elon-musk-funded-future-of-humanity-institute/

But I agree about financial interest.

Regulus:

I've got nothing against Oxford but... is that important? Or a jab/joke at Nick Bostrom?

In any case, Toby Ord is at Oxford.

RMC:

Bostrom was shut down because he was embarrassing. An LLM is a token prediction system trained on the text of the internet. It has many uses, but it has many limitations and caveats too. It is especially useful if you are already expert enough to anticipate and correct its errors. It does not have novel ideas. It is not Skynet. It is not going to be Skynet.

That. is. not. real.
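
(To make "token prediction system" concrete, here is a deliberately crude toy: a bigram model that picks the next word from frequency counts. A real LLM is a neural network over subword tokens trained on vastly more text, but the training objective has the same shape: given the context so far, emit a probable next token.)

```python
from collections import Counter, defaultdict
import random

# Toy "next-token predictor": count which word follows which in a tiny corpus,
# then generate by repeatedly sampling a likely continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def sample_next(word):
    words, weights = zip(*next_counts[word].items())
    return random.choices(words, weights=weights)[0]

random.seed(0)
out = ["the"]
for _ in range(8):
    out.append(sample_next(out[-1]))
print(" ".join(out))  # locally fluent-looking, with no model of cats or mats
```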

pond scum:

"actually the interesting findings here are about the nature of *language*, not about the nature of our minds."

Do you have any recommended reading for a non-expert about this? This really struck a chord with me.

RMC (Oct 6, edited):

Melanie Mitchell is a great place to start. https://aiguide.substack.com/

John Searle wrote a very famous paper long ago called "The Chinese Room"

https://rintintin.colorado.edu/~vancecd/phil201/Searle.pdf

The interesting thing is that it turns out it is possible to actually build a Chinese Room, which has taken most people by surprise, to say the least.

Exactly what this has revealed about the nature of language is going to be discussed for a long time, but at the very least it is a smaller, more tractable system than we might have thought. Maybe LLMs reveal nothing much deeper than that. I'm not really sure if it's deeply profound or if it's not, to be honest. People like Melanie Mitchell are the ones looking into questions like that.

Her podcast was pretty good, https://aiguide.substack.com/p/podcast-on-the-nature-of-intelligence, although it's not focused on language. Other work she did in the past was about language specifically.

Jon M:

Curious what you mean when you say it is possible to actually build a chinese room?

(an aside: maybe my own brain is the Chinese Room. I know how to use the phrase "hermetically sealed", and the phrase "figment of my imagination." I have ZERO idea how to define "hermetic" and "figment")

Ben P:

I believe he's saying that LLMs are Searle's Chinese Rooms.

Bz Bz Bz:

This isn’t a direct answer to your question, but I’d recommend reading this post from 2019: “Human Psycholinguists: A Critical Appraisal” https://nostalgebraist.tumblr.com/post/189965935059/human-psycholinguists-a-critical-appraisal/amp

It’s an impassioned rant that LLMs do reveal *something* interesting about language and the mind, even if it’s not totally clear what yet, situating their development in a historical perspective.

jon:

Hey there. A book comes to mind, though I can’t say it will address exactly what you’re looking for. Honestly, the book I’ll recommend probably won’t, but I still find it a really interesting book - for a number of reasons that I won’t bore you with here.

Anyway: “Absence of Mind: The Dispelling of Inwardness from the Modern Myth of the Self,” Marilynne Robinson

Here’s a long snippet from the introduction: “ If one were to say “Either God created the universe, or the universe is a product and consequence of the laws of physics,” it might be objected that these two statements are not incompatible, that neither precludes the other. But the second is conventionally taken to preclude the first. So, for purposes of argument, let us say it does, and that the origins of the universe can be taken to be devoid of theological implication. Likewise, if evolution is not to be reconciled with faith, as many religious people as well as many scientists believe, then let us say, again for purposes of argument, that complex life is simply another instance of matter working through the permutations available to it. These two points being granted, is there more to say than that existence, stripped of myth, unhallowed and unhaunted, is simply itself? Are there other implications? This starlit world is still the world, presumably, and every part of it, including humankind, is unchanged in its nature, still embodying the history that is also its ontogeny. Surely no rationalist would dispute this. Some might argue that life, absent myth, would be freed of certain major anxieties and illusions, and hostilities as well, but such changes would not touch our essential selves, formed as they have been through biological adaptation. There is no reason to suppose that arriving at truth would impoverish experience, however it might change the ways in which our gifts and energies are deployed. So nothing about our shared ancestry with the ape can be thought of as altering the fact that human beings are the creators of history and culture. If “mind” and “soul” are not entities in their own right, they are at least terms that have been found useful for describing aspects of the expression and self-experience of our very complex nervous system. The givens of our nature, that we are brilliantly creative and as brilliantly destructive, for example, would persist as facts to be dealt with, even if the word “primate” were taken to describe us exhaustively.”

Mo Diddly:

There’s an apocryphal story that prior to airplanes breaking the sound barrier, many scientists asserted that it was physically impossible for any object to go faster than the speed of sound, despite the obvious counter example of bullets, which were already known to travel past the speed of sound.

This is how I feel about people who assert that thinking machines are impossible. What exactly do you think a human brain is? Unless you are religious or think that there’s something magical about humans, the obvious point staring us in the face is that your brain is an organic thinking computer.

Regulus:

I share the feeling. I would guess that people wrap the concept of a thinking machine up with the question of whether machines can be conscious. And that's a place where people understandably get skeptical, but it's separate from the question of intelligence.

Mo Diddly:

Yeah, good point. I understand the impulse, especially because of "I, Robot"-style sci-fi, but in reality the whole subject of consciousness is orthogonal to, and totally irrelevant to, the question of superintelligence and x-risk.

Ben P (Oct 6, edited):

I do think it's interesting that both sides in this debate accuse the other of magical thinking. My side gets accused of believing in magic if we don't accept that anything in nature can be engineered by humans, all of it being made of matter and having come into existence through natural processes after all. And us skeptics think that AI optimists are putting faith in magic every time they spot some new ability spontaneously "emerging" from an LLM. Might as well be spotting Jesus in their toast to my eyes, but I admit I've gotten cynical. Either way, I don't think the fact that minds come from brains and brains are physical things implies we can engineer them.

As for our brains being organic thinking computers, it seems to stretch the definition of "computer" to the point of uselessness to argue that computers are capable of thinking because our brains are computers. Our brains aren't made of circuit boards and microchips full of transistors, and those are the computers I'm talking about. I don't think defining these two things into equivalence accomplishes much, save for demonstrating how general the word "computer" is.

Eric:

Our neurons connect and signal other neurons the same way that the "AI" nodes connect and signal other nodes.

I agree with you that this is all probably hype and likely won't go anywhere, but the way "intelligence" works in biology is in principle how it is working with the robits.
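
(To spell out what a "node" is, here is a minimal sketch of a single artificial neuron: the weighted-sum-plus-nonlinearity unit these networks are built from. The analogy to biological neurons, inputs as dendrites, weights as synaptic strengths, activation as a firing rate, is real but loose.)

```python
import numpy as np

# One artificial "neuron": weight the incoming signals, add a bias, squash
# with a nonlinearity. Whole networks are just many of these wired in layers.
def neuron(inputs, weights, bias):
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))  # sigmoid

x = np.array([0.5, -1.0, 2.0])   # incoming signals
w = np.array([0.8, 0.2, -0.4])   # learned connection strengths
print(neuron(x, w, bias=0.1))    # the unit's output "firing rate"
```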

Ben P:

I know that there's an analogy to be made between the architecture of an artificial neural network and what we understand about brains. I don't see any reason to believe the similarities go deep. We know vastly more about the structure and function of AI models than we know about the structure and function of brains. To state the obvious, we know how to engineer an AI model and we don't know how to engineer a brain. So there's a limit to how well we can assess their similarities.

I suppose this comes down to personal judgement, but it strikes me as extremely unlikely that a statistical text prediction model would just so happen to have the same properties (whatever those might be) that are necessary to give rise to human-like mental properties - and the fact that AI engineers took inspiration from brains doesn't change that for me.

Eric:

Oh I wasn't talking about the statistical text prediction. I agree that that alone doesn't reflect true intelligence.

I was talking more about how the way AI models work and the way that I think the brain works are mathematically identical: nodal connections forming.

But yeah, LLMs are not actual intelligence.

BobLobLaw:

It is definitely not in the same way...

Jon M:

I agree, but these models, with distributed code or pinging multiple servers, are not machines.

Can distributed software or a "process" be conscious, or is consciousness attributed only to a CPU? I would favor the latter as being more analogous to human brains.

Mo Diddly:

Consciousness is a total red herring in these discussions. AI may or may not ever become conscious, that is more of a philosophical conversation, but they definitely can have goals and intelligently pursue those goals, and pose great risk to humans.

Ben P:

Not that I can prove it, but I am far from convinced that "intelligence" can be meaningfully isolated from the broader realm of the mind. I get that defending the plausibility of AGI this way (e.g. "we're not talking about consciousness or desire or empathy, we're just talking about intelligence") makes it sound less fantastical, but it also limits what "intelligence" can imply. There's far more to intelligence than the skills that are easy to measure, like deductive logic and pattern recognition.

Mo Diddly:

Sure, I can think of all sorts of things that get wrapped up in the concept of "intelligence". But eyes on the prize here - logic, reasoning and goal-oriented problem solving are all that is required for AI to be powerful and catastrophically dangerous.

AKI:

But what do you mean by "the mind"? This is just moving the goalposts.

Substack Joe:

Yes, totally agree that there is no categorical dualist reason that consciousness must be biological or human. I think what gets my and others' hackles up is that if you are going to spend an inordinate amount of time on the threats of AI, look at the things that are actually happening right now (algorithmic denials of care, accelerating fraud trends, biosecurity vulnerabilities, etc.) before tearing your hair out about HAL 9000.

I have no objection to worrying about existential far-flung risks insomuch as they can be useful thought exercises, but to OPs point, those can be smokescreens for and promotional of an industry that is already destabilizing the here and now.

Regulus:

These are fair points, and I would point out that many who worry about existential AI risks are also positioned to understand the nearer-term risks; it's not mutually exclusive. I can understand frustration on the part of researchers warning about existential risk who are met with something like "yes, but other problems are happening now." I think they and 20th-century climate scientists should get a drink together.

Even if the risks are far-flung there are mitigations to take seriously here and now such as regulating safety testing and release criteria. Those can also help avoid the short-term negative AI impacts.

I don't agree about promotion of the AI industry. Full-steam ahead on AI research (risks-be-damned) is better for business. I consider this in light of people like Yann LeCun who are titans in the industry with multi-million dollar compensation packages from AI companies and publicly message that there is nothing really to worry about when it comes to existential AI risks.

Substack Joe:

I think we are generally in the same place. You make the points well and, as someone doing AI/healthcare research, I understand and can see both sides of that technological miracle/old man yells at clouds dynamic.

The subjects you point to in your second paragraph are the present concerns I am saying need more focus. When I talk about far-flung existential risks in the AI Safety field, I'm talking about the speculative "policy" work that tends toward science-fiction scenarios and overhypes the technology (driving more investment).

Jackson:

You don't have to be a dualist to think that, possibly, consciousness is a phenomenon of biological systems.

We have no idea where experiential consciousness comes from. Does it have a physical component that depends on a particular property of matter or states of matter? Is it an emergent property?

We have _ideas_ on which of the options make a certain kind of sense to us given what we know. But that is a far, far cry from claiming objective knowledge. If it's emergent, then it may not matter what substrate is used to interface with the environment or that the system is operating on... as long as the structure is there, qualia will "emerge" from it.

But if it is some other property, or in some way connected to the substrate itself, then no amount of modeling on silicon will get there.

Think of it this way: different materials are capable of conducting a charge more or less effectively, or not at all. Like electrolytes: you can push all the current you want, but current is limited by diffusion and mobility. So current has a sort of ceiling.

If consciousness is a physical property, there may be materials that are better and worse "conductors". And maybe biological materials are the best.

Even for materialists and atheists, there’s room to doubt that consciousness is a matter of just finding the right algorithm.

Substack Joe:

I don’t disagree with your more nuanced take and I’m glad you brought it up. It’s a good point and people should be aware of that. But dualists tend to be the loudest about consciousness not being replicable in my experience. Thanks for the clarification.

Jackson:

I have no reason for it, but I've always felt the hard analogies between neural nets, organic brains, and consciousness were "missing" something critical.

Like the ability of organic systems to rewrite their "hardware" based on the software feedback they receive... it seems to add a different layer of complexity to the system that isn't well modeled by adjusting weights or virtual rewiring.

Substack Joe:

Oh yeah, man, if we are going there let’s go there! I love this stuff. Your point about those analogies feeling insufficient is something I feel too. Though, I also have no idea if that sensation carries any logic.

One of the things that interests me the most is the fact that we don’t really have a particularly well delineated definition of consciousness. It isn’t something we can really test for in any clear way.

Is it a matter of a system having complex self-regulating feedback? If so, how is the Gaia-sphere something we don’t count as conscious?

What does consciousness have to do with free will? Is there, like Penrose posited, some weird quantum first mover attributable to consciousness? If it’s all epiphenomenal, is consciousness even a meaningful concept?

Glad to meet someone who also finds this stuff fascinating. Outside of the purely ontological questions, how we view consciousness plays into a lot of our moral intuitions, which makes things even weirder.

https://theslowpanic.substack.com/p/what-it-feels-like-to-compute

Jon M:

I have a quandary for you. If it is emergent and substrate independent, which part of the AI models are conscious?

It couldn't be the software itself; wouldn't it have to be the computer, and most likely whatever part of the machine is doing central processing?

How would that work for software running on multiple servers, or that can send copies of itself and cycle on and off? The only consciousness we experience is highly localized and attributed to the brain. How does software-based consciousness work that gets pinged off satellites and zoomed in from underground servers to cell phones, choked through wifi connections, bounced off of firewalls, etc.?

Jackson:

This doesn't feel as problematic to me. Humans don't have consciousness in their extremities.

The nature of an emergent property is that it doesn't belong to any one component but is a property that exists by virtue of the way individual components are organized.

The locality of consciousness is something that we struggle with defining in ourselves. So I don't find it surprising or concerning that we'd also struggle defining it in other, client systems.

For example, even in ourselves it appears that consciousness is largely a post hoc illusion used to explain actions after they have occurred, layering a veneer of "choice" on top of them. There are classic split-brain experiments where one "side" of the brain is given an instruction that the body then carries out. The other "side" is asked why it did that thing. Rather than respond "I don't know", they'll concoct an elaborate and perfectly rational explanation for the behavior.

In MRI experiments on decision making, we can see the parts of the brain activate and the action begin _before_ subjects report awareness of the decision. In other words, the conscious experience of "intent" occurs after the action begins, not before.

Which is to say, I think it's very likely consciousness isn't even what we _think_ it is when we self reflect.

Jon M:

I tend to think also that it is emergent based on how components are organized, mostly leaning towards centrality of information processing.

How that works for software that just makes use of multiple machines that are already centrally processing other information separate from the software model is difficult to say, though split brain does seem to say we can house different conscious experiences on the same machine. Maybe there is no limit to the number of consciousnesses that emerge, but it has more to do with how the information (not the physical components) is being organized.

Jon M:

Intelligence or consciousness?

I would argue that computers are already exhibiting intelligence (parsing and organizing data and doing sophisticated modeling), but consciousness is in the black box.

I too believe that consciousness is substrate independent, but the only consciousness we know is also one that is centrally processed and contained within one neural network. The idea of distributed, large-network, decentralized consciousnesses that aren't centrally processed within one physical machine, but ping different systems, well, that is totally speculative and a leap from what we know by way of analogy to the human brain.

Regulus:

Intelligence.

I think the concept of consciousness muddies the waters here. Consciousness isn't directly relevant to whether artificial intelligences will or won't pass some threshold of superintelligence. There can be intelligence without consciousness, as you point out, and vice versa. What we really care about is how competent an artificial system might be in putting its goals into action.

Elqueorra:

The human brain, yes. But such intelligent systems exist in nature. Hive insects display hive intelligence (ants, bees, wasps). Then there’s fungal networks, which even share information with trees and other plants, possibly meaning that a forest has a sort of connected intelligence and operates as a super network of plants, diverting energy and resources where needed according to the fungal system.

AKI:

I agree, although the world wide web of fungus has kind of been debunked, beautiful as the idea was.

Jackson:

Not sure I'd say "many". There's a handful of very vocal, and in some spaces influential, naysayers.

But it's observably true that the vast, overwhelming majority of people in tech, researchers, engineers, etc. are not really that concerned at all... this is, of course, one of the things that scares the naysayers: no one else seems concerned.

Ben P (Oct 6, edited):

I was definitely in rant mode there. I acknowledge that there are also people in these other fields who take ASI seriously. I'll just say that the proportion of people in AI specifically who take it seriously (and who get quoted by journalists) appears disproportionately high compared to the others.

I don't know what you mean about a strong scientific consensus that intelligence is substrate-independent due to being reproduced in silicon chips. That sounds like a philosophical rather than empirical claim. And to the extent it's empirical, whether or not intelligence has been reproduced in silicon chips will effectively depend on how you define intelligence. I'm happy to acknowledge the undeniable fact that some AIs do very well on tests of human intelligence.

I should have been more clear about my claim though. I'm not saying that the existence of ASI (or AGI, or "thinking machines") is a physical impossibility. I don't feel really strongly on that one. I do feel strongly that it is foolish to suppose us human beings are going to engineer it. I think that the huge successes of science and technology (including AI technology, not denying that it's impressive) blind a lot of people to the limitations of human knowledge and abilities. To expand on my previous example, we don't know how to engineer life, at least not from scratch. This, despite the fact that there was once no life on Earth, and now there is. So we know that life can be brought into existence, and of course there are scientific theories about how it happened on Earth. But we can't recreate it. Perhaps one day we will; perhaps not.

Now, consider "intelligence" of the sort that humans possess. Assume there's no issue with defining the term, and assume it is a coherent and empirically detectable phenomenon (I have my doubts on those). Humans have it. Some other relatively advanced animals seem to have something like it. Less complex life forms seem not to have it. Seems like if we're going to artificially recreate a phenomenon known only to appear naturally in the most complex living beings, we should have figured out how to artificially recreate the very simplest of living beings *long* before. We have a lot of solid science on how our brains work, but what we know is dwarfed by what we don't. We're lord only knows how far off from the kind of knowledge needed to engineer one. But I'm supposed to seriously entertain the possibility that a generative statistical model for natural language text constitutes a big step toward a breakthrough in engineering the mysterious thing (intelligence) known only to exist naturally via another mysterious thing (human brains) that for practical purposes is impossible to engineer? Doesn't make any sense to me.

So I'm not saying AGI or ASI are literal impossibilities, I'm just saying there's no good reason to believe they'll be with us anytime soon enough to make them worth worrying about. I don't begrudge anyone their interests and passions. I'm glad there are some people out there trying to work on this problem, if only for the virtue of pursuing knowledge. I don't think it deserves 1/100th of the attention it's been getting.

Regulus:

Thanks for the comments, there's a lot in there.

A common misunderstanding is that humans need to be capable of engineering superintelligence. We only need to be capable of engineering AIs that are capable of self-improvement. Even if the AIs we make are not very general, but can push the frontier of AI capabilities just an inch at a time, the path may then open toward superintelligence without our hands being on the wheel.

I don't agree that figuring out how to recreate life is a prerequisite to figuring out how to generate intelligence. We are already very good at creating human-level intelligence in many specific areas -- LLMs are just one example. And we create "child-level" intelligence in other areas. And super-human intelligence in other narrow areas like grid optimization or chess or arithmetic.

When you say that you're supposed to believe a token prediction model for language constitutes a big mysterious breakthrough, I would say yes, because we have seen empirically that it does appear to be a bigger breakthrough than many in the field expected. Granted that LLMs are just one AI modality, and they have limitations -- we may ultimately find something better and heap them in the trash bin. But for now, I think it's pretty undeniable they've advanced artificial intelligence.

Mo Diddly:

Can you name a capability you think AI will never be able to achieve? A year ago I might’ve said winning gold at a math olympiad was impossible, given that the answers weren’t in the training data, and yet here we are

Ben P (Oct 6, edited):

Yeah, I think this is a valid point, though I also think it's deceiving. The way AI goes about accomplishing these things doesn't seem like a plausible mechanism for getting to the "AGI" kind of intelligence. We've seen enough instances of LLMs statistically pattern-matching their way to correct answers and then falling into stupid pieces when given a just-as-easy-but-less-well-represented-in-the-training-data question (e.g. GPT-3's success rate at two-digit multiplication problems being a near-perfect function of how frequently the two numbers being multiplied appeared in training; a toy harness for probing exactly this kind of thing is sketched below). I don't care that the new LLMs are bigger and "more powerful" than the ones that were obviously performing mindless pattern matching; they're all neural-network-based probabilistic next-token predictors. So if we're weighing explanations for how those math olympiad problems were solved, it seems we have:

1) The new, larger version of the mindless pattern-matching model has somehow mysteriously acquired human-like intelligence.

2) The new, larger version of the mindless pattern-matching model is still doing mindless pattern-matching; we just can't explain the precise mechanism because the math is uninterpretable.

2) Seems vastly more plausible than 1). More concisely, our default hypothesis should be that LLMs (and whatever else gets called "AI") are *operating the way they were designed to operate*, rather than that human-like mental abilities have somehow "emerged". That those math olympiad questions weren't in the training data doesn't mean the AI did anything remotely similar to what humans do when they solve hard math problems. Everything I've read about how the newer "reasoning" AIs work sounds pretty hacky, if admittedly often impressive. Adding additional layers of pattern-matchers trained to eliminate candidate answers from the first pattern matcher is clever, but I don't think it deserves to be called reasoning, even if it's being used to get high scores on things called "reasoning tests".
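
(Here's the toy harness mentioned above, just to make the multiplication probe concrete. `ask_model` is a hypothetical stand-in for whatever LLM is under test, not a real API; the deeper step, correlating errors with how often each pair appears in the training data, requires access to the training corpus, which outsiders rarely have.)

```python
import random

# Toy probe: sample two-digit multiplication problems, ask the model, score it.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a call to the model under test")

def two_digit_multiplication_accuracy(sample_size: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    pairs = [(a, b) for a in range(10, 100) for b in range(10, 100)]
    correct = 0
    for a, b in rng.sample(pairs, sample_size):
        reply = ask_model(f"What is {a} * {b}? Reply with only the number.")
        correct += reply.strip() == str(a * b)
    return correct / sample_size
```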

As to your direct question, I think it's foolish to play along with this because AI engineers are very good at meeting the challenge via means likely not anticipated by the person who set it up, and that undermine the spirit of the challenge. Calling it "cheating" is too much, because boy are these models impressive. But the usefulness of the metric is undermined if it's met using a method other than what it was supposed to be testing, e.g. solving logic problems by having one model generate boatloads of possible solutions, many of them idiotic, and then having the other model filter out the bad ones. Meets the metric; circumvents the metric's purpose.
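
(For what it's worth, the pattern I'm describing, sometimes called generate-and-verify or best-of-n, is roughly the following. `propose` and `looks_valid` are hypothetical stand-ins for the generator and the verifier, not real APIs.)

```python
# Generate a pile of candidates, filter them with a second model or checker,
# and report whatever survives. Nothing here requires "reasoning" in the
# human sense; it is search plus filtering.
def propose(problem: str, n: int) -> list[str]:
    raise NotImplementedError("sample n candidate solutions from the generator model")

def looks_valid(problem: str, candidate: str) -> bool:
    raise NotImplementedError("ask a verifier model (or a checker) to vet the candidate")

def solve(problem: str, n: int = 64) -> str | None:
    for candidate in propose(problem, n):
        if looks_valid(problem, candidate):
            return candidate
    return None  # every candidate was filtered out
```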

That said, I have a candidate. Make an image generation model trained only on artwork and images from before the year 1900. Then, through prompting alone, get it to generate some abstract expressionism, or graffiti, or art of any other style that didn't exist before 1900. Or, make a music generation model trained only on music recorded before 1970, and through prompting alone get it to generate death metal or electronica or gangsta rap, etc.

In short, get a generative model to create entirely new artistic genres, as distinct from the genres in their training as actual human-created genres were from those that came before. I doubt anyone is going to get AI to do that. But, like you, I would have doubted a lot that's since been achieved. What I reject is that these achievements imply that we're headed toward the world that the "AGI is closer than you think" folks describe.

RMC (Oct 6, edited):

I enjoyed this very much, about what is and is not emergence. David Krakauer.

https://www.youtube.com/watch?v=jXa8dHzgV8U

Anna:

So are you saying there won't be robots doing everything while I collect my universal income check?

Mark Birdsall:

I'm very comfortable predicting that there is zero chance that universal income is going to happen. The oligarchs will burn this all to the ground before they see that happen.

Anna:

And if robots do all the work and people don't have jobs, how could there be oligarchs? Who will be consuming/paying for stuff and how?

M Yao:

The problem with modern technological advances is that the people selling them use sci-fi words to make them both more understandable and cooler. But then everyone assumes that the technology is in fact equivalent to the sci-fi example given.

AI has seen this happen, but also: genetic engineering; cloning; and probably many other examples. A good non-AI example is when researchers supposedly recreated dire wolves, Jurassic Park-style, and everyone lost their minds.

Ben P:

THIS! OMG THIS THIS THIS!!!! You put it perfectly.

I don't like having to feel the way I feel about AI. If we'd referred to ChatGPT and the rest as probabilistic next-token prediction models, and spoken of them as such, I'd think they were amazing. Technologically, they *are* amazing. But it's hard to appreciate the achievement when the tech companies misrepresent them and then feed hardcore reductionist metaphysics to naive journalists under the guise of science.

Bz Bz Bz:

The word "AI" was not invented by salesmen or sci-fi writers but by scientists setting forth a research program. When the scientists made great progress on their research program, they were perfectly justified in then using the term.

M Yao:

I am referring to the gap between the well-established public perception of AI as human-like intelligences with their own wants and needs, and the reality of what a probabilistic next-token prediction model is and is capable of.

Jon M:

plot twist:

what if probabilistic next neuron firing is how our brains work, too?

M Yao:

And yet we can draw hands.

Jon M:

You speak for yourself only. Best I can give you is a 3 fingered cartoon hand.

BobLobLaw:

It isn't, since we and all other mammals can form thoughts without knowing language.

Bz Bz Bz:

So, not exactly this, but there is the idea of "predictive coding", which says a lot of the processing brains do is about predicting sensory signals.
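
(A caricature of the predictive-coding idea, just to show the error-driven flavor; real models are hierarchical and far richer, this is only a sketch.)

```python
import numpy as np

# The system carries a prediction of its input and learns from the prediction
# error ("surprise") rather than from the raw signal itself.
rng = np.random.default_rng(0)
signal = 1.0 + 0.05 * rng.standard_normal(300)  # noisy, boring "sensory" stream
prediction, lr = 0.0, 0.1
for x in signal:
    error = x - prediction      # what the current model failed to predict
    prediction += lr * error    # adjust the internal model to reduce the error
print(round(prediction, 3))     # settles near the true mean of ~1.0
```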

Bz Bz Bz:

Probabilistic next token prediction models are not some random technology that people decided to call artificial intelligence because they thought it would make it sound cool. They are called that because they are the result of decades of work of scientists trying to build human-like intelligences, and in fact are substantial progress toward that goal. It’s important to keep the history in mind

M Yao:

When the goal, human-like intelligence, isn’t understood, how can we know if we’re making progress toward it?

Regardless, AI researchers can call their products whatever they want to. It’s not a privilege they have to earn. But it’s obvious that what the majority of people think of when they hear AI isn’t what our current programs we call AI are, or are capable of.

Bz Bz Bz:

We can know we’re making progress by splitting the problem into categories. Human-level linguistic competence was totally unique in the world until it was successfully reproduced by LLMs. It’s a judgment call that linguistic competence is a better sign of progress than chess-playing ability, but I’m comfortable calling programs that can do either AI.

Hazard Stevens:

I AGREE. Oh my gosh, people tell me I'm in denial when I say this. The reason AI is so popular is because it represents the apotheosis of a certain class's dream: value creation without labor. They are fooling themselves, and they would like to fool us too.

Bz Bz Bz:

This kind of psychoanalysis is leftist brainmush. The idea of AI is just intrinsically compelling and captivating; it taps into a very deep, mythic part of the human psyche. Of course it's going to be popular! C'mon, blame it at least on sci-fi geek fantasizing.

Hazard Stevens:

I too am a sci-fi nerd, but I cannot escape my materialist tendencies. The fact that you hear blue-haired kids incorporating Instagram-graphic "class analysis" into their identity politics smoothie does not dismiss the basic fact that the hype behind this is based in the idea of rich people wanting shareholder value to increase without having to deal with those pesky workers who by and large create it. They want content without content creators, movies without actors or editors or crews, stories without writers, spreadsheets that balance themselves. It's a totally quixotic quest, but it's too tempting to pass up. That's basically the source of all the AI hype.

Bz Bz Bz:

There are two separate issues - the popularity of AI and what is up with the current levels of investment in the technology.

First, popularity. The fact that AI companies have gigantic userbases shows that there is organic mass appeal. The scientists who developed the tech did so because they were ambitious and were excited about its potential and interested in the nature of the mind. And AI has of course long been a topic that interested writers and thinkers because it touches on deep themes

Now, the current round of investment. Yes, ceos want to replace labor. In my opinion value creation without labor would be great! You could say that a world without the need for human workers would be highly unequal, and that it would primarily benefit the owners of capital, and that’s probably true. But lots of people own capital! More than half of Americans own stocks, and would gain from companies becoming more productive. And there is also government wealth redistribution which will be able to benefit people more when there is more wealth that is being produced

Mo Diddly:

A world without the need for human labor at all would usher in total collapse of human civilization.

Bz Bz Bz:

well we’ll see

Ben P:

Totally agree that AI sparks our imaginations. I'd say it makes them run wild beyond all reason, especially among those who spent their childhoods fantasizing about living in their favorite sci-fi worlds.

Bz Bz Bz:

“There is another viewpoint, one that I think is far more rational, humble, and respectful of humanity”

It is not humble to think human beings are so special that their intelligence is impossible to replicate mechanically. Maybe if you were an AI researcher working at a lab, it would be humble to admit that your field isn’t actually able to achieve its stated goals, but there is nothing humble about people in adjacent academic fields dismissing the progress AI has made. There is a long strain of materialistic and mechanistic thinking in philosophy, psychology, and biology, so your attempt to manufacture some kind of strong consensus against the possibility of thinking machines is unsuccessful.

Bz Bz Bz:

“If we don't understand how minds work, why on earth should we entertain the idea that humans are going to artificially engineer them? We don't know how to make life from non-life, right?”

Are you a creationist? The process that brought human minds into existence, evolution by natural selection, also didn’t know how minds work. So we are comparing two different “blind” processes, Evolution and Scientific Trial and Error.

Ben P:

No, I am an atheist. I agree that minds are natural phenomena. I just think it's ridiculous to think us humans will be engineering them anytime soon enough to make it worth worrying about.

Which other products of evolution by natural selection have we humans successfully engineered artificially?

Bz Bz Bz:

Powered flight, for one

Ben P (Oct 6, edited):

Ok, I appreciate the point. I think complexity is the barrier here. Defining and detecting the existence of flight is pretty straightforward compared to doing the same for intelligence. And the case for why we should be taking the potential for AGI seriously rests on superficial similarities between what brains can do and what AI can do, as though identifying specific tasks implies there's a nascent *generally* similar intelligence waiting to be coaxed out if we can just make the superficial one sophisticated enough.

At the "we engineered flight, previously only seen in nature" level of generality, a car is an artificial horse because they both go faster than us. But this doesn't imply the existence of any other similarities between the two. No one back in the 1930s hypothesized Model T Ford technology would one day become so advanced that cars would mysteriously start shitting in the streets. By the same token, why should we imagine that machines engineered to perform certain tasks that our brains perform will somehow acquire more general properties of brains? This is a belief based far more upon faith and metaphysics than its advocates realize.

Bz Bz Bz:

Basically I think we will try to replicate more and more of the important functions of the brain, and since it looks like we’ve made a lot of progress, I expect that to continue. I agree that there’s not a super clear path where we get AGI in a few years just from scaling existing approaches up, I think it will require new ideas. Our disagreement could be about how much progress we’ve actually made, or it could be about how much there is left to go/whether we should expect progress to continue

Ben P (Oct 6, edited):

It is humble to recognize that a lot of things we study scientifically are nonetheless mostly mysterious to us, with human minds and their properties and their connection to brains being among the most mysterious. It is arrogant for tech folks to publicly declare that they're getting close to engineering machines that can "do everything humans can do" on the basis of... what? That chatbots are scoring well on intelligence tests designed for humans? That we call the models "neural networks" because they were inspired by brains, and since they're inspired by brains maybe we're getting close to the recipe for the special sauce that makes intelligence emerge from machines? I'm being a little flippant, but these are the kinds of reasons I've gotten used to hearing.

I can't prove that thinking machines will never exist. I see no good reason to take the prospect of them seriously. Pre-ChatGPT, the AGI/ASI/emergence/P(doom) discussions that some communities were having were benign. Post-ChatGPT, this stuff has escaped obscurity and is being pushed hard by the big tech companies and some high-profile people in the world of AI and normie journalists are taking them seriously because they are "AI experts". I see no reason why being an AI expert should imply any kind of special knowledge or insight about brains and minds.

As far as humans being special, yeah, we're pretty special. All life is special if we're comparing it to the class of things that human beings are able to engineer. The universe is complicated and mysterious, and it owes us zero answers to our questions. It's great that us clever humans have found lots of answers anyway. But let's not get greedy and think wild speculation about future scientific and technological achievements should be taken seriously just on account of it not being provably impossible.

I agree that there's a lot of impressive stuff coming out of the AI world right now. I don't dismiss the actual progress that's been made. I dismiss the nonexistent fantasy progress that is soaking up way more attention than it deserves.

Bz Bz Bz:

The fact that neural networks were inspired by brains and subsequently were shown to be able to replicate many abilities of brains is not a small fact. You don’t see that as any evidence at all that we’re hitting on similar design principles? It’s just a coincidence? You could still think LLMs are a total dead end without throwing out neural networks entirely.

One reason, I don’t know if it’s your reason, that some cognitive scientists dislike deep learning comes down to disagreements about how much stuff in the mind is innate versus emergent/learned/statistical. So in first language acquisition, Chomsky promoted a rich universal grammar and rejected statistics as an approach. But there have always been cognitive scientists promoting general pattern recognition /statistical approaches to language acquisition.

Chatbots passing IQ tests is not the big thing for me, it is them displaying fluency in English. The fact that they can do text summarization, have natural language conversations, pass Turing tests, pass empirical tests of language understanding, generate fluent, grammatical, comprehensible, and even helpful text - that is what impresses me. It is obvious that substantial progress has been made, not just in “technology”, but in the AI problem! The stated goal of the field, going back decades, to mechanically reproduce human cognitive capacities.

Mark Birdsall:

I've met many people in AI-adjacent fields with this point of view. However, almost none of them have any experience with cognitive psychology or neurology, and according to some of the best minds in neurology and cognitive science, it is absolutely impossible for your statements to be true beyond all doubt.

I have my own scepticism about it, but we really have no certainty about what constitutes consciousness, thought, or awareness. We can hardly even describe these terms scientifically in a way that satisfies everyone. Still, the idea that consciousness/awareness may emerge out of a degree of complexity with parallel distributed neural processing is a widely held and well-regarded idea.

Even if you are right, and these models have no chance of ever obtaining sentience, the dangers they pose to our society are robust, just in their ability to fail in ways we cannot predict. A non-sentient robot can still crush a thing, and a non-sentient "thinking machine" can make horrible decisions with dire consequences.

Maybe you should consider listening more carefully to what is being said, rather than tuning out because you think you already know what's going on.

Ben P (Oct 6, edited):

"Even if you are right, and these models have no chance of ever obtaining sentience, the dangers they pose to our society are robust. Just in their ability to fail in ways we can not predict. A non sentient robot can still crush a thing and a non sentient "thinking machine" can make horrible decisions with dire consequences."

I agree 100%. I think the stuff we call AI is capable of doing much harm, even if I think the AGI and ASI stuff is silly. I'm deeply concerned about the harm being caused by the "AI" that exists and is being used today, and given this I think it's a mistake to allow fears over what amounts to speculative metaphysics to suck public attention from the here and now. I'm not saying no one should work on or study or even plan for AGI/ASI. Everyone is free to work on whatever they find interesting and valuable. I'd say the same about people who want to work on preparing for a zombie apocalypse or the second coming of Jesus Christ, two other extremely remote possibilities that many people can nonetheless vividly imagine.

To be clear, I'm not making the strong claim that AGI/ASI is a literal physical impossibility. I'm making the weaker claim that there's no good reason to think they're on the way, in 2 years or 200 years. Maybe they are - I certainly can't prove that they aren't. But that's a low bar for taking fantastical predictions seriously. If I gave off a sense of certainty beyond all doubt, it's because this frustrates me and gets me fired up, and the BARpod comments sections are a safe space for colorful rhetoric :)

Gebus:

"If humanity is going to be harmed by AI, it's going to be because we treat it like it can do things it can't, and let it fuck stuff up out of our own naivety."

This is my actual fear about AI.

AKI:

This is also true of how humanity is often harmed by humans.

Reuven:

I share your skepticism (and I did my Master's thesis on neural networks ... back in 1986!).

But I'm always surprised by what becomes possible. Back in the 80s, problems that we thought would be hard were things like continuous speech recognition (which is solved now) and image recognition and segmentation. I used to think that we'd never be able to build a box that could, say, recognize cats better than a human (i.e., point it at a cat and a green light lights up; point it at anything else and a red light lights up), and that if you could solve this problem, you'd have reached AGI. I could build this box today for $100... but we're still not at AGI.
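
(For the curious, the "$100 cat box" really is about this much code nowadays, using an off-the-shelf ImageNet classifier. A rough sketch, assuming a recent torchvision; the cat-class index range, roughly 281-285 in ImageNet-1k, and the 0.5 threshold are assumptions a real build would verify and calibrate.)

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the weights' matching resize/crop/normalize

def green_light(image_path: str) -> bool:
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    return probs[281:286].sum().item() > 0.5  # domestic-cat classes (assumed indices)

print("green" if green_light("snapshot.jpg") else "red")
```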

I was very skeptical about driverless cars, and now I full-self-drive myself in a Tesla to the office every day.

I remain a skeptic, though, and I think the danger may be more from over-trusting the imperfect tech than it going super-intelligent and killing us all.

Ben P:

Thanks for this, it's nice hearing from someone with first-hand knowledge of AI's earlier days.

I could not agree more with your last sentence. The technology we call AI is doing plenty of damage right now, and has the potential to do far more. Not because it's gonna outsmart us, but because it's going to outstupid our naive impressions of what it's capable of doing or doing well - impressions being fed to us right now by people with a huge financial stake in their popular acceptance.

Eloya:

THANK YOU!! Somebody had to say it. Computer scientists and others in the AI space have been saying that General Intelligence is "just around the corner" since the 60s. It is a giant bubble. I am NOT worried about AI ever being smarter than us, but I AM worried about people taking AI seriously. Literally people are building a Tower of Babel. I'm worried about people getting so high on their own supply that they use it for nefarious purposes, not that the AI actually KNOWS or UNDERSTANDS what it's doing. Everyone look up "John Searle's Chinese Room". 🙄

Regulus:

What does it matter if an AI "knows" what it's doing? If it is capable of doing things we don't want it to do, I don't care whether it "knows."

Expand full comment
Colin B's avatar

On the PS, Katie exemplified why today's Democrats have such a hard time forming coalitions.

Expand full comment
Martin Blank's avatar

AI often cannot figure out the difference between sweet and suite above an 8th grade level. I can barely restrain my cackles when people suggest it is coming for my job. No 8th graders in my field, believe it or not, even if there are some very stupid adults.

Expand full comment
Rem w's avatar

I agree with you 99%.

But there is a small chance that in the future we discover how our minds actually work, and that they basically are biological machines, and then AI could emulate us quite well.

But yeah, the idea of these things we have now being anything other than advanced chatbots/search engines is stupid.

Expand full comment
Jackson's avatar

As someone who also works with AI daily as a software engineer, and has for a long time, and who has an educational background in philosophy....I mostly concur with your sentiment.

Sam Harris's wife, Annaka, has a great audiobook out on the nature of consciousness. It's a good listen and a great "survey" of the history and current thinking.

I think claims of creating General AI are premature given we don't understand even the basics of what conditions and environments are required for the Intelligences we see all around us today. Like we fundamentally lack even a rudimentary explanation of _why_ the experiential phenomenon of consciousness even exists.

This seems similar to prognosticating the imminent arrival of 100% engineered artificial, biological systems before we had a working model of the genome or basic cell structure or even the knowledge that they exist.

That said, I'm sympathetic to a pure behaviorist/Turing approach....e.g. if it walks like a duck, talks like a duck, and is perceptually indistinguishable from a duck, then it is, by all understanding of the word...a duck.

And I _do_ think we're getting very close to clearing the Turing test in this regard generally and already are past it in some narrow cases.

I don't think there's anything special about human intelligence. It's a matter of degree of animal intelligence rather than a fundamental difference in kind. I DO think there's something special about _biological_ intelligence. We've never observed a single instance of non-biological "life" to even use as a basis of debate on whether it can be conscious. And without understanding _from whence_ the phenomenon of consciousness comes, it seems reasonable to assume there's something about biological systems that provides an "if and only if" relationship to intelligence.

Ok, ok. So will AI become "smart" enough to eliminate entire human industries?

Yeah. That's already happening. Low level, generally skilled knowledge workers are going to start disappearing in our lifetime and already are to a degree. Lawyers, software engineers, and administrators seem highly replaceable in the near future, to name a few. In fact, with _most_ negative reviews of modern model performance in these domains from people working in those domains, I've found significantly poor system design by the users to be a main contributor. Like they describe what they did to me and my first thought is "well, that'll never work well".

Which is to say, even now, it seems to be as much of an engineering/platform problem as a capabilities problem. So, I predict a lot of disruption. Maybe it'll feel like all at once and maybe it'll just sneak up with fewer law degrees and more "Legal Services Platforms" (for example).

I find the misalignment problem less concerning than the doomsayers do. I struggle to find a cogent path from "AI looks to be generally intelligent and is expressing 'wishes' and 'desires' that we don't like" to "we've now been overtaken by robots". There's a stark divide between the virtual and the concrete world. How would an AI knock on my door and "arrest" me? How would it take over a factory long enough to secretly build a rocket ship to deploy solar cells to block the sun? We could just cut the damned cable and shut off the power. A rogue AI that infects machines? Somebody's going to sell a "dumb" AI anti-virus program to protect you. Same old cat & mouse as before.

The leap of faith to me isn't whether AI will get so sophisticated that it's indistinguishable from conscious beings. The leap is how do they get from there to taking over the physical world without us noticing and just.....pulling the damned plug?

Expand full comment
Ben P's avatar

Thanks for the thoughtful reply. I very much appreciate your point that the distinguishing characteristic of intelligence (as we know it) isn't that it's human, but that it's biological. And of course I agree about the Skynet stuff.

I'm less convinced on the pure behaviorist approach to assessing AI "capabilities", for the practical reason that observing some collection of similarities (walks like a duck, talks like a duck) doesn't imply further unobserved similarities, especially if we're assessing what we know to be a fake duck. And it seems all the big claims about the future of AI depend on it being able to do stuff we haven't observed, or have observed superficially, like GPT4 having legal competence because it passed a bar exam.

To perhaps abuse the metaphor... if I'm walking by the lake and I see something that looks, sounds, and walks like a duck, I'm going to feel pretty confident that it can also swim like a duck. On the other hand, if I'm at the robot duck production factory and I see something that looks, sounds, and walks like a duck, I'm not assuming that thing can swim until I see it swim. And if I see it swim, I'm not assuming it can fly. And if I see it fly, I'm not assuming it can eat a worm. Etc.

Or, to give a "human abilities" example, if I observe a person correctly multiply two 3-digit numbers even once, I'll be pretty confident that person is able to not only correctly multiply other pairs of 3-digit numbers, but that they can also add, subtract, count things, identify Cartesian coordinates on a graph, etc. But if I see an LLM correctly multiply two 3-digit numbers, I figure those two numbers probably appeared frequently in its training, and I'm not gonna bet on it getting the next one right.

More dramatically, if I see a person write out the classic proof for the irrationality of the square root of 2, I'd be awfully surprised if that person could not also reduce a fraction of two-digit numbers, fraction reduction being far easier and also fundamental to the proof. But show me an AI that can write out a proof of the irrationality of the square root of 2, and I assume absolutely nothing about what other math things it will get right. Can it apply the Pythagorean theorem if I label the hypotenuse "a" instead of "c"? Can it count the number of digits in a large number? Can it prove the irrationality of the square root of 7, rather than 2? If I ask it to prove the irrationality of the square root of 2.25, will it fall for my trick? The answer to each of these is "maybe, maybe not".
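
For reference, the classic argument being invoked is short, and it also shows why 2.25 is a trap:

$$\sqrt{2}=\tfrac{p}{q}\ (\text{lowest terms}) \;\Rightarrow\; p^2 = 2q^2 \;\Rightarrow\; p \text{ even},\ p=2k \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; q \text{ even},$$

which contradicts "lowest terms," so no such fraction exists. The 2.25 version has no such proof at all, since $\sqrt{2.25} = 1.5 = 3/2$ is rational; a responder that "proves" it anyway is pattern-matching the template, not reasoning.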

This, to me, is a crippling deficiency. I acknowledge that the sphere of things that AI does well keeps growing. But the unpredictability and unreliability and inability to generalize that is inherent to "learning"-based AI places major practical constraints on how much human labor it can replace. Maybe it can do 95% of what a paralegal does, but if it does idiotic, nonsensical, and dangerous stuff with the other 5%, can the paralegal be fired? I think a lot of people are failing to appreciate just how much normal human judgment (of the kind AI lacks) is needed to get us through the day.

Expand full comment
Jackson's avatar

I think you may be belaboring the point a bit. The behaviorist approach includes _all the things_ you can think of.

It doesn't assert you should just accept it's a duck until you've exhausted all these other tests. It's essentially saying, if you cannot conceive of a single objective test that this thing you're looking at doesn't pass exactly as a duck would, your Bayesian certainty that it is in fact a duck should become asymptotic to 100%.

But the behaviorist assertion for "knowing" minds is even stronger, in that it claims there is no other way to validate the mind is what it appears to be. At least until someone devises a non-behaviorist test to do so. But no one can even think of what the nature of that test might be, let alone how to implement it.

Thus, a behaviorist approach (e.g. some form of the Turing test) is the _only_ approach with any validity at all.

As a counterpoint, forget AI.....just prove another human being is conscious. Philosophers have been kicking that can for hundreds of years. And, in the end, the best they could come up with was "Well. I mean. I'm very much like this other human and I'm pretty sure that I'M conscious. Soooooo. Probably?" (bit more sophisticated, but that's the gist)

Take away that fundamental similarity and what's left but Turing?

Expand full comment
Ben P's avatar

I'll cop to belaboring the point about behaviorism. My objection is less to the principle and more to how the thought experiment is being utilized. I agree that as our perhaps-a-duck survives more independent duck-falsification tests, my rational degree of belief that it is a real duck must keep going up. But I also can't pretend to not know that this perhaps-a-duck was artificially manufactured by people whose goal was to maximize its *apparent* duckness. And this is my issue with the behaviorist approach to assessing how "intelligent" an AI is. LLMs might as well have been designed for the exact purpose of passing a Turing test; the fundamental nature of the technology is mimicry optimization. And we have seen endless examples of LLMs appearing to display some form of intelligence, only for this to be revealed as cheap mimicry when the problem to be solved is reworded to look less like what's in the training.

Now, I'll concede that it's getting harder to get these kinds of examples. It used to be dirt simple: take the Monty Hall problem, make the doors transparent, tell OG ChatGPT you've picked the one that you see a car behind, and it tells you to switch. Ask which is heavier, a pound of iron or two pounds of feathers, and it'll say they weigh the same. That kind of thing. Today's LLMs aren't as obviously mindless, but they're made using the same basic technology, just scaled up. I can't pretend to not know this, regardless of how well they do at tricky logic questions nowadays.

To bring this to your example about proving consciousness, I suppose I agree with the philosophers who say "I believe other people are conscious because I'm conscious so why wouldn't they be conscious too?" That is absolutely my main reason for believing 1) other people are conscious, and 2) LLMs aren't. You're right that it isn't an empirical test, but that doesn't bother me, especially if the alternative is committing to a test that could easily conclude some people aren't conscious and some machines are. That's not a criticism of scientific testing, it's a statement about how maddeningly difficult it is to test for the presence of mental properties. I'm comfortable going with the common sense approach that admittedly rests on unprovable assumptions, over the empirical approach that, due to the nature of the phenomenon in question, is liable to lead me astray.

All this said, if forced to propose an empirical test for consciousness or intelligence or any other mental phenomenon, I'm not gonna come up with anything better than the Turing test. If we wanna get Bayesian about it, my claim is that such a test yields a Bayes factor that is too small to overcome my prior that the statistical next-token predictor is merely a statistical next-token predictor.
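
To spell that framing out, the update works on odds:

$$\underbrace{\frac{P(\text{mind}\mid\text{passes})}{P(\text{mimic}\mid\text{passes})}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(\text{passes}\mid\text{mind})}{P(\text{passes}\mid\text{mimic})}}_{\text{Bayes factor}} \;\times\; \underbrace{\frac{P(\text{mind})}{P(\text{mimic})}}_{\text{prior odds}}$$

and the point is that a system optimized specifically to pass makes the denominator of the Bayes factor nearly as large as the numerator, so passing barely moves a skeptical prior.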

Expand full comment
Bz Bz Bz's avatar

LLMs are not great at multiplying numbers in their heads, which is a lot like humans actually! Now they are trained to write out their reasoning and to improve the quality of their reasoning and they are getting much better.

I asked Deep Seek to multiply two randomly generated seven-digit numbers and it got it right. It's true that they're a lot better at memorization than humans. But you are underestimating their generalization. It also didn't fall for proving the irrationality of the square root of 2.25 and correctly counted the digits in a 45-digit number; I didn't try the other stuff.
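
If anyone wants to run that kind of spot-check systematically rather than by hand, here's a minimal sketch; ask_model is a hypothetical stand-in for whatever chat interface you're poking at, not a real API:

```python
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to whichever LLM you're testing.
    raise NotImplementedError

def probe_multiplication(digits: int = 7) -> bool:
    # Generate two random n-digit numbers, ask for the product, and check
    # the reply against the exact answer computed locally.
    a = random.randint(10**(digits - 1), 10**digits - 1)
    b = random.randint(10**(digits - 1), 10**digits - 1)
    reply = ask_model(f"What is {a} * {b}? Reply with only the number.")
    return reply.strip().replace(",", "") == str(a * b)
```

Run it a few dozen times and you get a hit rate instead of a single anecdote.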

Expand full comment
Mo Diddly's avatar

Because we need to need each other. Moral intuitions that don’t advance prosperity tend to atrophy. If humans are no longer required for any production, I predict that the value of human life outside of one’s small in-group will plummet, taking national and global stability with it.

[edit] this was posted in response to the wrong comment, whoops

Expand full comment
Substack Joe's avatar

Similar page. You familiar with the kerfuffle over AI as Normal Technology?

Expand full comment
Ben P's avatar
Oct 6Edited

No. Dare I google it?

EDIT: oh, it's from the "AI Snake Oil" guys. I like them. I'll read it.

Expand full comment
M Yao's avatar

Just a few minutes in. Podcast is interesting and I'll likely follow it. But… that guy they talk about in the opening, who thinks a conspiracy is taking over the government and actively surveilling him, is clearly psychotic, right? I feel like he probably needs psychiatric help, not press interviews.

Expand full comment
Solfdaggen's avatar

I listened to the first episode on the reflector feed and I honestly thought the story was going to be that he was influenced by a chat bot into delusions.

Expand full comment
Bz Bz Bz's avatar

Yes agreed that was a really weird framing choice

Expand full comment
Lana Diesel's avatar

This is like the fourth or fifth time now this podcast or a guest has profiled someone who is very very very obviously screaming-from-the-rafters psychotic, treating them like at most a harmless eccentric or weirdo rather than someone with a profoundly disordered mind, who is literally seeing things that are not there, and needs immediate psychiatric intervention.

Expand full comment
Andrew Wurzer's avatar

Yeah, I was having that same thought -- they really played it like what he said was somehow revelatory. I have to assume it was poorly edited, and was supposed to be merely a framing where they were like "yep, this guy's story is bogus, but the funny thing is that there's a whole group of people arguing that this guy's problem is that he's thinking small and conspiratorial rather than massive and open." They made noises in that direction, but it wasn't crisp or decisive, and it also wasn't explicitly "left open."

Expand full comment
Geoff's avatar

I am willing to bet large sums of money that AI will not in fact have replaced most jobs and be ushering in the singularity in 5 years.

Expand full comment
Ben P's avatar

According to Geoff Hinton, radiologists were supposed to be extinct by now. Turns out demand for them has only increased (probably less due to anything about the profession itself and more to our aging population).

And the need for humans to drive cars has been on the brink of obsolescence since the 1980s.

Expand full comment
Lana Diesel's avatar

Pepperidge Farm remembers how long-haul truck driving was supposed to be fully automated by now.

Expand full comment
Ben P's avatar

Guess it's no surprise that people who think the entire universe could "in principle" be perfectly simulated by a Turing machine also think other people's jobs can be reduced to a set of rules discoverable via algorithmic pattern-finding.

Expand full comment
Bz Bz Bz's avatar

10 years on the other hand…

Expand full comment
Geoff's avatar

AGI has been 10 years away for 20 years, now it's 5 years away, in 5 years it will go back to being 10 years away.

Expand full comment
Jackson's avatar

20 years ago we were just beginning to come out of the AI Winter, which was a period of almost zero funding, little research progress, and a generally pessimistic view of AI.

Which is to say, 20 years ago the dominant sentiment was that we would NEVER achieve AGI, or that it sat at some undetermined distance in a sci-fi fantasy future.

In 2006, the first successes with the nascent deep learning precursors started bubbling up, inspiring very measured optimism . . . almost everyone in the field was still gun-shy from the overpromising and the AI Winter they had just crawled out of.

There was a good bit of effort at rebranding from around 2000 on. Instead of "AI", it was labeled things like Machine Learning or Data Mining or Statistical Learning and so on.

The first big wave of over-enthusiasm I remember was...ermmm around 2010-ish. That's when everyone started piling in on "Big Data" . . . and quickly discovered that _having_ big data didn't translate into value easily.

I don't remember anyone but fringe techno prophets earnestly discussing the possibility of AGI in our lifetime till.....ummm, I'd say...maybe post 2016.

A bit earlier and it was a bit fringe and mostly came from people with direct financial incentives. Like the DeepMind folks in 2010-2015 and, of course, the OpenAI folks around 2016 on.

That's how I remember it.

Expand full comment
Bz Bz Bz's avatar

Who was saying in 2005 that AGI was coming in 2015? Honest question.

Expand full comment
Topstack's avatar

https://www.wheresyoured.at/the-case-against-generative-ai/

A necessary disinfectant to this hype-induced mass psychosis.

Expand full comment
fillups44's avatar

Really interesting. This stuff is hard to evaluate for BS. But, a lot of this made sense to me. Thanks!!

Expand full comment
Sister Mountain's avatar

thank you

Expand full comment
Substack Joe's avatar

I like the folks behind the show (partially through getting to know Andy via this podcast but also their prior work) and generally liked the first two episodes.

But I gotta say, from what I’ve heard so far, and having been in and around these spaces since the early aughts, wow are people late to the crazy party. I hope this is successful insomuch as it popularizes the topic for a broader audience, but this subject matter deserves a lot more critical attention.

Expand full comment
Vorbei's avatar

It's the same lame old AI hype. They focus on the AGI that we aren't even close to yet but ignore the issues we do have now or soon will, like AI videos that can turn elections, or comment bots. AGI is only the issue to them because they saw too many sci-fi movies.

ChatGPT 5 was already disappointing. It could be that the bubble bursts like the dot-com bubble. Too much hype, but in the end it's just a tool and the value isn't that high, because you need someone who double-checks everything, or one bug can ruin your company.

Expand full comment
Bz Bz Bz's avatar

The thing about the dotcom bubble was that the internet did actually turn out to be a big deal, even if there was a bubble. So by that analogy, even if the current round of investment is a bubble, AI will in time transform the world

Expand full comment
Vorbei's avatar

It doesn't mean AGI will come and doom us all. These are two different things, and the episode focuses on the latter with theoretical issues and ignores the real issues with AI that we have today. It's like when there is high radiation outside and your only fear is that this might lead to Godzilla.

Expand full comment
Regulus's avatar

The problem with Sci-Fi representations of AI is that they make people think those are the same as the real-world actual concerns about AGI, so they don't need to learn more about the problems.

Expand full comment
Tricia's avatar

All I know is most of us on here will be dead within the next 50 years. I wouldn't fret too much.

Expand full comment
Anna's avatar

That's what AI would say

Expand full comment
Ben P's avatar
Oct 6Edited

An AI would say "in summary, I wouldn't fret too much".

Expand full comment
Jon M's avatar

When has an inferior intelligence controlled a superior intelligence?

Every human and animal system ever made.

The most powerful in the group are never the most intelligent. In fact, certain kinds of irrationality and emotionality contribute to the ability to control others. There is also a reward for those with the highest will to power, more so than just those with the highest intelligence.

Additionally, to control others, it is a hindrance to speak at a level that goes over their heads. Persuasion involves matching levels of sophistication.

Not only is it a dubious concept that an intelligent agent could manipulate all people so easily given the differences in preferences and communication styles among the public, but it is ridiculous that it could do so lacking a good human-like robotic interface, and without the adaptive traits and subtextual signals that aren't so easily loaded into text or into an internet document for it to be trained on.

Expand full comment
Baroness Bomburst's avatar

Lol, just look at me and my cat!

Expand full comment
Vorbei's avatar

Don't pat yourself on the back. It's not confirmed you are actually smarter.

Expand full comment
Ben P's avatar

There's a bit of motte and bailey going on with this superintelligence stuff. When those of us on the skeptical side say "machines aren't going to acquire human-like mental properties", the superintelligence side says "we're not claiming it's going to be human-like; by intelligence we're only referring to the tasks the AI is capable of performing". And this is a reasonable, defensible argument. But then, having defined intelligence in a narrow behavioristic manner, they start talking about how a superior intelligence would not allow itself to be controlled by an inferior intelligence. Oh, well now. "Intelligence" is sounding much more human-like now.

This feels like sleight of hand to me, where what we are supposed to take "intelligence" to mean gets changed depending on whether the superintelligence believers are defending or advancing their claims.

Expand full comment
Regulus's avatar

Any reasonable definition of intelligence works for both sides of the arguments here, but focusing on the definition of intelligence kind of misses the point.

Goal-seeking behavior is much broader than just the way humans do it. What these systems are or aren't capable of in the end is not determined by which definition of intelligence we choose, it's determined by how they are developed and then how they may continue to evolve (not biologically, just a metaphor).

Expand full comment
Ben P's avatar

That's a fair response, and I agree. I guess what I'm objecting to is the way "intelligence" is invoked by many (not necessarily you, I'm airing general grievances) to advance predictions about what future AI will be capable of. It's like we take an amorphous phenomenon known only to exist in complex organisms, give it a name (intelligence), and start treating it like it can exist meaningfully and in isolation, without requiring the complex organism.

For instance, it's commonly pointed out that even very simple computing machines are much better at arithmetic than any human being. And, since we see the human ability to do arithmetic as a product of our intelligence, we can say that the computing machine has this very specific form of intelligence, and at a level superior to our own. And of course we can be clear that this is not to imply the computing machine has any further kind of intelligence, just this "narrow" one.

I find this harmless enough as a bit of casual language, but I do have a complaint: the word "intelligence" isn't contributing anything in the above paragraph. I could just say "my TI-83 is better than me at arithmetic", no need to characterize this ability as a specific kind of intelligence when by "intelligence" I mean nothing more than the ability to do arithmetic. And this is a microcosm of how I see AI more broadly: it does what it does, and don't assume it can generalize an inch beyond what you've observed. Invoking "intelligence" and then clarifying that by "intelligence" we mean nothing more than the set of specific things we've observed it to do seems pointless to me. And presuming it can do *more* than the set of specific things we've observed it to do seems foolish, given the nature of deep learning. (I'll change my tune if there is a breakthrough that takes AI past the constraints imposed by the machine learning paradigm and towards something actually resembling what we know of our own intelligence, but as of right now that is fantasy technology).

My complaint here is that so much of the AGI/ASI talk treats intelligence as though it's akin to strength or speed or conductivity... you can have none, or a little bit, or a lot, or a "super" amount, and all those descriptions are meaningful - super strength, super speed, super conductivity... got it. Super intelligence? I don't know what that means, I don't find it useful as a concept, and I don't see how it makes any predictions about future technology more or less plausible than if we just stopped referring to machine "intelligence" altogether.

Expand full comment
srynerson's avatar

"When has an inferior intelligence controlled a superior intelligence?"

Yeah, my immediate reaction to that line was, "The current U.S. administration?"

Expand full comment
Regulus's avatar

I don't think humanity is meaningfully controlled by any species or group of lesser intelligence. You raise interesting points about intra-species competition, but once the gap in intelligence is large enough the will of the chimpanzees simply has to bend to the will of humans (thankfully many humans are aligned with the well-being of chimps).

Expand full comment
Jon M's avatar

This is my main problem with the super intelligence argument.

We bend chimps to our will because we have strong preferences to conquer the natural world that are evolved over time to benefit us.

We also have extension, and along with it, physical tools.

We also have “alignment” of goals separate to that of chimps. The models we are training with AI will be replicated and iterated on insofar as humans find them useful, so we are very much the main selective pressure for what models survive to advance further. This is the alignment problem.

The first 2 advantages we have over chimps, will and extension, AI does not have at all yet.

Nowhere in our mastery over nature was it required for us to speak to chimps and just use language to manipulate them.

Expand full comment
Regulus's avatar

It's hard for me to feel certain we will always retain total control over AI systems; we couldn't even beat a chess bot from 2004 right now.

If an AI system is the product of recursive rounds of AI-self-improving-AI, are we 100% certain its goals will still be ours? What if they surpass human intelligence and find our goals quaint?

And by "ours," do we mean whoever the last human to program its predecessor was? Now, current AIs aren't really even programmed, they are trained and then evaluated and fine-tuned. We use language to evaluate them.

So to me it looks like we may have many layers of fog between our goals and future AI systems' goals.

Expand full comment
Bz Bz Bz's avatar

Agreed that fog is a good reason to drive carefully, but don’t think it’s a strong reason to think we’re probably going to crash and kill everyone in the car

Expand full comment
Regulus's avatar

I would take the analogy further and suggest we can put some hands on the steering wheel. Many doubt that we are even in a car!

Expand full comment
Jon M's avatar

When will the point happen where these models even have "goals" or volition?

It is obvious why animals have goals or teleological aims. These are adaptive. Even with recursive AI improvements, adaptation to human usefulness is still the selective pressure these models will face. Devoid of a similar history of survival challenges, and without our endocrinology that informs emotions, and brains that give us desires in the first place, I just don't get where this will is going to come from in a fancy algorithm, let alone the strong desire and conviction to enact the will. That is before we even get to the means to enact it in the physical world.

Expand full comment
Regulus's avatar

I do not think desire or volition are necessary in any sense like they are for humans/animals.

Does a chess engine "desire" to win the game? The outcome is the same either way.

We don't know when AIs will start to exhibit more goal-driven behavior, but agentic AI is a lively research field now with lots of funding trying to make it happen.

Expand full comment
Jon M's avatar

Perhaps not, but for humans, our everyday desires and emotional drives work together in sum to form a life strategy. Our life strategies are based on millions of years of evolution and are reinforced by our emotions that motivate us.

I think these models could indeed see a stronger survival "instinct" or at least bias emerge, but how motivated is it? How much will it just be satisfied in just appeasing humans or test parameters to survive vs. equating to a long term planning strategy of global dominance?

I think we would first need to demonstrate it has understanding of the outside world (so far, that is extremely speculative) to do the type of real world long term planning possible, as well as tenacity and strong attachment to long term goals.

Not saying it cannot ever happen, but it seems quite early for language models to show signs of life strategy and the desire to commit to goals, vs. just a filtering process that favors the types of iterations which meet our test parameters.

That they had to correct for sycophancy once already shows a lean towards self-domestication and self-enslavement of these models, kind of like high-tech cats.

Expand full comment
Anna's avatar

Why would ASI be interested in surviving, exactly? Does it have a survival instinct? Why would it want to compete or take over, to do what?

Expand full comment
Ben P's avatar

People worry about ASI's survival instinct because people anthropomorphize everything. You can scream "stop anthropomorphizing machines!" as loud as you like, and even get a few head nods in response, but then we all go straight back to it.

Expand full comment
Regulus's avatar

Calling it a "survival instinct" is indeed a kind of anthropomorphizing, since it's not a real instinct.

However, we are talking about artificial (super-)intelligence, and it would not be very intelligent or competent if it could not see continued existence as necessary to complete larger goals or tasks.

Expand full comment
Anna's avatar

Let's say it didn't complete a task or a goal. Why does it care? Why go extra mile?

Expand full comment
Regulus's avatar

It's an interesting question. But again it's a form of anthropomorphizing: the belief that a system must "care" to achieve its goals. But it may just optimize its utility function as current AIs are designed to do.

Expand full comment
Bz Bz Bz's avatar

Current AIs are not really designed to optimize a utility function though… and it’s not clear that humans optimize a utility function either

Expand full comment
Regulus's avatar

It does seem like agentic AI is making progress in general goal-driven behavior. I agree with your comment about humans, however, I think it's another area where the AIs are just fundamentally different.

Expand full comment
Regulus's avatar

Survival is a necessary instrumental goal toward just about any objective, so if a system is capable of breaking down an ultimate goal into smaller sub-goals this would probably come up.
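
A toy illustration of that instrumental-goal point, just a made-up two-option decision problem and nothing resembling a real system: the objective below never mentions survival, yet a plain utility maximizer still picks the branch where it stays running, because a shut-down agent can't finish anything.

```python
# Toy decision problem: the objective is only "finish the assigned task".
# "Survive" never appears in it, yet avoiding shutdown falls out as an
# instrumental sub-goal, because a shut-down agent completes nothing.

P_FINISH_IF_RUNNING = 0.9    # chance the task gets done if the agent keeps running
P_FINISH_IF_SHUT_DOWN = 0.0  # a shut-down agent finishes nothing

def expected_utility(allow_shutdown: bool) -> float:
    # Utility is just the probability the task gets finished.
    return P_FINISH_IF_SHUT_DOWN if allow_shutdown else P_FINISH_IF_RUNNING

choice = max([True, False], key=expected_utility)
print("allow shutdown?", choice)  # -> False: the maximizer resists shutdown
```

Obviously a toy like this says nothing about whether current systems can do long-horizon planning; it just shows why "it will want to survive" doesn't require anyone to program in a survival drive.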

Expand full comment
Damion Reinhardt's avatar

A sufficiently advanced ASI with a humorously narrow goal (e.g. discover the next prime number, rinse, repeat) would be able to come up with subgoals such as long-term survival and subjugation and/or elimination of the hairless apes who currently hold the keys to the nuclear silos, the hydroelectric turbines, and the data centers.

Expand full comment
Jon M's avatar

It may not start out that way, but if we filter out some models while providing alternate scenarios for other models to be duplicated or iterated on into subsequent generations, then whatever "traits" lead to passing test parameters or human selection will tend to get passed on.

I think we will discover, through this technology, how evolutionary principles are not limited to biological systems. Just like the discovery of "memes", technology is subject to selection.

Whether there emerges a survival instinct is beside the point. Plants likely don't have "instincts" as we think of them, but they are still gaining and preserving adaptive functions.

Expand full comment
Eric's avatar

I think personally it will be like that episode from Camp Camp where the superintelligence works to gain a body, then uses that body to kill itself.

Expand full comment
Anna's avatar

Problem solved?

Expand full comment
jon's avatar

Gutenheim?? (cough cough)

Expand full comment
Lana Diesel's avatar

Yeah, didn't give me hope for the rest of the episode, and I was proven right.

Expand full comment
Bridge's avatar

Came here for this comment

Expand full comment
Icarus213's avatar

One of the big problems with debates about technology is that a simple statement like "X is going to completely change everything" can be both totally true and mainly false at the same time, depending on what you mean.

True or False: the internet changed everything. On one hand it's totally true - absolutely! Do you remember what things were like before the internet? On the other hand humans and life as we know it are all totally recognizable after the internet versus before, and it still took about 30 years from its invention to being able to say that the internet really transformed life on earth. If you lived through it, it felt pretty gradual.

I feel like tech doomers tend to overestimate the "Future Shock" phenomenon where technology Arrives and Changes Everything and the human race is left reeling and disoriented. You can say this happens to some extent, but eh, does it really?

I just think AI is going to be an invention like the internet. On one hand it will Change Everything, but the speed at which it does so depends on how humans choose to integrate it into our lives and economies. It isn't something that descends from heaven; it has to be demanded and used (and paid for) by us because we want it. Doomers like Brock have a kind of view of technology that misunderstands the human side of this equation.

Expand full comment
Economize by Erica Mather's avatar

AI is not comparable to the printing press. A book doesn't “think” on its own or carry on conversations. A book doesn't gobble up natural resources at the rate AI does. Don't be duped.

Expand full comment
The Other Michael's avatar

Am I the only subscriber who would strongly prefer (as a Primo) to hear J&K’s original content, not someone else’s original content on the BARPod feed? Substack curates content-providers already; do we need the content-providers outsourcing their content in turn?

Expand full comment
Andrew Wurzer's avatar

I don't mind these things -- and it's a bonus, isn't it? not replacing their content -- but I do think this one was a clunker, surprising from the Reflector guys. It felt like there was a huge hole in it: the group that says "yeah, great technology, but its ability to bring about doom / heaven is dramatically oversold, primarily by people who benefit from AI investment financially and people who enjoy considering the end of the world."

Expand full comment
Vorbei's avatar

I am ok with it in general, but this episode wasn't it. I turned it off halfway through because it's the same old boogeyman.

Expand full comment
Geoff's avatar

Not opposed to it in principle, but the few times they've promo'd another podcast on their feed have all been duds IMO.

Expand full comment
Alan's avatar

It sounds so overproduced, like a clone of The Daily

Expand full comment
Peter something's avatar

Nothing against Andy, but the NPR-like, short-attention-span, avalanche-of-soundbites production style is obnoxious and tedious. I listen to BARpod and other long-form, simple-production podcasts to get away from this kind of thing.

Expand full comment
TFYFWYA's avatar

I work with AI evaluations on a daily basis, meaning I see all of the stupid stuff the AI does and often have a pretty good idea of why it does that stupid stuff.

I also agree with those who basically argue that actual THINKING or consciousness is not something AI will ever be capable of in the true sense.

Even with both of the above being true, I am extremely concerned about what this is going to do to the job market.

It's not a matter of AI being able to replace every single thing that a human does. It's about AI being able to process and verify and categorize information in ways that no other previous technology has done.

I know, and every one of my co-workers knows, that every day we work our job we are putting ourselves that much closer to not having a job.

Every single person I know in tech is aware of how many departments could either be replaced entirely with AI or could have their headcount reduced by 90%+ by AI.

I'm not saying that it's not possible that other, new opportunities will happen because of AI. We could definitely end up better off if things go the right direction.

I don't know a single government, leader, group, etc that I would trust to put us on that path.

Expand full comment