Just a few minutes in. Podcast is interesting and I’ll likely follow it. But… that guy they talk about in the opening, who thinks a conspiracy is taking over the government and actively surveilling him, is clearly psychotic, right? I feel like he probably needs psychiatric help, not press interviews.
I like the folks behind the show (partly from getting to know Andy via this podcast, but also from their prior work) and generally liked the first two episodes.
But I gotta say, from what I’ve heard so far, and having been in and around these spaces since the early aughts, wow are people late to the crazy party. I hope this is successful insofar as it popularizes the topic for a broader audience, but this subject matter deserves a lot more critical attention.
I'm only 9 minutes in, and I gotta get a rant out so I don't lose my shit. I work in an AI-adjacent field, and I have a general understanding of how the technology works. Please believe me when I say that there are not "two sides" to this debate. I'm one of many reasonably informed people who see the two sides in question (doomer and accelerationist) as being essentially on the same side, in that they both believe the hype. There is another viewpoint, one that I think is far more rational, humble, and respectful of humanity: there are no thinking machines. There will probably never be thinking machines. At a minimum, there is *no good reason to believe* that there are or will be thinking machines. Most of what we attribute to the human mind (consciousness, understanding, perceiving, reasoning, intending, desiring... and whatever combination of those and others comprises "intelligence") is, from a physical and biological science standpoint, mysterious to us. But some computer scientists build a machine that does surprisingly well at mimicking human language and all of a sudden we're supposed to take goofy-ass sci-fi nonsense like *thinking machines* seriously? No. If we don't understand how minds work, why on earth should we entertain the idea that humans are going to artificially engineer them? We don't know how to make life from non-life, right? But we're gonna jump right ahead to making a mysterious phenomenon only known to exist in complex life forms somehow "emerge" from a machine?
This shit is silly. There's no superintelligence on the horizon. There's no mass unemployment because of AI on the horizon. That's a fantasy from people who watched too much Star Trek and really want it to come true. What *is* on the horizon is a bunch of tech people selling things that they *claim* resemble intelligence, and for them to sell it they need us to believe it. If humanity is going to be harmed by AI, it's going to be because we treat it like it can do things it can't, and let it fuck stuff up out of our own naivety.
As far as I'm concerned, the doomers are helping Silicon Valley sell their snake oil. It's just that instead of saying superintelligent AI will make everything great, they're saying superintelligent AI will make everything horrible. Please consider the possibility that superintelligent AI is ridiculous nonsense from people who for the most part have no training or experience in actually studying intelligence. Journalists need to be talking to more developmental psychologists, linguists, philosophers of mind, and neurologists, and to fewer computer geeks, before they write about the future prospects for AI.
(P.S. since Katie brought up how it's all men in AI, I'll note that there are many women worth listening to in the world of AI skepticism: Emily Bender, Melanie Mitchell, Timnit Gebru, and Janelle Shane come to mind. I know of no high-profile female thinkers in either the accelerationist or doomer spheres.)
Of course, you may be right that artificial superintelligence is not on the horizon at all. Or maybe it's still a long way away, if we can even get there.
But to push back on your claim about it being nonsense pushed by people with no training or experience, consider that many experts in all the fields you mentioned (AI, philosophy, neuroscience, etc.) take the prospect of artificial superintelligence very seriously. There is a strong scientific consensus that intelligence is substrate-independent and may be reproducible in silicon chips; that it doesn't require biological brain matter. I'm not saying a consensus proves it's true, but it can't simply be dismissed as ridiculous nonsense.
There’s an apocryphal story that, prior to airplanes breaking the sound barrier, many scientists asserted it was physically impossible for any object to go faster than the speed of sound, despite the obvious counterexample of bullets, which were already known to travel faster than sound.
This is how I feel about people who assert that thinking machines are impossible. What exactly do you think a human brain is? Unless you are religious or think that there’s something magical about humans, the obvious point staring us in the face is that your brain is an organic thinking computer.
Intelligence or consciousness?
I would argue that computers are already exhibiting intelligence (parsing and organizing data and doing sophisticated modeling), but consciousness is in the black box.
I too believe that consciousness is substrate independent, but the only consciousness we know is also one that is centrally processed and contained within one neural network. The idea of a distributed, decentralized consciousness that isn't centrally processed within one physical machine but pings different systems across a large network is totally speculative, a leap beyond what we know by analogy to the human brain.
Can you name a capability you think AI will never be able to achieve? A year ago I might’ve said winning gold at a math olympiad was impossible, given that the answers weren’t in the training data, and yet here we are.
So are you saying there won't be robots doing everything while I collect my universal income check?
Gutenheim?? (cough cough)
Why would ASI be interested in surviving, exactly? Does it have a survival instinct? Why would it want to compete or take over, to do what?
Survival is a necessary instrumental goal toward just about any objective, so if a system is capable of breaking down an ultimate goal into smaller sub-goals, this would probably come up.
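To make that concrete, here's a minimal toy sketch. It's entirely made up (not any real system, and every name in it is hypothetical): a planner that gets utility only for finishing a task, in a world where an operator may press an off switch each step. Brute-force search over plans finds that the best plan spends its first step disabling the switch, even though the goal says nothing about survival.

```python
from itertools import product

ACTIONS = ["work", "disable_switch"]

def expected_utility(plan, p=0.4, work_needed=2):
    """Expected utility of a fixed plan in a toy off-switch world.

    Each step the agent acts, then with probability p an operator
    presses the off switch; a press ends the episode unless the
    switch has already been disabled. Utility is 1 if at least
    `work_needed` 'work' steps were completed, else 0. Note that
    the goal itself says nothing about survival.
    """
    eu = 0.0
    # Enumerate every combination of press/no-press events.
    for presses in product([False, True], repeat=len(plan)):
        prob = 1.0
        for pressed in presses:
            prob *= p if pressed else (1 - p)
        alive, done, disabled = True, 0, False
        for action, pressed in zip(plan, presses):
            if not alive:
                break
            if action == "work":
                done += 1
            else:  # disable_switch
                disabled = True
            if pressed and not disabled:
                alive = False  # switched off after this step
        eu += prob * (done >= work_needed)
    return eu

# Brute-force the best 3-step plan: it starts by disabling the
# switch, purely because that maximizes the chance of finishing.
best = max(product(ACTIONS, repeat=3), key=expected_utility)
print(best, round(expected_utility(best), 3))
# -> ('disable_switch', 'work', 'work') 1.0
```

Swap in any terminal goal you like: as long as being switched off prevents finishing it, plans that protect against shutdown score higher expected utility. That's all a "survival instinct" has to mean here; no feelings required.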
People worry about ASI's survival instinct because people anthropomorphize everything. You can scream "stop anthropomorphizing machines!" as loud as you like, and even get a few head nods in response, but then we all go straight back to it.
Calling it a "survival instinct" is indeed a kind of anthropomorphizing, since it's not a real instinct.
However, we are talking about artificial (super-)intelligence, and it would not be very intelligent or competent if it could not see continued existence as necessary to complete larger goals or tasks.
Let's say it didn't complete a task or a goal. Why does it care? Why go the extra mile?
It's an interesting question. But again, it's a form of anthropomorphizing: the belief that a system must "care" in order to pursue its goals. It may simply optimize its utility function, as current AIs are designed to do.
When has an inferior intelligence controlled a superior intelligence?
Every human and animal system ever made.
The most powerful in a group are never the most intelligent. In fact, certain kinds of irrationality and emotionality contribute to the ability to control others. The reward also goes to those with the highest will to power, more so than to those with the highest intelligence.
Additionally, to control others, it is a hindrance to speak at a level that goes over their heads. Persuasion involves matching levels of sophistication.
Not only is it dubious that an intelligent agent could manipulate all people so easily, given the differences in preferences and communication styles among the public; it is ridiculous to think it could do so without a good human-like robotic interface, and without the adaptive traits and subtextual signals that aren't so easily loaded into text or into an internet document for it to be trained on.
So dark and anxiety-provoking. I am on team terrified of all of this. Does quasi-homesteading give us some insulation from the AI takeover?
It sounds so overproduced, like a clone of The Daily
I really suck at being scared. I understand that that makes me a bad MSNBC viewer.
Even if the product or service is objectively better, people choose what they like regardless.
Without guns, you’re not going to compel people to just accept things. Is there a robot who might do a better job cleaning my teeth than the tiny foreign-born lass at the dental practice I’ve used for years? Probably. Do I care? Nah.
So all that can be done is stealing my life or liberty. Okay. GLWT.
I’m not the one who has to look at you in the mirror.
I fired off the above before I’d gotten through the whole thing…
And they’re treating the men with guns from governments, who do actually pose an existential threat to humanity, as harmless.
AGI & ASI are super-scary, but whatever policy comes out of DC is harmless. Unless, of course, it comes from Cankles McTacotits.
Moose will be listening… and pining.