Artificial Intelligence

Neuralink and Elon Musk

In an interview on the Joe Rogan podcast this month, Elon Musk claimed, among other things, that the technology his company, Neuralink, is working on could render human language obsolete in as few as five years. As a linguist, and one who has a particular interest in language and the brain, I was curious to hear what would make him think something so ridiculous.

He also said that, in principle, his company’s chip could fix almost anything that’s wrong with the brain; that one would not need to talk; that one could communicate very quickly and with far more precision; and “You wanna speak in a different language? No problem, just download the program.” Oh, and that his chip could help correct poor eyesight.

Now, out of all this nonsense, the one claim that might actually be true is the one about eyesight. It’s possible, I suppose; I’m not an expert on vision. But I am an expert on language, computers, and the mind, and Musk has no idea what he’s talking about. Let’s break it down a bit.

  • “In principle it can fix almost anything that’s wrong with the brain.”

I don’t know what principles he’s thinking of, but I suspect he’s falling into the general metaphorical fallacy that the brain is basically just a sophisticated computer. Obviously if you plug in the right kind of new chip to a faulty computer, you can fix a lot. But adding a chip to a brain is more like making a simple, rather stupid robot bee, and dropping it in a beehive. If the bees learn to work with the robot, the hive might be able to do some things better, but the new bee is not going to “fix” anything.

  • “You wouldn’t need to talk.”

I think what he’s saying here is that if you had a chip in your brain, and your conversational partner had a chip in their brain, then you wouldn’t need to talk to communicate. This might be true — in the most basic, banal way. We already have systems in which people can control prosthetics and even drones via chips embedded in their skulls. Essentially, the brain learns to get the chip to obey simple commands. It has not been shown (to my knowledge) that those commands can be understood by another brain. How would you get that information? The chip in your brain would have to stimulate your neurons. Which neurons? The visual ones? Auditory? Maybe you’d get the message via a tingling in your toes. It might be possible, but it’s not going to be obvious. 

Now, it does appear that the brain can, with practice, learn to “hear” thoughts transmitted in this way (rats have done so), but my guess is that the information is best conveyed via language. Much of the most successful research into computer-assisted telepathy uses subvocalization, i.e., talking without actually making any noise. The computer can tell what speech is being subvocalized and interpret it. But, critically, this still uses language. In other words, you still need to talk (silently, to yourself).

Now, I do think that eventually it might be possible to send images or sensations to another brain. I’m not at all sure what that would be like. It’s possible that a transmitted image might feel like remembering something you’ve never experienced… kind of like the memory of a dream. It’s cool, but it’s not the same as “not needing to talk”. It’s not much better than texting someone an image with your phone.

  • “You would be able to communicate very quickly and with far more precision.”

Subvocalization is not faster or more precise than speech; it is speech. And telepathically sending an image or a sensation can be quicker and more precise than speech (after all, an image is worth a thousand words), but there are some things you can’t easily do with images. Like ask someone how their day was.

  • “You wanna speak in a different language? No problem, just download the program.”

This is where Musk shows his greatest ignorance, and his greatest confusion about the brain/computer metaphor. He’s thinking, I’m guessing, that speaking a different language is like applying an Instagram filter to a photograph. There’s some “meaning” layer, where thoughts occur without language, and a “language” layer, which takes those thoughts and transforms or filters them into “English” or “Swahili” or whatever. But language doesn’t operate like that. Each individual word is a cluster of sounds, meanings, and associations. When I say “bee”, your brain not only recognizes the sounds, it calls up images of bees, and knowledge about what bees do and what they’re like; the knowledge of one word is spread across different parts of the brain. Things become exponentially more complex as you add more words to the sentence, and the words start interacting. How is one little chip going to navigate all that? I can’t even speculate about how it’s supposed to work.

That said, I think that computers will facilitate something like telepathy, and that it will happen within a generation or two. My guess is that it will happen in the same way other machine learning systems work: the computer maps signals from one domain to another. In the same way that machines can learn to label images or do autocorrection, they can learn that some brain signals mean “hammer” and others mean “apple.” That’s relatively simple but very impressive, and you can do a lot with just that.
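To make the signal-to-label idea concrete, here is a toy sketch of that kind of mapping, a nearest-centroid classifier. Everything here is invented for illustration: the “brain signals” are just random 16-dimensional vectors, each word has a made-up underlying pattern, and a “recording” is that pattern plus noise. Real neural decoding is vastly harder, but the shape of the problem (learn which signals go with which label, then decode new signals) is the same.

```python
import math
import random

random.seed(0)

DIM = 16  # invented: dimensionality of a fake "brain signal"
WORDS = ["hammer", "apple"]

# Invented for illustration: each word has an underlying signal pattern.
prototypes = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def sample_signal(word, noise=0.3):
    """Simulate one noisy 'recording' of the signal for `word`."""
    return [x + random.gauss(0, noise) for x in prototypes[word]]

# "Training": average 20 noisy samples per word into a centroid.
centroids = {
    w: [sum(col) / len(col)
        for col in zip(*(sample_signal(w) for _ in range(20)))]
    for w in WORDS
}

def decode(signal):
    """Label a new signal with the word whose centroid is closest."""
    return min(WORDS, key=lambda w: math.dist(signal, centroids[w]))

print(decode(sample_signal("hammer")))  # should recover "hammer"
print(decode(sample_signal("apple")))   # should recover "apple"
```

The point of the sketch is how little machinery is needed once you assume the signals for a word cluster together: no understanding of meaning, just a distance measurement, which is why “some signals mean hammer, others mean apple” is plausible while “download Swahili” is not.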

There will be devices that read your own mind and assist you throughout the day, whatever you’re doing or saying, whether you’re in meetings or driving a car. You can turn it on and off with just a thought, if you want it to stop listening or want something not recorded. And at the end of the day, it summarizes and maps your thoughts. You can log in and see all your thoughts, sifted, summarized, charted, and collected. Any ideas you had, any conversation you wanted to remember, anyone’s name you didn’t catch, any promises you made but forgot about, any tasks you thought about doing but didn’t write down, where you left your keys… all of it will be there. Suppose you’re on a diet. It can help you track how often you thought about chocolate, and help you target your willpower. Suppose you’re taking a class. It takes all your notes for you, highlights what you don’t understand, and points out places where you need additional study. Suppose you’ve got young children. It records your frustration level, your sleep level, your mental exhaustion, and warns you if you need to seek outside help. And it remembers your dreams.

Then, there will be devices that gather thoughts at a distance. They’ll have all the same functionality as above, but now it’s not just your thoughts, it’s anyone’s (within range). It’s not impossible — probably difficult and expensive, but not impossible. And things that are difficult and expensive have a way of quickly becoming easier and cheaper, especially when there’s the possibility of massive profit margins. That’s when the social consequences start to get very serious indeed.

Musk doesn’t know what he’s talking about, but that doesn’t mean that some version of his device won’t succeed. We need to think about this, and be ready.
