
We’re Already Cyborgs, We Just Need a Better Interface

AI keeps the techies awake at night. Bill Gates, Elon Musk, Sergey Brin, and even the late Stephen Hawking have expressed their concerns about the exponential growth of intelligence in AI. Broadly speaking, there are two types of AI — artificial general intelligence (AGI) and narrow AI — and both carry their own risks.

AGI refers to an AI that can learn and understand any intellectual task that a human can. We’re still a long way from AGI, with many experts predicting it’ll take decades, if not centuries. The existence of an AGI could pose a sudden, existential threat to humanity regardless of our attempts to constrain it. In fact, those who have read Nick Bostrom’s excellent book Superintelligence will know that there are very few scenarios in which an AGI wouldn’t lead to the extinction of humanity.

Narrow AI is what we currently have. It’s an AI that vastly outperforms a human at a narrowly defined task. Examples include the Google Search algorithm, Alexa, and the algorithms that drive a Tesla. While we’re quite familiar with this type of AI, it carries its own risks too. Already, narrow AI is being used to guide drones in autonomous warfare and to manipulate elections through fake news on social media. The pace of progress in this type of AI only exacerbates these dangers.

Drone warfare poses an ethical dilemma. Should AI in a drone be able to pick its own target?

One of the ways to avoid the dangers of AI while also taking advantage of its increasing intelligence is to merge with it. This sounds more drastic than it actually is. After all, aren’t we all cyborgs already? There’s no denying that our smartphones and laptops have become extensions of ourselves. Society at large is already inextricably linked to technology.

What needs to change is the interface that we use to communicate with AI. Currently, we interact with our phones using two thumbs. We type on keyboards with ten fingers at a speed of forty words per minute. This severely limits the speed with which we can access and use technology. It’s like trying to read a book with one eye closed and the other only half open. We need to increase our technology bandwidth to take full advantage of AI.
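To put a rough number on that bandwidth (the figures below are back-of-the-envelope assumptions for illustration, not measurements): forty words per minute works out to only a few dozen bits per second, many orders of magnitude below what even an ordinary network link can move.

```python
# Back-of-the-envelope estimate of typing "bandwidth".
# Assumptions (mine, not the article's): ~5 characters per word and
# ~8 raw bits per character; the true information content of English
# text is even lower, around 1-2 bits per character.

WORDS_PER_MINUTE = 40
CHARS_PER_WORD = 5      # assumed average word length
BITS_PER_CHAR = 8       # raw encoding, ignoring redundancy

chars_per_second = WORDS_PER_MINUTE * CHARS_PER_WORD / 60
typing_bits_per_second = chars_per_second * BITS_PER_CHAR

print(f"Typing: ~{typing_bits_per_second:.0f} bits/s")  # roughly 27 bits/s
print("Gigabit link: 1,000,000,000 bits/s")             # tens of millions of times faster
```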

BMIs Increase Our Bandwidth

Brain-machine interfaces (BMIs) are the solution to this bandwidth problem. A BMI establishes a link between our brains and an external device. Eventually, a BMI will allow us to communicate directly with AI without limitations, while having direct access to its superior intelligence. The ultimate brain extension.

Control machines directly with your brain

Of course, we’re still a long way from such a device. Creating a BMI requires us to understand how we can get info out of the brain and into the brain. The former means that we should be able to accurately and instantly record the activity of billions of neurons, while the latter means we should be able to stimulate the right neurons in such a way that it produces the desired action.

As you can imagine, that’s a hugely complex exercise. Establishing a colony on Mars is peanuts in comparison. But, considering we’re talking about negating an existential threat to humanity and creating a society that’s exponentially more powerful than anything the world has ever seen, it’s worth undertaking.

WaitButWhy has a fantastic primer on the brain that explains why creating a non-invasive, accurate BMI is so difficult. However, don’t be fooled into thinking BMIs don’t exist yet. They do, and they’re very useful. In fact, you might have already been in touch with one (no pun intended), as an EEG is an example of a BMI. It records electrical activity as it happens across the different regions of the brain, and it’s used to detect and monitor various medical conditions.

An EEG records electrical brain activity as it happens

The problem with an EEG is that it isn’t spatially accurate. While it can tell you roughly which regions of the brain are lighting up, there’s no way an EEG can tell you which specific neurons are firing.

If we want a BMI that allows us to merge with AI, it’ll need to be at least two things: spatially accurate, in the sense that it can record and stimulate very small regions of the brain, and temporally accurate, in the sense that it can record and stimulate those regions instantly.

Additionally, in an ideal scenario, the BMI should be wireless, work across the whole brain, and require surgery that isn’t too invasive (i.e. that doesn’t involve drilling a huge hole in your skull). But for a first prototype, spatial and temporal accuracy will already go a long way.

Elon Musk’s Neuralink

It comes as no surprise that Elon Musk is trying to solve this incredibly complex problem. In 2016, he founded Neuralink, a company dedicated to creating a functioning, wireless, high-bandwidth BMI that will eventually allow us to surf the waves of AI intelligence. Neuralink unveiled its plans on the 16th of July 2019.

They’ve created a device that’s able to implant a thousand times more read-and-write electrodes into the brain than the next best device available. They’ve also created a robot that’s able to quickly and precisely place those very small electrodes into the brain.

The applications of this device, and indeed the likely way the company will be monetized, will be medical at first. Their BMI will help quadriplegic and paraplegic patients regain some functionality through prosthetic limbs connected to the Neuralink BMI. Imagine moving a robotic arm or leg just by thinking about it. This is already possible, but Neuralink’s device should help patients do so faster and more accurately.

Indistinguishable Worlds

Unsurprisingly, VR companies are interested in BMIs. After all, VR is all about immersion, and the higher the bandwidth between us and a virtual world (i.e. the more faithfully our virtual actions mirror our real-world actions, the lower the lag, and so on), the more immersed we’ll feel.

During the Game Developers Conference in San Francisco in March 2019, Mike Ambinder, Valve’s in-house psychologist, spoke about the possibility of using BMIs in VR headsets. More specifically, he spoke about using EEGs to better understand how a player is feeling during a game. This, in turn, could allow developers to create games that respond to the gamer’s biofeedback.

Mike Ambinder pretending to drill into Gabe Newell’s brain

For example, an EEG can tell you whether someone is scared, angry, or happy. When a VR-EEG horror game notices you’re not all that scared, it could ramp up the intensity. Alternatively, if it notices you’re scared to the point of quitting, it could introduce a gentler scene.
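As a thought experiment, here’s a minimal sketch of what such a biofeedback loop might look like. Everything in it is assumed for illustration: the EEG-derived fear score scaled to a value between 0 and 1, the thresholds, and the way intensity is adjusted. None of it is based on Valve’s or anyone else’s actual implementation.

```python
import random  # stands in for a real EEG pipeline in this sketch

# Hypothetical thresholds; a real system would calibrate these per player.
TOO_CALM = 0.3
TOO_SCARED = 0.8

def read_fear_score() -> float:
    """Placeholder for an EEG-derived fear/arousal estimate in [0, 1].

    A real pipeline would filter the raw signal, extract features, and
    run a classifier; here we just return a random number.
    """
    return random.random()

def adjust_intensity(intensity: float, fear: float) -> float:
    """Nudge the horror intensity toward a target level of fear."""
    if fear < TOO_CALM:
        intensity = min(1.0, intensity + 0.1)   # player seems bored: ramp up
    elif fear > TOO_SCARED:
        intensity = max(0.0, intensity - 0.2)   # player may quit: ease off
    return intensity

# Toy game loop: sample the (fake) fear score once per tick.
intensity = 0.5
for tick in range(10):
    fear = read_fear_score()
    intensity = adjust_intensity(intensity, fear)
    print(f"tick {tick}: fear={fear:.2f} -> intensity={intensity:.2f}")
```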

While this is all still highly speculative, it’s not technically impossible. There’s no reason to believe we won’t have progressed toward scenarios like this ten years from now. In fact, if we peer even further into the future, increasingly advanced BMIs paired with hyper-realistic VR could create worlds indistinguishable from real life.

Westworld explored the topic of hyper-realistic, virtual worlds

All the progress we’re making in these areas makes the simulation theory increasingly plausible. After all, if it’s technically conceivable to create hyper-realistic other worlds, how do we know we’re not living in a simulation right now? How do we know that everything around us is real and not created by an advanced race of aliens or humans?

There’s no way of knowing the answer to this question, as the simulation won’t have glaring errors that give away what it really is. Unless, that is, its creators have intentionally left such errors in the simulation, for us to discover when we’re technically and psychologically ready to do so.

Well, I say “us”, but that implies that everyone in the simulation is sentient. There’s no reason to believe that either. The simulation’s creators might well have created it to occupy a single human being, for reasons we can’t understand. Everything and everyone else could be programmed to create the semblance of a real world. The more that human interacts with someone they think is also a human being, the more realistic the simulation would make that person.

It doesn’t really matter, though. Our definition of reality isn’t attached to truth, but to experience. If what we experience feels real in every aspect, then that’s our reality, whether the world we live in is simulated or not and whether we’re all algorithms to serve one human being or not.