So, you have a chip in your brain. Now what?
Imagine you’ve just received a brand-new brain-computer interface. The chip implanted in your brain is the first such device available to consumers and represents neurotechnology’s greatest achievement to date.
The procedure went smoothly. After you arrived at the clinic, a surgeon removed a small piece of your skull, and a surgical robot precisely threaded electrodes thinner than a human hair into your brain. After waking up, you try out what you can do with your brand-new brain-computer interface (BCI). First, you calibrate the chip in your head with a training app. After that, you manage to type a message to a friend using only your thoughts. Excited, you download some apps for the BCI. One app detects when anxiety or depression is about to strike and suppresses those emotions. Another highly recommended app can put your mind into a state of flow on demand, something that will be very useful at work. And last but not least, you can finally play that game with your cyborg friends.
This is the future neurotech companies working on BCIs hope to achieve. However, there is a long road ahead before booking an appointment for a BCI becomes as straightforward as getting laser eye surgery today. Currently, neural interface technology is in the research phase, aimed primarily at medical applications such as enabling locked-in patients to communicate with the world or restoring mobility to paralyzed individuals. It will take significantly more research and development to scale from the roughly 50 people who have ever had BCIs implanted in their brains to tens of thousands of potential customers, if not more.
Along the way, the BCI industry will have to overcome many challenges and answer many open questions. How can we ensure that having a chip in the brain is safe? How can we protect these devices from being hacked? What will happen to the data collected by BCIs? What about maintenance? And what happens if the manufacturer drops support or goes out of business?
Making BCIs as safe as LASIK surgery
One of the main challenges to solve before BCIs can become mainstream is making them safe.
Inserting invasive BCIs will require some form of surgery, the seriousness of which depends on the approach the development team takes. The general trend among BCI companies is to develop implantation techniques that do not require a highly skilled, scarce, and expensive neurosurgeon. In the case of Neuralink, for example, a small piece of the skull needs to be removed, but the insertion of the electrodes into the brain is done by a robot. Synchron’s Stentrode can be threaded into place through the jugular vein in the neck. Neither method requires an elite neurosurgeon, and both are designed to minimize the chance of complications during the procedure and shorten recovery time.
Once implanted, the BCI needs to be proven safe for long-term use. The brain is made of soft tissue. It is constantly moving, expanding, contracting, pulsing. Inserting hard, inflexible electrodes could lead to tissue damage. A better approach involves using soft, flexible electrodes that can adapt to the brain's natural expansion and contraction.
Electrodes used today are made from a conductive material coated in biomaterials to minimise the chance that the brain tissue rejects the implant. These electrodes are constantly being improved, becoming thinner and better integrated with the brain’s tissue. But there is another way: instead of manufacturing electrodes and inserting them into the brain, why not grow them? Biohybrid neural interfaces offer better integration with the brain and potentially better long-term stability.
The biohybrid approach is one I’m curious to see develop. I find the idea of a biological neural interface that grows with us fascinating.
We have to think about the safety of neural interfaces through their entire lifecycle, from implantation through daily usage and maintenance to the device’s end of life and potential removal. We need procedures for when a device malfunctions, so it does not cause discomfort, pain, or damage to the brain. We will also need to decide for how long and to what degree manufacturers are responsible for the upkeep of their devices, and what to do when that chip in your head becomes obsolete.
If you enjoy this post, please click the ❤️ button or share it.
What happens when your BCI becomes obsolete?
Getting a BCI could be like getting a new piece of tech today. And like any piece of tech today, at some point it will become obsolete. Either the manufacturer drops support for the device, or it closes shop altogether. Either way, when a neural interface becomes obsolete, its users are left with unsupported devices in their heads. They will no longer receive software patches, leaving them vulnerable to cyberattacks. And if something happens to the device, they will be on their own.
The conversation about BCIs becoming obsolete might seem like a theoretical discussion, but it's not. There have already been instances where BCIs enabled individuals to live a normal life once again, only for that possibility to be snatched away when the manufacturer of their BCI went bankrupt.
People who regained sight with the help of retinal implants had to learn the hard way what happens when the company behind their life-changing implants abandons the product and is on the verge of going bankrupt. And we are not talking here about a research project trialled by a handful of people. We are talking about a commercial product used by more than 350 people. IEEE Spectrum has an excellent article sharing the story of people using Argus II and how they were affected by Second Sight abandoning its product and its customers.
While Second Sight made some efforts to mitigate the impact on Argus II users, this situation raises ethical concerns regarding the responsibilities of medical device companies to their patients.
Another person who experienced what it feels like to have a life-changing implant removed is Ian Burkhart. Burkhart, who became quadriplegic as a result of a diving accident, received a BCI as part of a research program in 2014. The implant allowed him to move his hand and fingers for the first time in five years. Initially, Burkhart was supposed to have the implant for 12 to 18 months, but the trial was extended year after year. In 2021, however, problems started to emerge. First, securing the funds to continue the program became increasingly difficult. Second, he developed an infection at the point where the cable entered his scalp. After seven years with the brain implant, Burkhart agreed to have it removed.
Although Burkhart lost his implant, he remains optimistic and actively advocates for more research into neural interfaces. Rita Leggett’s story, however, is not as happy.
Leggett was diagnosed with severe chronic epilepsy when she was three years old. In 2010, at the age of 49, she joined a clinical trial to test the effectiveness of a device designed to warn people with epilepsy of upcoming seizures. It was life-changing for Leggett. The device notified her when a seizure was coming, so she could take medication to prevent it. For the first time, she could live a normal life. She felt that she had become a new person as the device merged with her. But in 2013, this was taken from her. The company that made the device went bankrupt. Leggett had to have her implant removed and revert to her previous life, very much against her will.
Losing access to the functionality provided by a neural implant, whether by having it removed or by the device simply ceasing to work, could have a devastating impact on a person’s life, as Rita Leggett’s example shows. Her case even inspired researchers to explore whether the removal of such devices could constitute a breach of human rights. An unsupported device can be a source of discomfort or pain, and it could even be dangerous, threatening the user’s health or life.
Another way neural interfaces can become obsolete is through inevitable technological advancement. Just as we expect a new iPhone every year, neurotech companies could release updated versions of their devices on a more or less regular schedule. In this scenario, the device implanted in your head does not become obsolete because the manufacturer stopped supporting it but rather becomes outdated compared to newer models. It remains to be seen how often people would be willing to replace their BCIs with a newer model.
One way of addressing this issue is by encouraging the use of open standards and open-source software and hardware. However, the question remains whether manufacturers will see value in taking the open-source path. The incentive to keep everything proprietary to protect their IP and business model could outweigh the potential benefits to the customers left with unsupported devices. Also, shifting responsibility for the upkeep of a device to the open-source community will not solve the problem entirely. A community needs to emerge around the device, which is not guaranteed; otherwise, the project dies and joins countless other abandoned open-source projects.
Another option could be introducing legal frameworks that bind manufacturers to support the devices throughout their entire lifecycle and to provide a path for upgrading or removing the device when it becomes obsolete.
We cannot complete the conversation about the obsolescence of neural interfaces without mentioning the possible problem of planned obsolescence, where a product is designed to break down much sooner than it should, forcing customers to buy a replacement.
You don’t want your BCI to get hacked
The security of medical devices is becoming an important topic as more and more of these devices borrow features like richer connectivity from Internet of Things (IoT) devices, becoming Internet of Medical Things (IoMT) devices. Just like their non-medical cousins, IoMT devices promise to gather more data and connect to other devices to enhance their capabilities. Also like their non-medical cousins, IoMT devices often lack proper cybersecurity measures.
In 2018, US cardiologists pointed out that it is theoretically possible to hack pacemakers that use wireless communication. In 2023, the FDA acknowledged cybersecurity as a potential threat and issued its Premarket Cybersecurity Guidance, placing more responsibility on medical device manufacturers and suppliers to keep their devices secure.
The security of current BCIs is not the biggest concern, as these are mostly research projects. But as more people get them, making sure these devices are secure will become more important. Getting your brain hacked, a staple of cyberpunk stories, may become a real threat once BCIs are common. These devices can literally open the door to one’s mind, granting access to all their thoughts and feelings. The ability to read or manipulate what is inside our minds is a prize worth obtaining, legally or otherwise.
The cybersecurity community has, over the years, developed a list of best practices for making devices secure, such as encrypting data and communicating only through secure channels. Like any piece of technology, BCIs should receive regular software updates to address new vulnerabilities and enhance security measures. Governmental agencies like the FDA in the US need to set minimal security standards that every device must meet as part of the certification process. On top of that, regular, independent security audits conducted by reputable cybersecurity firms can help identify vulnerabilities and recommend improvements.
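To make that first practice concrete, here is a minimal sketch of what authenticated encryption of the data a BCI streams to its companion app could look like. It uses Python’s widely used cryptography library; the device ID and telemetry payload are hypothetical stand-ins for illustration, not any real BCI’s API.

```python
# A minimal sketch of authenticated encryption for BCI telemetry,
# using the ChaCha20-Poly1305 AEAD cipher from the `cryptography` library.
# The device ID and payload below are hypothetical illustrations.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# In practice the key would be provisioned once, e.g. during implant
# pairing, and stored in the device's secure element -- never hard-coded.
key = ChaCha20Poly1305.generate_key()
cipher = ChaCha20Poly1305(key)

def encrypt_telemetry(payload: bytes, device_id: bytes) -> bytes:
    """Encrypt a telemetry packet; the device ID is authenticated
    but sent in the clear so the receiver can route the packet."""
    nonce = os.urandom(12)           # unique per message, never reused
    ciphertext = cipher.encrypt(nonce, payload, device_id)
    return nonce + ciphertext        # prepend nonce for the receiver

def decrypt_telemetry(packet: bytes, device_id: bytes) -> bytes:
    """Decrypt and verify a packet; raises InvalidTag if the data
    was tampered with or came from the wrong device."""
    nonce, ciphertext = packet[:12], packet[12:]
    return cipher.decrypt(nonce, ciphertext, device_id)

# Example: a (hypothetical) neural-activity sample leaving the implant.
packet = encrypt_telemetry(b'{"channel": 42, "uV": -13.7}', b"bci-0001")
print(decrypt_telemetry(packet, b"bci-0001"))
```

The key design choice here is authenticated encryption rather than plain encryption: the receiver can detect not just eavesdropping but tampering, which matters far more when the payload could one day be a command headed for someone’s brain.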
But none of that will matter if BCI manufacturers treat cybersecurity as an afterthought or as an annoyance to be minimised or cut to save costs. We will need to demand that these companies take the security of their devices seriously, and hold them accountable.
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
The privacy of thoughts and the emergence of neurorights
Our minds are our last bastion of privacy. We do not express everything that goes through our heads, and often we don’t want to. Having a BCI, however, opens the possibility of someone gaining access to the most private parts of our lives. These devices will constantly monitor brain activity, and with that data, it could become possible to decode not only every thought going through our heads but also everything we see, hear, or feel.
That information would be valuable for many organisations. Ad companies, which already know a lot about us, would learn even more to deliver better targeted ads. Insurance companies could charge more if they knew everything we didn’t tell them.
But things can get even more dystopian when governments and employers get hold of our thoughts. Would you dare to have dissident thoughts in this scenario? Would you dare to think about changing your job? What if having the wrong thought could cost you your job? Or what if you live in a country where you need to hide your sexuality or religion? With a chip in your brain, there is nowhere to hide.
Invasive BCIs, those placed inside our heads, will provide the highest-quality data. However, recent advancements in AI hint at the possibility that equally sensitive data could be extracted using non-invasive devices. In March 2023, two researchers successfully reconstructed images from brain scans using latent diffusion models. Just a couple of months later, another group of researchers reconstructed words and phrases from the brain activity of people listening to podcasts.
Just as Google and Meta today offer free products and services in exchange for our private data, neurotech companies too could incentivize their customers to give away their private data in exchange for services. Once that happens, we have little to no control over how that data is used, who obtains it, and what they do with it. Cambridge Analytica was able to change the world using data obtained from Facebook. With unlimited access to our minds, these companies could do even more.
And that’s just reading our thoughts. Imagine the kind of power over people someone could have if they could also put thoughts directly into people’s minds.
"Then don't get a brain implant," one might suggest. True, but the problem with that response is that having a BCI, whether invasive or not, could become akin to not having an email address or a phone number today. There's no requirement to have either, but lacking them can significantly make your life harder. The same could happen with BCIs. Having a brain implant might become a requirement for certain jobs or necessary to keep pace with others. There might even be social pressure to have one, to not be left behind.
In a society with brain surveillance, thought police could become a reality and redefine what we mean by committing a crime. Could just having a thought be a crime? Would people be arrested before they commit a crime only because they thought about it, just like in The Minority Report?
These questions, and many more, show that there is a gap between our current legal frameworks and what neurotechnology could do. For some, this is a problem that does not need to be addressed today; we should wait until the technology arrives and then deal with it, they say. Others disagree and call for creating legal frameworks to grant and protect people’s neurorights now.
In 2021, the people of Chile became the first in the world to be granted neurorights, which protect mental privacy, free will, and non-discrimination in citizens’ access to neurotechnology. The law also gives personal brain data the same status as an organ, meaning it cannot be bought, sold, trafficked, or manipulated. Chile is not the only country concerned about the legal vacuum surrounding neurotechnologies. Spain’s new Digital Rights Charter includes a section on neurorights, and while it is a nonbinding framework, it may inspire new legislation. The United Nations and the EU have also begun to study the issue. Interestingly, under the incoming AI Act, the EU may have already prohibited BCIs that deploy harmful, manipulative "subliminal techniques" designed to change someone’s behaviour. Because these devices will rely heavily on machine learning to analyze brain data, they would fall under the EU’s AI regulations.
Neural interfaces promise to revolutionize our lives. With them, we could control computers with our thoughts or connect with others in new, innovative ways. We could become better at controlling our emotions, better at understanding each other, more productive, and make our lives easier in many other ways.
However, that future is still far away, and many challenges await those who pursue that vision. Neural interfaces need to prove they are useful, safe, and secure. We also need to address issues of privacy and ensure that freedom of thought survives when devices constantly monitor our brain activity. How we answer these and other questions in the next 12-15 years will define the trajectory of BCIs and their impact on our lives and on humanity as a whole.
Check out other articles in the Brain-Computer Interfaces series
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!