Beyond Artificial Intelligence: The Bionic Brain

One of the great unknowns in our rush to develop artificial intelligence (AI) is how to control it for the ultimate benefit of humankind, rather than having it become a destructive force. There is no shortage of dark future scenarios these days: runaway rogue computer systems like HAL in the famed science fiction movie “2001: A Space Odyssey”; “killer robots” like some of the latest military drones that can act autonomously; or the very real possibility that a horrendous mistake could be made, without our knowledge, by “machine learning” and “deep learning” systems that are increasingly independent.

A number of AI theorists have recently been calling for developers to step back and create a set of ethical rules and better safety-testing protocols. Others advocate programming ethical values into our increasingly powerful AI systems as we go forward, to ensure that they cannot act against our best interests.

Well and good, but as I noted in a previous blog item about the future of AI, the biggest problem we face is us. How do we “control the controllers,” as Plato put the question more than 2,000 years ago in his classic work on government and social justice, the Republic? How do we prevent people (and governments) with sinister or self-serving purposes from exploiting AI against us? Indeed, we are already seeing some of this in countries with authoritarian regimes.

Plato’s admittedly utopian idea was to design a political system that could carefully nurture and train “philosopher kings” who would act selflessly to serve the public interest and the common good. (Plato’s more practical “second best” alternative was what we now call “the rule of law” and “mixed government,” in which all of the various interests are empowered, represented, and constrained.) What could be termed “Plato’s conundrum” haunts us to this day, and it is by far the greatest danger that AI poses for humankind.

Elon Musk’s latest venture, embodied in a new company he has called “Neuralink,” may unwittingly have provided an answer – a solution that is wildly speculative, I’ll admit, but not much more so than what Musk himself has in mind. So please do hear me out.

Brain implants are nothing new. For instance, implants that can mitigate the symptoms of Parkinson’s disease have been in use for many years; it is reported that some 100,000 such devices are in use today. There is also currently active development of implants that can aid bodily movement, speech, hearing, and the like. What Musk envisions is a new generation of implants that would enhance various normal brain functions, such as vision and communications, and even boost intelligence in ways that would enable us to interface better with AI systems. As one prominent neuroscientist put it, this is “a big jump” from where we are now, but it appears there is some serious money backing it.

Here’s an even bigger jump – a great leap. OK, a fantasy (and with no money backing it, needless to say). Imagine the development of a high-powered ethical chip embedded with the Ten Commandments and perhaps another 10,000 “updates” – everything from don’t take bribes to don’t cheat on your final exams – that would impose an array of behavioral imperatives on all of us. Call it the “Moses chip” – a fittingly miniaturized and improved version of the original stone tablets. Granted, the first few commandments might be a bit problematic (and dispensable) for many of us, but how about thou shalt not kill, thou shalt not commit adultery, thou shalt not steal, thou shalt not bear false witness (aka lies)? That’s a good start. And how about the Golden Rule (“do unto others…”), or thou shalt always repay a kindness or a debt and contribute your fair share (the universal cultural norm of reciprocity), or thou shalt eschew greed, or thou shalt not knowingly cause harm to others, or thou shalt obey all just laws, and so much more. Imagine a world full of people who spontaneously acted ethically.

If, in all seriousness, our greatest threat going forward into the coming dark age of climate change is each other, and the lethal harm that we can do to one another, then this is where we must begin in finding a solution. If an ethical Moses chip is at this point a far-off dream, then we will just have to do it the hard way. Each one of us will need to implant our own hand-made ethical chip and continue to oppose and block (and punish) those who do not behave ethically. Perhaps AI might even be able to help us with this existential ethical task. We’ll see.




Peter Corning

Peter Corning is currently the Director of the Institute for the Study of Complex Systems in Seattle, Washington. He was also a one-time science writer at Newsweek and a longtime professor in the Human Biology Program at Stanford University, where he also held a research appointment in the Behavior Genetics Laboratory.
