Plato Meets Artificial Intelligence – And Vice Versa (A Fantasy) 

If the quest for artificial general intelligence were to lead ultimately to a digital “philosopher-king,” as its enthusiasts hope, there would be a lot of devilish details to resolve, and a whole lot of politics!  And one fundamental problem.

“Quis custodiet ipsos custodes?” – who will guard the guardians?  How can we achieve, and retain, government and political leadership that is devoted to the public interest and the common good?  This has been a fundamental, often intractable problem for every organized society, and it is what the ancient Greek philosopher Plato sought to address in his ageless dialogue about politics, the Republic, more than two thousand years ago.

As I noted in an earlier blog posting, Plato’s “philosopher-king” prescription was rooted in his conviction that objective knowledge is the key to good government and a good society.  In Plato’s words, “ignorance is the ruin of states.”  The challenge, then, is how to design an enlightened and virtuous system of government.  Plato’s solution was to create a class of carefully chosen, carefully nurtured, selfless “guardians,” one of whom would ultimately be selected to serve as the absolute (albeit wise and just) monarch.  In Plato’s view: “Until philosophers are kings, or the kings and princes of this world have the spirit and power of philosophy, and political greatness and wisdom meet in one… [states] will never have rest from their evils.”

To support his benevolent dictatorship model, Plato also proposed certain institutional changes designed to prevent personal conflicts of interest among the guardians.  His philosopher-kings would be reared and trained communally; they would not be allowed to hold property; and they would not be permitted to raise families of their own.

Plato ultimately came to realize that his model of an omniscient, omnipotent philosopher-king was an unattainable ideal (not to mention a potential cover for cynical, self-serving demagogues), and in his last work, the Laws, he proposed what he called a “second best” system – mixed democratic government under the rule of law.  We have been struggling ever since to make his more realistic model work effectively.

Now, as we hurtle into the brave new world of artificial intelligence (AI), a number of scientists are pursuing the dream of an all-knowing, all-wise, continuously improving artificial general intelligence (AGI).  It would have digital access to all of the world’s (recorded) information, and it could think and solve problems far beyond human capacities – perhaps a million times faster.  One commentator suggested that AGI might ultimately be able to do (mentally) in one week what a human would require 20,000 years to do.  Will we then surrender control over our future to an increasingly autonomous AGI philosopher-king, or an all-powerful digital God?  Does this mean that Plato’s utopian dream of a just society governed in the public interest might be an attainable ideal after all?  Or could it instead lead to an Orwellian nightmare, perhaps even a “digital apocalypse” and our destruction as a species?  It is fair to say that this issue currently represents an enormous political black hole, and all of us could get sucked into it.

Only very recently have some of the leaders in the development of AI begun to think about the ethical and political implications, and about Plato’s deep dilemma – how do we control the controllers?  Can we avoid a global version of HAL, the runaway rogue computer in the classic science fiction movie “2001: A Space Odyssey”?  Equally important, whose goals and values will be doing the controlling?  Who will decide the matter?  Just consider who is currently paying for most of the AI research and development – governments, military organizations, and private corporations.  We do not now have a political framework that can act for the species as a whole, much less the technical means to ensure human control over the increasingly autonomous “deep learning” systems that are now coming online.  And how will our digital philosopher-king impose its rule and gain compliance?  Is it likely that we will all willingly submit to a benevolent AGI dictator?  Especially if it were made in China?  Or Russia?

Just as Plato ultimately had to accommodate the enormous complexities – and perversities – of the real world, so the AI promoters will need to engage with the enormously varied needs, wants, and personal goals of nearly eight billion individual humans who are (currently) organized into some 195 separate countries, living in many different natural environments, with a great variety of economic and political systems and religions.  Not to mention having to deal with their many conflicts and tribal rivalries.  Plato came to recognize that no philosopher-king could know everything, much less accommodate everyone’s interests.  This issue will become ever more important as we confront the relentlessly increasing challenge of global warming.  What the AI pioneer and futurist Ray Kurzweil dismissively calls “tail risks” will in fact become hurricane-force headwinds.

Consider this quandary.  A digital philosopher-king might be programmed responsibly by its designers with a basic understanding that biological survival and reproduction are universal human values, and a prerequisite for any other social goals.  However, it will also soon learn about evolution and the basic evolutionary principles of competition and natural selection – survival of the fittest.  How will this information be used?  Indeed, some of the AI optimists seem oblivious to the potentially convulsive threat of climate change, and to the recent emergence of a global environment in which the leading countries seem to be circling the wagons against a rising wave of desperate climate refugees.  How will AI be used in this zero-sum world?

Perhaps, instead of an all-powerful AGI philosopher-king, we should envision an “augmented” democracy – a global system of governance for the common good under the rule of law, with AGI as a digital servant rather than the other way around.  This is what I propose in my forthcoming book, Superorganism: A New Social Contract for Our Endangered Species.  We must think outside the box, because the future lies outside the box.

Peter Corning

Peter Corning is currently the Director of the Institute for the Study of Complex Systems in Seattle, Washington.  He was a one-time science writer at Newsweek and for many years a professor in the Human Biology Program at Stanford University, where he also held a research appointment in the Behavior Genetics Laboratory.
