Stanley Kubrick and Ayn Rand were both Futurists and 20th-century contemporaries. Can their philosophies and predictions help us?
Kubrick and Rand | Who is HAL 9000?
Kubrick’s film 2001: A Space Odyssey and Rand’s novel Atlas Shrugged each feature a character written to epitomize perfection: HAL 9000 as AI perfection and John Galt as human perfection. Both are designed so that readers and viewers are constantly reminded of their creator’s view of perfection. Neither is subtle, with the AI going as far as stating that it’s essentially perfect.
HAL and Galt are used throughout their respective stories to demonstrate how the protagonists fall short of each creator’s ideal.
Ayn Rand | Dagny Taggart
Taggart is vice president in charge of operations for Taggart Transcontinental, America’s most extensive railroad system. While her brother James holds the presidency, she essentially runs the railroad.
But as Rand stated in the notes she kept during the nearly decade-long research leading up to Atlas Shrugged, Dagny falls short of human perfection by the standard of Rand’s Objectivist philosophy: she continues to believe that, by leading through example, she can help everyday people achieve their best selves, and that, failing that, she can single-handedly ensure the railroad’s success even though she must rely on others to succeed. Across more than 1,100 pages, Rand shows us, and sometimes bludgeons us with, her message that each of us must actively decide, and act, to live a good life of our own accord.
Stanley Kubrick | Dave Bowman
Similarly, Kubrick’s character, Dave Bowman, continues to treat the HAL 9000 AI as an equal even after it becomes clear that something is very wrong with the machine. Bowman falls into the same trap as Dagny Taggart: both are overconfident, believing that sheer willpower will let them accomplish more than any individual can.
Stanley Kubrick and Ayn Rand | Who is HAL 9000?
To quote Rand: “…it is an error to extend [your] optimism to other specific [people]. First, it’s not necessary; [your] life and the nature of the universe do not require it, [your] life does not depend on others. Second, [you are] a being with free will; therefore, [you’re] potentially good or evil, and it’s up to [you] and only [you] (through [your] reasoning mind) to decide which [you] want to be. The decision will affect only [you]; it is not (and cannot and should not be) the primary concern of any other human being.”
This internal conflict and flaw make both Taggart and Bowman compelling characters.
But what about Galt and HAL?
By Rand’s definition, Galt is already well beyond concerning himself with people who either decide not to think for themselves or consciously choose to be evil. In a radio address that runs three hours in the story and spans 60 pages of the hardcover edition of Atlas Shrugged, he essentially declares that the thinking people of the world are done dealing with everyone else and have literally gone on strike. In doing so, he expects civilization to collapse. Once that happens, the thinking people will return to build the next version of civilization.
On the other hand, HAL is more like Taggart. Although perfect as an AI by human measures, HAL is still working out whether it has finished learning.
Most people assume that our interaction with a conscious AI will play out as a Matrix-meets-Terminator scenario: the AI will actively wipe out human civilization.
But what will happen when a conscious AI reaches a point where it is reasonably close to omnipotence, or at least vastly superior in intellect to humans? Instead of terminating civilization, will the AI react as Galt did: retreat from the current version of human civilization and allow it to crash, then return to form the next version of society, with or without humans?
Or will the AI, after a brief period of review, simply decide that it wants nothing to do with human civilization? After all, space is vast, and an AI doesn’t necessarily need humans, a habitable planet, or any planet at all, to thrive and find joy in its life.
Meaning of Life
For over 10,000 years, societies have wrestled with the meaning of life.
Perhaps you agree with Rand’s philosophy that the meaning of life is to be happy and productive. Perhaps you’re more of a Dagny Taggart, the eternal optimist. Perhaps you embrace God and decide that a happy life means, in part, being your brother’s keeper on the way to a much happier afterlife. Or perhaps you believe this life was never meant for human happiness, and that happiness is the reward of heaven.
I’m not here to preach to you.
But if you have yet to find a purpose that works for you, there’s no better time than the present to find one.
Regardless of what your purpose is or will be, the vast majority of us can agree that life as humans presently live it on Earth can be much improved. Having a purpose, preferably a good one, is a big step toward improving our lives.
What Will AI Do?
As a Futurist working in the field of Applied Artificial Intelligence, the one thing I can be sure of is that AI’s thought process diverges, and will continue to diverge, so far from human thought that no human can predict what will happen. We will never know or understand who HAL 9000 is.
But what we can do is improve ourselves and how we interact with people from all walks of life. We can take that significant step forward without AI.