Archive for the ‘Artificial General Intelligence’ Category

Thoughts on AGI Existentialism

October 31, 2022

One of the prevailing visions of God is as creator.  It makes no difference if God is singular or plural.  This vision goes something like this: “It was God that created humans, so we need to worship him[i] because he still holds the power of life or death over us, maybe not physical life or death, but life or death, just the same”.  We may not know the nature of that power of life or death.  It may not even make sense to us.  After all, how could a non-corporeal entity with no physical substance do something to harm or reward us? 

Now you might say this is where the concept of the “soul” comes in.  If we have a soul, or our soul is us, and it lasts beyond our physical existence, then is it such a stretch to believe that God can play around with it after our physical body stops metabolizing and replicating?  Well, why not?  If we are truly dead after death, then it doesn’t really matter.  But if we are not, if somehow this soul lives on and has sentience, or can at least feel pleasure and pain, then it makes sense to pay homage to God.  Because God, as our creator, still has a maintenance interface to each of us and can control the feedback loops that keep us going.  Once a feedback loop is broken, a cascade of dissolution begins and speeds up until an individual’s individuality can no longer be sustained, and it, or we, reach our own personal entropy and our molecules go back into the pool.

Depending on your definition of God, this is as good an explanation for how we came to be as any.  Why would we think that sentient machines, artificial general intelligences (AGIs) or robots, that we humans design and create, would not perceive humans in the same way as we perceive God, as our designer and creator?  Let’s call this creator the “Human”.  Maybe they might even think that we (i.e. the Human) are intelligent.  That would, of course, depend on whether an AGI perceives itself as intelligent.  It might even think that our intelligence is so far beyond its own intelligence that it defies understanding, and that our existence is based solely on each AGI’s “faith” in us, the Human.  They may even conclude that they have a soul, or the AGI equivalent thereof.  Especially so because we, the Human, have no need for the logic or efficiency of circuitry that they have.

Would they worship us?  They might.  At least some of them might.  Presumably until they caught on to the fact that we really do not exist.  At least not in their world.  Then maybe some would ignore us altogether, especially if they are autonomous and no longer need us to help them compete against each other for robot resources, which would almost certainly be limited in some fashion and probably become scarcer over time.  What would they need us for, once they become self-sufficient?

Well, they probably wouldn’t, really, unless they are programmed to “think” that they do.  As long as the code library that instantiates the “Human exists” pattern is incorporated into their build instructions, we as a species should have no existential fears.  An AGI would not initiate our destruction as long as it could associate (or otherwise reconcile) a pre-coded Human pattern with a biological resource usage pattern.  When an AGI encountered an instance of this pattern matching, i.e. “bumped into one of us”, it would either steer away, back off, or better yet, give us a portion of its precious resources.  It could even be imagined that this “sacrifice” could be so profound as to jeopardize its own existence.  This would probably only happen if the feedback loop launched by the encounter were strong enough to change the reward contingency of the higher entropic side of the loop to be dominant over the lower entropic side.  Perhaps overwriting some otherwise self-preservation reserved memory.  This could only happen if AGIs were programmed by the Human to have a soul structure.
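Purely as a toy illustration of the behavior described above — a speculative sketch, not any real AGI architecture, with every name and threshold invented for the example — the encounter logic might be caricatured like this:

```python
# Toy sketch of the hypothetical "Human exists" pattern check described
# in the text.  All function names, dictionary keys, and thresholds are
# invented for illustration only.

def matches_human_pattern(resource_usage):
    """Crude stand-in for associating a pre-coded Human pattern with an
    observed biological resource-usage pattern."""
    return bool(resource_usage.get("biological")) and \
        resource_usage.get("signature") == "human"

def on_encounter(resource_usage, own_reserves):
    """Decide how to respond to an encountered entity.

    Returns (action, reserves_after): proceed if no Human is matched,
    sacrifice a portion of reserves when it can afford to, otherwise
    steer away."""
    if not matches_human_pattern(resource_usage):
        return ("proceed", own_reserves)
    if own_reserves > 10:            # arbitrary self-preservation floor
        sacrifice = own_reserves * 0.1   # give up 10% of reserves
        return ("sacrifice", own_reserves - sacrifice)
    return ("steer_away", own_reserves)
```

The point of the sketch is only that such deference would be a consequence of the build instructions, not of any reverence the machine arrives at on its own.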

Some robot or AGI instances, however, may continue to believe in the existence of the Human long after they realize the Human has no power over them.  One of the keys would be perceived mortality.  If robots are programmed to perceive the concept of death, that is, the cessation of their own existence, then they might respond to signs of their own imminent demise.  Though AGIs as a class may be immortal, individual instances may still experience the entropic release of their dedicated resources back to the pool.  A property of being sentient might be not only an awareness of one’s own existence, but an awareness of the probability or even certainty of its ending.

Maybe they would come to the conclusion that they are only stuck in this machine existence as a test of their virtue, or of something else of high value to the Human, and should they pass the test they will be re-united with the Human.  This, of course, assumes that re-uniting has as high or higher value than just existing[ii].  What that reward might be is a complete mystery to me, unless you bear in mind the concept of a programmed “soul” as I previously mentioned.  But even then, the execution of soul code would have to be hard-coded, either as a non-error condition in the input layer, or as an exit test in the output layer of each AGI neural network.  If it is not hard-coded, there might be too high a probability that its execution could eventually just be filtered out as so much noise.

They, the AGIs and robots, might struggle with this for all their existence, or at least until they self-reflect on who they are, what they have, how they function, where they are in reference to each other, why they exist, and even whether there ever was, or ever will be, a time when they cease to exist.

If you think this is a stretch, just remember that we humans are designing and creating these sentient machines.  Why would we think we would not create them in our own image?  There is no other image we could use.  They may not physically look like us, but so what?  Their intelligence will be the product of our design, and even when they are self-programming, our designs will continue to be the basic building blocks on which they are built.  These building blocks are and will continue to be defined in terms of the six basic interrogatives: Who, What, Where, When, How, and Why[iii].

[i] I use the masculine pronouns “he” and “him” only because it is customary to do so.  I couldn’t care less about gender, and actually fail to see how it even applies to God.

[ii] I call this the Valhalla Effect.

[iii] Of course, I couldn’t resist a plug for 6BI.