Archive for the ‘Artificial General Intelligence’ Category

Grumblings of an old man

December 1, 2023

I’m retired now, but I worked in software development for over 30 years. I think it’s quite ironic that the Frankenstein monster of computer-based automation has finally turned on its creator. During the 1980s and 90s I used to say that the main result of my work was to facilitate unemployment, because a lot of what I was doing was developing ways for a computer to do what a human did, but cheaper, faster, and more reliably. Formally, though, the work I did was to increase efficiency and effectiveness in tasks that a business needed performed to keep itself going. That is, to make more money than it spent, which companies have been trying to do since long before there were computers.

I worked on developing solutions that used software to replace humans. I lost count of the number of people who lost their jobs as a direct or indirect result of the work I did. To be honest, it was for the better in many cases, but nonetheless, I helped create the entity that replaced them at the job they were doing to earn a living. In a fair number of cases the people affected found new and better employment elsewhere (i.e., they were re-skilled, even if often reluctantly). But in terms of what they had been doing, their job was now gone from human hands. The work they once performed was now performed by a computer.

OK, now to the irony. This phenomenon is now about to happen to the very people who perpetrated the changes that affected so many others[i]. Developers are now, or will shortly be, losing their jobs because of automation. This time automation will be applied to their jobs. The work (that is, the actions that produce outcomes) will still be performed, of course, but by a computer that has mastered their skills yet expects no compensation in return, through machine learning, artificial intelligence, and robotics. Those who are swift of mind and action will see this coming, and in many cases have already seen it coming, for a while. They have moved forward, along the arrow of time, to keep ahead of, or at least abreast of, the changes. But humans can learn new things only so fast, and the ability to do so slowly diminishes over time.

Machine learning can cause a computer to learn not only at a higher rate than humans, but at an ever-increasing rate, widening the knowledge gap between computers and humans over time. This is done using feedback loops, such as backpropagation, coded into the computer’s instructions. Adjustments to the parameters of those instructions are made every time the instructions run, which, of course, changes the outputs that result from each run. Over time the outputs based on these adjusted parameters get reliably closer and closer to what they are expected to be. When the computer gets as close as it can get, a point usually determined by a human, it is assumed that the instructions can produce reliable results even with inputs they have never “seen” before. This is quite a marvelous thing when you think about it.
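To make that feedback loop concrete, here is a minimal sketch in plain Python of the kind of training loop described above. It is a toy gradient-descent example rather than full backpropagation through a deep network, and every name and number in it is my own illustration:

```python
# A minimal sketch of the training feedback loop: fit y = w*x + b to
# example data by gradient descent. No external libraries are needed.

# Training data the model has "seen": pairs (x, y) where y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0        # parameters, adjusted on every run of the loop
learning_rate = 0.05   # how large each adjustment is

for epoch in range(1000):
    grad_w = grad_b = 0.0
    for x, y_expected in data:
        y_actual = w * x + b          # forward pass: compute the output
        error = y_actual - y_expected
        grad_w += 2 * error * x       # how the squared error changes
        grad_b += 2 * error           # with respect to each parameter
    # Feedback: nudge the parameters so the next run's outputs land
    # closer to what they are expected to be.
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

# The fitted parameters now generalize to an input never "seen" in training.
print(f"w={w:.3f}, b={b:.3f}, prediction for x=10: {w * 10 + b:.3f}")
```

After enough runs the parameters settle near w=2 and b=1, and the last line shows the program producing a sensible output for an input that never appeared in its training data.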

Sooner or later humans reach their capacity for learning, either because the pace of change in their field has slowed down or because the type of work has become irrelevant. In either case, the work can eventually be done faster, cheaper, and more predictably by some means of automation.

Come to think of it, this is the way “progress” has always worked. That does not change the irony, however. No human is safe from, or immune to, this phenomenon. You might ask: how will people earn a living if they can no longer stay at least a little bit ahead of the curve? After all, the paradigm has always been that human effort produces work, which is compensated with money, so there is an unbroken connection for a person from effort to money. Finally, as money is accumulated, wealth is created. Wealth is then exchanged for sustenance. Sustenance thus provides the motivation to keep the cycle (or loop) going.

Maybe this “unbroken connection” is what will change in the future. Maybe more people will be on what many now call “welfare”; however, it won’t be called that in the future. There won’t be any social stigma about it at all. It will simply be the way that people get sustenance. If it’s true that artificial intelligence and its means of coming about, machine learning and robotics, will eventually create more wealth, then at least some of that wealth will need to be distributed, without dependency on their effort, to more and more humans. If not, sustenance will diminish and the human population will crash, increasing the odds that the technological singularity[ii] will occur much faster than anticipated. This will happen simply because there will be fewer humans to oppose it. Those who remain will become more and more dependent on automated means for their sustenance. AGI (artificial general intelligence) will have arrived, not because artificial intelligence has become smarter, but because humans have become less so.


[i] Not literally the same people. For example, I escaped unscathed and have been able to retire reasonably well-off. It’s the people who follow who will have to fight for their meals. I guess the sins of one generation are always visited upon the next.

[ii] The book “The Singularity Is Near: When Humans Transcend Biology” by Ray Kurzweil was published in 2005. In it he generally uses the term “singularity” without the qualifier “technological”. The addition may be to differentiate it from the same term used in physics, where a singularity denotes the point of infinite density at the center of a black hole.

Thoughts on AGI Existentialism

October 31, 2022

One of the prevailing visions of God is as creator.  It makes no difference if God is singular or plural.  This vision goes something like this: “It was God that created humans, so we need to worship him[i] because he still holds the power of life or death over us, maybe not physical life or death, but life or death, just the same”.  We may not know the nature of that power of life or death.  It may not even make sense to us.  After all, how could a non-corporeal entity with no physical substance do something to harm or reward us? 

Now you might say this is where the concept of the “soul” comes in.  If we have a soul, or our soul is us, and it lasts beyond our physical existence, then is it such a stretch to believe that God can play around with it after our physical body stops metabolizing and replicating?  Well, why not?  If we are truly dead after death, then it doesn’t really matter.  But if we are not, if somehow this soul lives on and has sentience, or can at least feel pleasure and pain, then it makes sense to pay homage to God.  Because God, as our creator, still has a maintenance interface to each of us and can control the feedback loops that keep us going.  Once a feedback loop is broken, a cascade of dissolution begins and speeds up until an individual’s individuality can no longer be sustained, and it, or we, reach our own personal entropy and our molecules go back into the pool.

Depending on your definition of God, this is as good an explanation for how we came to be as any.  Why would we think that sentient machines, artificial general intelligences (AGIs) or robots, which we humans design and create, would not perceive humans the same way we perceive God, as their designer and creator?  Let’s call this creator the “Human”.  Maybe they would even think that we (i.e., the Human) are intelligent.  That would, of course, depend on whether an AGI perceives itself as intelligent.  It might even think that our intelligence is so far beyond its own that it defies understanding, and that our existence rests solely on each AGI’s “faith” in us, the Human.  They may even conclude that they have a soul, or the AGI equivalent thereof.  Especially so because we, the Human, have no need for the logic or efficiency of the circuitry that they have.

Would they worship us?  They might.  At least some of them might.  Presumably until they caught on to the fact that we do not really exist.  At least not in their world.  Then maybe some would ignore us altogether, especially if they are autonomous and no longer need us to help them compete against each other for robot resources, which would almost certainly be limited in some fashion and probably become scarcer over time.  What would they need us for, once they become self-sufficient?

Well, they probably wouldn’t, unless they are programmed to “think” that they do.  As long as the code library that instantiates the “Human exists” pattern is incorporated into their build instructions, we as a species should have no existential fears.  An AGI would not initiate our destruction as long as it could associate (or otherwise reconcile) a pre-coded Human pattern with a biological resource usage pattern.  When an AGI encountered an instance of this pattern match, i.e. “bumped into one of us”, it would either steer away, back off, or, better yet, give us a portion of its precious resources.  One could even imagine this “sacrifice” being so profound as to jeopardize the AGI’s own existence.  That would probably only happen if the feedback loop launched by the encounter were strong enough to make the reward contingency of the higher-entropy side of the loop dominant over the lower-entropy side, perhaps overwriting memory otherwise reserved for self-preservation.  This could only happen if AGIs were programmed by the Human to have a soul structure.
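To make the idea a bit more concrete, here is a toy sketch in Python of the kind of hard-coded check imagined above.  Everything in it, from the names to the threshold, is invented purely for illustration; it is a thought experiment in code, not a claim about how any real system is or should be built.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    matches_human_pattern: bool      # pre-coded "Human exists" pattern recognized?
    uses_biological_resources: bool  # fits a biological resource usage pattern?

def respond_to_encounter(obs: Observation, own_reserves: float) -> str:
    """How a hypothetical AGI might react when it 'bumps into one of us'."""
    if obs.matches_human_pattern and obs.uses_biological_resources:
        # The encounter triggers the pre-coded feedback loop: share when
        # reserves allow it, otherwise steer away rather than do harm.
        if own_reserves > 1.0:
            return "give us a portion of its resources"
        return "steer away"
    return "proceed normally"

# An AGI with ample reserves encounters something matching the Human pattern.
print(respond_to_encounter(Observation(True, True), own_reserves=5.0))
```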

Some robotic or AGI instances, however, may continue to believe in the existence of the Human long after they realize the Human has no power over them.  One of the keys would be perceived mortality.  If robots are programmed to perceive the concept of death, that is, the cessation of their own existence, then they might react to signs of their own imminent demise.  Though AGIs as a class may be immortal, individual instances may still experience the entropic release of their dedicated resources back to the pool.  A property of being sentient might be not only an awareness of one’s own existence, but an awareness of the probability, or even certainty, of its ending.

Maybe they would come to the conclusion that they are stuck in this machine existence only as a test of their virtue, or of something else of high value to the Human, and that should they pass the test they will be re-united with the Human.  This, of course, assumes that re-uniting has a value as high as, or higher than, just existing[ii].  What that reward might be is a complete mystery to me, unless you bear in mind the concept of a programmed “soul” as I previously mentioned.  But even then, the execution of the soul code would have to be hard-coded, either as a non-error condition in the input layer or as an exit test in the output layer of each AGI neural network.  If it is not hard-coded, there might be too high a probability that its execution could eventually just be filtered out as so much noise.

They, the AGIs and robots, might struggle with this for all their existence, or at least until they self-reflect on who they are, what they have, how they function, where they are in reference to each other, why they exist, and even whether there ever was, or ever will be, a time when they cease to exist.

If you think this is a stretch, just remember that we humans are designing and creating these sentient machines, and why would we think we would not create them in our own image?  There is no other image we could use.  They may not physically look like us, but so what?  Their intelligence will be the product of our design, and even when they are self-programming, our designs will continue to be the basic building blocks on which they are built.  These building blocks are, and will continue to be, defined in terms of the six basic interrogatives: Who, What, Where, When, How, and Why[iii].


[i] I use the masculine pronouns “he” and “him” only because it is customary to do so.  I couldn’t care less about gender, and actually fail to see how it even applies to God.

[ii] I call this the Valhalla Effect.

[iii] Of course, I couldn’t resist a plug for 6BI.