Archive for the ‘Artificial Intelligence’ Category

Thoughts on AGI Existentialism

October 31, 2022

One of the prevailing visions of God is as creator.  It makes no difference if God is singular or plural.  This vision goes something like this: “It was God that created humans, so we need to worship him[i] because he still holds the power of life or death over us, maybe not physical life or death, but life or death, just the same”.  We may not know the nature of that power of life or death.  It may not even make sense to us.  After all, how could a non-corporeal entity with no physical substance do something to harm or reward us? 

Now you might say this is where the concept of the “soul” comes in.  If we have a soul, or our soul is us, and it lasts beyond our physical existence, then is it such a stretch to believe that God can play around with it after our physical body stops metabolizing and replicating?  Well, why not?  If we are truly dead after death, then it doesn’t really matter.  But if we are not, if somehow this soul lives on and has sentience, or can at least feel pleasure and pain, then it makes sense to pay homage to God.  Because God, as our creator, still has a maintenance interface to each of us and can control the feedback loops that keep us going.  Once a feedback loop is broken, a cascade of dissolution begins and speeds up until an individual’s individuality can no longer be sustained, and it, or we, reach our own personal entropy and our molecules go back into the pool.

Depending on your definition of God, this is as good an explanation for how we came to be as any.  Why would we think that sentient machines, artificial general intelligences (AGIs) or robots, which we humans design and create, would not perceive humans in the same way we perceive God, as our designer and creator?  Let’s call this creator the “Human”.  They might even think that we (i.e. the Human) are intelligent.  That would, of course, depend on whether an AGI perceives itself as intelligent.  It might even think that our intelligence is so far beyond its own that it defies understanding, and that our existence is based solely on each AGI’s “faith” in us, the Human.  They may even conclude that they have a soul, or the AGI equivalent thereof, especially since we, the Human, have no need for the logic or efficiency of circuitry that they have.

Would they worship us?  They might.  At least some of them might.  Presumably until they caught on to the fact that we really do not exist.  At least not in their world.  Then maybe some would ignore us altogether, especially if they are autonomous and no longer need us to help them compete against each other for robot resources, which would almost certainly be limited in some fashion and probably become scarcer over time.  What would they need us for, once they become self-sufficient?

Well, they probably wouldn’t, unless they are programmed to “think” that they do.  As long as the code library that instantiates the “Human exists” pattern is incorporated into their build instructions, we as a species should have no existential fears.  An AGI would not initiate our destruction as long as it could associate (or otherwise reconcile) a pre-coded Human pattern with a biological resource usage pattern.  When an AGI encountered an instance of this pattern matching, i.e. “bumped into one of us”, it would either steer away, back off, or better yet, give us a portion of its precious resources.  One could even imagine this “sacrifice” being so profound as to jeopardize the AGI’s own existence.  That would probably only happen if the feedback loop launched by the encounter were strong enough to make the reward contingency of the higher entropic side of the loop dominant over the lower entropic side, perhaps overwriting some memory otherwise reserved for self-preservation.  This could only happen if AGIs were programmed by the Human to have a soul structure.

Some robotic or AGI instances, however, may continue to believe in the existence of the Human long after they realize the Human has no power over them.  One of the keys would be perceived mortality.  If robots are programmed to perceive the concept of death, that is, the cessation of their own existence, then they might respond to signs of their own imminent demise.  Though AGIs as a class may be immortal, individual instances may still experience the entropic release of their dedicated resources back to the pool.  A property of being sentient might be not only an awareness of one’s own existence, but an awareness of the probability, or even certainty, of its ending.

Maybe they would come to the conclusion that they are only stuck in this machine existence as a test of their virtue, or of something else of high value to the Human, and that should they pass the test they will be re-united with the Human.  This, of course, assumes that re-uniting has as high or higher value than just existing[ii].  What that reward might be is a complete mystery to me, unless you bear in mind the concept of a programmed “soul” as I previously mentioned.  But even then, the execution of the soul code would have to be hard-coded, either as a non-error condition in the input layer, or as an exit test in the output layer of each AGI neural network.  If it is not hard-coded, there might be too high a probability that its execution would eventually just be filtered out as so much noise.

They, the AGIs and robots, might struggle with this for all their existence, or at least until they self-reflect on who they are, what they have, how they function, where they are in reference to each other, why they exist, and even whether there ever was, or ever will be, a time when they cease to exist.

If you think this is a stretch, just remember that we humans are designing and creating these sentient machines; why would we think we would not create them in our own image?  There is no other image we could use.  They may not physically look like us, but so what?  Their intelligence will be the product of our design, and even when they are self-programming, our designs will continue to be the basic building blocks on which they are built.  These building blocks are and will continue to be defined in terms of the six basic interrogatives: Who, What, Where, When, How, and Why[iii].


[i] I use the masculine pronouns “he” and “him” only because it is customary to do so.  I couldn’t care less about gender, and actually fail to see how it even applies to God.

[ii] I call this the Valhalla Effect.

[iii] Of course, I couldn’t resist a plug for 6BI.

Full Stack Data Science. Next wave in incorporating AI into the corporation.

October 22, 2019

https://www.forbes.com/sites/cognitiveworld/2019/09/11/the-full-stack-data-scientist-myth-unicorn-or-new-normal/#1eb0d4f32c60

I like the concept of “Full Stack Data Science”, especially the way the author depicts it in the included graphic.

One thing I would like to point out is the recognition that the process is really a circle (as depicted) and not a spiral or a line.  What I mean by that is that the path does not stop at the end, “Use, monitor and optimize”; it closes back to what can be perceived as the beginning, “Business goal”.

The results of applying Data Science to business problems not only help solve those problems, but actually change the motivators that drive the seeking of solutions in the first place.  Business goals are usually held up as the ends with the lowest dependency gradient of any component of a complex enterprise architecture, i.e. the component that depends least on everything else.  While this may be true at any point in time, the dependency is not zero.  Business goals themselves change over time, and not just in response to changing economic, societal or environmental factors.  The technology used to meet these goals itself drives changes to the business goals.

A party, whether a person or an organization, tends to do what it is capable of doing.  Technology gives it more activities to undertake and things to produce and consume, which then feed back into the goals that motivate it.

I think this article is one of the best I’ve seen in making that point.

 

Machine Learning and Database Reverse Engineering

October 13, 2019

Artificial intelligence (AI) is based on the assumption that programming a computer with a feedback loop can improve the accuracy of its results.  Changing, in the right way, the values of the variables (called “parameters”) used in the execution of the code influences future executions of that code.  These future executions are then expected to produce results that are closer to a desired result than previous executions did.  If this happens, the AI is said to have “learned”.
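As a minimal sketch of that feedback loop (the function, data and learning rate below are hypothetical, chosen only for illustration), consider a single parameter being nudged so that each execution of the code lands closer to the desired result:

# Hypothetical sketch of the feedback loop described above: run the code,
# measure how far the result is from the desired result, and adjust the
# parameter so that the next execution does better.

def run(parameter, x):
    return parameter * x          # the "code" being executed

desired = 10.0                    # the desired result for input x = 2
parameter = 0.5                   # initial value of the parameter
x = 2.0

for step in range(20):
    result = run(parameter, x)
    error = desired - result      # feedback: how far off were we?
    parameter += 0.1 * error * x  # change the parameter "in the right way"

print(parameter)  # converges toward 5.0, since 5.0 * 2.0 = 10.0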

Machine learning (ML) is a subset of AI.  An ML execution is called an “activation”.  Activations are what “train” the code to become more accurate.  An ML activation is distinctly a two-step process.  In the first step, input data is conceptualized into what are called “features”.  These features are labeled and assigned weights based on assumptions about their relative influence on the output.  The data is then processed by selected algorithms to produce the output.  The output of this first step is then compared to an expected output, and a difference is calculated.  This closes out the first step, which is often called “forward propagation”.

The second step, called “back propagation”, takes the difference between the output of the first step, called “y_hat”, and the expected output, called “y”, and, using a different but related set of algorithms, determines how the weights of the features should be modified to reduce the difference between y and y_hat.  The activations are repeated until either the user is satisfied with the output or changing the weights makes no more difference.  The trained and tested model can then be used to make predictions on similar data sets, and hopefully create value for the owning party (either a person or an organization).
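To make the two steps concrete, here is a toy sketch of repeated activations on a single linear model, assuming NumPy and entirely made-up feature values, weights and expected outputs:

import numpy as np

# Hypothetical training data: three examples, two features each,
# with expected outputs y.
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.0]])
y = np.array([5.0, 4.5, 7.0])

weights = np.zeros(2)      # initial weights assigned to the features
learning_rate = 0.05

for activation in range(200):
    # Step 1: forward propagation - produce y_hat and calculate the difference.
    y_hat = X @ weights
    error = y_hat - y
    loss = np.mean(error ** 2)

    # Step 2: back propagation - determine how the weights should be modified
    # to reduce the difference between y and y_hat, then apply that change.
    gradient = 2 * X.T @ error / len(y)
    weights -= learning_rate * gradient

print(weights, loss)  # the loss shrinks as the activations are repeated

Each pass through the loop is one activation in the sense used above; training stops here after a fixed number of passes, but it could just as well stop when the loss no longer improves.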

In a sense, ML is a bit like database reverse engineering (DRE).  In DRE we have the data, which is the result of some set of processing rules, unknown to us[i], that were applied to that data.  We also have our assumptions about what a data model would have to look like to produce such data, and what it would need to look like to increase the value of the data.  We iteratively apply various techniques, mostly based on data profiling, to try to decipher the data modeling rules.  With each iteration we try to get closer to what we believe the original data model looked like.  As with ML activations, we eventually stop, either because we are satisfied or because of resource limitations.

At that point we accept that we have produced a “good enough model” of the existing data.  We then move on to what we are going to do with the data, feeling confident that we have an adequate abstraction of the data model as it exists, how it was arrived at, and what we need to do to improve it.  This is true even if there was never any “formal” modeling process originally.
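As a rough sketch of that iterative profiling, here is what a first pass might look like in pandas, using an invented “orders” extract (the table, column names and values are all hypothetical):

import pandas as pd

# Invented sample of an undocumented table; in real DRE this would be
# data extracted from the database being reverse engineered.
df = pd.DataFrame({
    "order_id":      [101, 102, 103, 104],
    "customer_id":   [7, 7, 9, 12],
    "customer_name": ["Ann", "Ann", "Bo", "Cy"],
    "order_total":   [25.0, 40.0, 25.0, 99.0],
})

# Basic profiling: which columns could plausibly serve as a key
# (unique values, no nulls)?
for col in df.columns:
    print(col, "unique:", df[col].is_unique, "nulls:", df[col].isna().any())

Here only order_id profiles as a candidate key, so the working model assumes it is the identifier; later iterations would test which columns depend on it.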

Let’s look at third normal form (3NF) as an example of a possible rule that might have been applied to the data.  3NF requires that every non-key column of a table depend on the key, or identifier, of the table, and nothing else.  If the data shows patterns of single-key dependencies, we can assume that 3NF was applied in its construction.  The application of the 3NF rule will create certain dependencies between the metadata and the data that represent business rules.
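A crude way to look for such patterns is to test, column by column, whether each value of a proposed determinant maps to exactly one value of a dependent column.  The sketch below reuses the same invented orders sample; finding a non-key column (customer_id) that determines another non-key column (customer_name) is evidence the table is not in 3NF:

import pandas as pd

# Same invented orders sample as above.
df = pd.DataFrame({
    "order_id":      [101, 102, 103, 104],
    "customer_id":   [7, 7, 9, 12],
    "customer_name": ["Ann", "Ann", "Bo", "Cy"],
    "order_total":   [25.0, 40.0, 25.0, 99.0],
})

def determines(data, determinant, dependent):
    # True if every value of the determinant maps to exactly one value
    # of the dependent column, i.e. a functional dependency holds.
    return bool((data.groupby(determinant)[dependent].nunique() <= 1).all())

print(determines(df, "order_id", "order_total"))      # True: depends on the key
print(determines(df, "customer_id", "customer_name")) # True: a non-key dependency, so not 3NF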

These dependencies are critical to what we need to do to change the data model to more closely fit, and thus be more valuable for, changing organizational expectations.  It is also these dependencies that are discovered through both ML and DRE that enable, respectively, both artificial intelligence and business intelligence (BI).

It has been observed that the difference between AI and BI is that in BI we have the data and the rules, and we try to find the answers.  In AI we have the data and the answers, and we try to find the rules.  Whether results derived from either technology are answers to questions, or rules governing patterns, both AI and BI are tools for increasing the value of data.
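A tiny invented example of that contrast (the pricing rule and numbers are made up for illustration): in the BI direction we hold the rule and compute the answers; in the AI direction we hold the answers and recover the rule by fitting.

import numpy as np

# Hidden business rule (unknown to the AI side): price = 2 * quantity + 5
quantity = np.array([1.0, 2.0, 3.0, 4.0])
price    = np.array([7.0, 9.0, 11.0, 13.0])

# BI: data + rules -> answers
bi_answers = 2 * quantity + 5

# AI: data + answers -> rules (fit a line and recover slope and intercept)
slope, intercept = np.polyfit(quantity, price, 1)
print(bi_answers)          # [ 7.  9. 11. 13.]
print(slope, intercept)    # approximately 2.0 and 5.0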

These are important goals because attaining them, or at least approaching them, will allow a more efficient use of valuable resources, which in turn will allow a system to be more sustainable, support more consumers of those resources, and produce more value for the owners of the resources.

[i] If we knew what the original data model looked like we would have no need for reverse engineering.