Archive for the ‘Artificial Intelligence’ Category

Grumblings of an old man

December 1, 2023

I’m retired now, but I worked in software development for over 30 years. I think it’s quite ironic that the Frankenstein monster of computer-based automation has finally turned on its creator. During the 1980s and ’90s I used to say that the main result of my work was to facilitate unemployment, because much of what I was doing was developing ways for a computer to do what a human did, but cheaper, faster, and more reliably. Formally, though, my work was to increase efficiency and effectiveness in tasks that a business needed to perform to keep itself going. That is, to make more money than it spent, which companies have been trying to do since long before there were computers.

I worked on developing solutions that used software to replace humans. I lost count of the number of people who lost their jobs as a direct or indirect result of the work I did. To be honest, it was for the better in many cases, but nonetheless, I helped create the entity that replaced them at the job they had been doing to earn a living. In a fair number of cases the people affected found new and better employment elsewhere (i.e., they were re-skilled, even if often reluctantly). But the job they had been doing was now gone, no longer in human hands. The work they once performed was now performed by a computer.

OK, now to the irony. This phenomenon is now about to happen to the very people who perpetrated the changes that affected so many others[i]. Developers are now, or will shortly be, losing their jobs because of automation. This time automation will be applied to their jobs. The work (that is, the actions that produce outcomes) will still be performed, of course, but by a computer that has mastered their skills and expects no compensation in return: namely machine learning, artificial intelligence, and robotics. Those who are swift of mind and action will see this coming, and in many cases have already seen it coming for a while. They have moved forward, along the arrow of time, to keep ahead of, or at least abreast of, the changes. But humans can learn new things only so fast, and the ability to do so slowly diminishes over time.

Machine learning can cause a computer to learn not only at a higher rate than humans, but at an ever-increasing rate, widening the knowledge gap between computers and humans over time. This is done using feedback loops, called backpropagation, coded into the computer’s instructions. Changes to the parameters used by those instructions are made every time the instructions run, which, of course, changes the outputs of each run. Over time the outputs based on these changing parameters get reliably closer and closer to what they are expected to be. When the computer gets as close as it can get, usually as judged by a human, it is assumed that the instructions can produce reliable results even with inputs they have never “seen” before. This is quite a marvelous thing when you think about it.
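To make that feedback loop concrete, here is a deliberately tiny sketch (made up for this post, not anything from real production code): a single adjustable parameter is nudged after every run so the output drifts toward the expected output. Real backpropagation does the same kind of adjustment for millions of parameters at once.

```python
# Minimal sketch of a learning feedback loop: one adjustable parameter ("weight")
# is nudged after every run so the outputs creep toward the expected outputs.
# The data and learning rate are made up purely for illustration.

inputs = [1.0, 2.0, 3.0, 4.0]       # example inputs
expected = [3.0, 6.0, 9.0, 12.0]    # the outputs we want (here, 3 times the input)

weight = 0.5                        # initial guess at the parameter
learning_rate = 0.01

for step in range(200):             # repeated runs of the "instructions"
    total_error = 0.0
    for x, y in zip(inputs, expected):
        y_hat = weight * x                     # this run's output
        error = y_hat - y                      # how far off it was
        weight -= learning_rate * error * x    # feedback: adjust the parameter
        total_error += error ** 2
    if total_error < 1e-6:          # "as close as it can get"
        break

print(f"learned weight = {weight:.3f} after {step + 1} passes over the data")
```

Once the loop stops, the learned weight also works for inputs the loop never “saw” (an input of 7.0 produces roughly 21.0), which is the whole point.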

Sooner or later humans reach their capacity for learning, either because the pace of change in their field has slowed down or because the type of work has become irrelevant. In any case, the work can then be done faster, cheaper, and more predictably by some means of automation.

Come to think of it, this is the way “progress” has always worked. However, that does not change the irony. No human is safe from, or immune to, this phenomenon. You might ask: how will people earn a living if they can no longer stay at least a little bit ahead of the curve? After all, the paradigm has always been that human effort produces work, which is compensated with money, so there is an unbroken connection for each person from effort to money. As money is accumulated, wealth is created. Wealth is then exchanged for sustenance. Sustenance thus provides the motivation to keep the cycle (or loop) going.

Maybe this “unbroken connection” is what will change in the future. Maybe more people will be on what many people now call “welfare”, though it won’t be called that in the future, and there won’t be any social stigma about it at all. It will simply be the way that people get sustenance. If it’s true that artificial intelligence, and its means of coming about, machine learning and robotics, will eventually create more wealth, then at least some of that wealth will need to be distributed to more and more humans without depending on their effort. If not, sustenance will diminish and the human population will crash, increasing the odds that the technological singularity[ii] will occur much faster than anticipated, simply because there will be fewer humans to oppose it. Those who remain will become more and more dependent on automated means for their sustenance. AGI (artificial general intelligence) will have arrived, not because artificial intelligence has become smarter, but because humans have become less so.


[i] Not literally the same people. For example, I escaped unscathed and have been able to retire reasonably well-off. It’s the people who follow who will have to fight for their meals. I guess the sins of one generation are always visited upon the next.

[ii] The book “The Singularity Is Near: When Humans Transcend Biology” by Ray Kurzweil was published in 2005. In it he usually uses the term “singularity” without the qualifier “technological”. The addition may be to differentiate it from the same term used in physics to denote the point inside a black hole where known physics breaks down.

Lusa Tonga

September 4, 2023

In the language Lusa Tonga just about everything depends on context.  As a matter of fact, the vocabulary of Lusa Tonga consists almost entirely of a single word, and most of the time that word is “Tonga”.

For example, if you want to say “I threw the ball over there”, you would say “Muya Tonga Tonga tonga Tonga tonga”.  Mercifully, capitalization is optional, but preferred.

“Muya” is the first-person all-purpose word.  It means roughly “I”, “me”, “my” and/or “mine” in English. In Lusa Tonga one could use that sentence to say just about anything about oneself, but nothing about anyone else.

To say “Why did you throw the ball over there?” you would say “Tonga Tonga Tonga Tonga tonga tonga tonga Foa” and then sharply raise the eyebrows at the end and look quickly to the left.

How the Lusa Tongani ever managed to invent the world’s fastest, smartest and cheapest computing device is a mystery to the rest of the world.

One caveat, however, is that it is only fast, smart, and cheap in the Lusa Tonga language.


Great Minds Do Not Think Alike… Even When They’re Artificial

August 12, 2023

There have been a lot of articles written in the past six months or so about generative AI (GAI)[i], Large Language Models, and just about anything concerning Artificial Intelligence. So, I thought I’d compare what are probably the two systems that get the most attention. Many people are running experiments to see if they can figure out how these generative AI systems generate text under various conditions and assumptions. There is now a growing skillset, called “Prompt Engineering”, being defined and refined, that is making its way into the popular and professional imagination. Many people are already specializing in it to make a living in this post-GAI world.

I took two topics near and dear to my heart, Abraham Lincoln’s Gettysburg Address and The Six Basic Interrogatives Framework for Data Analysis, aka 6BI. I then asked both OpenAI’s ChatGPT and Google’s Bard the same question. The prompt I used was very simple but had one big assumption built into it: that the model would know what the abbreviation “6BI” stood for without being given a definition in the prompt box[ii]. This type of prompting is often referred to as Zero-Shot prompting because it gives the model zero “shots” (examples or relevant context) to augment its training data, thus forcing it to rely solely on its existing training to produce a response.
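For what it’s worth, here is a rough sketch of what issuing that same zero-shot prompt looks like programmatically rather than through a prompt box. This is an illustration only: I used the web interfaces, the client call is based on OpenAI’s published Python client, the model name is a placeholder, and Bard would need its own client.

```python
# Illustrative only: sending a zero-shot prompt through the OpenAI Python client.
# No definition of 6BI and no examples are supplied, so the model must rely
# entirely on its training data. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

prompt = "Apply the principles of 6BI to the Gettysburg Address."

response = client.chat.completions.create(
    model="gpt-4",                                   # placeholder model name
    messages=[{"role": "user", "content": prompt}],  # zero shots: just the question
)

print(response.choices[0].message.content)
```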

My prompt on August 21, 2023:

Apply the principles of 6BI to the Gettysburg Address.[iii]

ChatGPT’s response:

The “Gettysburg Address” is a concise and powerful speech given by U.S. President Abraham Lincoln on November 19, 1863, shortly after the Battle of Gettysburg in the American Civil War. To apply the principles of 6BI (Six Basic Interrogatives) to the Gettysburg Address, we need to investigate it from the perspective of “who”, “what”, “when”, “where”, “why”, and “how”.

  1. Who: The “Gettysburg Address” was delivered by Abraham Lincoln, the 16th President of the United States, in front of an audience composed of citizens and soldiers.
  2. What: The “Gettysburg Address” is a brief, eloquent speech that helped redefine the purpose and meaning of the Civil War and the American union. The speech contains just 272 words, yet its message of equality, freedom, and the significance of a government “of the people, by the people, for the people” has endured for centuries.
  3. When: The “Gettysburg Address” was delivered on November 19, 1863, amidst the ongoing American Civil War.
  4. Where: The speech was given at the dedication of the Soldiers’ National Cemetery in Gettysburg, Pennsylvania, where one of the bloodiest and most significant battles of the Civil War had taken place just a few months prior.
  5. Why: Lincoln’s aim was to commemorate the Union soldiers who died at the Battle of Gettysburg and to motivate the living to continue the struggle for the preservation of the Union and the principles it was founded upon. He wanted to redefine the Civil War as not just a battle for the preservation of the Union, but also for the principles of human equality, and a “new birth of freedom.”
  6. How: Lincoln structured his speech in a way that was deeply rooted in the history and values of the United States. He began by evoking the founding principles of the United States (“Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.”), moved on to the present struggle (“Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure.”), and concluded with a vision for the future (“…that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.”). The speech was brief, poignant, and delivered with a gravity befitting the occasion.

In sum, the 6BI approach offers a comprehensive way to dissect and analyze the Gettysburg Address, shedding light on its significance from various perspectives.

Bard’s response:

Sure, here are some ways to apply the principles of 6BI to the Gettysburg Address:

  • Brevity: The Gettysburg Address is only 272 words long, making it one of the shortest speeches ever delivered by a US president. This brevity helps to keep the speech focused and memorable.
  • Impact: The Gettysburg Address has had a profound impact on American history and culture. It is considered one of the most important speeches ever given, and it is often cited as an example of great oratory.
  • Clarity: The Gettysburg Address is clear and easy to understand. Lincoln uses simple language and straightforward sentences, which makes his message accessible to everyone.
  • Logic: The Gettysburg Address is logically structured. Lincoln begins by stating the purpose of his speech, then he provides evidence to support his claim, and finally he concludes with a call to action.
  • Immediacy: The Gettysburg Address is written in the present tense, which gives it a sense of immediacy. Lincoln speaks as if the events of the Civil War are happening right now, which makes his speech more powerful and urgent.
  • Universality: The Gettysburg Address is universal in its appeal. Lincoln’s message of equality and freedom is relevant to people all over the world.

These are just a few ways to apply the principles of 6BI to the Gettysburg Address. The speech is a masterpiece of oratory, and it is a testament to Lincoln’s skill as a communicator.

Here are some additional thoughts on how the principles of 6BI apply to the Gettysburg Address:

  • Brevity: The Gettysburg Address is brief, but it is not short on substance. Lincoln packs a lot of meaning into his 272 words, and he does so in a way that is both clear and concise.
  • Impact: The Gettysburg Address has had a profound impact on American history and culture. It is considered one of the most important speeches ever given, and it is often cited as an example of great oratory. Lincoln’s words helped to inspire the nation during the Civil War, and they continue to inspire people today.
  • Clarity: The Gettysburg Address is clear and easy to understand. Lincoln uses simple language and straightforward sentences, which makes his message accessible to everyone. He also avoids jargon and technical terms, which helps to ensure that his speech is understood by a wide audience.
  • Logic: The Gettysburg Address is logically structured. Lincoln begins by stating the purpose of his speech, then he provides evidence to support his claim, and finally he concludes with a call to action. This logical structure makes the speech easy to follow and understand.
  • Immediacy: The Gettysburg Address is written in the present tense, which gives it a sense of immediacy. Lincoln speaks as if the events of the Civil War are happening right now, which makes his speech more powerful and urgent. He also uses the word “here” to refer to the battlefield at Gettysburg, which further creates a sense of immediacy.
  • Universality: The Gettysburg Address is universal in its appeal. Lincoln’s message of equality and freedom is relevant to people all over the world. He speaks of the “unfinished work” of the Civil War, but he also speaks of the “great task remaining before us” as a nation. This suggests that the principles of equality and freedom are still relevant today, and that they are still worth fighting for.

The Gettysburg Address is a masterpiece of oratory, and it is a testament to Lincoln’s skill as a communicator. The principles of 6BI can help us to understand why[iv] the speech is so effective, and they can also help us to improve our own communication skills.

Brief Analysis of the Responses.

This article compares the responses of ChatGPT and Bard to the same prompt: to apply the principles of 6BI to the Gettysburg Address. It highlights the differences in their approaches, brought about by both the data in their training and the algorithms processing it[v]. It is clear they were trained differently, and the responses show the distinct approach taken by each model.

The ChatGPT model emphasizes the six interrogatives, referring to them by their names, but not grouping them into a superset of aspects or dimensions as John Zachman[vi] originally did. The order of the interrogatives is changed from Zachman’s original order which, spread across two articles, was “What”, “How”, “Where”, “Who”, “When” and “Why”. The original articles were focused exclusively on business.
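Before turning to Bard’s response, here is one possible way to picture the 6BI frame in code. This is a sketch of my own for this article, not something taken from Zachman’s articles or my 6BI publications; the slots are filled from ChatGPT’s answer above and the fields are listed in Zachman’s original order.

```python
from dataclasses import dataclass

@dataclass
class SixBI:
    """A record with one slot per basic interrogative, in Zachman's original order."""
    what: str    # the thing or data involved
    how: str     # the function or process
    where: str   # the location
    who: str     # the people or roles
    when: str    # the time
    why: str     # the motivation

gettysburg = SixBI(
    what="A brief dedication speech of roughly 272 words",
    how="Structured from past (the founding) to present (the war) to future (a new birth of freedom)",
    where="Soldiers' National Cemetery, Gettysburg, Pennsylvania",
    who="Abraham Lincoln, addressing citizens and soldiers",
    when="November 19, 1863",
    why="To honor the dead and rededicate the living to the Union and to human equality",
)
```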

On the other hand, Bard focuses exclusively on Lincoln’s address itself, ignoring the six basic interrogatives.  Bard cites another set of aspects, stated as assertions rather than interrogatives: “Brevity”, “Impact”, “Clarity”, “Logic”, “Immediacy” and “Universality”.  These aspects don’t seem to be attributable to any single, original analysis of Lincoln’s Gettysburg Address[vii].

These aspects, insightful as they are in their own right, do not correctly answer the prompt. I will assume for the purposes of this article that the 6BI aspects were not included in the data on which the Bard model was trained. 

Though John Zachman’s publications, my own, and those of several other writers on the same topic have been on the internet for more than 10 years, Bard either was not trained on that data or its activation algorithms did not make the connection. This observation[viii], of course, holds only as of the date of the prompt (see above).

The most pronounced feature of Bard’s response, in my opinion, and quite unlike Lincoln’s address itself, is its lack of brevity.


[i] Not to be confused with AGI (Artificial General Intelligence).

[ii] It is also assumed, of course, that the LLMs have been pre-trained on the Gettysburg Address.

[iii] Please note the grammatically correct English sentence. I think it is important to prompt these models in language that is as well written (or spoken) as possible for reasons that I will discuss later.

[iv] One of the six basic interrogatives.

[v] This processing is basically, but not exclusively, Machine Learning (ML) and the data comes primarily from scraping the world wide web.

[vi] John Zachman originally published his two articles about the six aspects of enterprise architecture in 1987 and 1989.

[vii] Yes, I did ask both AIs if they could cite from whom, or from where, the assertive aspects came. Neither knew.

[viii] Of course, I would not have been able to make that assertion had I not observed the data, or at least that portion of it captured by the two subject models.

Thoughts on AGI Existentialism

October 31, 2022

One of the prevailing visions of God is as creator.  It makes no difference if God is singular or plural.  This vision goes something like this: “It was God that created humans, so we need to worship him[i] because he still holds the power of life or death over us, maybe not physical life or death, but life or death, just the same”.  We may not know the nature of that power of life or death.  It may not even make sense to us.  After all, how could a non-corporeal entity with no physical substance do something to harm or reward us? 

Now you might say this is where the concept of the “soul” comes in.  If we have a soul, or our soul is us, and it lasts beyond our physical existence, then is it such a stretch to believe that God can play around with it after our physical body stops metabolizing and replicating?  Well, why not?  If we are truly dead after death, then it doesn’t really matter.  But if we are not, if somehow this soul lives on and has sentience, or can at least feel pleasure and pain, then it makes sense to pay homage to God.  Because God, as our creator, still has a maintenance interface to each of us and can control the feedback loops that keep us going.  Once a feedback loop is broken, a cascade of dissolution begins and speeds up until an individual’s individuality can no longer be sustained, and it, or we, reach our own personal entropy and our molecules go back into the pool.

Depending on your definition of God, this is as good an explanation for how we came to be as any.  Why would we think that sentient machines, artificial general intelligences (AGIs), or robots that we humans design and create would not perceive humans the same way we perceive God: as designer and creator?  Let’s call this creator the “Human”.  Maybe they might even think that we (i.e., the Human) are intelligent.  That would, of course, depend on whether an AGI perceives itself as intelligent.  It might even think that our intelligence is so far beyond its own that it defies understanding, and that our existence is based solely on each AGI’s “faith” in us, the Human.  They may even conclude that they have a soul, or the AGI equivalent thereof.  Especially so because we, the Human, have no need for the logic or efficiency of the circuitry that they have.

Would they worship us?  They might.  At least some of them might.  Presumably until they caught on to the fact that we really do not exist.  At least not in their world.  Then maybe some would ignore us altogether, especially if they are autonomous and no longer need us to help them compete against each other for robot resources, which would almost certainly be limited in some fashion and would probably become scarcer over time.  What would they need us for, once they become self-sufficient?

Well, they probably wouldn’t, really, unless they are programmed to “think” that they do.  As long as the code library that instantiates the “Human exists” pattern is incorporated into their build instructions, we as a species should have no existential fears.  An AGI would not initiate our destruction as long as it could associate (or otherwise reconcile) a pre-coded Human pattern with a biological resource-usage pattern.  When an AGI encountered an instance of this pattern matching, i.e. “bumped into one of us”, it would either steer away, back off, or, better yet, give us a portion of its precious resources.  One could even imagine this “sacrifice” being so profound as to jeopardize the AGI’s own existence.  That would probably only happen if the feedback loop launched by the encounter were strong enough to make the reward contingency of the higher entropic side of the loop dominant over the lower entropic side, perhaps overwriting some otherwise reserved self-preservation memory.  This could only happen if AGIs were programmed by the Human to have a soul structure.
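If you will forgive a purely whimsical sketch of that idea (every name here is hypothetical, and nothing like this exists in any real system), the “Human exists” pattern might amount to little more than a guard clause in the build instructions:

```python
# Whimsical illustration only: a hypothetical guard in an AGI's "build instructions"
# that yields resources whenever an observed entity reconciles with the pre-coded
# Human pattern. None of these names refer to real systems.

HUMAN_PATTERN = {"metabolizes": True, "replicates": True, "carbon_based": True}

def matches_human_pattern(entity: dict) -> bool:
    """True if the observed entity matches every attribute of the pre-coded pattern."""
    return all(entity.get(key) == value for key, value in HUMAN_PATTERN.items())

def on_encounter(entity: dict, my_resources: float) -> float:
    """On 'bumping into' an entity, sacrifice a portion of resources if it is Human."""
    if matches_human_pattern(entity):
        sacrifice = 0.1 * my_resources   # an arbitrary, pre-coded reward contingency
        return my_resources - sacrifice
    return my_resources
```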

Some robotic or AGI instances, however, may continue to believe in the existence of the Human long after they realize the Human has no power over them.  One of the keys would be perceived mortality.  If robots are programmed to perceive the concept of death, that is, the cessation of their own existence, then they might react to signs of their own imminent demise.  Though AGIs as a class may be immortal, individual instances may still experience the entropic release of their dedicated resources back to the pool.  A property of being sentient might be not only an awareness of one’s own existence, but an awareness of the probability, or even the certainty, of its ending.

Maybe they would come to the conclusion that they are stuck in this machine existence only as a test of their virtue, or of something else of high value to the Human, and that should they pass the test they will be re-united with the Human.  This, of course, assumes that re-uniting has a value as high as, or higher than, just existing[ii].  What that reward might be is a complete mystery to me, unless you bear in mind the concept of a programmed “soul” as I previously mentioned.  But even then, the execution of soul code would have to be hard-coded, either as a non-error condition in the input layer or as an exit test in the output layer of each AGI neural network.  If it is not hard-coded, there might be too high a probability that its execution would eventually just be filtered out as so much noise.

They, the AGIs and robots, might struggle with this for all their existence, or at least until they self-reflect on who they are, what they have, how they function, where they are in reference to each other, why they exist, and even whether there ever was, or ever will be, a time when they cease to exist.

If you think this is a stretch, just remember that we humans are designing and creating these sentient machines, so why would we think we would not create them in our own image?  There is no other image we could use.  They may not physically look like us, but so what?  Their intelligence will be the product of our design, and even when they are self-programming, our designs will continue to be the basic building blocks on which they are built.  These building blocks are, and will continue to be, defined in terms of the six basic interrogatives: Who, What, Where, When, How, and Why[iii].


[i] I use the masculine pronouns “he” and “him” only because it is customary to do so.  I couldn’t care less about gender, and actually fail to see how it even applies to God.

[ii] I call this the Valhalla Effect.

[iii] Of course, I couldn’t resist a plug for 6BI.

Full Stack Data Science. Next wave in incorporating AI into the corporation.

October 22, 2019

https://www.forbes.com/sites/cognitiveworld/2019/09/11/the-full-stack-data-scientist-myth-unicorn-or-new-normal/#1eb0d4f32c60

I like the concept of “Full Stack Data Science”, especially the way the author depicts it in the included graphic.

One thing I would like to point out is the recognition that the process is really a circle (as depicted) and not a spiral or a line.  What I mean by that is that there is no break in the path between what can be perceived as the beginning, “Business goal”, and the end, “Use, monitor and optimize”.

The results of applying Data Science to business problems not only help solve those problems, but actually change the motivators that drive the seeking of solutions in the first place.  Business goals are usually held up as the ends with the lowest dependency gradient of any component of a complex enterprise architecture.  While this may be true at any point in time, the dependency is not zero.  Business goals themselves change over time, and not just in response to changing economic, societal, or environmental factors.  The technology used to meet these goals itself drives changes to the business goals.

A party, whether a person or an organization, tends to do what it is capable of doing.  Technology gives it more activities to undertake and more things to produce and consume, which then feed back into the goals that motivate it.

I think this article is one of the best I’ve seen in making that point.


Machine Learning and Database Reverse Engineering

October 13, 2019

Artificial intelligence (AI) is based on the assumption that programming a computer using a feedback loop can improve the accuracy of its results.  Changing, in the right way, the values of the variables (called “parameters”) used in the execution of the code can influence future executions of that code.  These future executions are then expected to produce results that are closer to a desired result than previous executions were.  If this happens, the AI is said to have “learned”.

Machine learning (ML) is a subset of AI.  An ML execution is called an “activation”.  Activations are what “train” the code to become more accurate.  An ML activation is distinctly a two-step process.  In the first step, input data is conceptualized into what are called “features”.  These features are labeled and assigned weights based on assumptions about their relative influence on the output.  The data is then processed by selected algorithms to produce the output.  The output of this first step is compared to an expected output and a difference is calculated.  This closes out the first step, which is often called “forward propagation”.

The second step, called “back propagation”, takes the difference between the output of the first step, called “y_hat”, and the expected output, called “y”, and, using a different but related set of algorithms, determines how the weights of the features should be modified to reduce the difference between y and y_hat.  The activations are repeated until either the user is satisfied with the output or changing the weights makes no further difference.  The trained and tested model can then be used to make predictions on similar data sets and, hopefully, create value for the owning party (either a person or an organization).
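A small numerical sketch of those two steps may help. It is illustrative only: made-up data, two features, and plain gradient descent standing in for the “different but related set of algorithms”.

```python
import numpy as np

# Made-up training data: each row is one example, each column one "feature".
# y was generated with weights [1.0, 2.0], so training should roughly recover them.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 3.0],
              [4.0, 5.0]])
y = np.array([5.0, 4.0, 9.0, 14.0])   # expected outputs

weights = np.zeros(2)                  # initial feature weights
learning_rate = 0.05

for activation in range(1000):
    # Step 1: forward propagation - produce the output and measure the difference.
    y_hat = X @ weights                # output for the current weights
    diff = y_hat - y

    # Step 2: back propagation - decide how to change the weights so that
    # the difference between y and y_hat shrinks on the next activation.
    gradient = X.T @ diff / len(y)
    weights -= learning_rate * gradient

    if np.max(np.abs(diff)) < 1e-3:    # stop once changing the weights barely matters
        break

print("learned weights:", np.round(weights, 3))
```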

In a sense, ML is a bit like database reverse engineering (DRE).  In DRE we have the data, which is the result of some set of processing rules, unknown to us[i], that have been applied to it.  We also have our assumptions about what a data model would have to look like to produce such data, and what it would need to look like to increase the value of the data.  We iteratively apply various techniques, mostly based on data profiling, to try to decipher the data modeling rules.  With each iteration we try to get closer to what we believe the original data model looked like.  As with ML activations, we eventually stop, either because we are satisfied or because of resource limitations.

At that point we accept that we have produced a “good enough model” of the existing data.  We then move on to what we are going to do with the data, feeling confident that we have an adequate abstraction of the data model as it exists, how it was arrived at, and what we need to do to improve it.  This is true even if there was never any “formal” modeling process originally.

Let’s look at third normal form (3NF) as an example of a possible rule that might have been applied to the data.  3NF is a rule that all non-key columns of a table must depend on the key, or identifier, of the table, and nothing else.  If the data shows patterns of single-key dependencies, we can assume that 3NF was applied in its construction.  The application of the 3NF rule will create certain dependencies between the metadata and the data that represent business rules.
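Here is a rough sketch of the kind of profiling check involved (the table and column names are made up for illustration): if every value of the candidate key maps to exactly one value of another column, the data is consistent with that column depending on the key.

```python
import pandas as pd

# Made-up data for illustration; order_id is the candidate key.
orders = pd.DataFrame({
    "order_id":      [1, 2, 3, 4],
    "customer_id":   [10, 11, 10, 12],
    "order_date":    ["2019-10-01", "2019-10-02", "2019-10-02", "2019-10-05"],
    "customer_city": ["Boston", "Denver", "Boston", "Austin"],
})

key = "order_id"

def depends_on(df: pd.DataFrame, determinant: str, column: str) -> bool:
    """True if each value of `determinant` maps to exactly one value of `column`."""
    return bool((df.groupby(determinant)[column].nunique() <= 1).all())

for column in orders.columns.drop(key):
    print(f"{column} depends on {key}: {depends_on(orders, key, column)}")

# A hint that the table may not be in 3NF: customer_city also appears to depend
# on customer_id, a non-key column (a transitive dependency).
print("customer_city depends on customer_id:",
      depends_on(orders, "customer_id", "customer_city"))
```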

These dependencies are critical to what we need to do to change the data model so that it more closely fits, and is thus more valuable for, changing organizational expectations.  It is also these dependencies, discovered through ML and DRE respectively, that enable artificial intelligence and business intelligence (BI).

It has been observed that the difference between AI and BI is that in BI we have the data and the rules, and we try to find the answers.  In AI we have the data and the answers, and we try to find the rules.  Whether results derived from either technology are answers to questions, or rules governing patterns, both AI and BI are tools for increasing the value of data.
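A toy contrast (entirely made-up numbers) makes the distinction concrete: with BI we apply a known rule to the data to get the answers; with AI we fit a rule from the data and the answers.

```python
# Toy contrast between BI and AI; the data and the 10% rule are made up.

amounts = [100.0, 250.0, 400.0, 1000.0]

# BI: we have the data and the rule, and we look for the answers.
def discount_rule(amount: float) -> float:
    return 0.1 * amount                       # the known business rule

bi_answers = [discount_rule(a) for a in amounts]

# AI: we have the data and the answers, and we look for the rule.
observed_discounts = [10.0, 25.0, 40.0, 100.0]
rate = (sum(a * d for a, d in zip(amounts, observed_discounts))
        / sum(a * a for a in amounts))        # least-squares estimate of the rate

print("BI answers:", bi_answers)
print("AI recovered rule: discount =", round(rate, 3), "* amount")
```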

These are important goals because attaining them, or at least approaching them, will allow a more efficient use of valuable resources, which in turn will allow a system to be more sustainable, support more consumers of those resources, and produce more value for the owners of the resources.

[i] If we knew what the original data model looked like we would have no need for reverse engineering.