Posts Tagged ‘Things’

Knowns and Unknowns

November 7, 2020

Are we really better off now than we used to be? In the old days we knew what we knew, and we knew there were things we didn't know. Today, with the growth of data, machine learning, and artificial intelligence, there are many more things we know and even more that we know we don't know. There are even things we could know but don't bother to know, mainly because we don't need to know them. Someone else, or increasingly something else, knows them for us, thus saving us the bother of knowing them.

We even discover from time to time that there are things we know that we didn't know we knew. We are beginning to suspect that, beyond all the things we know and know we don't know, there are even more things we don't know that we don't know. But we don't know for sure that we don't know them. If we knew this for sure (if we knew them), then they would just add to the things that we know we don't know, and they would no longer be unknown unknowns.[i]

It is kind of like the names of things. Sometimes multiple names are given to the same thing, and sometimes multiple things share the same name. Of the two suboptimal situations, multiple things with the same name is always the more vexing. This is usually an intra-language problem rather than an inter-language one, which makes it even more troubling. You would expect a person speaking a different language to have a different name for something, but you might expect (wrongly, it seems) that people speaking the same language would have the same name or names for the same thing. Worse, people speaking the same language often use different polymorphic descriptors to refer to the same object.

What is even worse, a monomorphic descriptor can refer to a set of objects that can either overlap (like a Venn diagram[ii]) or be totally discontinuous, often without people even knowing that they don’t know.
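The two situations above can be sketched in code. This is a minimal, hypothetical illustration (the names, words, and referent sets are my own assumptions, not from the post): many names for one thing behaves like a synonym set, one name for many things behaves like a mapping from a single descriptor to several distinct referent sets, and those sets can overlap or be totally discontinuous, just as in a Venn diagram.

```python
# Hypothetical sketch of the naming problems described above.
# All specific words and sets here are illustrative assumptions.

# Many names, one thing: several descriptors all refer to the same object.
synonyms = {"car", "auto", "automobile"}

# One name, many things: a single descriptor maps to distinct referent sets.
referents = {
    "jaguar": [
        {"animal", "cat", "predator"},   # the big cat
        {"vehicle", "marque", "coupe"},  # the car brand
    ],
}

def relationship(a, b):
    """Classify two referent sets the way a Venn diagram would."""
    common = a & b          # set intersection
    if not common:
        return "disjoint"   # totally discontinuous
    if a == b:
        return "identical"
    return "overlapping"

cat, marque = referents["jaguar"]
print(relationship(cat, marque))        # disjoint: same name, unrelated things
print(relationship({"car"}, synonyms))  # overlapping
```

The vexing case is exactly the `"jaguar"` entry: a speaker using the one name has no way, from the name alone, to signal which referent set is meant, and the listener may not even know there is more than one.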


[i] Conceptualized by Donald Rumsfeld in February 2002: things we are neither aware of nor understand.

[ii] Conceived around 1880 by John Venn, according to Wikipedia.

Automation and the End of Human Wealth

January 15, 2019

Time is money. Well, not really, but the two do equate very nicely. A person's wealth can be measured not only by how much money they control, but by how much of their time can be used for activities beyond those necessary for mere survival. This time, freed up from survival activities, has always been used to create increasing wealth for humans. That wealth creation accrues to both producers and consumers: producers get wealthier by getting more money, and consumers get wealthier by getting more time.

Previously, the march toward automation has created ever-increasing wealth because one party has invented the latest automation and sold it to others, and another party has bought that automation and used it to free up more of their time. In the 6BI sense, "money" and "time" are the product and payment exchanged at arm's length in the transaction.

The question we should ask now is: will we ever reach the point when there are simply no new wealth-creating activities that humans can invent? A time when every activity that could have created new wealth for humans is already performed by some form of automation? Could it be possible that at some point any invention, instead of being valuable to some human, will have no value and thus cannot be exchanged for money?

If we ever do reach the point where additional automation can no longer drive the creation of wealth for humans because everything that humans could do for themselves will have already been automated, then there will be no advantage, or value, to the next invention. It simply will not be an innovation.

At that point in time, I believe the earth’s human population will crash or go into a period of slow negative growth. There will be no motivation to either invent or procreate. Human population will decrease as a product of reduced opportunities and consequently the influence of humans on the planet will decrease.

On the other hand, the robots and artificial intelligence that provide automation to humans, since they need neither to invent nor to procreate, will increase in both number and influence: in number because they will wear out more slowly than flesh-and-blood humans, and in influence because they will no longer be dependent on humans to improve their programming.

Because of the decrease in the number of unmet human needs, fewer software developers will be needed, for example. This decrease in unmet needs doesn't necessarily mean humans will be more satisfied, just that there will be fewer and fewer value- and wealth-creating activities that they can perform for themselves.

If this happens, and there are substantially fewer humans, will there really be less need for automation? What will happen when there is no longer any new human need or activity to be automated? Will robots and artificial intelligence continue to operate, with humans becoming less and less relevant to them? Will humans become even less aware of the means of automation? Are humans ultimately essential to the operation of automation, so that as human numbers drop, computing entities, the means of automation, will drop as well? Will automation itself be automated and operate without human intervention at all, because any knowledge of how it works will eventually be lost to humans?

Will there be an ever-increasing demand for resources such as electricity to keep a kind of "closed-loop" automation going and going, even after automation's added value to humans is at or near zero? Even more interesting, from a human perspective, what will happen when new wealth can no longer be created?