Two Interesting Interrelated Topics

March 27, 2022

I have been hearing more and more about two interesting interrelated topics recently.

First is an article from my NPR news feed from a short while back, about governments introducing virtualized versions of their “centralized” currencies. As you might imagine, China seems to be way out in front on this. It may be virtual and it may be crypto, but it is still centralized. That means it is sovereign currency, and a government controls this “e-currency”.

https://www.npr.org/2022/02/06/1072406109/digital-dollar-federal-reserve-apple-pay-venmo-cbdc?sc=18&f=1001

Second, and what might be even more interesting, is the rise of “decentralized” finance, or DeFi. Here, at least at first, a government does not necessarily control the currency. Below is a reference to a Coursera course produced by Duke University on this topic. This might be where the real action is going forward.

https://www.coursera.org/learn/decentralized-finance-infrastructure-duke?specialization=decentralized-finance-duke

Don’t kill me and I won’t kill you

February 20, 2022

This is one of the best articles on COVID/Omicron I’ve read.  It makes a lot of sense.  In order for the virus to infect more and more people, and thus increase its probability of surviving in a world full of enemies (e.g. humans), it has to evolve to the point where it is not so severe and people just live with it.  Kind of like the flu: people tolerate it and the virus continues to propagate.  Our probiotic stomach bacteria learned this lesson millions of years ago.  Don’t kill them and they won’t kill you!

https://www.nature.com/articles/d41586-022-00210-7?utm_source=Nature+Briefing&utm_campaign=f41aeff64d-briefing-dy-20220202&utm_medium=email&utm_term=0_c9dfd39373-f41aeff64d-42706715

Don’t Look Up

February 17, 2022

The title obviously is ripped off from the 2021 Oscar-nominated movie, but I thought it fit. Here in the Earth’s gravity well we are exposed to everything “out there”, even our own stuff that we have put out there, like satellites and space stations.

Orbiting objects, especially the International Space Station, slowly accumulate an energy debt as they circle the Earth. This is a result of their borrowing energy from their surroundings to keep from falling out of the sky. Sooner or later the debt gets too high for the object to carry, and it balances the books by falling down the gravity well back to Earth. The fireball produced by that plunge is the removal of the debt.

This is kind of like the biological death that happens to all of us. Sooner or later, local entropy, as it increases, whittles away at the bonds that hold together the matter of which we are made.  We break up into smaller pieces of matter until we fall below the critical mass required to stay alive and we “die”.

Here’s an interesting read.

https://theconversation.com/the-international-space-station-is-set-to-come-home-in-a-fiery-blaze-and-australia-will-likely-have-a-front-row-seat-176690?utm_medium=email&utm_campaign=Latest%20from%20The%20Conversation%20for%20February%2016%202022%20-%202202821848&utm_content=Latest%20from%20The%20Conversation%20for%20February%2016%202022%20-%202202821848+CID_8e10a0dcb5ad55d2fcc546e7c481066e&utm_source=campaign_monitor_uk&utm_term=The%20International%20Space%20Station%20is%20set%20to%20come%20home%20in%20a%20fiery%20blaze

Because we are usually much closer to the bottom of the gravity well, we break apart much more slowly than the International Space Station will. Our “death” is usually far less fiery, and it is observed and recorded by far fewer input sensors.

Wisdomtimes – 6 Interrogatives – The Mystery of Five W’s and One H

February 14, 2022

The following URL is a link to an article with a different and amusing take on the six interrogatives: Who, What, Where, When, Why and How. Hope you enjoy it.

http://www.wisdomtimes.com/blog/6-interrogatives-the-mystery-of-five-ws-and-one-h/

The merging of computer security and crypto-mining

January 7, 2022

I think it’s a great idea to merge cryptocurrency mining with other, more consumer-friendly software applications.  It just makes sense.  It’s slightly incongruous to merge crypto-mining with computer security software, but the main idea is the same.  A few reasons come to mind.  First, it is a way to get your software application to “pay for itself”.  In theory, you could use any money (crypto or conventional) that you make to offset the cost of the software package/platform.  The more successful you become at mining, the closer to zero the net cost of the extended package becomes.  It could even produce a positive income stream in your favor, so that your software package becomes a profit center (instead of a cost center) for you.  Second, it is an avenue to democratize cryptocurrency mining and put it within reach of less sophisticated users, which after all has been the trajectory of personal computing for the past 50 years anyway.  And third, it is a strategy for third-party software companies to stay in the game and not be relegated to the backwaters of the computing world, by making themselves more relevant to modern computing trends.

Crypto is reshaping the world economy

August 14, 2021

Crypto is reshaping the world economy, 50 years after Nixon ended the dollar’s peg to gold. Here’s how some are playing it https://a.msn.com/r/2/AANjEqk?m=en-us&a=0

Regression to the Mean

May 20, 2021

Jack Bogle’s ghost warns about 401(k)s https://a.msn.com/r/2/BB1gUrmr?m=en-us&a=0  

As the article says “It’s worth taking a moment to reflect on just how good things have been for investors for a decade.”  But, “Enjoy making 12% a year, but don’t get used to it.”  It’s not going to last forever.  

The Data Processing Equation

April 29, 2021

The equation P(D) = R means that the Processing of Data produces Results, where P is the Processing, D is the Data, and R is the Results.  Processing is a function that acts upon the Data, producing the Results.  This can be read as “P of D yields R”.

Algebraically, we can solve for any of the three variables (P, D or R).  We can solve for any one of them (designating it as X), which then becomes the dependent variable, as long as we know the value of the other two.  The other two are the independent variables: one is the experimental variable and the other is the control variable, or constant.

Solving for the Results (R) we have equation 1: P(D) = X.  This means that if we know the rules and procedures of the Processing (P) and we have the Data (D), we can calculate the Results (R).  This is the classic Business Intelligence (BI) paradigm.  In a classic star schema, think of the fact and dimension tables as containing the Data, and the various analyses and reporting as the Processing, which produces the Results, which are then used as a predictive model going forward.  This can be called a “Results Driven Predictive Model” (RDPM) because the predictive power of the model is derived from the Results, the R factor of our equation.  You use the Results (which you do not know ahead of time), derived from the interaction of Data and Processing, to inform your predictions.
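As a toy illustration (not any particular BI tool), here is a minimal sketch of equation 1 in Python; the table rows and the aggregation rule are hypothetical:

```python
# Equation 1, P(D) = X: the Processing and the Data are known, the Results are computed.

sales_data = [  # D: the Data (e.g. rows from a fact table); values are made up
    {"region": "east", "amount": 120.0},
    {"region": "west", "amount": 80.0},
    {"region": "east", "amount": 50.0},
]

def total_by_region(rows):  # P: the Processing (known rules and procedures)
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

results = total_by_region(sales_data)  # R: the Results, used for prediction going forward
print(results)  # {'east': 170.0, 'west': 80.0}
```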

Solving for the Processing (P) we have equation 2: X(D) = R.  This means that if we have the Results (R) and we have the Data (D), we can discover the rules and procedures of the Processing (P) that was applied to the Data to produce those Results.  This is the classic Machine Learning paradigm.  Here, by progressively measuring how close each iteration of processing gets to the Results (which you already know), given the Data, you can produce a predictive model going forward.  This is called a “Processing Driven Predictive Model” (PDPM). You use the rules and procedures of processing (which you do not know ahead of time) that produced the Results given the Data, to inform your predictions.
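A minimal sketch of equation 2, again with hypothetical numbers: the Data and the Results are known, and a single-parameter rule is fitted iteratively until it reproduces the known Results (a stand-in for the machine learning paradigm, not any particular library):

```python
# Equation 2, X(D) = R: the Data and the Results are known; the Processing
# (here just a single weight w) is discovered by iterative refinement.

data = [1.0, 2.0, 3.0, 4.0]          # D: known inputs
results = [3.0, 6.0, 9.0, 12.0]      # R: known outputs produced by some unknown rule

w = 0.0                              # initial guess at the Processing
for _ in range(200):                 # measure how close each iteration gets to R
    grad = sum((w * d - r) * d for d, r in zip(data, results)) / len(data)
    w -= 0.1 * grad                  # adjust the candidate Processing

print(round(w, 3))  # ~3.0: the discovered rule is "multiply by 3"
```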

Solving for the Data (D), which is far less common than the previous two solutions, we have equation 3: P(X) = R.  This means that if we have the Results and know the rules and procedures of the Processing, we can deduce the Data (D) that had to be used.  As far as I know, this equation has no classic application to what is typically thought of as business, but it has applications in scientific and historical endeavors.  It can be called the Historical paradigm.  In other words, what Data had to be processed according to the rules and procedures of the Processing to yield the observed Results?  This is called a “Data Driven Predictive Model” (DDPM).  You use the Data (which you do not know ahead of time), upon which the Processing acted to produce the Results, to inform your predictions.
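A minimal sketch of equation 3, with a hypothetical processing rule and search space: given the Processing and an observed Result, we search for the Data that could have produced it:

```python
# Equation 3, P(X) = R: the Processing and the Results are known; the Data is deduced.

from itertools import product

def process(data):                   # P: known rules and procedures (hypothetical)
    return sum(data)

observed_result = 7                  # R: the observed Results

# Deduce which Data (here, pairs of small integers) the Processing must have consumed.
candidates = [d for d in product(range(10), repeat=2) if process(d) == observed_result]
print(candidates)  # (0, 7), (1, 6), ... -- several data sets can fit the same result
```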

We manipulate the experimental independent variable while holding the control independent variable constant.  This is done in order to observe and measure how changes in the experimental variable (the one being changed) affect the dependent variable.  For example, in equation 1 we can change the Processing (P) while leaving the Data constant and observe how the dependent Results change.  This of course is very common.  A constant set of data will almost always produce different results if processed according to a different set of rules and procedures.

We can also change the Data (the D factor) in equation 1 to observe how that changes the Results while the Processing stays constant. This opens up many predictive possibilities like comparing the different Results when different Data sets are processed the same way by constant Processing.  
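For example, here is a sketch of this design with hypothetical data sets: the Processing (a simple average) is held constant while two different Data sets are run through it, and the change in the Results is compared:

```python
# Experimental design on equation 1: Processing constant, Data varied, Results observed.

def processing(rows):                 # P: held constant (the control variable)
    return sum(rows) / len(rows)

data_2020 = [10, 12, 11, 13]          # D: first experimental data set (made up)
data_2021 = [14, 15, 16, 15]          # D: second experimental data set (made up)

print(processing(data_2020))          # R for the first data set: 11.5
print(processing(data_2021))          # R for the second data set: 15.0
```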

The same experimental design structure can be applied to equations 2 and 3 as well.  This becomes interesting when the Results are held constant.  That is, we know what we want to see in the Results. The Data may be out of our control (that is, it may be supplied by others), and we want to know how we can Process that Data to give us the Results we want.  This scenario is, in fact, the basis of fraud.

This examination, of course, is an oversimplification, but I believe it captures the essential interdependency between Processing, Data and Results.  This interdependency follows the classic experimental model, where we have two independent variables (one experimental and one control) and one dependent variable which is subject to the manipulation of either of the other two.

Knowns and Unknowns

November 7, 2020

Are we really better off now than we used to be? In the old days we knew what we knew, and we knew there were things we didn’t know.  Today with the growth of data, machine learning and artificial intelligence, there are many more things we know and even more that we know we don’t know.  There are even things we could know but don’t bother to know, mainly because we don’t need to know them.  Someone else, or increasingly something else knows them for us, thus saving us the bother of knowing them.

We even discover from time to time that there are things we know that we didn’t know that we knew.  We are beginning to suspect that, more than all the things we know and know we don’t know, there are even more things we don’t know that we don’t know.  But we don’t know for sure that we don’t know them. If we knew this for sure (if we knew them) then it would just add to the things that we know that we don’t know and would no longer be unknown unknowns.[i]

It is kind of like the names of things.  Sometimes there are multiple names given to the same thing, and sometimes multiple things have the same name.  Of the two suboptimal situations, multiple things with the same name is always the more vexing.  This is usually an intra-language problem and not an inter-language problem, which makes it even more troubling.  You would expect a person speaking a different language to have a different name for something, but you might expect (wrongly, it seems) that people speaking the same language would have the same name or names for the same thing.  Worse, people of the same language often have different polymorphic descriptors referring to the same object.

What is even worse, a monomorphic descriptor can refer to a set of objects that can either overlap (like a Venn diagram[ii]) or be totally discontinuous, often without people even knowing that they don’t know.


[i] Conceptualized by Donald Rumsfeld February 2002.  Things we are neither aware of nor understand.

[ii] Conceived around 1880 by John Venn, according to Wikipedia.

Entropy and Consciousness, Part 2

September 25, 2020

This is a follow up to a previous article on consciousness and entropy:  https://birkdalecomputing.com/2020/05/03/entropy-and-consciousness/

We have entered the age of uber-prediction.  No, I don’t mean guessing when your hired ride will arrive, but an age when nearly everything is predicted.  Humans have, of course, always been predicting the outcomes of activities and events.  Predicting the future has been called an emergent behavior of intelligence. Our ancestors needed it to tell them the most likely places the alpha predator might be hiding, as well as which route the prey they were trying to catch would most likely take.

There is a natural feedback loop between predicted outcomes and expected outcomes. If a predicted outcome is perceived to be within a certain margin of error of an expected outcome, the prediction is said to have “worked”, and this positive assessment tends to reinforce the use of predictive behavior in the future.  In other words, it increases the occurrence of predictions and simultaneously increases the amount of data that can be used for future predictions.

In the past we did not have as much data, or as much computing power to process the data, as we have today. This acted as a constraint not only on the aspects of life that could be predicted (i.e. not enough data), but also on how quickly prediction worked with respect to the aspect of life being predicted (i.e. not enough processing power). Prediction now tends to “work” better than it ever did before because there is more data to use and faster ways to use it.  The success of prediction creates a virtuous cycle that reinforces the desire for more prediction.

The state of the world around us seems to be increasing in its predictability.  This leads me to believe that we must be pushing back more and more against entropy, which in Claude Shannon’s information theory is at its maximum when all outcomes are equally likely, a state of zero predictability. This means you need less new information to predict an outcome, because the amount of ambient information is constantly increasing.  Making information work for you often requires discovering predictions made by others. Consequently, you need to ask fewer questions to obtain a workable prediction. The less entropic a system, the less new information it contains.

Information is measured in the bit, a single unit of surprise. The more bits a system has, the more possible surprises it can have and the more entropic it is. So it follows that the more information there is in a system, the more units of surprise it potentially has and the less likely it is to behave as predicted.
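As a rough illustration of this point (the distributions below are made up), Shannon entropy in bits is highest for a uniform distribution, where every outcome is an equal surprise, and drops as outcomes become more predictable:

```python
# Shannon entropy in bits: H = -sum(p * log2(p)) over the outcome probabilities.

from math import log2

def entropy_bits(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: all outcomes equally likely
print(entropy_bits([0.9, 0.05, 0.03, 0.02]))   # ~0.62 bits: far more predictable
```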