Evil Geniuses

July 22, 2022

Ad page purveyors are getting trickier. If you are like me, you probably get dozens or even hundreds of unsolicited ad pages sent to your email every day. For many years I just ignored them, sometimes reading them and sometimes deleting them unread. However, deleting does not make the sender go away. I had often read that one should not click the unsubscribe link, as that only tells the sender, or more accurately his or her software, that a live person is at the other end of the connection.

Lately I’ve decided to just click “Unsubscribe” on the ad page… if I can find it… and see what happens. For the most part it works fine. Many senders really have stopped sending me their ads. Some have not, of course.

However, I’ve recently noticed a trickier thing they do. It appears that at least some percentage of the ad purveyors place the unsubscribe page outside the security umbrella of their HTTPS site (i.e. the unsubscribe page has only an HTTP URL). This means that if you have a “watch dog” internet security system installed on your computing device, it will warn you that the page you requested is not secure and ask whether you want to proceed or go back. In other words, some ad purveyors are now trying to scare users into staying subscribed.

If enough users, fearing what might happen if they venture into “unsecure” territory, choose to go back rather than click the “Unsubscribe” button, then the ad purveyors lose fewer subscribers than they would have if they had been just a little more transparent.

As always,

Have a rewarding compute

Starbucks Fantasy

July 15, 2022

She was a pretty girl.  Sweet actually.   A smile, bright eyes and a cheery acknowledgement of my presence across from her.  We made short term small talk and I asked her her name.  She told me and then, I could almost not believe it, she asked me mine.  My heart skipped a beat… she couldn’t really care what my name was, could she?

I told her… and she wrote it on the cup.

Two Interesting Interrelated Topics

March 27, 2022

I have been hearing more and more about two interesting interrelated topics recently.

First is an article from my NPR news feed from a short while back, about governments introducing virtualized versions of their “centralized” currencies. As you might imagine, China seems to be way out in front on this. It may be virtual and it may be crypto, but it is still centralized. That means it is sovereign currency, and a government controls this “e-currency”.


Second, and what might be even more interesting, is the rise of “decentralized” finance, or DeFi. Here, at least at first, a government does not necessarily control the currency. Here is a reference to a Coursera course produced by Duke University on this topic. This might be where the real action is going forward.


Don’t kill me and I won’t kill you

February 20, 2022

This is one of the best articles on COVID/Omicron I’ve read.  It makes lots of sense.  In order for the virus to infect more and more people, and thus increase its probability of surviving in a world full of enemies (e.g. humans), it has to evolve to the point where it is not so severe and people just live with it.  Kind of like the flu: people tolerate it and the virus continues to propagate.  Our probiotic stomach bacteria learned this lesson millions of years ago.  Don’t kill them and they won’t kill you!


Don’t Look Up

February 17, 2022

The title obviously is ripped off from the 2021 Oscar nominated movie, but I thought it fit. Here in the Earth’s gravity well we are exposed to everything “out there”, even our own stuff which we have put out there, like satellites and space stations.

Orbiting objects, especially the International Space Station, slowly accumulate an energy debt as they circle the earth. This is the result of their borrowing energy from their surroundings to keep from falling out of the sky. Sooner or later this debt gets too high for the object to carry, and it balances the books by falling down the gravity well back to earth. The fireball produced by that plunge is the removal of that debt.

This is kind of like the biologic death that happens to all of us. Sooner or later, local entropy, as it increases, whittles away at the bonds that hold together the matter of which we are made.  We break up into smaller pieces of matter until the critical mass required to stay alive is passed and we “die”.

Here’s an interesting read.


Because we are usually much closer to the bottom of the gravity well, we break apart much more slowly than the International Space Station will. Our “death” is usually far less fiery, and it is observed and recorded by far fewer input sensors.

Wisdomtimes – 6 Interrogatives – The Mystery of Five W’s and One H

February 14, 2022

The following URL is a link to an article with a different and amusing take on The 6 Interrogatives:

Who, What, Where, When, Why and How. Hope you enjoy it.


The merging of computer security and crypto-mining

January 7, 2022

I think it’s a great idea to merge cryptocurrency mining with other, more consumer-friendly software applications.  It just makes sense.  It’s slightly incongruous to merge crypto-mining with computer security software, but the main idea is the same.  Several reasons come to mind.  First, it is a way to get your software application to “pay for itself”.  In theory, you could use any money (crypto or conventional) that you make to offset the cost of the software package/platform.  The more successful you become at mining, the closer to zero the net cost of the extended package will become.  It could even produce a positive income stream in your favor, so that your software package becomes a profit (instead of cost) center for you.  Second, it is an avenue to make cryptocurrency mining more democratized and within reach of less and less sophisticated users, which after all has been the trajectory of personal computing for the past 50 years anyway.  And third, it is a strategy for third-party software companies to stay in the game and not be relegated to the backwaters of the computing world, by making themselves more relevant to modern computing trends.

Crypto is reshaping the world economy

August 14, 2021

Crypto is reshaping the world economy, 50 years after Nixon ended the dollar’s peg to gold. Here’s how some are playing it https://a.msn.com/r/2/AANjEqk?m=en-us&a=0

Regression to the Mean

May 20, 2021

Jack Bogle’s ghost warns about 401(k)s https://a.msn.com/r/2/BB1gUrmr?m=en-us&a=0  

As the article says “It’s worth taking a moment to reflect on just how good things have been for investors for a decade.”  But, “Enjoy making 12% a year, but don’t get used to it.”  It’s not going to last forever.  

The Data Processing Equation

April 29, 2021

The equation P(D) = R means the Processing of Data produces Results, where P is the Processing, D is the Data, and R is the Results.  Processing is a function that acts upon the Data, producing the Results.  This can be expressed as “P of D yields R”.

Algebraically we can solve for any one of the variables (P, D or R), designating it as X, which then becomes the dependent variable, as long as we know the value of the other two.  The other two are the independent variables: one is the experimental variable and the other is the control variable, or constant.

Solving for the Results (R) we have equation 1: P(D) = X.  This means that if we know the rules and procedures of the Processing (P) and we have the Data (D), we can calculate the Results (R).  This is the classic Business Intelligence (BI) paradigm.  In a classic star schema, think of the fact and dimension tables as containing the Data, and the various analyses and reports as the Processing, which produce the Results that are then used as a predictive model going forward.  This can be called a “Results Driven Predictive Model” (RDPM) because the predictive power of the model is derived from the Results, the R factor of our equation.  You use the Results (which you do not know ahead of time), derived from the interaction of Data and Processing, to inform your predictions.
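As a minimal sketch of equation 1 (the sales figures and the aggregation rule here are invented for illustration), the BI case applies a known Processing function to known Data to produce the Results:

```python
# Equation 1, P(D) = X: known Processing applied to known Data
# yields the Results, which we did not know ahead of time.

def processing(data):
    """The known rules and procedures: total and average of the measures."""
    return {"total": sum(data), "average": sum(data) / len(data)}

data = [120, 95, 143, 110]      # the known Data (e.g. daily sales figures)
results = processing(data)      # the Results, calculated from P and D

print(results)  # {'total': 468, 'average': 117.0}
```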

Solving for the Processing (P) we have equation 2: X(D) = R.  This means that if we have the Results (R) and the Data (D), we can discover the rules and procedures of the Processing (P) that was applied to the Data to produce those Results.  This is the classic Machine Learning paradigm.  Here, by progressively measuring how close each iteration of processing gets you to the Results (which you already know), given the Data, you can produce a predictive model going forward.  This can be called a “Processing Driven Predictive Model” (PDPM).  You use the rules and procedures of the Processing (which you do not know ahead of time) that produced the Results, given the Data, to inform your predictions.
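A minimal sketch of equation 2, with an invented hidden rule: here the unknown Processing is “multiply by 3”, and the loop recovers it by repeatedly measuring how close each guess gets to the known Results, in the iterative spirit of machine learning:

```python
# Equation 2, X(D) = R: the Data and the Results are known; we search
# for the Processing that maps one onto the other.

data = [1.0, 2.0, 3.0, 4.0]
results = [3.0, 6.0, 9.0, 12.0]      # produced by the unknown Processing

w = 0.0                              # initial guess at the rule's parameter
for _ in range(200):
    # gradient of the mean squared error between w*d and the known Results
    grad = sum(2 * (w * d - r) * d for d, r in zip(data, results)) / len(data)
    w -= 0.05 * grad                 # step toward a better guess

print(round(w, 3))  # 3.0 -- the recovered Processing is "multiply by 3"
```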

Solving for the Data (D), which is far less common than the previous two solutions, we have equation 3: P(X) = R.  This means that if we have the Results and know the rules and procedures of the Processing, we can deduce the Data (D) that had to be used.  As far as I know this equation has no classic application to what is typically thought of as business, but it does have application to scientific and historical endeavors.  It can be called the Historical paradigm.  In other words, what Data had to be processed, according to the rules and procedures of the Processing, to yield the observed Results?  This can be called a “Data Driven Predictive Model” (DDPM).  You use the Data (which you do not know ahead of time), upon which the Processing acted to produce the Results, to inform your predictions.
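A minimal sketch of equation 3, with an invented rule (double, then add 10): because this Processing happens to be invertible, the Data that must have gone in can be deduced exactly from the observed Results:

```python
# Equation 3, P(X) = R: the Processing rule and the Results are known;
# we deduce the Data that had to be used.

def processing(x):
    return 2 * x + 10                # the known rule

results = [30, 16, 52]               # the observed Results

# invert the rule step by step: subtract 10, then halve
recovered_data = [(r - 10) / 2 for r in results]

print(recovered_data)  # [10.0, 3.0, 21.0]
```

Real scientific or historical cases are rarely this clean; when the Processing is not invertible, the Data can only be narrowed down, not deduced uniquely.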

We manipulate the experimental independent variable while holding the control independent variable constant.  This is done in order to observe and measure how changes in the experimental variable (the one being changed) affect the dependent variable.  For example, in equation 1 we can change the Processing (P) while leaving the Data constant and observe how the dependent Results change.  This of course is very common.  A constant set of data will almost always produce different results if processed according to a different set of rules and procedures.

We can also change the Data (the D factor) in equation 1 to observe how that changes the Results while the Processing stays constant. This opens up many predictive possibilities like comparing the different Results when different Data sets are processed the same way by constant Processing.  
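The two manipulations above can be sketched side by side (all the numbers and rules here are invented for illustration): first the Processing changes while the Data stays constant, then the Data changes while the Processing stays constant:

```python
# Two experimental manipulations of equation 1, P(D) = R.

def average(data):                   # Processing rule 1
    return sum(data) / len(data)

def spread(data):                    # Processing rule 2
    return max(data) - min(data)

data_a = [4, 8, 15, 16]
data_b = [10, 20, 30, 40]

# Constant Data, different Processing: the Results change with the rule.
print(average(data_a), spread(data_a))   # 10.75 12

# Constant Processing, different Data: the Results change with the data.
print(average(data_a), average(data_b))  # 10.75 25.0
```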

The same experimental design structure can be applied to equations 2 and 3 as well.  This becomes interesting when the Results are held constant.  That is, we know what we want to see in the Results.  The Data may be out of our control (that is, it may be supplied by others), and we want to know how we can Process that Data to give us the Results we want.  This scenario is, in fact, the basis of fraud.

This examination, of course, is an oversimplification, but I believe it captures the essential interdependency between Processing, Data and Results.  This interdependency follows the classic experimental model, where we have two independent variables (one experimental and one control) and one dependent variable which is subject to the manipulation of either of the other two.