Information Entropy

Is there a maximum amount of data about any given subject above which the incremental value of any additional data begins to approach zero? Even more starkly, is there a point where more data about a given subject may actually begin to have a negative effect, in that it decreases the amount of information about the subject?

I don’t mean that newer data may prove that old data about the subject is no longer accurate and therefore render the old data out of date, in which case the old data would still exist but no longer be relevant. I’m talking about a situation where the informational content, i.e. the payload of the data, will actually decrease. This would be a situation where we know less about a subject at some point in the future than we knew in the past, based on all the data there is on the subject. I think it would have to have something to do with context, with the passage of time, and with the associations between units of data about the subject and data about other subjects. The more connections, the more information a body of data has.

This is a tricky speculation. I mean, is it possible to actually know less at some point than was previously known? Not for just a single person, as sometimes happens as we age and simply forget the information we previously could recall about a subject. I am talking about the accumulated knowledge about a subject. It is kind of like a subject becoming simpler over time rather than more complex, which is pretty much the opposite of observed reality.

In fact, this just may be what happens as we approach absolute and universal entropy. I am not a physicist, but why would this not be the case? There would be two aspects to this: not only would universal entropy eliminate any differences between things, but the differences between parties, places, and times would cease to exist as well. Not only would there be less information on all subjects, but there would be fewer and fewer subjects to have information about. Nor would there be anyone to have and be responsible for information, even if it did still exist.


2 Responses to “Information Entropy”

  1. Entropy and Consciousness | Birkdale Computing Says:

    […] An earlier post by me about entropy and information loss can be found at: […]

  2. Daniel Kurtz Says:

    There are inklings here of interesting observations. The information content, or entropy, of a specific observed signal is determined by the probability of the value of the observation relative to the other possible values that it could have taken.

    More precisely, the “probability of possible values” is actually itself a time-varying function of the observer… That is, it is the observer’s own internal model of the observed, in the form of expected probability over a set of possible observation values.

    Further, the observer uses each observation to adjust its internal model – that is, it adjusts expectations about future observations based on prior observations. We can call this learning.

    If the “learning algorithm” is able to tune the expected model such that all future observations are exactly as expected, and no additional “learning” is performed, then the information content of these observations, for this observer, is indeed zero.
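The observer-and-model idea above can be sketched in a few lines of Python. This is purely illustrative: the frequency model, the Laplace-smoothed counts, and the always-heads source are my assumptions, not anything from the comment. The information an observation carries is its surprisal, -log2 of the probability the observer's current model assigned to it, and as the model learns, that surprisal shrinks.

```python
import math

def surprisal(p):
    """Information content, in bits, of observing an event the model
    assigned probability p."""
    return -math.log2(p)

# Hypothetical observer: models a binary signal with Laplace-smoothed
# frequency counts, starting from a uniform (50/50) expectation.
counts = {"heads": 1, "tails": 1}
bits_per_observation = []

for _ in range(20):
    obs = "heads"  # the source, it turns out, always emits heads
    p = counts[obs] / sum(counts.values())
    bits_per_observation.append(surprisal(p))
    counts[obs] += 1  # "learning": adjust expectations from the observation
```

The first observation carries exactly 1 bit (the model expected 50/50); by the twentieth, the model has nearly converged and each new observation carries only a small fraction of a bit, approaching the zero-information limit the comment describes.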

    Or another one:

    As you allude to: Phenomena may generate lower entropy signals over time.

    Consider a coin that starts initially fair. Each flip is 50/50 heads or tails. Such a coin has exactly 1 bit of entropy; this is the maximal entropy for a coin.

    Now imagine that the head side of this coin was made of slightly softer stuff, and every time it lands on heads, a bit rubs off, such that over time the probability slowly shifts away from 50/50. The entropy of the coin will also necessarily decrease – each flip, even though still “random”, will have a bias, and therefore will actually give less than 1 bit of information.

    Taken to the extreme, the coin could reach an “always heads” state, where each flip provides 0 bits of information.

    … But this is actually a very contrived example. In fact, the second law of thermodynamics suggests the opposite kind of scenario, with entropy always increasing in closed systems.

    That is, over time local concentrations tend to disperse, distributions tend towards the uniform, and entropy approaches maximum – i.e., the fair coin flip, where all structure is gone, everything that is possible is equally likely, and each observation provides maximum information because all is completely random and impossible to model. In other words, totally unpredictable, unlearnable chaos.
