decreasing entropy



I was thinking about the director of this previous post and how she could use a mechanical device to trigger the fire alarm.



[picture of a cylinder]



A long and narrow cylinder has a slowly moving particle A on one side and a particle B at rest on the other (*).
She sets everything up on Sunday and simply waits until she hears the 'click' of the two particles colliding, which then triggers the fire alarm.

Here is my problem: it would seem that the entropy of this closed system decreases until we hear the 'click'. The entropy due to the unknown location of particle B is proportional to ln(V), but the volume V accessible to B decreases with time as A moves from left to right.

Notice that she can make the cylinder (and thus the initial V) as large as she wants, and she could use more than one particle on B's side, so that the initial entropy kN ln(V) can be arbitrarily large (**). And if the velocity of particle A is very small, this decrease can take a long time ...
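A toy illustration of the puzzle (all numbers, the choice k = 1, and the linear shrinking of V are my assumptions): the entropy kN ln(V) attributed to B's unknown position drops steadily as A sweeps through the cylinder.

```python
import math

k = 1.0      # Boltzmann constant in natural units (assumption)
N = 10       # number of particles on B's side (assumption)
V0 = 100.0   # initial volume accessible to B (arbitrary units)
v = 1.0      # volume swept by A per unit time (assumption)

def entropy(t):
    """Entropy k*N*ln(V) of B's unknown position at time t,
    with the accessible volume shrinking linearly as A advances."""
    V = V0 - v * t
    return k * N * math.log(V)

for t in [0, 25, 50, 75, 90]:
    print(f"t = {t:3}: S = {entropy(t):.2f}")
```

The numbers just make the worry concrete: nothing has yet happened to the system, but S decreases with every silent moment.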



added 1 day later: I think I (finally!) figured out where the problem is with this puzzle. See the comments if you want to know my solution (instead of finding it out for yourself).



(*) While the initial position and momentum of A are well known (the
particles are heavy enough that we don't have to worry about quantum effects), the position
of particle B is unknown (but we do know that it is at rest).



(**) Of course the effort to set up the device will increase entropy by an even larger quantity, but all this occurs already on Sunday.



added later: I am no longer sure about that. She might have simply picked a cylinder of unknown length, shorter than 5 m. The (right) end of that cylinder then plays the role of particle B. Now she sets up particle A on the left side to move with a (constant) speed of 1 m/day, and when A hits the other end (= particle B) it triggers the alarm (at which point she also learns the length of the cylinder).

I don't see how the act of picking a cylinder of unknown length increased the entropy on Sunday.
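A quick numerical restatement of this variant (the uniform prior over the length is my assumption): if no click has been heard by day t, the length must lie in the interval (t, 5), so the entropy of our uncertainty about it goes as ln(5 - t) and decreases with every silent day.

```python
import math

L_max = 5.0   # the cylinder is somewhere below 5 m long
speed = 1.0   # particle A moves 1 m/day

def length_entropy(t):
    """Differential entropy (in nats) of the unknown length, given a
    uniform prior on (0, 5) and no click heard up to day t: the
    length is then uniform on (t, 5)."""
    return math.log(L_max - speed * t)

for t in [0, 1, 2, 3, 4]:
    print(f"day {t}: interval ({t}, 5) m, entropy {length_entropy(t):+.3f} nats")
```

So the same steady entropy decrease appears here, with nothing more exotic than a cylinder of unknown length.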



reading



I just came across the book Information, Physics and Computation by Marc Mézard and Andrea Montanari, which was published just recently. The draft is still available as pdf files here. Now you know what I am currently reading.

And there is also this paper about
MAP estimation of Hidden Markov processes; I mention it as a follow-up to earlier posts.

"We reduce the MAP estimation to the energy minimization of an appropriately defined Ising spin model..." Sounds interesting.


time and uncertainty



I am sure you know this one already, but ...



The director announces that there will be a fire drill next week. In order to make it more realistic, the day of the drill will be a surprise.

Here is the problem: the drill cannot be on Friday (the last day of the work week), because everybody would know on Friday morning that if the drill has not happened yet it has to be that day, so it would not be a surprise.

But for the same reason it cannot be on Thursday: everybody knows it cannot be on Friday, so on Thursday morning, knowing that it has not happened yet, one would have to conclude it must be that day, and it would not be a surprise. And so on.
Therefore the fire drill cannot be on any day.



But on Tuesday the alarm bell rings and of course nobody knew it would be that day...



C.F. v. Weizsäcker discussed the puzzle in his book 'Aufbau der Physik', assuming that it tells us something about the nature of time.

According to Wikipedia no consensus on its correct resolution has yet been established despite significant academic interest (*).



Maybe we should try to assign Bayesian probabilities. Obviously, we have p(Fri) = 0, but then it follows that ...
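A sketch of where that line of reasoning leads (the uniform prior over Mon-Fri is my assumption): set p(Fri) = 0, then condition each morning on 'no drill yet'; the last remaining day becomes certain, which disqualifies it as a surprise in turn, until nothing is left.

```python
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
p = {d: 1 / 5 for d in days}   # uniform prior over the week (assumption)

# Backward induction: if, on the morning of day i, the drill is certain
# given that it has not happened yet, it cannot be a surprise -- so the
# announcement rules that day out, i.e. p(day i) = 0.
for i in reversed(range(len(days))):
    not_yet = sum(p[days[j]] for j in range(i, len(days)))
    cond = p[days[i]] / not_yet if not_yet > 0 else 0.0
    if cond == 1.0:
        p[days[i]] = 0.0

print(p)   # every day ends up with probability 0
```

Of course a probability distribution that sums to zero is no distribution at all, which is one way to see that the premises of the argument cannot all be kept.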



(*) Notice the citation of the famous remark made by Defense Secretary Donald Rumsfeld!


tbfkatbfka...tbfkaTSM



I assume that you pay attention and noticed that several weeks ago this blog changed its name to 'the blog formerly known as The Statistical Mechanic', which one may abbreviate as tbfkaTSM.

Yesterday it occurred to me that it is time to change the name once more, this time to 'the blog formerly known as the blog formerly known as The Statistical Mechanic', or short tbfkatbfkaTSM. Of course, thinking ahead, it was clear that a more future-proof name would be tbfkatbfka...tbfkaTSM.

But then it dawned on me that tbfkatbfka...tbfkaTSM is actually equivalent to tbfkaTSM in a strange way. And so I had an opportunity to appreciate the axioms of logic, which allow one to compress unnecessarily long statements.
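The 'compression' can be stated as a one-line rewrite rule (the function name is mine): any finite stack of 'tbfka' prefixes collapses to a single one, much as iterated modalities collapse in S5.

```python
def compress(name: str) -> str:
    """Collapse repeated 'tbfka' prefixes down to a single one,
    analogous to S5 collapsing iterated modal operators."""
    while name.startswith("tbfkatbfka"):
        name = name[len("tbfka"):]
    return name

print(compress("tbfkatbfkatbfkaTSM"))   # -> tbfkaTSM
```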

Like the axiom S5 of modal logic. I only mention it because Alvin Plantinga used it in his ontological proof, which is a variant of Anselm's proof.


brains



In yet another post Lubos Motl writes about Boltzmann brains and makes the following argument.



"The Boltzmann Brain hypotheses should already be expo-exponentially suppressed relatively to sane hypotheses. Since the people began to think about the world, they have made so many observations of the ordered real world that had to look like miracles from the Boltzmann Brain viewpoint that whatever the Boltzmann Brain prior were, they were already suppressed essentially to zero."



You have 10 sec to figure out what is wrong with this argument.