the Bayesian mathematician


I just came across this blog post, which proposes some sort of Bayesian mathematics: "[..] Suppose you conjecture something as part of your research program. [..] you could use Bayes' theorem to give two estimates on the plausibility of your conjecture being true. One is giving the most generous probabilities given the evidence, and the other is giving the least generous. You’ll get some sort of Bayesian confidence interval of the probability of the conjecture being true."

Obviously, I started thinking about counter-examples to this proposal. As we all know from Kurt Gödel and the pop-sci literature, mathematics cannot be automated because of the halting problem; so a natural starting point would be a Turing machine with n possible internal states. We assume that n is large enough, and the machine complicated enough, that our Bayesian mathematician cannot figure out right away whether it will halt or run forever.
So she assigns a probability p(H) to the conjecture that the machine will eventually halt and begins with an uninformative prior p(H) = 1/2. Then the machine starts, reading from and writing to its infinite tape, and our Bayesian mathematician updates p(H). For a long while all she sees is that the machine runs on and on, so p(H) decreases; but suddenly, after L steps, the machine hits the stop state. Did p(H) help the Bayesian mathematician anticipate this event?
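To make the updating concrete, here is a minimal sketch, under an assumption of my own that is not part of the original proposal: the observer models halting machines as halting at a geometrically distributed step with some per-step rate q. Under that likelihood, every step without a halt pushes p(H) down, so the halt at step L always arrives as a surprise:

```python
# Toy model of the Bayesian mathematician watching the machine run.
# Assumption (mine, for illustration only): if the machine halts at all,
# its halting time is geometric with rate q, so
#   P(still running after t steps | H)     = (1 - q)**t
#   P(still running after t steps | not H) = 1

def p_halt_posterior(t, prior=0.5, q=0.01):
    """Posterior p(H | the machine has not halted in the first t steps)."""
    likelihood_h = (1 - q) ** t      # a halting machine survived t steps
    likelihood_not_h = 1.0           # a non-halting machine always survives
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

for t in [0, 10, 100, 1000]:
    print(t, round(p_halt_posterior(t), 4))
# p(H) drifts monotonically toward 0 the longer the machine runs, so an
# observer with this likelihood never anticipates the halt at step L.
```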

So an interesting question would e.g. ask this: if the Bayesian mathematician is herself equivalent to a Turing machine with N internal states, can she use Bayesian updating so that p(H) increases and indeed reaches 1 before the machine stops after L steps, while at the same time being sure that p(H) will decrease and reach 0 if the machine never stops? I guess she could try to detect loops and other patterns, as long as they are not too complicated to be handled by an N-state Turing machine...
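On the loop-detection idea, one certificate of non-halting is easy to state: a deterministic machine that ever revisits an exact configuration (state, head position, tape contents) is stuck in a cycle and can never halt. A minimal sketch (the machine encoding here is my own, chosen for brevity):

```python
def run_with_loop_detection(delta, start_state, halt_state, max_steps):
    """Run a Turing machine and return 'halts', 'loops', or 'unknown'.

    delta maps (state, read_symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1}; the tape is a dict of non-blank cells, blank = 0.
    Revisiting an exact configuration proves a deterministic machine is
    in a cycle and will never halt.
    """
    tape, head, state = {}, 0, start_state
    seen = set()
    for _ in range(max_steps):
        if state == halt_state:
            return "halts"
        config = (state, head, tuple(sorted(tape.items())))
        if config in seen:
            return "loops"              # certified non-halting
        seen.add(config)
        state, written, move = delta[(state, tape.get(head, 0))]
        if written == 0:
            tape.pop(head, None)        # store only non-blank cells
        else:
            tape[head] = written
        head += move
    return "unknown"                    # step budget exhausted, no verdict

# A tiny machine that shuttles back and forth without ever halting:
delta = {("A", 0): ("B", 0, 1), ("B", 0): ("A", 0, -1)}
print(run_with_loop_detection(delta, "A", "HALT", 1000))  # -> loops
```

Note that the seen set grows without bound, which is exactly where the N-state restriction bites: a bounded observer cannot remember arbitrarily many configurations, so such a detector only works for cycles that appear within her memory.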
Well, I am pretty sure that if N is fixed and n is allowed to be arbitrarily large, this would violate the spirit as well as the letter of the halting problem. But what about n < N? Do we know anything about that?


added later: After thinking about this some more, I would like to clarify a few things: the detection of 'endless loops' or other patterns indicating that the machine can never halt has nothing to do with Bayesian updating itself. The task of finding such algorithms and pattern detectors is equivalent to proving a conjecture the old-fashioned way.
Also, if one examines all Turing machines with n smaller than some given number, starting them on a finite sample of tapes and running them for fewer than L steps, one may find certain frequencies of halting and non-halting behavior, and perhaps one can extrapolate to some extent to the case of much larger L and derive some bounds for p(H) that way. But again, this would have nothing to do with Bayesian updating; it would perhaps be a generalization of Chaitin's reasoning.
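As a toy version of such a frequency experiment (my own setup: random transition tables and the blank starting tape only, rather than a systematic enumeration over machines and tapes):

```python
import random

def random_machine(n_states, n_symbols=2):
    """A random transition table; 'HALT' can appear as a target state."""
    states = list(range(n_states))
    targets = states + ["HALT"]
    return {(s, a): (random.choice(targets),
                     random.randrange(n_symbols),
                     random.choice([-1, 1]))
            for s in states for a in range(n_symbols)}

def halts_within(delta, L):
    """Simulate on the blank tape for at most L steps."""
    tape, head, state = {}, 0, 0
    for _ in range(L):
        if state == "HALT":
            return True
        state, written, move = delta[(state, tape.get(head, 0))]
        tape[head] = written
        head += move
    return state == "HALT"

def halting_frequency(n_states, L, trials=2000):
    hits = sum(halts_within(random_machine(n_states), L)
               for _ in range(trials))
    return hits / trials

for L in [10, 100, 1000]:
    print(L, halting_frequency(n_states=4, L=L))
# The frequencies stabilize as L grows, but nothing here certifies the
# stragglers: a machine that 'never seems to halt' may still halt later.
```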
The more I think about it, the more it seems that Bayesian updating itself does not help at all with this problem...

added even later: Meanwhile I found some papers related to this question: 1, 2, 3. I noticed that "The major drawback of the AIXI model is that it is uncomputable".

added much later: Scott A. uses a probability argument about complexity theory in a comment on his blog (see also the following comments). But I don't find his argument very convincing.

ten years after


Cosma reminds us that a certain sloth is already ten years old; but as a true statistician he also gives us an error bar with this date ...
However, his list of the twenty best pieces misses the post I consider the best. After all, the title of the paper it references inspired the name of this blog...
Well, congratulations, and I am looking forward to the next ten years of slothing ...