Description: This book does an impressive job of replacing ad hoc rules of statistics with rigorous logic, but it is difficult enough to fully understand that most people will only use small parts of it.
He emphasizes that probability theory consists of logical reasoning about the imperfect information we have, and repeatedly rants against the belief that probabilities or randomness represent features of nature that exist independent of our knowledge. Even something seemingly simple such as a toss of an ordinary coin cannot have some objectively fixed frequency unless concepts such as "toss" are specified in unreasonable detail. What we think of as randomness is best thought of as a procedure for generating results of which we are ignorant.
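A toy sketch of that last point (my own illustration, using a crudely simplified physics model rather than anything from the book): a toss that is fully determined by its initial conditions looks "random" only to an observer who doesn't know those conditions.

```python
import random

def toss(omega, t):
    """Toy deterministic coin toss: the outcome is fixed entirely by the
    initial spin rate omega (rad/s) and flight time t (s); heads if the
    coin completes an even number of half-turns."""
    half_turns = int(omega * t / 3.14159265)
    return "H" if half_turns % 2 == 0 else "T"

# An observer ignorant of the initial conditions sees something close to 50/50:
tosses = [toss(random.uniform(20, 80), random.uniform(0.3, 0.7))
          for _ in range(100000)]
print(sum(r == "H" for r in tosses) / len(tosses))   # roughly 0.5

# A tosser who controls the initial conditions gets the same face every time:
tosses = [toss(40.0, 0.5) for _ in range(100000)]
print(sum(r == "H" for r in tosses) / len(tosses))   # 1.0 with these settings
```

The "frequency of heads" here is not a property of the coin; it is a property of how much the person generating the tosses knows and controls.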
He derives his methods from a few simple axioms that seem close to common sense and don't look as if they were designed specifically to produce statistical rules.
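Roughly (in standard notation, not quoted from the book), what those axioms force are the familiar product and sum rules, from which Bayes' theorem follows:

\[
P(AB \mid C) = P(A \mid BC)\,P(B \mid C), \qquad
P(A \mid C) + P(\bar{A} \mid C) = 1,
\]
\[
P(A \mid BC) = \frac{P(A \mid C)\,P(B \mid AC)}{P(B \mid C)}.
\]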
He is careful to advocate Bayesian methods for an idealized robot, and avoids addressing whether fallible humans should sometimes do something else. In particular, his axiom that the robot should never ignore information would probably reduce the quality of human reasoning in some cases where there's too much information for humans to handle well.
I'm convinced that when his methods can be applied properly and produce results that differ from frequentist ones, we should reject the frequentist results. But it's not obvious how easy it is to apply his methods properly, nor is it obvious whether he has accurately represented the beliefs of frequentists (who, I suspect, often don't think clearly enough about the issues he raises to be pinned down).
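A standard illustration of such a divergence (my example, not the review's; it assumes SciPy is available) is the optional-stopping case: the same data, 9 heads and 3 tails, gives different frequentist p-values depending on whether the number of tosses was fixed in advance or the experimenter tossed until the third tail, while the Bayesian posterior depends only on the data actually observed.

```python
from scipy import stats

heads, tails = 9, 3

# Frequentist test of H0: the coin is fair, against p(heads) > 0.5.
# Design 1: the 12 tosses were fixed in advance.
p_fixed_n = stats.binom.sf(heads - 1, heads + tails, 0.5)   # ~0.073
# Design 2: toss until the 3rd tail appears ("success" = tail).
p_stop_rule = stats.nbinom.sf(heads - 1, tails, 0.5)        # ~0.033
print(p_fixed_n, p_stop_rule)  # same data, opposite verdicts at the 0.05 level

# Bayesian posterior with a uniform prior is Beta(heads+1, tails+1) under
# either design, because the likelihood is proportional to p^9 (1-p)^3 in both.
posterior = stats.beta(heads + 1, tails + 1)
print(posterior.sf(0.5))       # P(coin is biased toward heads | data), ~0.954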
He does a good job of clarifying the concept of "induction", showing that we shouldn't try to make it refer to some simple and clearly specified rule, but rather that we should think of it as a large set of rules for logical reasoning, much like the concept of "science".