Abstract
Bayesians since Savage (1972) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgments will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour (1980), Earman (1992) and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting that the theorems are too weak to underwrite notions of scientific objectivity and intersubjective agreement. I investigate, in a very general framework, the conditions under which updated probabilities actually converge to a settled opinion and the conditions under which the updated probabilities of two agents actually converge to the same settled opinion. I call this mode of convergence deterministic, and derive results that extend those found in Huttegger (2015b). The results here lead to a simple characterization of deterministic convergence for Bayesian learners and give rise to an interesting argument for what I call strong regularity, the view that probabilities of non-empty events should be bounded away from zero.
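The following schematic display is an illustrative gloss on the contrast drawn above, not the paper's own formulation; the symbols $P$, $H$, $E_n$, and $\varepsilon$ are notation I introduce here for exposition. The first line expresses the kind of convergence delivered by standard asymptotic results (convergence with prior probability one, i.e. by the agent's own lights); the second expresses deterministic convergence as described in the abstract; the third expresses strong regularity.
\[
P\Bigl(\bigl\{\omega : \lim_{n\to\infty} P\bigl(H \mid E_1(\omega), \dots, E_n(\omega)\bigr)\ \text{exists}\bigr\}\Bigr) = 1
\quad\text{(almost-sure convergence)},
\]
\[
\lim_{n\to\infty} P(H \mid E_1, \dots, E_n)\ \text{exists for every admissible evidence sequence}
\quad\text{(deterministic convergence)},
\]
\[
P(A) \ge \varepsilon > 0\ \text{for every non-empty event } A
\quad\text{(strong regularity)}.
\]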