This commonality across these very different statistical procedures suggests to me that thinking on parallel tracks is an important and fundamental property of statistics. Perhaps, rather than trying to systematize all statistical learning into a single inferential framework (whether it be Neyman-Pearson hypothesis testing, Bayesian inference over graphical models, or some purely predictive behavioristic approach), we would be better off embracing our twoishness.
You can read the rest here, and it's worth it. Though I have to say my immediate thought is, "...well, if going from one model to two is good, why not add a third for robustness?" Then again, perhaps that's just another way of thinking about what we normally consider to be "sub-versions" of a model, e.g., deciding to cluster one's standard errors.