Recent events have led psychologists to acknowledge that the inherent uncertainty encapsulated in an inductive science is amplified by problematic research practices. This article provides a practical introduction to recently developed statistical tools that can be used to deal with these uncertainties when performing and evaluating research. In Part 1, we discuss the importance of accurate and stable effect size estimates, and how to design studies whose effect size estimates reach a corridor of stability. In Part 2, we explain how, given uncertain effect size estimates, well-powered studies can be designed using sequential analyses. In Part 3, we explain what p-values convey about the likelihood that an effect is true, illustrate how the v statistic can be used to evaluate the accuracy of individual studies, and show how the evidential value of multiple studies can be examined using a p-curve analysis. We end by discussing the consequences of adopting our recommendations: a reduced quantity, but increased quality, of research output. We hope the practical recommendations discussed in this article will provide researchers with the tools to make important steps towards a psychological science that allows us to differentiate between all possible truths based on their likelihood.
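The intuition behind a corridor of stability can be conveyed with a small simulation (this sketch is illustrative only and not taken from the article; the function names, the true correlation of .3, and the checkpoint sample sizes are assumptions): a sample correlation fluctuates widely at small n and settles into an increasingly narrow band around the true value as observations accumulate.

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def simulate_estimates(true_rho, checkpoints, seed=1):
    """Draw bivariate-normal pairs one at a time and record the
    running correlation estimate at each checkpoint sample size."""
    rng = random.Random(seed)
    xs, ys = [], []
    estimates = {}
    for n in range(1, max(checkpoints) + 1):
        x = rng.gauss(0, 1)
        # Construct y so that corr(x, y) equals true_rho in the population.
        y = true_rho * x + math.sqrt(1 - true_rho ** 2) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
        if n in checkpoints:
            estimates[n] = pearson_r(xs, ys)
    return estimates

# Watch the estimate of a true correlation of .3 stabilize as n grows.
for n, r in sorted(simulate_estimates(0.3, {20, 100, 500, 2500}).items()):
    print(f"n={n:5d}  r={r:+.3f}")
```

Early checkpoints can miss the true value badly, while later ones hover near it; designing for a target sample size at which the estimate stays within an acceptable band is the spirit of the stability argument sketched above.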