There is no such thing as "statistical inference"
In recent discussions of the replication crisis, statistical inference looms large: claims about the misuse of classical significance testing, lax standards of statistical evidence, non-replication (defined in a variety of statistical ways), and meta-analysis --- statistical inference from statistical inferences --- all involve statistical inference in some way. This is not surprising, since statistical inference has been one of scientists' main tools since Fisher popularized it in the early 20th century. Arguments over the "right" way to approach statistical inference give it outsized importance.

I argue that, in fact, we cannot make statistical inferences except in trivial cases, and that all meaningful scientific inferences are non-statistical in nature. There is no unique, or even obvious, mapping between a statistical "inference" and a scientific one; unfortunately, scientists have largely offloaded responsibility for their scientific inferences onto statistical theories that were not meant for the job. This point is not really new (Fisher made it in attacking Neyman and Pearson in 1955), and researchers often pay lip service to it when convenient (e.g., by quoting Box, 1976: "All models are wrong...").

Statistical inference should be regarded as a mechanism for generating useful toys (Hennig, 2020) that introduce scepticism into scientific inferences, and no more. This does not mean that inferential statistics are merely descriptive statistics, but the primacy of inferential statistics in the interpretation of scientific data must be questioned (see also Amrhein, Trafimow, & Greenland, 2019).