5.5 Take-Home Points

  • True effect size is the difference between the hypothesized value and the true (population) value.

  • Observed effect size is the difference between the hypothesized value and the observed (sample) value.

  • Effect size relates to practical relevance. Effect sizes are expressed as (standardized) mean differences, regression coefficients, and measures of association such as the correlation coefficient, \(R^2\), and \(\eta^2\).

  • Statistical significance of a test depends on the observed effect size and the sample size. Because sample size affects statistical significance, it is wrong to use significance or a p value as an indication of effect size (the first code sketch after this list illustrates this).

  • If we do not reject a null hypothesis, this does not mean that the null hypothesis is true. We may be making a Type II error: not rejecting a false null hypothesis. Note that a researcher can make this error only when the null hypothesis is not rejected.

  • The probability of making a Type II error is commonly denoted with the Greek letter beta (\(\beta\)).

  • The probability of not making a Type II error is the power of the test: power \(= 1 - \beta\).

  • The power of a test tells us the probability that we reject the null hypothesis if there is an effect of a particular size in the population. The larger this probability, the more confident we are that we are not overlooking an effect when we do not reject the null hypothesis (the second sketch after this list estimates power by simulation).

  • A practical way to increase the power of a test is to draw a larger sample (the third sketch after this list computes the sample size needed for a target power).
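
To make the point about significance and sample size concrete, here is a minimal sketch (in Python, using scipy) that holds the standardized observed effect size fixed at d = 0.30, an arbitrary value chosen for illustration, and recomputes the two-sided p value of a one-sample t test for increasing sample sizes, using the identity \(t = d\sqrt{n}\):

```python
# A minimal sketch: the same observed effect size yields different
# p values at different sample sizes. For a one-sample t test with
# standardized observed effect d (sample mean minus hypothesized mean,
# divided by the sample SD), the test statistic is t = d * sqrt(n).
import numpy as np
from scipy import stats

d = 0.30  # standardized observed effect size (illustrative choice)

for n in (10, 30, 100, 300):
    t = d * np.sqrt(n)               # one-sample t statistic
    p = 2 * stats.t.sf(t, df=n - 1)  # two-sided p value
    print(f"n = {n:4d}: t = {t:5.2f}, p = {p:.4f}")
```

With d fixed, the result is not significant at n = 10 or n = 30 but clearly significant at n = 100 and n = 300, which is why a p value by itself says nothing about effect size.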
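The next sketch estimates power by Monte Carlo simulation, assuming (for illustration) a true standardized effect of d = 0.5 in the population and a two-sided one-sample t test at \(\alpha = .05\). Power is estimated as the proportion of simulated samples in which the false null hypothesis \(H_0\!: \mu = 0\) is rejected, and \(\beta\) is its complement:

```python
# A minimal sketch estimating power by simulation: draw many samples
# from a population with true effect d = 0.5, test H0: mu = 0, and
# count how often the (false) null hypothesis is rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d, alpha, reps = 0.5, 0.05, 10_000

for n in (20, 50, 100):
    samples = rng.normal(loc=d, scale=1.0, size=(reps, n))
    p_values = stats.ttest_1samp(samples, popmean=0.0, axis=1).pvalue
    power = np.mean(p_values < alpha)
    print(f"n = {n:3d}: power = {power:.3f}, beta = {1 - power:.3f}")
```

The estimated power rises from roughly .56 at n = 20 to nearly 1 at n = 100: with a larger sample we are far less likely to overlook a real effect of this size.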
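Finally, instead of estimating the power for a given sample size, we can solve for the sample size that achieves a target power. This sketch assumes the statsmodels package is available; the effect size and target power are again illustrative choices:

```python
# A minimal sketch, assuming statsmodels is installed: solve for the
# sample size at which a two-sided one-sample t test reaches 80% power
# for a true standardized effect of d = 0.5 at alpha = .05.
from statsmodels.stats.power import TTestPower

n_required = TTestPower().solve_power(effect_size=0.5,
                                      alpha=0.05,
                                      power=0.80,
                                      alternative='two-sided')
print(f"Required sample size: {n_required:.1f}")  # roughly 34
```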