
Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just does a treatment affect people, but how much does it affect them.

What is effect size?

Effect size is a quantitative measure of the magnitude of the experimental effect. The larger the effect size, the stronger the relationship between the two variables.

You can look at the effect size when comparing any two groups to see how substantially different they are.

Typically, research studies will comprise an experimental group and a control group. The experimental group receives an intervention or treatment that is expected to affect a specific outcome.

For example, we might want to know the effect of therapy on treating depression. The effect size value will show whether the therapy has had a small, medium, or large effect on depression.

Calculate and interpret effect sizes

Effect sizes either measure the sizes of associations between variables or the sizes of differences between group means.

Cohen’s d

Cohen’s d is an appropriate effect size for the comparison between two means. It can be used, for example, to accompany the reporting of t-test and ANOVA results. It is also widely used in meta-analysis.

Cohen’s d = (M₁ − M₂) / SDpooled, where M₁ and M₂ are the two group means and SDpooled is the pooled standard deviation.


Cohen suggested that d = 0.2 be considered a “small” effect size, 0.5 a “medium” effect size, and 0.8 a “large” effect size. This means that if the difference between two groups’ means is less than 0.2 standard deviations, the difference is negligible, even if it is statistically significant.
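As a minimal sketch of the formula above, Cohen’s d can be computed from two samples using the pooled standard deviation. The depression-score data here are made up purely for illustration:

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled sample SD."""
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    # Pooled variance weights each group's sample variance (ddof=1) by its df
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Hypothetical depression scores: therapy group vs. control (lower = better)
treatment = [8, 7, 6, 9, 5, 7, 6]
control = [10, 9, 11, 8, 10, 9, 12]
d = cohens_d(treatment, control)
```

Here d comes out well below −0.8 in magnitude, i.e. a “large” effect on Cohen’s scale (negative because the treatment group scores lower).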

Pearson r correlation

This effect size summarises the strength of a bivariate relationship. The value of Pearson’s r ranges from −1 (a perfect negative correlation) to +1 (a perfect positive correlation).


According to Cohen (1988, 1992), the effect size is small if the value of r is around 0.1, medium if r is around 0.3, and large if r is around 0.5 or more.

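A quick sketch of computing Pearson’s r with NumPy, using made-up paired data (e.g. hours of therapy vs. symptom reduction):

```python
import numpy as np

# Hypothetical paired observations
hours = np.array([2, 4, 6, 8, 10, 12])
reduction = np.array([1, 3, 4, 6, 7, 9])

# np.corrcoef returns the 2x2 correlation matrix; r is the off-diagonal entry
r = np.corrcoef(hours, reduction)[0, 1]
```

For this fabricated data the relationship is almost perfectly linear, so r is close to +1 – far above Cohen’s 0.5 threshold for a large effect.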

Why report effect sizes?


The p-value is not enough


A lower p-value is sometimes interpreted as meaning there is a stronger relationship between two variables. However, statistical significance means only that the null hypothesis is unlikely to be true (conventionally, less than a 5% chance).

Therefore, a significant p-value tells us that an intervention works, whereas an effect size tells us how much it works.

It can be argued that emphasizing the size of the effect promotes a more scientific approach because, unlike significance tests, the effect size is independent of sample size.
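This independence can be illustrated with a small sketch on hypothetical data: duplicating the same observations leaves the group means and spreads essentially unchanged, so Cohen’s d barely moves, while the t statistic (t ≈ d·√(n/2) for two equal-sized groups) grows with sample size and eventually becomes “significant”:

```python
import numpy as np

def cohens_d(g1, g2):
    """Cohen's d with the pooled sample standard deviation."""
    n1, n2 = len(g1), len(g2)
    pooled_var = ((n1 - 1) * np.var(g1, ddof=1) + (n2 - 1) * np.var(g2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(g1) - np.mean(g2)) / np.sqrt(pooled_var)

base_treat = np.array([6.0, 7.0, 8.0, 9.0])   # hypothetical scores
base_ctrl = np.array([7.0, 8.0, 9.0, 10.0])

results = {}
for k in (1, 25):                             # same data at 1x and 25x the sample size
    g1, g2 = np.tile(base_treat, k), np.tile(base_ctrl, k)
    d = cohens_d(g1, g2)
    t = d * np.sqrt(len(g1) / 2)              # t statistic for two equal-sized groups
    results[k] = (d, t)
# d stays roughly the same, but |t| (and hence "significance") grows with n
```

With four observations per group the difference is nowhere near significant; with a hundred per group the same effect size yields a very large t.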


To compare the results of studies done in different settings

Unlike a p-value, effect sizes can be used to quantitatively compare the results of studies done in different settings. They are widely used in meta-analysis.


Further Information

What a p-value Tells You About Statistical Significance
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155.
Ferguson, C. J. (2016). An effect size primer: A guide for clinicians and researchers.
Normal Distribution (Bell Curve)
Z-Score: Definition, Calculation and Interpretation
Statistics for Psychology
Statistics for Psychology Book Download


Olivia Guy-Evans, MSc

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Saul McLeod, PhD

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.