The margin of error tells you how close your estimate is likely to be to the actual population value when you're working with data from a sample rather than the whole population. Here's how to calculate the margin of error.
- Step 1: Use a calculator to determine the standard deviation of your data set, if it's not already given, and write down the result.
- Step 2: Calculate the square root of the sample size. Write this result beneath your standard deviation figure.
- TIP: Sample size is the number of data points in your data set. If you have test scores from a sample of 500 exam takers, then your sample size is 500.
- Step 3: Calculate the standard error by dividing the standard deviation by the square root of the sample size.
- TIP: The equation for standard error can be written as SE equals SD divided by the square root of N, where N is the sample size.
- Step 4: In a statistical table of critical values or z-scores, find the z-score that corresponds to the confidence level you want to use.
- TIP: The confidence level tells you how certain you can be that the actual statistic for the whole population will fall within the range of values you predict from your sample.
- Step 5: Multiply the standard error by the z-score for your chosen confidence level. The result is your margin of error.
- TIP: If you don't have a z-score, you can make a rough estimate of the margin of error by multiplying the standard error by two.
- Step 6: Find the range of error, or confidence interval, by first adding the margin of error to and then subtracting it from the mean value of your data set. Or, you can simply express your statistic as the mean value, plus or minus the margin of error.
- FACT: Modern iris scanners can quickly identify a human with almost zero margin of error.
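The whole calculation can be sketched in a few lines of Python. This is a minimal illustration using made-up exam scores (the data and the 95% confidence level are assumptions for the example, not values from the steps above):

```python
import math
import statistics

# Hypothetical sample of exam scores (illustrative data only).
scores = [72, 85, 90, 66, 78, 88, 95, 70, 81, 76]

# Standard deviation of the sample.
sd = statistics.stdev(scores)

# Square root of the sample size.
root_n = math.sqrt(len(scores))

# Standard error: SE = SD / sqrt(N).
se = sd / root_n

# z-score for a 95% confidence level, taken from a standard z-table.
z = 1.96

# Margin of error: z-score times standard error.
moe = z * se

# Confidence interval: mean plus or minus the margin of error.
mean = statistics.mean(scores)
interval = (mean - moe, mean + moe)

print(f"{mean:.1f} ± {moe:.1f}")
```

Swapping `1.96` for `2` gives the rough estimate mentioned in the tip; other confidence levels just use a different z-score (for example, about 1.645 for 90% or 2.576 for 99%).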