Problem setup
Sometimes you want to do a Z-test or a t-test, but these tests are not appropriate: your data may be skewed, come from a distribution with outliers, or be non-normal in some other important way. In these circumstances a sign test is appropriate.
For example, suppose you wander around Times Square and ask strangers for their salaries. Incomes are typically very skewed, and you might get a sample like:
incomes <- c(8478, 21564, 36562, 176602, 9395, 18320, 50000, 2, 40298, 39, 10780, 2268583, 3404930)
If we look at a QQ plot, we see there are massive outliers:
qqnorm(incomes)
qqline(incomes)
Luckily, the sign test only requires independent samples for valid inference (as a consequence, it has relatively low power).
Null hypothesis and test statistic
The sign test allows us to test whether the median of a distribution equals some hypothesized value. Let's test whether our data is consistent with a median of 50,000, which is close-ish to the median income in the U.S. if memory serves. That is, we test

$H_0: m = 50{,}000 \quad \text{versus} \quad H_A: m \neq 50{,}000,$

where $m$ stands for the population median. The test statistic is then

$B = \sum_{i=1}^{n} \mathbf{1}(x_i > 50{,}000).$

Here $B$ is the number of data points observed that are strictly greater than the hypothesized median, and $n$ is the sample size after exact ties with the median have been removed. Under $H_0$, we have $B \sim \mathrm{Binomial}(n, 1/2)$. Forgetting to remove exact ties is a very frequent mistake when students do this test in classes I TA.
If we sort the data we can see that $b = 3$ and $n = 12$ in our case:
sort(incomes)
#> [1] 2 39 8478 9395 10780 18320 21564 36562 40298
#> [10] 50000 176602 2268583 3404930
We can verify this with R as well:
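A minimal sketch of that check in base R, recomputing $b$ and $n$ directly from the income data defined above:

```r
# Same income data as above
incomes <- c(8478, 21564, 36562, 176602, 9395, 18320, 50000, 2,
             40298, 39, 10780, 2268583, 3404930)

m0 <- 50000              # hypothesized median
b <- sum(incomes > m0)   # observations strictly greater than m0
n <- sum(incomes != m0)  # sample size after dropping exact ties
b
#> [1] 3
n
#> [1] 12
```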
Calculating p-values
To calculate a two-sided p-value, we need to find

$\text{p-value} = 2 \cdot \min\!\big(P(B \le b),\; P(B \ge b)\big).$
To do this we need the c.d.f. of a binomial random variable:
library(distributions3)

n <- 12  # sample size after removing ties
b <- 3   # observations strictly greater than the hypothesized median
X <- Binomial(n, 0.5)
2 * min(cdf(X, b), 1 - cdf(X, b - 1))
#> [1] 0.1459961
In practice, computing the c.d.f. of a binomial random variable by hand is rather tedious, and there aren't great shortcuts for small samples. If you got a question like this on an exam, you'd want to sum the binomial p.m.f. repeatedly, like this:

$P(B \le 3) = \sum_{k=0}^{3} \binom{12}{k} \left(\frac{1}{2}\right)^{12}.$
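One way to sketch that term-by-term sum in base R, using dbinom() for the p.m.f. (no extra packages needed):

```r
n <- 12
b <- 3

# P(B <= 3): sum the binomial p.m.f. for k = 0, 1, 2, 3
lower_tail <- sum(dbinom(0:b, size = n, prob = 0.5))
lower_tail
#> [1] 0.07299805

# Two-sided p-value: double the smaller of the two tails
upper_tail <- 1 - sum(dbinom(0:(b - 1), size = n, prob = 0.5))
2 * min(lower_tail, upper_tail)
#> [1] 0.1459961
```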
Finally, sometimes we are interested in one-sided sign tests. For the test

$H_0: m \le 50{,}000 \quad \text{versus} \quad H_A: m > 50{,}000,$

the p-value is given by

$P(B \ge b) = 1 - P(B \le b - 1),$
which we calculate with
1 - cdf(X, b - 1)
#> [1] 0.9807129
For the test

$H_0: m \ge 50{,}000 \quad \text{versus} \quad H_A: m < 50{,}000,$

the p-value is given by

$P(B \le b),$
which we calculate with
cdf(X, b)
#> [1] 0.07299805
Using the binom.test() function
To verify our results we can use the binom.test() function from base R. The x argument gets the value of $b$, n the value of $n$, and p = 0.5 for a test of the median.
That is, for $H_A: m \neq 50{,}000$ we would use
binom.test(3, n = 12, p = 0.5)
#>
#> Exact binomial test
#>
#> data: 3 and 12
#> number of successes = 3, number of trials = 12, p-value = 0.146
#> alternative hypothesis: true probability of success is not equal to 0.5
#> 95 percent confidence interval:
#> 0.05486064 0.57185846
#> sample estimates:
#> probability of success
#> 0.25
For $H_A: m > 50{,}000$ we would use
binom.test(3, n = 12, p = 0.5, alternative = "greater")
#>
#> Exact binomial test
#>
#> data: 3 and 12
#> number of successes = 3, number of trials = 12, p-value = 0.9807
#> alternative hypothesis: true probability of success is greater than 0.5
#> 95 percent confidence interval:
#> 0.07187026 1.00000000
#> sample estimates:
#> probability of success
#> 0.25
For $H_A: m < 50{,}000$ we would use
binom.test(3, n = 12, p = 0.5, alternative = "less")
#>
#> Exact binomial test
#>
#> data: 3 and 12
#> number of successes = 3, number of trials = 12, p-value = 0.073
#> alternative hypothesis: true probability of success is less than 0.5
#> 95 percent confidence interval:
#> 0.0000000 0.5273266
#> sample estimates:
#> probability of success
#> 0.25
All of these results agree with our manual computations, which is reassuring.
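As one last sanity check, here is a sketch comparing binom.test()'s two-sided p-value against the doubled-tail calculation using base R's pbinom(). For p = 0.5 the binomial distribution is symmetric, so the two definitions of the two-sided p-value coincide:

```r
# Two-sided p-value computed by doubling the smaller tail
manual <- 2 * min(pbinom(3, 12, 0.5), 1 - pbinom(2, 12, 0.5))

# Two-sided p-value from the exact binomial test
builtin <- binom.test(3, n = 12, p = 0.5)$p.value

all.equal(manual, builtin)
#> [1] TRUE
```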