Please see the documentation of Normal() for some properties of the Normal distribution, as well as extensive examples showing how to calculate p-values and confidence intervals.

Usage

# S3 method for class 'Normal'
random(x, n = 1L, drop = TRUE, ...)

Arguments

x

A Normal object created by a call to Normal().

n

The number of samples to draw. Defaults to 1L.

drop

logical. Should the result be simplified to a vector if possible?

...

Unused. Unevaluated arguments will generate a warning to catch misspellings or other possible errors.

Value

For a single distribution object or when n = 1, either a numeric vector of length n (if drop = TRUE, the default) or a matrix with n columns (if drop = FALSE).
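
A minimal sketch of the drop behaviour described above, assuming the distributions3 package is installed:

```r
library(distributions3)

X <- Normal(5, 2)

# drop = TRUE (the default) simplifies the result to a plain numeric vector
draws <- random(X, 3)
is.numeric(draws)   # TRUE
length(draws)       # 3

# drop = FALSE keeps the matrix form: one row per distribution, n columns
draws_mat <- random(X, 3, drop = FALSE)
is.matrix(draws_mat)   # TRUE
ncol(draws_mat)        # 3
```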

Examples


set.seed(27)

X <- Normal(5, 2)
X
#> [1] "Normal(mu = 5, sigma = 2)"

mean(X)
#> [1] 5
variance(X)
#> [1] 4
skewness(X)
#> [1] 0
kurtosis(X)
#> [1] 0

random(X, 10)
#>  [1] 8.814325 7.289754 3.470939 2.085135 2.813062 5.590482 5.013772 7.314822
#>  [9] 9.269276 5.475689

pdf(X, 2)
#> [1] 0.0647588
log_pdf(X, 2)
#> [1] -2.737086

cdf(X, 4)
#> [1] 0.3085375
quantile(X, 0.7)
#> [1] 6.048801

### example: calculating p-values for two-sided Z-test

# here the null hypothesis is H_0: mu = 3
# and we assume sigma = 2

# exactly the same as: Z <- Normal(0, 1)
Z <- Normal()

# data to test
x <- c(3, 7, 11, 0, 7, 0, 4, 5, 6, 2)
nx <- length(x)

# calculate the z-statistic
z_stat <- (mean(x) - 3) / (2 / sqrt(nx))
z_stat
#> [1] 2.371708

# calculate the two-sided p-value
1 - cdf(Z, abs(z_stat)) + cdf(Z, -abs(z_stat))
#> [1] 0.01770607

# exactly equivalent to the above
2 * cdf(Z, -abs(z_stat))
#> [1] 0.01770607
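
For reference, the same two-sided p-value can be computed with base R alone, since cdf() on a standard normal object agrees with pnorm():

```r
# the z-statistic computed above
z_stat <- 2.371708

# two-sided p-value via base R, no distribution object needed
2 * pnorm(-abs(z_stat))
#> [1] 0.01770607
```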

# p-value for one-sided test
# H_0: mu <= 3   vs   H_A: mu > 3
1 - cdf(Z, z_stat)
#> [1] 0.008853033

# p-value for one-sided test
# H_0: mu >= 3   vs   H_A: mu < 3
cdf(Z, z_stat)
#> [1] 0.991147

### example: calculating an 88 percent Z CI for a mean

# same `x` as before, still assume `sigma = 2`

# lower bound
mean(x) - quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
#> [1] 3.516675

# upper bound
mean(x) + quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
#> [1] 5.483325

# equivalent to
mean(x) + c(-1, 1) * quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
#> [1] 3.516675 5.483325

# also equivalent to
mean(x) + quantile(Z, 0.12 / 2) * 2 / sqrt(nx)
#> [1] 3.516675
mean(x) + quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
#> [1] 5.483325
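
The interval arithmetic above can be wrapped in a small helper. Note that z_ci() is a hypothetical name used here for illustration, not part of distributions3:

```r
library(distributions3)

# hypothetical helper: a (1 - alpha) Z interval for a mean with known sigma
z_ci <- function(x, sigma, level = 0.88) {
  Z <- Normal()
  alpha <- 1 - level
  mean(x) + c(-1, 1) * quantile(Z, 1 - alpha / 2) * sigma / sqrt(length(x))
}

x <- c(3, 7, 11, 0, 7, 0, 4, 5, 6, 2)
z_ci(x, sigma = 2)
#> [1] 3.516675 5.483325
```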

### generating random samples and plugging them into ks.test()

set.seed(27)

# generate a random sample
ns <- random(Normal(3, 7), 26)

# test if sample is Normal(3, 7)
ks.test(ns, pnorm, mean = 3, sd = 7)
#> 
#> 	Exact one-sample Kolmogorov-Smirnov test
#> 
#> data:  ns
#> D = 0.20352, p-value = 0.2019
#> alternative hypothesis: two-sided
#> 

# test if sample is gamma(8, 3) using base R pgamma()
ks.test(ns, pgamma, shape = 8, rate = 3)
#> 
#> 	Exact one-sample Kolmogorov-Smirnov test
#> 
#> data:  ns
#> D = 0.46154, p-value = 1.37e-05
#> alternative hypothesis: two-sided
#> 

### MISC

# note that the cdf() and quantile() functions are inverses
cdf(X, quantile(X, 0.7))
#> [1] 0.7
quantile(X, cdf(X, 7))
#> [1] 7