Package 'gofIG'

Title: Goodness-of-Fit Tests for the Inverse Gaussian Distribution
Description: Implements various tests of the composite hypothesis of fit to the family of inverse Gaussian distributions. Included are methods presented by Allison, J.S., Betsch, S., Ebner, B., and Visagie, I.J.H. (2022) <doi:10.48550/arXiv.1910.14119>, as well as two tests from Henze and Klar (2002) <doi:10.1023/A:1022442506681>. Additionally, the package implements a test proposed by Baringhaus and Gaigall (2015) <doi:10.1016/j.jmva.2015.05.013>. For each test a parametric bootstrap procedure is implemented.
Authors: Bruno Ebner [aut, cre], Jaco Visagie [aut], Steffen Betsch [aut], James Allison [aut], Lucas Iglesias [ctb]
Maintainer: Bruno Ebner <[email protected]>
License: CC BY 4.0
Version: 1.0
Built: 2025-01-31 05:24:15 UTC
Source: https://github.com/cran/gofIG

Help Index


The first Allison-Betsch-Ebner-Visagie test statistic

Description

This function computes the first test statistic of the goodness-of-fit tests for the inverse Gaussian family due to Allison et al. (2022). Two different estimation procedures are implemented, namely the method of moments and the method of maximum likelihood.

Usage

ABEV1(data, a = 10, meth = "MME")

Arguments

data

a vector of positive numbers.

a

positive tuning parameter.

meth

method of estimation used. Possible values are 'MME' for moment estimation and 'MLE' for maximum likelihood estimation.

Details

The numerically stable test statistic for the first Allison-Betsch-Ebner-Visagie test is defined as:

ABEV1_{n,a} = \frac{1}{4n} \sum_{j,k=1}^{n} \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) h_{1,a}(Y_{n,j}, Y_{n,k})

- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) h_{2,a}(Y_{n,j}, Y_{n,k})

- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) h_{2,a}(Y_{n,k}, Y_{n,j})

+ \frac{4}{a} e^{-a \max(Y_{n,j}, Y_{n,k})},

with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are consistent estimators of \mu, \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, Y_{n,j} = X_j / \hat{\mu}_n, j = 1, \ldots, n, for (X_j)_{j = 1,\ldots,n}, a sequence of independent observations of a positive random variable X. The functions h_{i,a}(s,t), i = 1,2, are defined in Allison et al. (2022), Section 5.1. The null hypothesis is rejected for large values of the test statistic ABEV1_{n,a}.
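The choice between 'MME' and 'MLE' only affects how \hat{\mu}_n and \hat{\lambda}_n are obtained. A minimal sketch of the two classical estimators for the inverse Gaussian parameters (the helper name is ours, not the package's; the package's internal implementation may differ in details such as the variance denominator used by the moment estimator):

```python
def ig_estimates(x, meth="MME"):
    """Estimate (mu, lambda) for an inverse Gaussian sample.

    MME exploits Var(X) = mu^3 / lambda; MLE uses the classical
    closed-form maximum likelihood estimators.
    """
    n = len(x)
    mu = sum(x) / n                                   # sample mean estimates mu in both cases
    if meth == "MME":
        s2 = sum((xi - mu) ** 2 for xi in x) / n      # moment estimator of the variance
        lam = mu ** 3 / s2
    elif meth == "MLE":
        lam = n / sum(1.0 / xi - 1.0 / mu for xi in x)
    else:
        raise ValueError("meth must be 'MME' or 'MLE'")
    return mu, lam
```

Both methods estimate \mu by the sample mean; they differ only in the estimator of \lambda.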

Value

value of the test statistic.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119

Examples

ABEV1(rmutil::rinvgauss(20,2,1),a=10,meth='MLE')

The second Allison-Betsch-Ebner-Visagie test statistic

Description

This function computes the second test statistic of the goodness-of-fit tests for the inverse Gaussian family due to Allison et al. (2022). Two different estimation procedures are implemented, namely the method of moments and the method of maximum likelihood.

Usage

ABEV2(data, a = 10, meth = "MME")

Arguments

data

a vector of positive numbers.

a

positive tuning parameter.

meth

method of estimation used. Possible values are 'MME' for moment estimation and 'MLE' for maximum likelihood estimation.

Details

The numerically stable test statistic for the second Allison-Betsch-Ebner-Visagie test is defined as:

ABEV2_{n,a} = \frac{1}{4n} \sum_{j,k=1}^{n} \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) \tilde{h}_{1,a}(Y_{n,j}, Y_{n,k})

- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) \tilde{h}_{2,a}(Y_{n,j}, Y_{n,k})

- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) \tilde{h}_{2,a}(Y_{n,k}, Y_{n,j})

+ 4 \frac{\sqrt{\pi}}{a} \Phi \left( - \sqrt{2a} \max(Y_{n,j}, Y_{n,k}) \right),

with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are consistent estimators of \mu, \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, Y_{n,j} = X_j / \hat{\mu}_n, j = 1, \ldots, n, for (X_j)_{j = 1,\ldots,n}, a sequence of independent observations of a positive random variable X. The functions \tilde{h}_{i,a}(s,t), i = 1,2, are defined in Allison et al. (2022), Section 5.1, and \Phi denotes the distribution function of the standard normal distribution. The null hypothesis is rejected for large values of the test statistic ABEV2_{n,a}.

Value

value of the test statistic.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119

Examples

ABEV2(rmutil::rinvgauss(20,2,1),a=10,meth='MLE')

The Baringhaus-Gaigall test statistic

Description

This function computes the test statistic of the goodness-of-fit test for the inverse Gaussian family due to Baringhaus and Gaigall (2015).

Usage

BG(data)

Arguments

data

a vector of positive numbers.

Details

The test statistic of the Baringhaus-Gaigall test is defined as:

BG_{n} = \frac{n}{(n(n-1))^5} \sum_{\mu, \nu = 1, \mu \neq \nu}^{n} \left( N_1(\mu, \nu)N_4(\mu, \nu) - N_2(\mu, \nu)N_3(\mu, \nu) \right)^2,

where

N_1(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} \leq \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} \leq \tilde{Z}_{\mu, \nu} \right\},

N_2(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} \leq \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} > \tilde{Z}_{\mu, \nu} \right\},

N_3(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} > \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} \leq \tilde{Z}_{\mu, \nu} \right\},

N_4(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} > \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} > \tilde{Z}_{\mu, \nu} \right\},

with \mathbf{1} denoting the indicator function. Let f(X_i,X_j) = (X_i + X_j)/2 and g(X_i,X_j) = (X_i^{-1} + X_j^{-1})/2 - f(X_i,X_j)^{-1}, where X_1,\ldots,X_n are positive, independent and identically distributed random variables with finite moments \mathbb{E}[X_1^2] and \mathbb{E}[X_1^{-1}]. Then (\tilde{Y}_{i,j}, \tilde{Z}_{i,j}) = (f(X_i,X_j), g(X_i,X_j)), 1 \leq i,j \leq n, i \neq j. Note that \tilde{Y}_{i,j} and \tilde{Z}_{i,j} are independent if, and only if, X_1,\ldots,X_n are realized from an inverse Gaussian distribution.
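The counts N_1, \ldots, N_4 and the statistic itself can be written down directly from the definition. A naive O(n^4) sketch in Python, for illustration only (the helper names are ours; the package's R implementation is certainly more efficient):

```python
def bg_statistic(x):
    """Naive Baringhaus-Gaigall statistic: for every ordered pair (mu, nu),
    classify all other pairs (i, j) into the four quadrants determined by
    (Y_tilde, Z_tilde) and accumulate the squared 'determinant' N1*N4 - N2*N3."""
    n = len(x)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    f = lambda a, b: (a + b) / 2.0                       # pairwise mean
    g = lambda a, b: (1.0 / a + 1.0 / b) / 2.0 - 1.0 / f(a, b)
    yz = [(f(x[i], x[j]), g(x[i], x[j])) for i, j in pairs]
    total = 0.0
    for ym, zm in yz:                                    # the (mu, nu) pair
        n1 = n2 = n3 = n4 = 0
        for yi, zi in yz:                                # the (i, j) pair
            if yi <= ym:
                if zi <= zm: n1 += 1
                else:        n2 += 1
            else:
                if zi <= zm: n3 += 1
                else:        n4 += 1
        total += (n1 * n4 - n2 * n3) ** 2
    return n * total / (n * (n - 1)) ** 5
```

The quadrant counts measure dependence between \tilde{Y} and \tilde{Z}; under the inverse Gaussian null the two are independent and the statistic stays small.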

Value

value of the test statistic.

References

Baringhaus, L., Gaigall, D. (2015). "On an independence test approach to the goodness-of-fit problem", Journal of Multivariate Analysis, 140, 193-208. doi:10.1016/j.jmva.2015.05.013

Examples

BG(rmutil::rinvgauss(20,2,1))

The Cramer-von Mises test statistic

Description

This function computes the value of the test statistic of the goodness-of-fit test for the inverse Gaussian family in the spirit of Cramer and von Mises. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions.

Usage

CM(data)

Arguments

data

a vector of positive numbers.

Details

Let X_{(j)} denote the jth order statistic of X_1, \ldots, X_n, a sequence of independent observations of a positive random variable X. Furthermore, let \hat{F}(x) = F(x; \hat{\mu}_n, \hat{\lambda}_n), where F is the distribution function of the inverse Gaussian distribution and \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution. The null hypothesis is rejected for large values of the test statistic:

CM = \frac{1}{12n} + \sum_{j=1}^{n} \left( \hat{F}(X_{(j)}) - \frac{2j-1}{2n} \right)^2.
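Given maximum likelihood estimates, the statistic is straightforward once \hat{F} is available. A sketch in Python, assuming the standard closed form of the inverse Gaussian distribution function (function names are ours; the package itself is R code):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ig_cdf(x, mu, lam):
    """Inverse Gaussian CDF, F(x; mu, lambda) =
    Phi(sqrt(lam/x)(x/mu - 1)) + exp(2 lam/mu) Phi(-sqrt(lam/x)(x/mu + 1))."""
    r = math.sqrt(lam / x)
    return phi(r * (x / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-r * (x / mu + 1.0))

def cm_statistic(x, mu_hat, lam_hat):
    """Cramer-von Mises statistic with estimated parameters plugged in."""
    n = len(x)
    u = sorted(ig_cdf(xi, mu_hat, lam_hat) for xi in x)   # F_hat at the order statistics
    return 1.0 / (12.0 * n) + sum((u[j] - (2 * j + 1) / (2.0 * n)) ** 2
                                  for j in range(n))
```

Note that with 0-based indexing the term (2j-1)/(2n) becomes (2j+1)/(2n); the exp(2 lam/mu) factor can overflow for very large lam/mu, which a production implementation would guard against.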

Value

value of the test statistic.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119

Examples

CM(rmutil::rinvgauss(20,2,1))

The first Henze-Klar test statistic

Description

This function computes the first test statistic of the goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).

Usage

HK1(data, a = 0)

Arguments

data

a vector of positive numbers.

a

non-negative tuning parameter (the default a = 0 yields the unweighted statistic).

Details

The representation of the first Henze-Klar test statistic used for computation is given by:

HK_{n,a}^{(1)} = \frac{\hat{\varphi}_n}{n} \sum_{j,k=1}^{n} \hat{Z}_{jk}^{-1} \left\{ 1 - (Y_j + Y_k) \left( 1 + \sqrt{\frac{\pi}{2\hat{Z}_{jk}}} \, \text{erfce}\left( \sqrt{\frac{\hat{Z}_{jk}}{2}} \right) \right) + \left( 1 + \frac{2}{\hat{Z}_{jk}} \right) Y_j Y_k \right\},

with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, \hat{Z}_{jk} = \hat{\varphi}_n(Y_j + Y_k + a), where Y_i = X_i / \hat{\mu}_n for (X_i)_{i = 1,\ldots,n}, a sequence of independent observations of a positive random variable X. To ensure numerical stability of the implementation, the exponentially scaled complementary error function \text{erfce}(x) = \exp(x^2)\,\text{erfc}(x) is used, with \text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty \exp(-t^2)\,dt. The null hypothesis is rejected for large values of the test statistic HK_{n,a}^{(1)}.
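The role of erfce is purely numerical: \exp(x^2) overflows and \text{erfc}(x) underflows long before their product leaves (0, 1]. A small Python illustration of the identity and of the large-x behaviour (dedicated routines such as scipy.special.erfcx compute erfce stably for all x; the naive product below is only safe for moderate arguments):

```python
import math

def erfce_naive(x):
    """erfce(x) = exp(x^2) * erfc(x); fine for moderate x,
    but exp(x * x) overflows once x exceeds roughly 26."""
    return math.exp(x * x) * math.erfc(x)

def erfce_asymptotic(x):
    """Leading terms of the large-x expansion
    erfce(x) ~ (1 - 1/(2x^2) + 3/(4x^4)) / (x sqrt(pi)),
    which stays finite where the naive product fails."""
    x2 = x * x
    return (1.0 - 0.5 / x2 + 0.75 / (x2 * x2)) / (x * math.sqrt(math.pi))
```

At x = 5 the two already agree to about four decimals, which is why scaled implementations switch to series or continued-fraction forms for large arguments.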

Value

value of the test statistic.

References

Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681

Examples

HK1(rmutil::rinvgauss(20,2,1))

The second Henze-Klar test statistic

Description

This function computes the test statistic of the second goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).

Usage

HK2(data)

Arguments

data

a vector of positive numbers.

Details

The representation of the second Henze-Klar test statistic used for computation (a = 0) is given by:

HK_{n,0}^{(2)} = \frac{1}{n} \sum_{j,k=1}^{n} Z_{jk}^{-1} - 2 \sum_{j=1}^{n} Z_j^{-1} \left\{ 1 - \sqrt{\frac{\pi \hat{\varphi}_n}{2 Z_j}} \, \mathrm{erfce} \left( \frac{\hat{\varphi}_n^{1/2} (Z_j + 1)}{(2 Z_j)^{1/2}} \right) \right\} + n \frac{1 + 2 \hat{\varphi}_n}{4 \hat{\varphi}_n},

with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, Z_{jk} = Y_j + Y_k and Z_j = Y_j, where Y_i = X_i / \hat{\mu}_n for (X_i)_{i = 1,\ldots,n}, a sequence of independent observations of a positive random variable X. To ensure numerical stability of the implementation, the exponentially scaled complementary error function \text{erfce}(x) = \exp(x^2)\,\text{erfc}(x) is used, with \text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty \exp(-t^2)\,dt. The null hypothesis is rejected for large values of the test statistic HK_{n,0}^{(2)}.

Value

value of the test statistic.

References

Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681

Examples

HK2(rmutil::rinvgauss(20,2,1))

The Kolmogorov-Smirnov test statistic

Description

This function computes the test statistic of the goodness-of-fit test for the inverse Gaussian family in the spirit of Kolmogorov and Smirnov. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions.

Usage

KS(data)

Arguments

data

a vector of positive numbers.

Details

Let X_{(j)} denote the jth order statistic of X_1, \ldots, X_n, a sequence of independent observations of a positive random variable X. Furthermore, let \hat{F}(x) = F(x; \hat{\mu}_n, \hat{\lambda}_n), where F is the distribution function of the inverse Gaussian distribution and \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution. The null hypothesis is rejected for large values of the test statistic:

KS = \max(D^+, D^-),

where

D^+ = \max_{j=1,\ldots,n} \left( \frac{j}{n} - \hat{F}(X_{(j)}) \right)

and

D^- = \max_{j=1,\ldots,n} \left( \hat{F}(X_{(j)}) - \frac{j-1}{n} \right).
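With the fitted distribution-function values u_j = \hat{F}(X_{(j)}) in hand, D^+ and D^- are one-liners. A sketch in Python (the helper name is ours):

```python
def ks_statistic(u):
    """Kolmogorov-Smirnov statistic from fitted-CDF values u_j = F_hat(X_j).

    D+ measures how far the empirical CDF rises above the fitted one,
    D- how far it falls below; KS is the larger of the two.
    """
    n = len(u)
    u = sorted(u)                                   # values at the order statistics
    d_plus = max((j + 1) / n - u[j] for j in range(n))
    d_minus = max(u[j] - j / n for j in range(n))
    return max(d_plus, d_minus)
```

With 0-based indexing, j/n and (j-1)/n from the display become (j+1)/n and j/n.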

Value

value of the test statistic.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119

Examples

KS(rmutil::rinvgauss(20,2,1))

Print method for tests of the inverse Gaussian distribution

Description

Printing objects of class "gofIG".

Usage

## S3 method for class 'gofIG'
print(x, ...)

Arguments

x

object of class "gofIG".

...

further arguments to be passed to or from methods.

Details

A gofIG object is a named list of numbers and character strings, supplemented with the element test (the name of the test statistic). test is displayed as a title; the remaining elements are given in an aligned "name = value" format.

Value

the argument x, invisibly, as for all print methods.

Examples

print(test.ABEV1(rgamma(20,1)))

The first Allison-Betsch-Ebner-Visagie goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family due to Allison et al. (2019). Two different estimation procedures are implemented, namely the method of moments and the method of maximum likelihood.

Usage

test.ABEV1(data, a = 10, meth = "MME", B = 500)

Arguments

data

a vector of positive numbers.

a

positive tuning parameter.

meth

method of estimation used. Possible values are 'MME' for moment estimation and 'MLE' for maximum likelihood estimation.

B

number of bootstrap iterations used to obtain p value.

Details

The test is of weighted L^2 type and uses a characterization of the distribution function of the inverse Gaussian distribution. The p value is obtained by a parametric bootstrap procedure.
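The parametric bootstrap is the same for every test in the package: estimate (\mu, \lambda), draw B samples of size n from the fitted inverse Gaussian law, recompute the statistic on each (re-estimating the parameters each time), and report the exceedance proportion. A sketch in Python, assuming the Michael-Schucany-Haas transform for inverse Gaussian sampling (all function names are ours, not the package's):

```python
import math
import random

def ig_mle(x):
    """Closed-form maximum likelihood estimates of (mu, lambda)."""
    n = len(x)
    mu = sum(x) / n
    lam = n / sum(1.0 / xi - 1.0 / mu for xi in x)
    return mu, lam

def rinvgauss(mu, lam, rng):
    """One inverse Gaussian draw via the Michael-Schucany-Haas transform."""
    y = rng.gauss(0.0, 1.0) ** 2
    x = mu + mu * mu * y / (2.0 * lam) \
        - mu / (2.0 * lam) * math.sqrt(4.0 * mu * lam * y + (mu * y) ** 2)
    return x if rng.random() <= mu / (mu + x) else mu * mu / x

def bootstrap_pvalue(data, statistic, B=100, seed=1):
    """Parametric bootstrap p value for a statistic that is large under misfit."""
    rng = random.Random(seed)
    t0 = statistic(data)                       # statistic on the observed sample
    mu, lam = ig_mle(data)                     # fitted null model
    exceed = sum(
        statistic([rinvgauss(mu, lam, rng) for _ in data]) >= t0
        for _ in range(B)
    )
    return (exceed + 1) / (B + 1)              # keeps the p value strictly positive
```

Here statistic must re-estimate the parameters internally on each bootstrap sample, exactly as the package's test.* functions do.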

Value

a list containing the name of the test statistic, the tuning parameter used, the parameter estimation method, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$parameter

the value of the tuning parameter.

$est.method

the estimation method used.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119.

Examples

test.ABEV1(rmutil::rinvgauss(20,2,1),B=100)

The second Allison-Betsch-Ebner-Visagie goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family due to Allison et al. (2019). Two different estimation procedures are implemented, namely the method of moments and the method of maximum likelihood.

Usage

test.ABEV2(data, a = 10, meth = "MME", B = 500)

Arguments

data

a vector of positive numbers.

a

positive tuning parameter.

meth

method of estimation used. Possible values are 'MME' for moment estimation and 'MLE' for maximum likelihood estimation.

B

number of bootstrap iterations used to obtain p value.

Details

The test is of weighted L^2 type and uses a characterization of the distribution function of the inverse Gaussian distribution. The p value is obtained by a parametric bootstrap procedure.

Value

a list containing the name of the test statistic, the tuning parameter used, the parameter estimation method, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$parameter

the value of the tuning parameter.

$est.method

the estimation method used.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119.

Examples

test.ABEV2(rmutil::rinvgauss(20,2,1),B=100)

The Anderson-Darling goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family in the spirit of Anderson and Darling. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions, i.e. a bootstrap procedure is implemented to perform the test.

Usage

test.AD(data, B = 500)

Arguments

data

a vector of positive numbers.

B

number of bootstrap iterations used to obtain p value.

Details

The Anderson-Darling test is computed as described in Allison et al. (2019). The p value is obtained by a parametric bootstrap procedure.

Value

a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119.

Examples

test.AD(rmutil::rinvgauss(20,2,1),B=100)

The Baringhaus-Gaigall goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family due to Baringhaus and Gaigall (2015).

Usage

test.BG(data, B)

Arguments

data

a vector of positive numbers.

B

number of bootstrap iterations used to obtain p value.

Value

a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Baringhaus, L., Gaigall, D. (2015). "On an independence test approach to the goodness-of-fit problem", Journal of Multivariate Analysis, 140, 193-208. doi:10.1016/j.jmva.2015.05.013

Examples

test.BG(rmutil::rinvgauss(20,2,1),B=100)

The Cramer-von Mises goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family in the spirit of Cramer and von Mises. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions, i.e. a bootstrap procedure is implemented to perform the test.

Usage

test.CM(data, B = 500)

Arguments

data

a vector of positive numbers.

B

number of bootstrap iterations used to obtain p value.

Details

The Cramer-von Mises test is computed as described in Allison et al. (2019). The p value is obtained by a parametric bootstrap procedure.

Value

a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119.

Examples

test.CM(rmutil::rinvgauss(20,2,1),B=100)

The first Henze-Klar goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).

Usage

test.HK1(data, a = 0, B = 500)

Arguments

data

a vector of positive numbers.

a

non-negative tuning parameter (the default a = 0 yields the unweighted statistic).

B

number of bootstrap iterations used to obtain p value.

Details

The test statistic is a weighted integral over the squared modulus of some measure of deviation of the empirical distribution of given data from the family of inverse Gaussian laws, expressed by means of the empirical Laplace transform.

Value

a list containing the name of the test statistic, the tuning parameter used, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$parameter

the value of the tuning parameter.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681

Examples

test.HK1(rmutil::rinvgauss(20,2,1),B=100)

The second Henze-Klar goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).

Usage

test.HK2(data, B)

Arguments

data

a vector of positive numbers.

B

number of bootstrap iterations used to obtain p value.

Details

The test statistic is a weighted integral over the squared modulus of some measure of deviation of the empirical distribution of given data from the family of inverse Gaussian laws, expressed by means of the empirical Laplace transform.

Value

a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681

Examples

test.HK2(rmutil::rinvgauss(20,2,1),B=100)

The Kolmogorov-Smirnov goodness-of-fit test for the inverse Gaussian family

Description

This function computes the goodness-of-fit test for the inverse Gaussian family in the spirit of Kolmogorov and Smirnov. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions, i.e. a bootstrap procedure is implemented to perform the test.

Usage

test.KS(data, B = 500)

Arguments

data

a vector of positive numbers.

B

number of bootstrap iterations used to obtain p value.

Details

The Kolmogorov-Smirnov test is computed as described in Allison et al. (2019). The p value is obtained by a parametric bootstrap procedure.

Value

a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the estimated parameters, and the number of bootstrap iterations:

$Test

the name of the used test statistic.

$T.value

the value of the test statistic.

$p.value

the approximated p value.

$par.est

the estimated parameters.

$boot.run

number of bootstrap iterations.

References

Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119.

Examples

test.KS(rmutil::rinvgauss(20,2,1),B=100)