
The sequential probability ratio test (SPRT) is a specific sequential hypothesis test developed by Abraham Wald. Neyman and Pearson's 1933 result inspired Wald to reformulate hypothesis testing as a sequential analysis problem. The Neyman-Pearson lemma, by contrast, provides a decision rule for the case in which all the data have already been collected (and the likelihood ratio is known).

While originally developed for use in quality control studies in manufacturing, the SPRT has also been formulated as a termination criterion for the computerized testing of human examinees.


Theory

As in classical hypothesis testing, the SPRT starts with a pair of hypotheses, say $H_0$ and $H_1$, for the null hypothesis and alternative hypothesis respectively. They must be specified as follows:

$$H_0 : p = p_0$$
$$H_1 : p = p_1$$

The next step is to calculate the cumulative sum of the log-likelihood ratio, $\log \Lambda_i$, as new data arrive: with $S_0 = 0$, then, for $i = 1, 2, \ldots$,

$$S_i = S_{i-1} + \log \Lambda_i$$

The stopping rule is a simple thresholding scheme:

  • $a < S_i < b$: continue monitoring (critical inequality)
  • $S_i \geq b$: accept $H_1$
  • $S_i \leq a$: accept $H_0$

where $a$ and $b$ (with $a < 0 < b < \infty$) depend on the desired type I and type II error rates, $\alpha$ and $\beta$. They may be chosen as follows:

$$a \approx \log \frac{\beta}{1-\alpha} \qquad \text{and} \qquad b \approx \log \frac{1-\beta}{\alpha}$$

In other words, $\alpha$ and $\beta$ must be decided beforehand in order to set the thresholds appropriately; their numerical values depend on the application. The approximation signs are used because, in the discrete case, the cumulative sum may overshoot a threshold between samples rather than hit it exactly. Thus, depending on the penalty for making an error and the sampling frequency, one might set the thresholds more aggressively. The exact bounds are correct in the continuous case.
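
To make the scheme concrete, here is a minimal Python sketch of the procedure above; the function names and the Bernoulli example at the end are illustrative assumptions, not part of Wald's formulation:

```python
import math

def wald_thresholds(alpha, beta):
    """Wald's approximate thresholds a < 0 < b for given error rates."""
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    return a, b

def sprt(samples, log_lr, alpha=0.05, beta=0.05):
    """Generic SPRT: log_lr(x) returns log(f1(x)/f0(x)) for one observation.

    Returns ("H0" or "H1", samples used), or ("undecided", n) if the
    data stream ends before either threshold is crossed.
    """
    a, b = wald_thresholds(alpha, beta)
    s, n = 0.0, 0
    for x in samples:
        n += 1
        s += log_lr(x)          # S_i = S_{i-1} + log Lambda_i
        if s >= b:
            return "H1", n      # accept H1
        if s <= a:
            return "H0", n      # accept H0
    return "undecided", n

# Illustrative Bernoulli case: H0: p = 0.5 vs H1: p = 0.7 on 0/1 data.
log_lr = lambda x: math.log(0.7 / 0.5) if x else math.log(0.3 / 0.5)
print(sprt([1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1], log_lr))  # -> ("H1", 12)
```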


Example

A textbook example is testing a parameter of a probability distribution. Consider the exponential distribution:

$$f_\theta(x) = \theta^{-1} e^{-x/\theta}, \qquad x, \theta > 0$$

The hypotheses are

$$\begin{cases} H_0 : \theta = \theta_0 \\ H_1 : \theta = \theta_1 \end{cases} \qquad \theta_1 > \theta_0.$$

Then the log-likelihood ratio (LLR) for a single sample is

$$\begin{aligned}
\log \Lambda(x) &= \log \left( \frac{\theta_1^{-1} e^{-x/\theta_1}}{\theta_0^{-1} e^{-x/\theta_0}} \right) \\
&= \log \left( \frac{\theta_0}{\theta_1} e^{x/\theta_0 - x/\theta_1} \right) \\
&= \log \left( \frac{\theta_0}{\theta_1} \right) + \log \left( e^{x/\theta_0 - x/\theta_1} \right) \\
&= -\log \left( \frac{\theta_1}{\theta_0} \right) + \left( \frac{x}{\theta_0} - \frac{x}{\theta_1} \right) \\
&= -\log \left( \frac{\theta_1}{\theta_0} \right) + \left( \frac{\theta_1 - \theta_0}{\theta_0 \theta_1} \right) x
\end{aligned}$$

The cumulative sum of the LLRs for all samples $x_i$ is

$$S_n = \sum_{i=1}^{n} \log \Lambda(x_i) = -n \log \left( \frac{\theta_1}{\theta_0} \right) + \left( \frac{\theta_1 - \theta_0}{\theta_0 \theta_1} \right) \sum_{i=1}^{n} x_i$$

Accordingly, the stopping rule is:

$$a < -n \log \left( \frac{\theta_1}{\theta_0} \right) + \left( \frac{\theta_1 - \theta_0}{\theta_0 \theta_1} \right) \sum_{i=1}^{n} x_i < b$$

After re-arranging we finally find

$$a + n \log \left( \frac{\theta_1}{\theta_0} \right) < \left( \frac{\theta_1 - \theta_0}{\theta_0 \theta_1} \right) \sum_{i=1}^{n} x_i < b + n \log \left( \frac{\theta_1}{\theta_0} \right)$$

The thresholds are simply two parallel lines with slope $\log(\theta_1/\theta_0)$, viewed as functions of the sample count $n$. Sampling should stop when the sum of the samples makes an excursion outside the continue-sampling region.
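
The derivation translates directly into code. Below is a minimal sketch under the same hypotheses (the simulated data and seed are arbitrary assumptions for demonstration):

```python
import math
import random

def sprt_exponential(samples, theta0, theta1, alpha=0.05, beta=0.05):
    """SPRT for H0: theta = theta0 vs H1: theta = theta1 (theta1 > theta0)
    on exponential data with density f_theta(x) = exp(-x/theta)/theta."""
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    drift = -math.log(theta1 / theta0)             # per-sample constant term
    slope = (theta1 - theta0) / (theta0 * theta1)  # coefficient of x
    s, n = 0.0, 0
    for x in samples:
        n += 1
        s += drift + slope * x   # log Lambda(x) from the derivation above
        if s >= b:
            return "accept H1", n
        if s <= a:
            return "accept H0", n
    return "undecided", n

# Data truly drawn with mean theta = 2 should usually lead to accepting H1.
random.seed(1)
data = (random.expovariate(1 / 2.0) for _ in range(10_000))
print(sprt_exponential(data, theta0=1.0, theta1=2.0))
```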


Applications

Manufacturing

The test is done on the proportion metric, and tests whether a variable $p$ is equal to one of two desired points, $p_1$ or $p_2$. The region between these two points is known as the indifference region (IR). For example, suppose you are performing a quality control study on a factory lot of widgets. Management would like the lot to have 3% or fewer defective widgets, and 1% or fewer defectives would make an ideal lot that passes with flying colors. In this example, $p_1 = 0.01$ and $p_2 = 0.03$, and the region between them is the IR because management considers such lots to be marginal and is OK with them being classified either way. Widgets would be sampled one at a time from the lot (sequential analysis) until the test determines, within an acceptable error level, that the lot is ideal or should be rejected.
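
A hedged sketch of this inspection scheme in Python (the error rates, seed, and lot simulation are illustrative assumptions):

```python
import math
import random

def inspect_lot(widgets, p1=0.01, p2=0.03, alpha=0.05, beta=0.05):
    """Sample widgets (1 = defective) until the lot is classified.

    H0: defect rate p1 (ideal lot, accept); H1: defect rate p2 (reject).
    Lots with a true rate inside the indifference region (p1, p2) may
    legitimately end up classified either way.
    """
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    llr_bad = math.log(p2 / p1)              # a defective widget: evidence for H1
    llr_good = math.log((1 - p2) / (1 - p1)) # a good widget: evidence for H0
    s, n = 0.0, 0
    for w in widgets:
        n += 1
        s += llr_bad if w else llr_good
        if s >= b:
            return "reject lot", n
        if s <= a:
            return "accept lot", n
    return "undecided", n

random.seed(7)
lot = (random.random() < 0.005 for _ in range(100_000))  # true rate 0.5%: ideal
print(inspect_lot(lot))
```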

Testing of human examinees

The SPRT is currently the predominant method of classifying examinees in a variable-length computerized classification test (CCT). The two parameters, $p_1$ and $p_2$, are specified by determining a cutscore (threshold) for examinees on the proportion-correct metric, and selecting a point above and below that cutscore. For instance, suppose the cutscore is set at 70% for a test. We could select $p_1 = 0.65$ and $p_2 = 0.75$. The test then evaluates the likelihood that an examinee's true score on that metric is equal to one of those two points. If the examinee is determined to be at 75%, they pass; if at 65%, they fail.
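
In code, the pass/fail decision is the same running sum applied to 0/1-scored item responses. A minimal sketch using the cutscore example above (the error rates and response pattern are illustrative assumptions):

```python
import math

def classify_examinee(responses, p1=0.65, p2=0.75, alpha=0.05, beta=0.05):
    """Variable-length classification: H0 true score = p1 (fail) vs
    H1 true score = p2 (pass), on 0/1 scored item responses."""
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    s = 0.0
    for n, r in enumerate(responses, start=1):
        s += math.log(p2 / p1) if r else math.log((1 - p2) / (1 - p1))
        if s >= b:
            return "pass", n
        if s <= a:
            return "fail", n
    return "continue testing", len(responses)

# 9 of 12 items correct (75%) is usually not yet conclusive evidence,
# so the variable-length test would administer more items.
print(classify_examinee([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1]))
```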

These points are not specified completely arbitrarily. A cutscore should always be set with a legally defensible method, such as a modified Angoff procedure. Again, the indifference region represents the region of scores that the test designer is OK with going either way (pass or fail). The upper parameter $p_2$ is conceptually the highest level that the test designer is willing to accept for a fail (because everyone below it has a good chance of failing), and the lower parameter $p_1$ is the lowest level that the test designer is willing to accept for a pass (because everyone above it has a decent chance of passing). While this definition may seem a relatively small burden, consider the high-stakes case of a licensing test for medical doctors: at just what point should we consider somebody to be at one of these two levels?

While the SPRT was first applied to testing in the days of classical test theory, as described in the previous paragraph, Reckase (1983) suggested that item response theory (IRT) be used to determine the $p_1$ and $p_2$ parameters. The cutscore and indifference region are defined on the latent ability (theta) metric and translated onto the proportion metric for computation (see the sketch after the list below). Research on CCT since then has applied this methodology for several reasons:

  1. Large item banks tend to be calibrated with IRT.
  2. This allows more accurate specification of the parameters.
  3. By using the item response function for each item, the parameters are easily allowed to vary between items.
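
As a rough sketch of the third point (the 3PL item response function is standard IRT, but the procedure and parameter values here are illustrative assumptions rather than Reckase's exact formulation): each administered item gets its own $p_1$ and $p_2$ by evaluating its item response function at the boundaries of the indifference region on the theta metric.

```python
import math

def p_correct(theta, a, b, c=0.0):
    """3PL item response function: probability of a correct response
    for ability theta, discrimination a, difficulty b, guessing c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def item_log_lr(response, item, theta_low, theta_high):
    """Per-item log-likelihood ratio with item-specific p1, p2.

    theta_low and theta_high bound the indifference region on the
    latent-ability metric; the item response function translates them
    onto the proportion metric for this particular item.
    """
    p1 = p_correct(theta_low, *item)
    p2 = p_correct(theta_high, *item)
    return math.log(p2 / p1) if response else math.log((1 - p2) / (1 - p1))

# Two hypothetical items (a, b, c); cutscore theta = 0.0 with a +/- 0.2 IR.
items = [(1.2, -0.5, 0.2), (0.8, 0.3, 0.25)]
s = sum(item_log_lr(r, it, -0.2, 0.2) for r, it in zip([1, 0], items))
print(s)  # compare the running sum to the same thresholds a and b as before
```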

Detection of anomalous medical outcomes

Spiegelhalter et al. have shown that SPRT can be used to monitor the performance of doctors, surgeons and other medical practitioners in such a way as to give early warning of potentially anomalous results. In their 2003 paper, they showed how it could have helped identify Harold Shipman as a murderer well before he was actually identified.


Extensions

MaxSPRT

More recently, in 2011, an extension of the SPRT method called the maximized sequential probability ratio test (MaxSPRT) was introduced. The salient features of MaxSPRT are the allowance of a composite, one-sided alternative hypothesis and the introduction of an upper stopping boundary. The method has been used in several medical research studies.
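
For Poisson-distributed event counts, the maximized log-likelihood ratio has a closed form. The sketch below follows that standard Poisson MaxSPRT statistic; the critical value is a placeholder, since in practice it is computed numerically (not from Wald's approximation) to control the overall type I error over the surveillance period:

```python
import math

def max_llr_poisson(observed, expected):
    """Poisson MaxSPRT statistic: the log-likelihood ratio maximized over
    all relative risks RR >= 1 (the composite one-sided alternative).

    observed: cumulative observed event count
    expected: cumulative expected count under H0
    """
    if observed <= expected:
        return 0.0  # the maximizing RR would fall below 1; clamp to the null
    return expected - observed + observed * math.log(observed / expected)

# Surveillance loop: signal when the statistic crosses the upper boundary.
CRITICAL = 3.0  # placeholder; real applications derive this numerically
for obs, exp in [(2, 1.0), (5, 2.0), (9, 3.0)]:
    stat = max_llr_poisson(obs, exp)
    print(obs, exp, round(stat, 3), "signal" if stat >= CRITICAL else "continue")
```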


See also

  • CUSUM
  • Computerized classification test
  • Wald test
  • Likelihood-ratio test




External links

  • R Package: Wald's Sequential Probability Ratio Test by Stéphane Bottine
