Answers:
Under the null hypothesis that the distributions are identical, with both samples obtained randomly and independently from the common distribution, we can work out the sizes of all (deterministic) tests obtained by comparing a letter value of one sample to a letter value of the other. Some of these tests appear to have reasonable power to detect differences between the distributions.
The original definition of the $5$-letter summary of any ordered batch of numbers $x_1 \le x_2 \le \cdots \le x_n$ is the following [Tukey, EDA 1977]:

For any number $q$ of the form $q = (i + (i+1))/2$ with integral $i$, define $x_q = (x_i + x_{i+1})/2$.

Let $i = (n+1)/2$ (the depth of the median).

Let $j = (\lfloor i \rfloor + 1)/2$ (the depth of the hinges).

The $5$-letter summary is the set $\{X_{-} = x_1,\ H_{-} = x_j,\ M = x_i,\ H_{+} = x_{n+1-j},\ X_{+} = x_n\}$. Its elements are known as the minimum, lower hinge, median, upper hinge, and maximum, respectively.
For example, in a batch of $n = 8$ data values we may compute $i = 4.5$ and $j = 2.5$, whence the summary consists of $x_1$, $(x_2+x_3)/2$, $(x_4+x_5)/2$, $(x_6+x_7)/2$, and $x_8$.
The hinges are close to the quartiles (but usually do not coincide with them exactly). If quartiles are used, note that in general they will be weighted arithmetic means of two order statistics, and thus will lie within an interval $[x_i, x_{i+1}]$, where $i$ can be determined from $n$ and the algorithm used to compute the quartiles. In general, when $q$ lies in an interval $[i, i+1]$, I will loosely write $x_q$ to refer to some such weighted mean of $x_i$ and $x_{i+1}$.
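For concreteness, here is a small R sketch (my own illustration, not part of the original text) that computes the letter values exactly as defined above; on complete data it agrees with R's built-in fivenum():

five_letter <- function(x) {
  # Tukey's 5-letter summary, following the definition above
  x <- sort(x)
  n <- length(x)
  i <- (n + 1) / 2                      # depth of the median
  j <- (floor(i) + 1) / 2               # depth of the hinges
  at <- function(q) (x[floor(q)] + x[ceiling(q)]) / 2   # x_q for (half-)integral q
  c(min = x[1], lower.hinge = at(j), median = at(i),
    upper.hinge = at(n + 1 - j), max = x[n])
}
z <- rnorm(11)
rbind(five_letter(z), fivenum(z))   # the two rows agree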
For two batches of data $(x_i,\ i = 1, \ldots, n)$ and $(y_j,\ j = 1, \ldots, m)$ there are two separate five-letter summaries. We can test the null hypothesis that both batches are i.i.d. random samples from a common distribution $F$ by comparing one of the $x$ letters, $x_q$, to one of the $y$ letters, $y_r$. For example, we could compare the upper hinge of $x$ to the lower hinge of $y$ in order to see whether $x$ is significantly less than $y$. This leads to a definite question: how to compute the chance

$$\Pr\left[x_q < y_r\right].$$
For fractional $q$ and $r$ this is not possible without knowing $F$. However, because $x_{\lfloor q \rfloor} \le x_q \le x_{\lceil q \rceil}$ and $y_{\lfloor r \rfloor} \le y_r \le y_{\lceil r \rceil}$, a fortiori

$$\Pr\left[x_{\lceil q \rceil} < y_{\lfloor r \rfloor}\right] \;\le\; \Pr\left[x_q < y_r\right] \;\le\; \Pr\left[x_{\lfloor q \rfloor} < y_{\lceil r \rceil}\right].$$

Thus we can obtain universal bounds on the desired probabilities (bounds not depending on $F$) by computing probabilities that compare individual order statistics. The general problem before us, then, is this:
What is the chance that the $q$th smallest of $n$ values will be less than the $r$th smallest of $m$ values, when all are drawn i.i.d. from a common distribution?
Even this does not have a universal answer unless we rule out the possibility that probability is too heavily concentrated on individual values: in other words, we need to assume that ties are not possible. This means $F$ must be a continuous distribution. Although this is an assumption, it is a weak one and it is non-parametric.
The distribution $F$ plays no role in the calculation, because upon re-expressing all values by means of the probability transform $x \to F(x)$, we obtain new batches

$$F(x) = \left(F(x_1), F(x_2), \ldots, F(x_n)\right)$$

and

$$F(y) = \left(F(y_1), F(y_2), \ldots, F(y_m)\right).$$
Moreover, this re-expression is monotonic and increasing: it preserves order, and in so doing preserves the event $x_q < y_r$. Because $F$ is continuous, these new batches are drawn from a Uniform$[0,1]$ distribution. Under this distribution -- and dropping the now superfluous "$F$" from the notation -- we easily find that $x_q$ (for integral $q$) has a Beta$(q, n+1-q)$ distribution:

$$\Pr\left[x_q \le x\right] = \frac{n!}{(q-1)!\,(n-q)!} \int_0^x t^{q-1}(1-t)^{n-q}\,dt.$$
Similarly, the distribution of $y_r$ is Beta$(r, m+1-r)$. By performing the double integration over the region $x_q < y_r$ we can obtain the desired probability,

$$\Pr\left[x_q < y_r\right] = \frac{n!\,m!}{(q-1)!\,(n-q)!\,(r-1)!\,(m-r)!} \int_0^1\!\!\int_0^{y} x^{q-1}(1-x)^{n-q}\; y^{r-1}(1-y)^{m-r}\,dx\,dy.$$
Because all of the values $n, m, q, r$ are integral, all the $\Gamma$ values appearing when this integral is evaluated in closed form are really just factorials: $\Gamma(k) = (k-1)!$ for integral $k \ge 1$. The closed form involves the little-known function ${}_3\tilde{F}_2$, a regularized hypergeometric function. In this case it can be computed as a rather simple alternating sum of length $n+1-q$, normalized by some factorials:

$$\Pr\left[x_q < y_r\right] = \frac{n!\,m!}{(q-1)!\,(r-1)!}\sum_{k=0}^{n-q}\frac{(-1)^k\,(q+r+k-1)!}{k!\,(n-q-k)!\,(q+k)\,(m+q+k)!}.$$
This has reduced the calculation of the probability to nothing more complicated than addition, subtraction, multiplication, and division. The computational effort scales with the length of the sum, $n+1-q$. By exploiting the symmetry

$$\Pr\left[x_q < y_r\right] = 1 - \Pr\left[y_r < x_q\right],$$

the calculation can instead be made to scale as $m+1-r$, allowing us to pick the shorter of the two sums if we wish. This will rarely be necessary, though, because $5$-letter summaries tend to be used only for small batches.
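For reference, here is a direct R transcription of that sum (my own sketch, added as a check; it is not code from the original analysis):

pr.less <- function(q, r, n, m) {
  # Pr[x_(q) < y_(r)] for iid samples of sizes n and m from a common continuous
  # distribution; plain factorials are fine for the small batches considered here.
  k <- 0:(n - q)
  factorial(n) * factorial(m) / (factorial(q - 1) * factorial(r - 1)) *
    sum((-1)^k * factorial(q + r + k - 1) /
          (factorial(k) * factorial(n - q - k) * (q + k) * factorial(m + q + k)))
}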
Suppose the two batches have sizes $n = 8$ and $m = 12$. The relevant order statistics for $x$ and $y$ are $x_1, x_3, x_5, x_7, x_8$ and $y_1, y_3, y_6, y_9, y_{12}$, respectively (the fractional letter depths rounded up for $x$ and down for $y$). Here is a table of the chance that $x_q < y_r$, with $q$ indexing the rows and $r$ indexing the columns:
q\r      1        3        6        9        12
1      0.4      0.807    0.9762   0.9987   1.
3      0.0491   0.2962   0.7404   0.9601   0.9993
5      0.0036   0.0521   0.325    0.7492   0.9856
7      0.0001   0.0032   0.0542   0.3065   0.8526
8      0.       0.0004   0.0102   0.1022   0.6
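The table can be reproduced with the pr.less sketch above. Equivalently (another check of mine, not part of the original answer), $x_{(q)} < y_{(r)}$ happens exactly when at least $q$ of the $q+r-1$ smallest values of the pooled sample come from the $x$ batch, which gives the same numbers as a short hypergeometric sum:

round(outer(c(1, 3, 5, 7, 8), c(1, 3, 6, 9, 12),
            Vectorize(function(q, r) pr.less(q, r, 8, 12))), 4)

# Equivalent counting form, free of large factorials:
pr.less2 <- function(q, r, n, m) {
  k <- q:min(n, q + r - 1)
  sum(choose(n, k) * choose(m, q + r - 1 - k)) / choose(n + m, q + r - 1)
}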
A simulation of 10,000 iid sample pairs from a standard Normal distribution gave results close to these.
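For instance, a quick Monte Carlo check of the three borderline entries discussed below might look like this (my own sketch):

set.seed(17)
sim <- replicate(1e4, {
  x <- sort(rnorm(8)); y <- sort(rnorm(12))
  c(x[3] < y[1], x[5] < y[3], x[7] < y[6])
})
rowMeans(sim)   # should come out near 0.0491, 0.0521 and 0.0542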
To construct a one-sided test at size $\alpha$ ($5\%$, say) to determine whether the $x$ batch is significantly less than the $y$ batch, look for values in this table close to or just under $\alpha$. Good choices are at $(q, r) = (3, 1)$, where the chance is $0.0491$; at $(5, 3)$, with a chance of $0.0521$; and at $(7, 6)$, with a chance of $0.0542$. Which one to use depends on your thoughts about the alternative hypothesis. For instance, the $(3, 1)$ test compares the lower hinge of $x$ to the smallest value of $y$ and finds a significant difference when that lower hinge is the smaller one. This test is sensitive to an extreme value of $y$; if there is some concern about outlying data, this might be a risky test to choose. On the other hand the $(7, 6)$ test compares the upper hinge of $x$ to the median of $y$. This one is very robust to outlying values in the $y$ batch and moderately robust to outliers in $x$. However, it compares middle values of $x$ to middle values of $y$. Although this is probably a good comparison to make, it will not detect differences in the distributions that occur only in either tail.
Being able to compute these critical values analytically helps in selecting a test. Once one (or several) tests are identified, their power to detect changes is probably best evaluated through simulation. The power will depend heavily on how the distributions differ. To get a sense of whether these tests have any power at all, I conducted one of them with the $y$ batch drawn i.i.d. from a Normal distribution whose median was shifted up by one standard deviation. In the simulation the test rejected far more often than its nominal size: that is appreciable power for datasets this small.
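Such a power check is easy to run; here is a sketch (mine; the particular (7, 6) comparison is an arbitrary illustrative choice, since the text above does not record which test was used):

set.seed(17)
mean(replicate(1e4, sort(rnorm(8))[7] < sort(rnorm(12, mean = 1))[6]))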
Much more can be said, but all of it is routine material about conducting two-sided tests, assessing effect sizes, and so on. The principal point has been demonstrated: given the $5$-letter summaries (and sizes) of two batches of data, it is possible to construct reasonably powerful non-parametric tests to detect differences in their underlying populations, and in many cases we even have several tests to choose from. The theory developed here has broader application to comparing two populations by means of appropriately selected order statistics from their samples (not just those approximating the letter summaries).
These results have other useful applications. For instance, a boxplot is a graphical depiction of a $5$-letter summary. Thus, along with knowledge of the sample size shown by a boxplot, we have available a number of simple tests (based on comparing parts of one box-and-whisker plot to another) to assess the significance of visually apparent differences in those plots.
I'm pretty confident there isn't going to be one already in the literature, but if you seek a nonparametric test, it would have to be under the assumption of continuity of the underlying variable -- you could look at something like an ECDF-type statistic - say some equivalent to a Kolmogorov-Smirnov-type statistic or something akin to an Anderson-Darling statistic (though of course the distribution of the statistic will be very different in this case).
The distribution for small samples will depend on the precise definitions of the quantiles used in the five number summary.
Consider, for example, the default quartiles and extreme values in R (n=10):
> summary(x)[-4]
Min. 1st Qu. Median 3rd Qu. Max.
-2.33500 -0.26450 0.07787 0.33740 0.94770
compared to those generated by its command for the five number summary:
> fivenum(x)
[1] -2.33458172 -0.34739104 0.07786866 0.38008143 0.94774213
Note that the upper and lower quartiles differ from the corresponding hinges in the fivenum
command.
By contrast, at n=9 the two results are identical (when they all occur at observations)
(R comes with nine different definitions for quantiles.)
The case where all three quartiles occur at observations (when n = 4k+1, I believe, and possibly in more cases under some definitions) might actually be doable algebraically and should be nonparametric, but the general case (across many definitions) may not be so doable, and may not be nonparametric (consider the case where you're averaging observations to produce quantiles in at least one of the samples ... in that case the probabilities of different arrangements of the sample quantiles may no longer be unaffected by the distribution of the data).
Once a fixed definition is chosen, simulation would seem to be the way to proceed.
Because it will be nonparametric at a subset of possible values of n, the fact that it's no longer distribution free for other values may not be such a big concern; one might call it nearly distribution free at intermediate sample sizes, at least if the n's are not too small.
Let's look at some cases that should be distribution free, and consider some small sample sizes. Say a KS-type statistic applied directly to the five number summary itself, for sample sizes where the five number summary values will be individual order statistics.
Note that this doesn't really 'emulate' the K-S test exactly, since the jumps in the tail are too large compared to the KS, for example. On the other hand, it's not easy to say what the jumps at the summary values should be in order to stand in for all the values between them. Different sets of weights/jumps will have different type-I error characteristics and different power characteristics, and I am not sure what is best to choose (choosing values slightly different from equal could help get a finer set of significance levels, though). My purpose, then, is simply to show that the general approach may be feasible, not to recommend any specific procedure. An arbitrary set of weights for each value in the summary will still give a nonparametric test, as long as they're not chosen with reference to the data.
Anyway, here goes:
Finding the null distribution/critical values via simulation
At n=5 and 5 in the two samples, we needn't do anything special - that's a straight KS test.
At n=9 and 9, we can do uniform simulation:
ks9.9 <- replicate(10000,ks.test(fivenum(runif(9)),fivenum(runif(9)))$statistic)
plot(table(ks9.9)/10000,type="h"); abline(h=0,col=8)
# Here's the empirical cdf:
cumsum(table(ks9.9)/10000)
0.2 0.4 0.6 0.8
0.3730 0.9092 0.9966 1.0000
so with n=9 in both samples, you can reject at roughly alpha = 0.09 (using D >= 0.6), or at roughly alpha = 0.0034 (using D >= 0.8). (We shouldn't expect nice alpha steps. When the n's are moderately large we should expect not to have anything but very big or very tiny choices for alpha.)
Some nearby sample-size combinations similarly yield rejection rules with a nice near-5% significance level, and others with a near-2.5% level.
At sample sizes near these, this approach should be feasible, but if both n's are much above 21 or so, this won't work well at all.
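A sketch of how the same simulation could be wrapped up for other sample sizes in this range (my code, not from the answer), reading achievable alpha levels off the upper tail:

fn.ks.null <- function(n1, n2, nsim = 10000)
  replicate(nsim, ks.test(fivenum(runif(n1)), fivenum(runif(n2)))$statistic)

round(rev(cumsum(rev(table(fn.ks.null(9, 13)) / 10000))), 4)   # P(D >= d) for each achievable d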
--
A very fast 'by inspection' test
We see a rejection rule of D >= 0.6 coming up often in the cases we looked at. What sample arrangements lead to that? I think it is the following two cases:
(i) When the whole of one sample is on one side of the other group's median.
(ii) When the boxes (the range covered by the quartiles) don't overlap.
So there's a nice super-simple nonparametric rejection rule for you -- but it usually won't be at a 'nice' significance level unless the sample sizes aren't too far from 9-13.
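In code, the rule can be checked straight from the two five-number summaries (a sketch of mine, with each summary given as c(min, lower hinge, median, upper hinge, max)):

inspect.reject <- function(a, b) {
  one.sided   <- a[5] < b[3] || a[1] > b[3] || b[5] < a[3] || b[1] > a[3]  # case (i)
  boxes.apart <- a[4] < b[2] || b[4] < a[2]                                # case (ii)
  one.sided || boxes.apart
}
inspect.reject(fivenum(rnorm(9)), fivenum(rnorm(9, mean = 2)))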
Getting a finer set of possible levels
Anyway, producing tables for similar cases should be relatively straightforward. At medium to large n, this test will only have very small (or very large) achievable levels and won't be of practical use except for cases where the difference is obvious.
Interestingly, one approach to increasing the achievable levels would be to set the jumps in the 'fivenum' cdf according to a Golomb ruler, so that the difference between any pair of cdf values would be different from that of any other pair. It might be worth seeing if that has much effect on power (my guess: probably not a lot).
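Here is one way such a weighted variant might be coded (my sketch; the particular ruler {0, 1, 4, 10, 12, 17} is just an example of a Golomb ruler, not something taken from the answer):

fn.dist <- function(a, b, w = rep(1, 5) / 5) {
  # KS-type distance between two five-number summaries a and b, with cdf jumps w
  # at the summary values; w = rep(1, 5)/5 is the equal-jump case
  pts <- sort(c(a, b))
  Fa <- sapply(pts, function(t) sum(w[a <= t]))
  Fb <- sapply(pts, function(t) sum(w[b <= t]))
  max(abs(Fa - Fb))
}
w.golomb <- diff(c(0, 1, 4, 10, 12, 17)) / 17   # cumulative cdf levels form a Golomb ruler
fn.dist(fivenum(rnorm(9)), fivenum(rnorm(9)), w = w.golomb)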
Compared to these K-S like tests, I'd expect something more like an Anderson-Darling to be more powerful, but the question is how to weight for this five-number summary case. I imagine that can be tackled, but I'm not sure the extent to which it's worth it.
Power
Let's see how it goes at picking up a difference at these small sample sizes. Below is a power curve for normal data, where the effect, del, is the number of standard deviations by which the second sample is shifted up:

[Figure: simulated power as a function of del]
This seems like quite a plausible power curve. So it seems to work okay at least at these small sample sizes.
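A sketch of the kind of simulation behind such a curve (my code; I have assumed n1 = n2 = 9 and the D >= 0.6 rule, which the text above does not state explicitly):

power.del <- function(del, n = 9, nsim = 5000)
  mean(replicate(nsim,
    ks.test(fivenum(rnorm(n)), fivenum(rnorm(n, mean = del)))$statistic > 0.5))  # i.e. D >= 0.6
sapply(seq(0, 3, by = 0.5), power.del)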
What about robust, rather than nonparametric?
If nonparametric tests aren't so crucial, and robust tests are acceptable instead, we could look at some more direct comparison of the three quartile values in the summary, such as an interval for the median based on the IQR and the sample size (derived from some nominal distribution around which robustness is desired, such as the normal -- this is the reasoning behind notched box plots, for example). This should tend to work much better at large sample sizes than the nonparametric test, which will suffer from a lack of appropriate significance levels.
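A sketch of that kind of comparison (my own; the 1.58 * IQR / sqrt(n) half-width is the conventional notch used by R's boxplot.stats, with the hinge spread standing in for the IQR):

notch.sep <- function(fn.x, n, fn.y, m) {
  # TRUE if the median notches do NOT overlap -- a rough indication of a difference
  nx <- 1.58 * (fn.x[4] - fn.x[2]) / sqrt(n)
  ny <- 1.58 * (fn.y[4] - fn.y[2]) / sqrt(m)
  fn.x[3] + nx < fn.y[3] - ny || fn.y[3] + ny < fn.x[3] - nx
}
notch.sep(fivenum(rnorm(40)), 40, fivenum(rnorm(40, mean = 1)), 40)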
I don't see how there could be such a test, at least without some assumptions.
You can have two different distributions that have the same 5 number summary:
Here is a trivial example, where I change only 2 numbers, but clearly more numbers could be changed:

set.seed(123)
# Create data
x <- rnorm(1000)
# Modify it without changing the 5 number summary: nudge the 100th and 900th
# smallest values, which stay strictly between the extremes and the hinges,
# so none of the five summary values is affected.
x2 <- sort(x)
x2[100] <- x2[100] - 1
x2[900] <- x2[900] + 1
fivenum(x)
fivenum(x2)