In the past, I have used the following approach to calculate absolute deviation moderately efficiently (note that this is a programmer's approach, not a statistician's, so there may well be clever tricks like shabbychef's that are more efficient).
WARNING: this is not an online algorithm. It requires O(n) memory. Furthermore, it has a worst-case performance of O(n) per item for datasets like [1, -2, 4, -8, 16, -32, ...] (i.e. the same as a full recalculation).[1]
However, because it still performs well for many use cases, it may be worth posting here. For example, to calculate the absolute deviation of 10000 random numbers between -100 and 100 as each item arrives, my algorithm takes less than a second, while a full recalculation takes over 17 seconds (on my machine; this will vary per machine and with the input data). You do need to keep the entire vector in memory, however, which may be a constraint for some uses. The outline of the algorithm is as follows:
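For reference, the "full recalculation" baseline in the timing above can be sketched roughly as follows (a minimal illustration of the idea, not the original benchmark code; the function name is my own):

```python
def streaming_abs_dev_naive(xs):
    """Recompute the mean and the absolute deviation from scratch on
    every arrival: O(n) work per item, O(n^2) in total."""
    seen = []
    results = []
    for x in xs:
        seen.append(x)
        m = sum(seen) / len(seen)
        results.append(sum(abs(v - m) for v in seen))
    return results

print(streaming_abs_dev_naive([1, 2, 3]))  # [0.0, 1.0, 2.0]
```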
- Instead of a single vector storing past measurements, use three sorted priority queues (something like min/max heaps). These three lists partition the input into three parts: items greater than the mean, items less than the mean, and items equal to the mean.
- (Almost) every time an item is added the mean changes, so we need to repartition. The crucial point is the sorted nature of the partitions, which means that to repartition we don't have to scan every item in a list, only the items that need to be moved. In the worst case this still requires O(n) move operations, but for many use cases it doesn't.
- With some clever bookkeeping, we can make sure the deviance is calculated correctly at all times, both when repartitioning and when adding new items.
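The bookkeeping in the last step rests on a simple identity: if the mean moves up by some delta and no stored item lies in the crossed interval, every item below the mean gains delta of deviance and every item above loses delta, so the total can be adjusted in bulk without touching individual items. A small sanity check of that identity (my own illustration, not part of the code below):

```python
def abs_dev(xs, m):
    """Total absolute deviation of xs about the value m."""
    return sum(abs(x - m) for x in xs)

xs = [-10, -5, -1, 30]        # 3 items below the mean, 1 above
m = sum(xs) / len(xs)         # 3.5
delta = 1.0                   # shift the mean up; no item lies in (3.5, 4.5]

# bulk adjustment: + delta per item below, - delta per item above
shifted = abs_dev(xs, m) + delta * (3 - 1)
assert shifted == abs_dev(xs, m + delta)
```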
Some sample code in Python follows. Note that it only allows items to be added to the list, not removed. This could easily be added, but at the time I wrote this I had no need for it. Rather than implementing the priority queues myself, I have used the SortedList from Daniel Stutzbach's excellent blist package, which uses B+ trees internally.
Consider this code MIT licensed. It has not been significantly optimised or polished, but it has worked for me in the past. New versions will be available here. Let me know if you have any questions, or find any bugs.
from blist import sortedlist
import operator


class deviance_list:
    def __init__(self):
        self.mean = 0.0
        self._old_mean = 0.0
        self._sum = 0
        self._n = 0  # number of items
        # items greater than the mean
        self._toplist = sortedlist()
        # items less than the mean
        self._bottomlist = sortedlist(key=operator.neg)
        # Since all items in the "eq list" have the same value (self.mean),
        # we don't need to maintain an eq list, only a count
        self._eqlistlen = 0
        self._top_deviance = 0
        self._bottom_deviance = 0

    @property
    def absolute_deviance(self):
        return self._top_deviance + self._bottom_deviance

    def append(self, n):
        # Update summary stats
        self._sum += n
        self._n += 1
        self._old_mean = self.mean
        self.mean = self._sum / float(self._n)

        # Move existing items between partitions
        going_up = self.mean > self._old_mean
        self._rebalance(going_up)

        # Add the new item to the appropriate list
        if n > self.mean:
            self._toplist.add(n)
            self._top_deviance += n - self.mean
        elif n == self.mean:
            self._eqlistlen += 1
        else:
            self._bottomlist.add(n)
            self._bottom_deviance += self.mean - n

    def _move_eqs(self, going_up):
        if going_up:
            self._bottomlist.update([self._old_mean] * self._eqlistlen)
            self._bottom_deviance += (self.mean - self._old_mean) * self._eqlistlen
            self._eqlistlen = 0
        else:
            self._toplist.update([self._old_mean] * self._eqlistlen)
            self._top_deviance += (self._old_mean - self.mean) * self._eqlistlen
            self._eqlistlen = 0

    def _rebalance(self, going_up):
        move_count, eq_move_count = 0, 0
        if going_up:
            # increase the bottom deviance of the items already in the bottomlist
            if self.mean != self._old_mean:
                self._bottom_deviance += len(self._bottomlist) * (self.mean - self._old_mean)
                self._move_eqs(going_up)

            # transfer items from the top to the bottom (or eq) list, adjusting
            # the deviances as we go
            for n in iter(self._toplist):
                if n < self.mean:
                    self._top_deviance -= n - self._old_mean
                    self._bottom_deviance += (self.mean - n)
                    # we increment move_count and move the items after the
                    # iteration has finished, so that we don't modify the list
                    # while iterating over it
                    move_count += 1
                elif n == self.mean:
                    self._top_deviance -= n - self._old_mean
                    self._eqlistlen += 1
                    eq_move_count += 1
                else:
                    break
            for _ in range(move_count):
                self._bottomlist.add(self._toplist.pop(0))
            for _ in range(eq_move_count):
                self._toplist.pop(0)

            # decrease the top deviance of the items remaining in the toplist
            self._top_deviance -= len(self._toplist) * (self.mean - self._old_mean)
        else:
            if self.mean != self._old_mean:
                self._top_deviance += len(self._toplist) * (self._old_mean - self.mean)
                self._move_eqs(going_up)

            for n in iter(self._bottomlist):
                if n > self.mean:
                    self._bottom_deviance -= self._old_mean - n
                    self._top_deviance += n - self.mean
                    move_count += 1
                elif n == self.mean:
                    self._bottom_deviance -= self._old_mean - n
                    self._eqlistlen += 1
                    eq_move_count += 1
                else:
                    break
            for _ in range(move_count):
                self._toplist.add(self._bottomlist.pop(0))
            for _ in range(eq_move_count):
                self._bottomlist.pop(0)

            # decrease the bottom deviance of the items remaining in the bottomlist
            self._bottom_deviance -= len(self._bottomlist) * (self._old_mean - self.mean)


if __name__ == "__main__":
    import random

    dv = deviance_list()
    # Test against some random data, and calculate the result manually
    # (nb. slowly) to ensure correctness
    rands = [random.randint(-100, 100) for _ in range(1000)]
    ns = []
    for n in rands:
        dv.append(n)
        ns.append(n)
        print("added:%4d, mean:%3.2f, oldmean:%3.2f, mean ad:%3.2f" %
              (n, dv.mean, dv._old_mean, dv.absolute_deviance / dv._n))
        assert sum(ns) == dv._sum, "Sums not equal!"
        assert len(ns) == dv._n, "Counts not equal!"
        m = sum(ns) / float(len(ns))
        assert m == dv.mean, "Means not equal!"
        real_abs_dev = sum([abs(m - x) for x in ns])
        # Due to floating point imprecision, we check that the difference
        # between the two ways of calculating the abs. dev. is small, rather
        # than checking for equality
        assert abs(real_abs_dev - dv.absolute_deviance) < 0.01, (
            "Absolute deviances not equal. Real:%.2f, calc:%.2f" % (real_abs_dev, dv.absolute_deviance))
[1] If symptoms persist, see your doctor.