Slow pandas DataFrame MultiIndex reindex



I have a pandas DataFrame of the form:

                       id                start_time  sequence_no    value
0                      71 2018-10-17 20:12:43+00:00       114428        3
1                      71 2018-10-17 20:12:43+00:00       114429        3
2                      71 2018-10-17 20:12:43+00:00       114431       79
3                      71 2019-11-06 00:51:14+00:00       216009      100
4                      71 2019-11-06 00:51:14+00:00       216011      150
5                      71 2019-11-06 00:51:14+00:00       216013      180
6                      92 2019-12-01 00:51:14+00:00       114430       19
7                      92 2019-12-01 00:51:14+00:00       114433       79
8                      92 2019-12-01 00:51:14+00:00       114434      100

What I am trying to do is fill in the missing sequence_no values for each id/start_time combination. For example, the id/start_time pairing of 71 and 2018-10-17 20:12:43+00:00 is missing sequence_no 114430. For each missing sequence_no that gets added, I also need to average/interpolate the missing value column. So the final result for the data above should look like:

                       id                start_time  sequence_no    value
0                      71 2018-10-17 20:12:43+00:00       114428        3
1                      71 2018-10-17 20:12:43+00:00       114429        3
2                      71 2018-10-17 20:12:43+00:00       114430       41  **
3                      71 2018-10-17 20:12:43+00:00       114431       79
4                      71 2019-11-06 00:51:14+00:00       216009      100  
5                      71 2019-11-06 00:51:14+00:00       216010      125  **
6                      71 2019-11-06 00:51:14+00:00       216011      150
7                      71 2019-11-06 00:51:14+00:00       216012      165  **
8                      71 2019-11-06 00:51:14+00:00       216013      180
9                      92 2019-12-01 00:51:14+00:00       114430       19
10                     92 2019-12-01 00:51:14+00:00       114431       39  **
11                     92 2019-12-01 00:51:14+00:00       114432       59  **
12                     92 2019-12-01 00:51:14+00:00       114433       79
13                     92 2019-12-01 00:51:14+00:00       114434      100

(** added to the right of the newly inserted rows for readability)
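
For reference, the inserted value entries are just pandas' default linear interpolation between the neighbouring known values. A minimal illustration (my own sketch, not part of the original post) of where the 41 in the first inserted row comes from:

import numpy as np
import pandas as pd

# 114430 sits between sequence_no 114429 (value 3) and 114431 (value 79),
# so linear interpolation fills in the midpoint: (3 + 79) / 2 = 41.
print(pd.Series([3, np.nan, 79]).interpolate())
# 0     3.0
# 1    41.0
# 2    79.0
# dtype: float64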

My original solution for this relied heavily on Python loops over large data tables, so it seemed like the ideal place for numpy and pandas to shine. Leaning on SO answers like Pandas: create rows to fill numeric gaps, I came up with:

import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),   
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a new DataFrame with the min/max `sequence_no` values for each `id`/`start_time` pairing
by_start = df.groupby(['start_time', 'id'])
ranges = by_start.agg(
    sequence_min=('sequence_no', np.min), sequence_max=('sequence_no', np.max)
)
reset = ranges.reset_index()

mins = reset['sequence_min']
maxes = reset['sequence_max']

# Use those min/max values to generate a sequence with ALL values in that range
expanded = pd.DataFrame(dict(
    start_time=reset['start_time'].repeat(maxes - mins + 1),
    id=reset['id'].repeat(maxes - mins + 1),
    sequence_no=np.concatenate([np.arange(lo, hi + 1) for lo, hi in zip(mins, maxes)])
))

# Use the above generated DataFrame as an index to generate the missing rows, then interpolate
expanded_index = pd.MultiIndex.from_frame(expanded)
result = df.set_index(
    ['start_time', 'id', 'sequence_no']
).reindex(expanded_index).interpolate()

The output is correct, but it runs at nearly the same speed as my lots-of-python-loops solution. I'm sure there are places where a few steps could be trimmed, but the slowest part in testing appears to be the reindex. Given that the real-world data consists of almost a million rows (operated on frequently), is there any obvious way to gain some performance over what I've already written? Any ways I can speed up this transformation?
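
For anyone timing the alternatives, here is a sketch of a benchmark harness (my own addition; make_data and its default sizes are hypothetical, not from the question) that builds synthetic data with gaps at roughly the scale described:

import time

import numpy as np
import pandas as pd

def make_data(n_groups=1_000, rows_per_group=1_000, drop_frac=0.3, seed=0):
    # Build a contiguous sequence_no run per (id, start_time) group, then
    # randomly drop interior rows to create the gaps that need filling.
    rng = np.random.default_rng(seed)
    frames = []
    for g in range(n_groups):
        seq = np.arange(rows_per_group)
        keep = rng.random(rows_per_group) > drop_frac
        keep[0] = keep[-1] = True  # keep endpoints so each group's min/max survive
        frames.append(pd.DataFrame({
            'id': g,
            'start_time': '2019-01-01 00:00:00+00:00',
            'sequence_no': seq[keep],
            'value': seq[keep].astype(float),
        }))
    return pd.concat(frames, ignore_index=True)

df = make_data()
start = time.perf_counter()
# ... run one of the candidate implementations against df here ...
print(f"elapsed: {time.perf_counter() - start:.2f}s")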

Update 9/12/2019

When tested on a sufficiently large data set, combining the merge-based solution from the accepted answer below with the original construction of the expanded DataFrame yields the fastest results by far:

import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),   
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a ranges df with groupby and agg
ranges = df.groupby(['start_time', 'id'])['sequence_no'].agg([
    ('sequence_min', np.min), ('sequence_max', np.max)
])
reset = ranges.reset_index()

mins = reset['sequence_min']
maxes = reset['sequence_max']

# Use those min/max values to generate a sequence with ALL values in that range
expanded = pd.DataFrame(dict(
    start_time=reset['start_time'].repeat(maxes - mins + 1),
    id=reset['id'].repeat(maxes - mins + 1),
    sequence_no=np.concatenate([np.arange(lo, hi + 1) for lo, hi in zip(mins, maxes)])
))

# merge expanded and df
merge = expanded.merge(df, on=['start_time', 'id', 'sequence_no'], how='left')
# interpolate and assign values 
merge['value'] = merge['value'].interpolate()
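
As a sanity check (my addition, not from the original post), the merged result can be compared against the reindex-based output, since both emit rows in the order of expanded:

# Rebuild the reindex-based result and confirm the two approaches agree.
check = (
    df.set_index(['start_time', 'id', 'sequence_no'])
      .reindex(pd.MultiIndex.from_frame(expanded))
      .interpolate()
      .reset_index()
)
pd.testing.assert_frame_equal(
    merge[['start_time', 'id', 'sequence_no', 'value']].reset_index(drop=True),
    check[['start_time', 'id', 'sequence_no', 'value']].reset_index(drop=True),
)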

Answers:



Using merge instead of reindex can speed things up. Also, using map instead of a list comprehension helps as well.

import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),   
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a ranges df with groupby and agg
ranges = df.groupby(['start_time', 'id'])['sequence_no'].agg([('sequence_min', np.min), ('sequence_max', np.max)])
# map with range to build the full sequence_no range for each group
ranges['sequence_no'] = list(map(lambda x, y: range(x, y), ranges.pop('sequence_min'), ranges.pop('sequence_max') + 1))
# explode your DataFrame
new_df = ranges.explode('sequence_no')
# merge new_df and df
merge = new_df.reset_index().merge(df, on=['start_time', 'id', 'sequence_no'], how='left')
# interpolate and assign values 
merge['value'] = merge['value'].interpolate()

                   start_time  id sequence_no  value
0   2018-10-17 20:12:43+00:00  71      114428    3.0
1   2018-10-17 20:12:43+00:00  71      114429    3.0
2   2018-10-17 20:12:43+00:00  71      114430   41.0
3   2018-10-17 20:12:43+00:00  71      114431   79.0
4   2019-11-06 00:51:14+00:00  71      216009  100.0
5   2019-11-06 00:51:14+00:00  71      216010  125.0
6   2019-11-06 00:51:14+00:00  71      216011  150.0
7   2019-11-06 00:51:14+00:00  71      216012  165.0
8   2019-11-06 00:51:14+00:00  71      216013  180.0
9   2019-12-01 00:51:14+00:00  92      114430   19.0
10  2019-12-01 00:51:14+00:00  92      114431   39.0
11  2019-12-01 00:51:14+00:00  92      114432   59.0
12  2019-12-01 00:51:14+00:00  92      114433   79.0
13  2019-12-01 00:51:14+00:00  92      114434  100.0

This is an interesting case of "one step forward, one step back." You're right that merge is significantly faster than reindex, but it turns out explode is very slow on larger data sets. Combining your merge with the original construction of the expanded data set gives the fastest implementation so far (see the question's 9/12/2019 update).

@MBrizzle Also, I should note that adding copy=False to the merge speeds things up a little more and avoids unnecessary copying of data:

merge = expanded.merge(df, on=['start_time', 'id', 'sequence_no'], how='left', copy=False)

– Yo_Chris


A shorter version of the merge solution:

df.groupby(['start_time', 'id'])['sequence_no']\
.apply(lambda x: np.arange(x.min(), x.max() + 1))\
.explode().reset_index()\
.merge(df, on=['start_time', 'id', 'sequence_no'], how='left')\
.interpolate()

Output:

                   start_time  id sequence_no  value
0   2018-10-17 20:12:43+00:00  71      114428    3.0
1   2018-10-17 20:12:43+00:00  71      114429    3.0
2   2018-10-17 20:12:43+00:00  71      114430   41.0
3   2018-10-17 20:12:43+00:00  71      114431   79.0
4   2019-11-06 00:51:14+00:00  71      216009  100.0
5   2019-11-06 00:51:14+00:00  71      216010  125.0
6   2019-11-06 00:51:14+00:00  71      216011  150.0
7   2019-11-06 00:51:14+00:00  71      216012  165.0
8   2019-11-06 00:51:14+00:00  71      216013  180.0
9   2019-12-01 00:51:14+00:00  92      114430   19.0
10  2019-12-01 00:51:14+00:00  92      114431   39.0
11  2019-12-01 00:51:14+00:00  92      114432   59.0
12  2019-12-01 00:51:14+00:00  92      114433   79.0
13  2019-12-01 00:51:14+00:00  92      114434  100.0
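
Since the question mentions running this transformation frequently, the chained version wraps naturally into a helper. A sketch along those lines (the name fill_gaps is mine, and it assumes the same column layout as the question):

import numpy as np
import pandas as pd

def fill_gaps(df):
    # Expand each (start_time, id) group to its full sequence_no range.
    expanded = (df.groupby(['start_time', 'id'])['sequence_no']
                  .apply(lambda x: np.arange(x.min(), x.max() + 1))
                  .explode().reset_index())
    # explode leaves sequence_no as object dtype; cast back so the merge
    # keys line up with the original int64 column.
    expanded['sequence_no'] = expanded['sequence_no'].astype('int64')
    # Merge the original rows back in and interpolate the gaps.
    out = expanded.merge(df, on=['start_time', 'id', 'sequence_no'], how='left')
    out['value'] = out['value'].interpolate()
    return out

Because each group's minimum and maximum sequence_no always come from real rows, the interpolation never has to fill across a group boundary.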


An alternative solution using reindex, without explode:

result = (df.groupby(["id","start_time"])
          .apply(lambda d: d.set_index("sequence_no")
          .reindex(range(min(d["sequence_no"]),max(d["sequence_no"])+1)))
          .drop(["id","start_time"],axis=1).reset_index()
          .interpolate())

print (result)

    id                 start_time  sequence_no  value
0   71  2018-10-17 20:12:43+00:00       114428    3.0
1   71  2018-10-17 20:12:43+00:00       114429    3.0
2   71  2018-10-17 20:12:43+00:00       114430   41.0
3   71  2018-10-17 20:12:43+00:00       114431   79.0
4   71  2019-11-06 00:51:14+00:00       216009  100.0
5   71  2019-11-06 00:51:14+00:00       216010  125.0
6   71  2019-11-06 00:51:14+00:00       216011  150.0
7   71  2019-11-06 00:51:14+00:00       216012  165.0
8   71  2019-11-06 00:51:14+00:00       216013  180.0
9   92  2019-12-01 00:51:14+00:00       114430   19.0
10  92  2019-12-01 00:51:14+00:00       114431   39.0
11  92  2019-12-01 00:51:14+00:00       114432   59.0
12  92  2019-12-01 00:51:14+00:00       114433   79.0
13  92  2019-12-01 00:51:14+00:00       114434  100.0
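
One caveat (my comment, not the answerer's): because this variant runs a Python-level lambda plus a separate reindex for every group, its cost grows with the number of id/start_time pairs, so on data of the scale the question describes it will likely trail the single vectorized merge above.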