In a Pandas DataFrame, I want to create a new column conditionally based on the value of another column. In my application, the DataFrame typically has a few million rows, and the number of unique conditional values is small, of the order of unity. Performance is crucial: what is the fastest way to generate the new column?
I created an example case below, and tried and compared different methods. In the example, the conditional fill is represented by a dictionary lookup based on the value of the column label (here: one of 1, 2, 3).
lookup_dict = {
    1: 100,  # arbitrary
    2: 200,  # arbitrary
    3: 300,  # arbitrary
}
I then want my DataFrame to be filled as:
label output
0 3 300
1 2 200
2 3 300
3 3 300
4 2 200
5 2 200
6 1 100
7 1 100
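To make the target concrete, the table above can be reproduced with a plain dictionary lookup (a minimal sketch; the labels here are just for illustration):

```python
import pandas as pd

lookup_dict = {1: 100, 2: 200, 3: 300}  # same arbitrary values as above

# illustrative labels matching the table above
df = pd.DataFrame({'label': [3, 2, 3, 3, 2, 2, 1, 1]})
df['output'] = df['label'].map(lookup_dict)
print(df)
```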
Here are 6 different methods, tested on 10M rows (the Nlines parameter in the test code):
Method 1: pandas.groupby().apply()
Method 2: pandas.groupby().indices.items()
Method 3: pandas.Series.map
Method 4: for loop over the labels
Method 5: numpy.select
Method 6: numba
The full code, with the runtimes of all methods, is given at the end of the answer. The outputs of all methods are asserted to be equal before performances are compared.
Method 1: pandas.groupby().apply()
I use pandas.groupby() on label, then fill each block of identical values with apply().
def fill_output(r):
    ''' called by groupby().apply(): all r.label values are the same '''
    r.loc[:, 'output'] = lookup_dict[r.iloc[0]['label']]
    return r
df = df.groupby('label').apply(fill_output)
I get:
>>> method_1_groupby ran in 2.29s (average over 3 iterations)
Note that groupby().apply() runs twice on the first group to determine which code path to use (see Pandas #2936). This can slow things down when there are few groups. I tried tricking Method 1 by adding a dummy first group, but it did not improve much.
Method 2: pandas.groupby().indices.items()
The second method is a variant: instead of using apply, I access the indices directly with groupby().indices.items(). This ends up being twice as fast as Method 1, and it is the method I used for a long time:
dgb = df.groupby('label')
for label, idx in dgb.indices.items():
    df.loc[idx, 'output'] = lookup_dict[label]
I get:
method_2_indices ran in 1.21s (average over 3 iterations)
Method 3: pandas.Series.map
I used pandas.Series.map:
df['output'] = df.label.map(lookup_dict.get)
I had very good results with it in similar cases where the number of lookup values was comparable to the number of rows. In the present case, map ends up being twice as slow as Method 1.
method_3_map ran in 3.07s (average over 3 iterations)
I attribute this to the small number of lookup values, but there may simply be a problem with the way I implemented it.
Method 4: for loop over the labels
The fourth method is quite naive: I just loop over all labels and select the matching part of the DataFrame.
for label, value in lookup_dict.items():
    df.loc[df.label == label, 'output'] = value
Surprisingly, this turned out to be much faster than the previous methods. I expected the groupby-based solutions to beat this one, since Pandas has to do three full comparisons df.label == label here. The results proved me wrong:
method_4_forloop ran in 0.54s (average over 3 iterations)
Method 5: numpy.select
The fifth method uses the numpy select function, based on this StackOverflow answer.
conditions = [df.label == k for k in lookup_dict.keys()]
choices = list(lookup_dict.values())
df['output'] = np.select(conditions, choices)
This produced the best results so far:
method_5_select ran in 0.29s (average over 3 iterations)
Finally, I tried a numba approach in Method 6.
Method 6: numba
Just for the sake of the example, the conditional fill values are hardcoded in the compiled function. I don't know how to give Numba a list as a runtime constant:
@jit(int64[:](int64[:]), nopython=True)
def hardcoded_conditional_filling(column):
    output = np.zeros_like(column)
    i = 0
    for c in column:
        if c == 1:
            output[i] = 100
        elif c == 2:
            output[i] = 200
        elif c == 3:
            output[i] = 300
        i += 1
    return output
df['output'] = hardcoded_conditional_filling(df.label.values)
I got the best time so far, 50% faster than Method 5.
method_6_numba ran in 0.19s (average over 3 iterations)
For the reason mentioned above, I did not take this approach further: I don't know how to give Numba a list as a runtime constant without a big drop in performance.
Full code
import pandas as pd
import numpy as np
from timeit import timeit
from numba import jit, int64
lookup_dict = {
    1: 100,  # arbitrary
    2: 200,  # arbitrary
    3: 300,  # arbitrary
}
Nlines = int(1e7)
# Generate
label = np.round(np.random.rand(Nlines)*2+1).astype(np.int64)
df0 = pd.DataFrame(label, columns=['label'])
# Now the goal is to assign the look_up_dict values to a new column 'output'
# based on the value of label
# Method 1
# using groupby().apply()
def method_1_groupby(df):
    def fill_output(r):
        ''' called by groupby().apply(): all r.label values are the same '''
        # print(r.iloc[0]['label'])  # activate to reveal the #2936 issue in Pandas
        r.loc[:, 'output'] = lookup_dict[r.iloc[0]['label']]
        return r
    df = df.groupby('label').apply(fill_output)
    return df
def method_2_indices(df):
    dgb = df.groupby('label')
    for label, idx in dgb.indices.items():
        df.loc[idx, 'output'] = lookup_dict[label]
    return df

def method_3_map(df):
    df['output'] = df.label.map(lookup_dict.get)
    return df

def method_4_forloop(df):
    ''' naive '''
    for label, value in lookup_dict.items():
        df.loc[df.label == label, 'output'] = value
    return df
def method_5_select(df):
    ''' Based on answer from
    /sf/ask/17360801/
    '''
    conditions = [df.label == k for k in lookup_dict.keys()]
    choices = list(lookup_dict.values())
    df['output'] = np.select(conditions, choices)
    return df
def method_6_numba(df):
    ''' This works, but it is hardcoded and I don't really know how
    to make it compile with a list as runtime constant '''
    @jit(int64[:](int64[:]), nopython=True)
    def hardcoded_conditional_filling(column):
        output = np.zeros_like(column)
        i = 0
        for c in column:
            if c == 1:
                output[i] = 100
            elif c == 2:
                output[i] = 200
            elif c == 3:
                output[i] = 300
            i += 1
        return output
    df['output'] = hardcoded_conditional_filling(df.label.values)
    return df
df1 = method_1_groupby(df0)
df2 = method_2_indices(df0.copy())
df3 = method_3_map(df0.copy())
df4 = method_4_forloop(df0.copy())
df5 = method_5_select(df0.copy())
df6 = method_6_numba(df0.copy())
# make sure we havent modified the input (would bias the results)
assert 'output' not in df0.columns
# Test validity
assert (df1 == df2).all().all()
assert (df1 == df3).all().all()
assert (df1 == df4).all().all()
assert (df1 == df5).all().all()
assert (df1 == df6).all().all()
# Compare performances
Nites = 3
print('Compare performances for {0:.1g} lines'.format(Nlines))
print('-'*30)
for method in [
        'method_1_groupby', 'method_2_indices',
        'method_3_map', 'method_4_forloop',
        'method_5_select', 'method_6_numba']:
    print('{0} ran in {1:.2f}s (average over {2} iterations)'.format(
        method,
        timeit("{0}(df)".format(method),
               setup="from __main__ import df0, {0}; df=df0.copy()".format(method),
               number=Nites)/Nites,
        Nites))
Output:
Compare performances for 1e+07 lines
------------------------------
method_1_groupby ran in 2.29s (average over 3 iterations)
method_2_indices ran in 1.21s (average over 3 iterations)
method_3_map ran in 3.07s (average over 3 iterations)
method_4_forloop ran in 0.54s (average over 3 iterations)
method_5_select ran in 0.29s (average over 3 iterations)
method_6_numba ran in 0.19s (average over 3 iterations)
I would be interested in any other solution yielding better performance. I was initially looking for Pandas-based methods, but I am open to numba/cython-based solutions as well.
Edit
Adding the methods suggested by chrisb for comparison:
def method_3b_mapdirect(df):
    ''' Suggested by /sf/ask/17360801/'''
    df['output'] = df.label.map(lookup_dict)
    return df

def method_7_take(df):
    ''' Based on answer from
    /sf/ask/17360801/
    Exploiting that labels are continuous integers
    '''
    lookup_arr = np.array(list(lookup_dict.values()))
    df['output'] = lookup_arr.take(df['label'] - 1)
    return df
The runtimes are:
method_3b_mapdirect ran in 0.23s (average over 3 iterations)
method_7_take ran in 0.11s (average over 3 iterations)
This makes #3b faster than any other method (#6 apart), and the most elegant as well. Use #7 if your use case is compatible.
Answer by chrisb:
I'd consider .map (#3) the idiomatic way to do this - but don't pass the .get - use the dictionary by itself, and you should see a pretty significant improvement.
df = pd.DataFrame({'label': np.random.randint(1, 4, size=1000000, dtype='i8')})
%timeit df['output'] = df.label.map(lookup_dict.get)
261 ms ± 12.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit df['output'] = df.label.map(lookup_dict)
69.6 ms ± 3.08 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
If the number of conditions is small, and the comparisons cheap (i.e. ints and your lookup table), direct comparison of the values (4 and especially 5) is faster than .map, but this won't always be true, e.g. if you had a set of strings.
If your lookup labels really are contiguous integers, you can exploit this and look up using a take, which should be about as fast as numba. I think this is basically as fast as it can get - the equivalent could be written in cython, but it won't be faster.
%%timeit
lookup_arr = np.array(list(lookup_dict.values()))
df['output'] = lookup_arr.take(df['label'] - 1)
8.68 ms ± 332 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
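The take trick relies on the labels being contiguous integers starting at 1. If they were not, a similar vectorized lookup could still be built with np.searchsorted (my sketch, using hypothetical non-contiguous labels; it assumes every label actually appears among the dictionary keys):

```python
import numpy as np

lookup_dict = {10: 100, 20: 200, 35: 300}  # hypothetical non-contiguous labels

keys = np.array(sorted(lookup_dict))               # sorted label values
values = np.array([lookup_dict[k] for k in keys])  # outputs in the same order

labels = np.array([35, 10, 20, 35])
idx = np.searchsorted(keys, labels)  # position of each label in the sorted keys
output = values[idx]
print(output)  # [300 100 200 300]
```

This keeps the single-array-indexing character of method #7 at the cost of one binary search per row.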