Stratified Train/Test-split in scikit-learn


88

I need to split my data into a training set (75%) and a test set (25%). I currently do that with the code below:

X, Xt, userInfo, userInfo_train = sklearn.cross_validation.train_test_split(X, userInfo)   

However, I'd like to stratify my training dataset. How do I do that? I've been looking into the StratifiedKFold method, but it doesn't let me specify the 75%/25% split and only stratifies the training dataset.

Answers:


153

[Update for 0.17]

See the documentation of sklearn.model_selection.train_test_split:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    stratify=y, 
                                                    test_size=0.25)

[/Update for 0.17]
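As a quick sanity check that stratify=y keeps the class proportions in both splits, here is a toy sketch with made-up class counts (80/20 imbalance):

import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative data: 100 samples, 80 of class 0 and 20 of class 1.
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

# Both splits keep the original 80/20 class ratio.
print(np.bincount(y_train) / len(y_train))  # [0.8 0.2]
print(np.bincount(y_test) / len(y_test))    # [0.8 0.2]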

There is a pull request here. But you can simply do train, test = next(iter(StratifiedKFold(...))) and use the train and test indices if you want.
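With the modern model_selection API, that StratifiedKFold trick looks roughly like the sketch below (assuming X and y are array-like); n_splits=4 makes the first fold's test set about 25% of the data:

from sklearn.model_selection import StratifiedKFold

# Take only the first of the four stratified folds: ~75% train / ~25% test.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
train_index, test_index = next(skf.split(X, y))
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]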


1
@AndreasMueller Is there an easy way to stratify regression data?
Jordan

3
@Jordan Nothing is implemented in scikit-learn. I don't know of a standard way; we could use percentiles.
Andreas Mueller
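A minimal sketch of that percentile idea, assuming y is a continuous target and X the matching features (the choice of 10 quantile bins is illustrative):

import numpy as np
from sklearn.model_selection import train_test_split

# Bin the continuous target by its deciles, then stratify on the bin labels.
quantiles = np.percentile(y, np.arange(0, 100, 10))
y_binned = np.digitize(y, quantiles)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y_binned, test_size=0.25)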

@AndreasMueller Have you ever seen this be much slower than StratifiedShuffleSplit? I was using the MNIST dataset.
snymkpr

@activatedgeek That seems very strange, because train_test_split(... stratify=) just calls StratifiedShuffleSplit and takes the first split. Feel free to open an issue on the tracker with a reproducible example.
Andreas Mueller
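For reference, a rough sketch of the equivalence described in the comment above (assuming X and y are array-like):

from sklearn.model_selection import StratifiedShuffleSplit

# Roughly what train_test_split(X, y, stratify=y, test_size=0.25) does
# internally: build one StratifiedShuffleSplit and take its first split.
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.25)
train_index, test_index = next(sss.split(X, y))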

@AndreasMueller I didn't actually open an issue, because I have a strong feeling I'm doing something wrong (even though it was only two lines). But if I can still reproduce it several times today, I definitely will!
snymkpr

29

TL;DR: Use StratifiedShuffleSplit with test_size=0.25

Scikit-learn provides two modules for stratified splitting:

  1. StratifiedKFold: This module is useful as a direct k-fold cross-validation operator: it sets up n_folds training/test sets such that the classes are equally balanced in both.

Here is some code (directly from the documentation referred to above):

>>> from sklearn import cross_validation  # pre-0.18 API
>>> skf = cross_validation.StratifiedKFold(y, n_folds=2)  # 2-fold cross-validation
>>> len(skf)
2
>>> for train_index, test_index in skf:
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
...    #fit and predict with X_train/test. Use accuracy metrics to check validation performance
  2. StratifiedShuffleSplit: This module creates a single training/test set with equally balanced (stratified) classes. Essentially, this is what you want with n_iter=1, and you can specify the test size here just as in train_test_split.

Code:

>>> from sklearn.cross_validation import StratifiedShuffleSplit  # pre-0.18 API
>>> sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0)
>>> len(sss)
1
>>> for train_index, test_index in sss:
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
>>> # fit and predict with your classifier using the above X/y train/test

5
Note that as of 0.18.x, n_iter should be n_splits for StratifiedShuffleSplit, and there's a slightly different API for it: scikit-learn.org/stable/modules/generated/…
lollercoaster

2
If y is a Pandas Series, use y.iloc[train_index], y.iloc[test_index]
Owlright

1
@Owlright I tried using a pandas DataFrame, and the indices that StratifiedShuffleSplit returns are not the indices in the DataFrame. dataframe index: 2,3,5; the first split in sss: [(array([2, 1]), array([0]))] :(
Meghna Natraj '18

2
@tangy Why is this a for loop? Isn't it the case that when the line X_train, X_test = X[train_index], X[test_index] is executed, it overwrites X_train and X_test? Why not just a single next(sss)?
Bartek Wójcik '18

13

Here's an example for continuous/regression data (until this issue on GitHub is resolved).

import numpy as np
from sklearn.model_selection import train_test_split

# Avoid shadowing the built-in min/max names.
y_min = np.amin(y)
y_max = np.amax(y)

# 5 bins may be too few for larger datasets.
bins     = np.linspace(start=y_min, stop=y_max, num=5)
y_binned = np.digitize(y, bins, right=True)

X_train, X_test, y_train, y_test = train_test_split(
    X,
    y,
    stratify=y_binned
)
  • where start and stop are the minimum and maximum of your continuous target.
  • If you don't set right=True, the maximum value more or less ends up in a separate bin of its own, and the split will always fail because that extra bin has too few samples (see the short demo below).
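A tiny demonstration of the right=True point, using made-up values:

import numpy as np

y = np.array([1.0, 1.2, 2.5, 3.0, 4.0, 5.0])             # illustrative values
bins = np.linspace(start=y.min(), stop=y.max(), num=5)   # [1. 2. 3. 4. 5.]

# Default (right=False): the exact maximum is pushed into an extra bin index (5)
# that only max-valued samples ever occupy, which can break stratify= downstream.
print(np.digitize(y, bins))              # [1 1 2 3 4 5]

# right=True keeps the maximum inside the top regular bin (index 4).
print(np.digitize(y, bins, right=True))  # [0 1 2 2 3 4]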



0
import pandas as pd
from sklearn.model_selection import train_test_split

# The remaining train size is roughly 1 - tst_size - vld_size
# (the second split is taken from the data left after the first one).
tst_size = 0.15
vld_size = 0.15

X_train_test, X_valid, y_train_test, y_valid = train_test_split(
    df.drop(y, axis=1), df.y, test_size=vld_size, random_state=13903)

X_train_test_V = pd.DataFrame(X_train_test)
X_valid = pd.DataFrame(X_valid)

X_train, X_test, y_train, y_test = train_test_split(
    X_train_test, y_train_test, test_size=tst_size, random_state=13903)
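Note that the snippet above does not pass stratify= at all; a hedged variant that stratifies both stages (reusing df, tst_size and vld_size from above) could look like this:

from sklearn.model_selection import train_test_split

# Same two-stage split, but stratified on the label at each stage.
X_train_test, X_valid, y_train_test, y_valid = train_test_split(
    df.drop(y, axis=1), df.y,
    test_size=vld_size, stratify=df.y, random_state=13903)

X_train, X_test, y_train, y_test = train_test_split(
    X_train_test, y_train_test,
    test_size=tst_size, stratify=y_train_test, random_state=13903)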

0

Updating @tangy's answer above to the current version of scikit-learn: 0.23.2 (StratifiedShuffleSplit documentation).

from sklearn.model_selection import StratifiedShuffleSplit

n_splits = 1  # We only want a single split in this case
sss = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.25, random_state=0)

for train_index, test_index in sss.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
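If X and y are pandas objects rather than NumPy arrays, index them positionally, e.g. X.iloc[train_index] and y.iloc[train_index], since the indices returned by split() are positional, not DataFrame labels (see Owlright's comment above).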