I need to split my data into a training set (75%) and a test set (25%). I currently do that with the code below:
X, Xt, userInfo, userInfo_train = sklearn.cross_validation.train_test_split(X, userInfo)
However, I'd like to stratify my training dataset. How do I do that? I've been looking into the StratifiedKFold method, but it doesn't let me specify the 75%/25% split, and it only stratifies the training dataset.
Answers:
[update for 0.17]
See the docs of sklearn.model_selection.train_test_split:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    stratify=y,
                                                    test_size=0.25)
[/update for 0.17]
There is a pull request here. But if you want, you can simply do train, test = next(iter(StratifiedKFold(...))) and use the train and test indices.
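As a minimal sketch of that idea against the 0.18+ API (where splits come from .split(X, y) rather than iterating the object directly), with a made-up X and y purely for illustration; n_splits=4 makes each held-out fold 25% of the data:

import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy data: 8 samples, two balanced classes (assumed for illustration).
X = np.arange(16).reshape(8, 2)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# With n_splits=4 each fold holds out 25% of the samples, so keeping only
# the first fold gives a stratified 75%/25% train/test split.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
train_index, test_index = next(iter(skf.split(X, y)))

X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]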
TL;DR: Use StratifiedShuffleSplit with test_size=0.25
Scikit-learn provides two modules for stratified splitting:

1. StratifiedKFold: sets up n_folds training/testing sets such that the classes are equally balanced in both.

Here's some code (directly from the documentation above):
>>> skf = cross_validation.StratifiedKFold(y, n_folds=2) #2-fold cross validation
>>> len(skf)
2
>>> for train_index, test_index in skf:
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     # fit and predict with X_train/test; use accuracy metrics to check validation performance
2. StratifiedShuffleSplit: produces a single stratified training/testing split, which is essentially what you want here with n_iter=1. You can specify the test size just as in train_test_split.

Code:
>>> sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0)
>>> len(sss)
1
>>> for train_index, test_index in sss:
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
>>> # fit and predict with your classifier using the above X/y train/test
Note that as of 0.18.x, n_iter should be n_splits for StratifiedShuffleSplit, and there is a slightly different API for it: scikit-learn.org/stable/modules/generation/…
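A rough sketch of that renamed parameter in use, with toy X and y assumed purely for illustration (under the newer API the labels go to .split() rather than the constructor):

import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.arange(16).reshape(8, 2)          # toy features (assumed)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # toy labels (assumed)

# n_splits replaces n_iter, and y is passed to .split(X, y) instead of the constructor.
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_index, test_index = next(sss.split(X, y))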
If y is a Pandas Series, use y.iloc[train_index], y.iloc[test_index].
If the DataFrame index is, say, 2, 3, 5, the first split in sss is [(array([2, 1]), array([0]))] :(

Doesn't X_train, X_test = X[train_index], X[test_index] just get overwritten on every pass through the loop? Why not simply a single next(sss)?
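A minimal sketch of the .iloc-based indexing suggested above, using a made-up DataFrame with a non-default index (data and column names are assumptions for illustration); the splitter returns positional indices, and a single split can be taken with next() instead of a loop:

import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit

# Toy data with a non-default index (assumed for illustration).
X = pd.DataFrame({"feat": [10, 20, 30, 40, 50, 60]}, index=[2, 3, 5, 7, 11, 13])
y = pd.Series([0, 0, 0, 1, 1, 1], index=X.index)

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
# The indices are positional, so pandas objects must be indexed with .iloc.
train_index, test_index = next(sss.split(X, y))
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]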
Here's an example for continuous/regression data (until this issue on GitHub is resolved):
import numpy as np
from sklearn.model_selection import train_test_split

# Bin the continuous target so that train_test_split can stratify on the bins.
y_min = np.amin(y)
y_max = np.amax(y)

# 5 bins may be too few for larger datasets.
bins = np.linspace(start=y_min, stop=y_max, num=5)
y_binned = np.digitize(y, bins, right=True)

X_train, X_test, y_train, y_test = train_test_split(
    X,
    y,
    stratify=y_binned
)
Are start and stop the min and max of your continuous target? If you don't set right=True, it will more or less put the maximum value in a separate bin, and the split will always fail because there are too few samples in that extra bin.
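A tiny illustration of that edge case with a made-up target (the values below are assumptions, chosen only to show np.digitize's output):

import numpy as np

y = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])  # toy continuous target
bins = np.linspace(start=y.min(), stop=y.max(), num=5)        # array([1., 2., 3., 4., 5.])

# Default right=False: the maximum falls past the last edge and sits alone
# in bin 5, which makes stratified splitting fail.
print(np.digitize(y, bins))              # [1 1 2 2 3 3 4 4 5]
# right=True folds the maximum back into bin 4.
print(np.digitize(y, bins, right=True))  # [0 1 1 2 2 3 3 4 4]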
You can simply do this with the train_test_split() method available in Scikit-learn:
from sklearn.model_selection import train_test_split
train, test = train_test_split(X, test_size=0.25, stratify=X['YOUR_COLUMN_LABEL'])
I have also prepared a short GitHub Gist that shows how the stratify option works:
https://gist.github.com/SHi-ON/63839f3a3647051a180cb03af0f7d0d9
In addition to the accepted answer by @Andreas Mueller, and to what @tangy mentioned above: StratifiedShuffleSplit most closely resembles train_test_split(stratify=y), with a few added features.
import pandas as pd
from sklearn.model_selection import train_test_split

# df is your DataFrame and y the name of its target column (placeholders).
# The resulting train size is roughly 1 - tst_size - vld_size.
tst_size = 0.15
vld_size = 0.15
# First split off the validation set, then carve the test set out of the rest.
X_train_test, X_valid, y_train_test, y_valid = train_test_split(df.drop(y, axis=1), df.y, test_size=vld_size, random_state=13903)
X_train_test_V = pd.DataFrame(X_train_test)
X_valid = pd.DataFrame(X_valid)
X_train, X_test, y_train, y_test = train_test_split(X_train_test, y_train_test, test_size=tst_size, random_state=13903)
Updating the @tangy answer above for the current version of scikit-learn: 0.23.2 (StratifiedShuffleSplit documentation).
from sklearn.model_selection import StratifiedShuffleSplit
n_splits = 1 # We only want a single split in this case
sss = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.25, random_state=0)
for train_index, test_index in sss.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]