Answers:
You can manually use the pd.DataFrame constructor, giving it a numpy array (data) and a list of column names (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the []):
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
# save load_iris() sklearn dataset to iris
# if you'd like to check dataset type use: type(load_iris())
# if you'd like to view list of attributes use: dir(load_iris())
iris = load_iris()
# np.c_ is the numpy concatenate function
# which is used to concat iris['data'] and iris['target'] arrays
# for pandas column argument: concat iris['feature_names'] list
# and string list (in this case one string); you can make this anything you'd like..
# the original dataset would probably call this ['Species']
data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
This doesn't work with load_boston(). This answer works more generally: stackoverflow.com/a/46379878/1840471
from sklearn.datasets import load_iris
import pandas as pd
data = load_iris()
df = pd.DataFrame(data.data, columns=data.feature_names)
df.head()
This tutorial may be of interest: http://www.neural.cz/dataset-exploration-boston-house-pricing.html
TomDLT's solution is not general enough for all the datasets in scikit-learn. For example, it does not work for the Boston housing dataset. I propose a different, more universal solution. There is also no need to use numpy.
from sklearn import datasets
import pandas as pd
boston_data = datasets.load_boston()
df_boston = pd.DataFrame(boston_data.data,columns=boston_data.feature_names)
df_boston['target'] = pd.Series(boston_data.target)
df_boston.head()
As a general function:
def sklearn_to_df(sklearn_dataset):
    df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)
    df['target'] = pd.Series(sklearn_dataset.target)
    return df
df_boston = sklearn_to_df(datasets.load_boston())
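Note that in recent scikit-learn releases (1.2 and later) load_boston has been removed, so the call above will fail there; the same helper still works with any loader that exposes data, feature_names and target. A minimal sketch, assuming load_diabetes:
df_diabetes = sklearn_to_df(datasets.load_diabetes())  # same helper, a dataset that still ships with scikit-learn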
It took me 2 hours to figure this out:
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
##iris.keys()
df= pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)
This gets the species back into my pandas DataFrame.
Otherwise, use the seaborn datasets, which are actual pandas DataFrames:
import seaborn
iris = seaborn.load_dataset("iris")
type(iris)
# <class 'pandas.core.frame.DataFrame'>
Compare with the scikit-learn datasets:
from sklearn import datasets
iris = datasets.load_iris()
type(iris)
# <class 'sklearn.utils.Bunch'>
dir(iris)
# ['DESCR', 'data', 'feature_names', 'filename', 'target', 'target_names']
This worked for me.
dataFrame = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                         columns=list(iris['feature_names']) + ['target'])  # list() works whether feature_names is a list or a numpy array
Another way to combine the features and the target variable is np.column_stack (details):
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
data = load_iris()
df = pd.DataFrame(np.column_stack((data.data, data.target)), columns = data.feature_names+['target'])
print(df.head())
Result:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
0 5.1 3.5 1.4 0.2 0.0
1 4.9 3.0 1.4 0.2 0.0
2 4.7 3.2 1.3 0.2 0.0
3 4.6 3.1 1.5 0.2 0.0
4 5.0 3.6 1.4 0.2 0.0
If you need string labels for the target, you can use replace by converting target_names to a dictionary and adding a new column:
df['label'] = df.target.replace(dict(enumerate(data.target_names)))
print(df.head())
Result:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target label
0 5.1 3.5 1.4 0.2 0.0 setosa
1 4.9 3.0 1.4 0.2 0.0 setosa
2 4.7 3.2 1.3 0.2 0.0 setosa
3 4.6 3.1 1.5 0.2 0.0 setosa
4 5.0 3.6 1.4 0.2 0.0 setosa
Basically, what you need is the "data", which is stored in the scikit Bunch; you just need the "target" (the values to predict) from the same Bunch.
So simply merge these two to make the data complete (here using the breast cancer dataset as an example):
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
data_df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
target_df = pd.DataFrame(cancer.target, columns=['target'])
final_df = data_df.join(target_df)
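Equivalently (a sketch, not part of the original answer), pd.concat along axis=1 performs the same column-wise merge:
final_df = pd.concat([data_df, target_df], axis=1)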
You can use the parameter as_frame=True to get pandas DataFrames.
from sklearn import datasets
X,y = datasets.load_iris(return_X_y=True) # numpy arrays
dic_data = datasets.load_iris(as_frame=True)
print(dic_data.keys())
df = dic_data['frame'] # pandas dataframe data + target
df_X = dic_data['data'] # pandas dataframe data only
ser_y = dic_data['target'] # pandas series target only
dic_data['target_names'] # numpy array
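As a side note (assuming scikit-learn >= 0.23, where as_frame was introduced), return_X_y=True can be combined with as_frame=True to get the features and target as a DataFrame and a Series directly:
X_df, y_ser = datasets.load_iris(return_X_y=True, as_frame=True)
print(type(X_df), type(y_ser))  # DataFrame, Series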
import pandas as pd
from sklearn import datasets
fnames = [ i for i in dir(datasets) if 'load_' in i]
print(fnames)
fname = 'load_boston'
loader = getattr(datasets,fname)()
df = pd.DataFrame(loader['data'],columns= loader['feature_names'])
df['target'] = loader['target']
df.head(2)
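Extending that idea, here is a hedged sketch (not a definitive utility) that converts every argument-free load_* dataset exposing feature_names into a DataFrame, skipping loaders that need arguments or have a multi-output target:
import pandas as pd
from sklearn import datasets

frames = {}
for fname in [i for i in dir(datasets) if i.startswith('load_')]:
    try:
        bunch = getattr(datasets, fname)()              # some loaders require arguments
    except TypeError:
        continue                                        # skip those
    if not (hasattr(bunch, 'data') and hasattr(bunch, 'feature_names')):
        continue                                        # e.g. load_sample_images has no feature_names
    df = pd.DataFrame(bunch.data, columns=bunch.feature_names)
    target = getattr(bunch, 'target', None)
    if target is not None and getattr(target, 'ndim', 1) == 1:
        df['target'] = target                           # skip multi-output targets such as load_linnerud
    frames[fname] = df

print({name: frame.shape for name, frame in frames.items()})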
Building on the top answer and addressing my comment, here is a function for the conversion:
def bunch_to_dataframe(bunch):
    fnames = bunch.feature_names
    features = fnames.tolist() if isinstance(fnames, np.ndarray) else fnames
    features += ['target']
    return pd.DataFrame(data=np.c_[bunch['data'], bunch['target']],
                        columns=features)
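A quick usage sketch (assuming a dataset whose feature_names comes back as a numpy array, e.g. load_breast_cancer):
from sklearn.datasets import load_breast_cancer
df_cancer = bunch_to_dataframe(load_breast_cancer())
df_cancer.head(2)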
TomDLT's answer may not work for some of you, because
data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
iris['feature_names'] returns a numpy array there. With a numpy array, you cannot add the array and the list ['target'] with just the + operator. So you first need to convert it to a list, and then add.
Instead, you can do
data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= list(iris['feature_names']) + ['target'])
and this will work fine.
Maybe there is a better way, but here is what I have done in the past, and it works quite well:
items = list(data.items())                      # get all the (key, value) pairs from this Bunch
mydata = pd.DataFrame(items[1][1])              # the attributes (the 'data' entry)
mydata[len(mydata.columns)] = items[2][1]       # add a column for the target variable
mydata.columns = items[-1][1] + [items[2][0]]   # set the column names
Now mydata will have everything you need: the attributes, the target variable, and the column names.
Note that without the list() wrapper, mydata = pd.DataFrame(items[1][1]) raises TypeError: 'dict_items' object does not support indexing on Python 3.
This snippet is only syntactic sugar built upon what TomDLT and rolyat have already contributed and explained. The only differences are that load_iris will return a tuple instead of a dictionary, and the column names are enumerated.
df = pd.DataFrame(np.c_[load_iris(return_X_y=True)])
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
X = iris['data']
y = iris['target']
iris_df = pd.DataFrame(X, columns = iris['feature_names'])
iris_df.head()
One of the best ways:
data = pd.DataFrame(digits.data)
Here digits is the sklearn dataset (from load_digits()), and I convert its data into a pandas DataFrame.
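For completeness, a minimal sketch of where digits would come from (assuming load_digits; the resulting columns are just the 64 pixel positions):
import pandas as pd
from sklearn.datasets import load_digits

digits = load_digits()             # a Bunch with .data, .target, ...
data = pd.DataFrame(digits.data)   # columns 0..63, one per pixel
data['target'] = digits.target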
I took some ideas from your answers, but I could not figure out how to make it shorter :)
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris['feature_names'])
df['target'] = iris['target']
This gives a pandas DataFrame with feature_names plus target as columns and RangeIndex(start=0, stop=len(df), step=1). I would like to have shorter code that can add the 'target' column directly.
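One possible way to shorten it (a sketch using pandas' assign, not from the original answer):
df = pd.DataFrame(iris.data, columns=iris['feature_names']).assign(target=iris['target'])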
This API is a bit cleaner than the suggested responses. Here, use as_frame and make sure to also include the response (target) column.
import pandas as pd
from sklearn.datasets import load_wine
wine = load_wine(as_frame=True)   # load once instead of twice
features, target = wine.data, wine.target
df = features
df['target'] = target
df.head(2)
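Even shorter, the Bunch returned with as_frame=True also carries a frame attribute that already combines features and target (again assuming scikit-learn >= 0.23):
df = load_wine(as_frame=True).frame   # features plus a 'target' column in one DataFrame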
from sklearn.datasets import load_iris
import pandas as pd
iris_dataset = load_iris()
datasets = pd.DataFrame(iris_dataset['data'],
                        columns=iris_dataset['feature_names'])
target_val = pd.Series(iris_dataset['target'], name='target_values')
species = []
for val in target_val:
    if val == 0:
        species.append('iris-setosa')
    if val == 1:
        species.append('iris-versicolor')
    if val == 2:
        species.append('iris-virginica')
species = pd.Series(species)
datasets['target'] = target_val
datasets['target_name'] = species
datasets.head()
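As a shorter alternative to the loop above (a sketch, not the original answer's code), the same species names can be built with a dictionary and Series.map:
name_map = {i: 'iris-' + name for i, name in enumerate(iris_dataset['target_names'])}
datasets['target_name'] = target_val.map(name_map)
datasets.head()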