I'm working with the nltk movie_reviews corpus, which contains a large number of documents. My task is to get the prediction performance on these reviews without data preprocessing. The problem is that the lists documents and documents2 contain the same documents, so I need to shuffle them while keeping the same order in both lists. I can't shuffle them separately, because each time I shuffle a list I get a different order. That's why I need to shuffle both at once, in the same order, because at the end I need to compare them (and the comparison depends on the order). I'm using Python 2.7.

Example (in reality the strings are tokenized, but that is not relevant here):
documents = [(['plot : two teen couples go to a church party , '], 'neg'),
(['drink and then drive . '], 'pos'),
(['they get into an accident . '], 'neg'),
(['one of the guys dies'], 'neg')]
documents2 = [(['plot two teen couples church party'], 'neg'),
(['drink then drive . '], 'pos'),
(['they get accident . '], 'neg'),
(['one guys dies'], 'neg')]
After shuffling both lists, I need to get something like this:
documents = [(['one of the guys dies'], 'neg'),
(['they get into an accident . '], 'neg'),
(['drink and then drive . '], 'pos'),
(['plot : two teen couples go to a church party , '], 'neg')]
documents2 = [(['one guys dies'], 'neg'),
(['they get accident . '], 'neg'),
(['drink then drive . '], 'pos'),
(['plot two teen couples church party'], 'neg')]
I have this code:
import random
import nltk
from nltk.corpus import movie_reviews, stopwords

def cleanDoc(doc):
    stopset = set(stopwords.words('english'))
    stemmer = nltk.PorterStemmer()
    clean = [token.lower() for token in doc if token.lower() not in stopset and len(token) > 2]
    final = [stemmer.stem(word) for word in clean]
    return final
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]
documents2 = [(list(cleanDoc(movie_reviews.words(fileid))), category)
              for category in movie_reviews.categories()
              for fileid in movie_reviews.fileids(category)]
random.shuffle( ??? )  # how do I shuffle documents and documents2 in the same order here?
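One common way to do this (a sketch, not taken from the question) is to zip the two lists into a single list of pairs, shuffle that combined list once, and then split it back into two lists. This guarantees that entry i of documents always stays matched with entry i of documents2:

```python
import random

documents = [(['plot : two teen couples go to a church party , '], 'neg'),
             (['drink and then drive . '], 'pos'),
             (['they get into an accident . '], 'neg'),
             (['one of the guys dies'], 'neg')]
documents2 = [(['plot two teen couples church party'], 'neg'),
              (['drink then drive . '], 'pos'),
              (['they get accident . '], 'neg'),
              (['one guys dies'], 'neg')]

# Pair up corresponding entries, shuffle the pairs in place,
# then split them back into two aligned lists.
combined = list(zip(documents, documents2))
random.shuffle(combined)
documents, documents2 = [list(t) for t in zip(*combined)]
```

This works the same in Python 2.7 and Python 3 (in 2.7, zip already returns a list, so the outer list() is redundant but harmless). An alternative is to shuffle a list of indices and reorder both lists by those indices, which avoids building the intermediate list of pairs.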