What is the fastest way to get the _ids of all documents in a certain index from Elasticsearch? Is it possible with a simple query? One of my indices contains around 20,000 documents.
Answers:
Edit: please also read @Aleck Landgraf's answer below.
Do you want only the Elasticsearch-internal _id field, or an id field from within your documents?
For the former, try:
curl http://localhost:9200/index/type/_search?pretty=true -d '
{
"query" : {
"match_all" : {}
},
"stored_fields": []
}
'
Note, 2017 update: the post originally used "fields": [], but the name has since changed and stored_fields is the new value.
The result will contain only the "metadata" of your documents:
{
"took" : 7,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 4,
"max_score" : 1.0,
"hits" : [ {
"_index" : "index",
"_type" : "type",
"_id" : "36",
"_score" : 1.0
}, {
"_index" : "index",
"_type" : "type",
"_id" : "38",
"_score" : 1.0
}, {
"_index" : "index",
"_type" : "type",
"_id" : "39",
"_score" : 1.0
}, {
"_index" : "index",
"_type" : "type",
"_id" : "34",
"_score" : 1.0
} ]
}
}
For the latter, if you want to include a field from your document, simply add it to the fields array:
curl http://localhost:9200/index/type/_search?pretty=true -d '
{
"query" : {
"match_all" : {}
},
"fields": ["document_field_to_be_returned"]
}
'
fields has since been removed; add the "_source": false parameter instead.
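On those newer versions, a minimal sketch of the same ids-only request through the Python client (the index name is a placeholder):
from elasticsearch import Elasticsearch

es = Elasticsearch()
# "_source": False returns hits carrying only their metadata (_index, _type, _id, _score);
# this fetches only the first page of hits, so combine it with scroll/scan (below) for the full set
resp = es.search(index='my-index', body={'query': {'match_all': {}}, '_source': False})
ids = [hit['_id'] for hit in resp['hits']['hits']]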
Better to use scroll and scan to get the result list, so Elasticsearch doesn't have to rank and sort the results.
With the elasticsearch-dsl Python lib this can be accomplished by:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
es = Elasticsearch()
s = Search(using=es, index=ES_INDEX, doc_type=DOC_TYPE)
s = s.fields([]) # only get ids, otherwise `fields` takes a list of field names
ids = [h.meta.id for h in s.scan()]
Console log:
GET http://localhost:9200/my_index/my_doc/_search?search_type=scan&scroll=5m [status:200 request:0.003s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.005s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.005s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.003s]
GET http://localhost:9200/_search/scroll?scroll=5m [status:200 request:0.005s]
...
Note: scroll pulls batches of results from a query and keeps the cursor open for a given amount of time (1 minute, 2 minutes, which you can update); scan disables sorting. The scan helper function returns a Python generator which can be safely iterated through.
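The scroll duration is set per request; a minimal sketch passing it to the low-level scan helper directly (the index name is a placeholder):
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
# keep each scroll context alive for 2 minutes between batch fetches
for hit in helpers.scan(es, query={"query": {"match_all": {}}},
                        index="my-index", scroll='2m'):
    print(hit['_id'])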
For Elasticsearch 5.x, you can use the "_source" field:
GET /_search
{
"_source": false,
"query" : {
"term" : { "user" : "kimchy" }
}
}
"fields"
has been deprecated. (Error: "The field [fields] is no longer supported, please use [stored_fields] to retrieve stored fields or _source filtering if the field is not stored")
Another option:
curl 'http://localhost:9200/index/type/_search?pretty=true&fields='
will return _index, _type, _id and _score.
Use stored_fields instead of fields in newer versions.
Elaborating on the answers from @Robert-Lujo and @Aleck-Landgraf (anyone with the permissions is welcome to move this to a comment): if you don't want to print the ids but rather collect everything from the returned generator into a list, here is what I use:
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(hosts=[YOUR_ES_HOST])
# scan like the other answers so far, then drain the generator into a list
a = helpers.scan(es, query={"query": {"match_all": {}}}, scroll='1m', index=INDEX_NAME)
ids = [aa['_id'] for aa in a]
Inspired by @Aleck-Landgraf's answer, for me it worked by using the scan function directly from the standard elasticsearch Python API:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan
es = Elasticsearch()
for dobj in scan(es,
                 query={"query": {"match_all": {}}, "fields": []},
                 index="your-index-name", doc_type="your-doc-type"):
    print(dobj["_id"])
For Python users: the Python Elasticsearch client provides a convenient abstraction for the scroll API:
from elasticsearch import Elasticsearch, helpers
client = Elasticsearch()
query = {
"query": {
"match_all": {}
}
}
scan = helpers.scan(client, index=index, query=query, scroll='1m', size=100)
for doc in scan:
    # do something with each hit, e.g. collect its id
    print(doc['_id'])
I know this post has a lot of answers, but I want to combine several of them to document what I've found to be the fastest (in Python anyway). I'm dealing with hundreds of millions of documents, rather than thousands.
The helpers class can be used with a sliced scroll and thus allow multi-threaded execution. In my case I also have a high-cardinality field to slice on (acquired_at). You'll see I set max_workers to 14, but you may want to vary this depending on your machine.
Additionally, I store the doc ids in compressed format. If you're curious, you can check how many bytes your doc ids take and estimate the final dump size; a rough sketch of that estimate follows below the code.
import gzip
from concurrent import futures
from elasticsearch import Elasticsearch, helpers

# note below I have es, index, and cluster_name variables already set
max_workers = 14
scroll_slice_ids = list(range(max_workers))
def get_doc_ids(scroll_slice_id):
    count = 0
    with gzip.open('/tmp/doc_ids_%i.txt.gz' % scroll_slice_id, 'wt') as results_file:
        # each worker reads one slice of the scroll; "max" must equal the total
        # number of slices, with slice ids in [0, max)
        query = {"sort": ["_doc"],
                 "slice": {"field": "acquired_at", "id": scroll_slice_id, "max": len(scroll_slice_ids)},
                 "_source": False}
        scan = helpers.scan(es, index=index, query=query, scroll='10m', size=10000, request_timeout=600)
        for doc in scan:
            count += 1
            results_file.write(doc['_id'] + '\n')
            results_file.flush()
    return count

if __name__ == '__main__':
    print('attempting to dump doc ids from %s in %i slices' % (cluster_name, len(scroll_slice_ids)))
    with futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        doc_counts = executor.map(get_doc_ids, scroll_slice_ids)
If you want to keep track of how many ids end up in each file, you can use unpigz -c /tmp/doc_ids_4.txt.gz | wc -l.
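Here is the rough dump-size estimate mentioned above; a minimal sketch (the sample _id is hypothetical, and es and index are the same variables as in the code above):
# grab one of your own _ids instead; length varies with how the ids were generated
sample_id = 'FvDUiHkBcOynzwgOD8-X'  # hypothetical 20-byte auto-generated _id
total_docs = es.count(index=index)['count']
# one id plus a newline per line, before gzip compression
approx_bytes = total_docs * (len(sample_id.encode('utf-8')) + 1)
print('approx. %.1f GB uncompressed' % (approx_bytes / 2**30))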
Works fine!
def select_ids(self, **kwargs):
    """
    :param kwargs: params from modules
    :return: array of incidents
    """
    index = kwargs.get('index')
    if not index:
        return None

    # print("Params", kwargs)
    query = self._build_query(**kwargs)
    # print("Query", query)

    # get results
    results = self._db_client.search(body=query, index=index, stored_fields=[], filter_path="hits.hits._id")
    print(results)
    ids = [_['_id'] for _ in results['hits']['hits']]
    return ids
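The filter_path argument is what keeps the response small here, stripping everything except the ids from the reply; a minimal standalone sketch of the same call (the index name and query are placeholders):
from elasticsearch import Elasticsearch

es = Elasticsearch()
# stored_fields=[] omits the source, filter_path trims the response envelope
results = es.search(index='my-index', body={'query': {'match_all': {}}},
                    stored_fields=[], filter_path='hits.hits._id')
ids = [hit['_id'] for hit in results['hits']['hits']]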
Url -> http://localhost:9200/<index>/<type>/_search
HTTP method -> GET
Query -> {"query": {"match_all": {}}, "size": 30000, "fields": ["_id"]}
(On newer versions, fields must be replaced with stored_fields, and a size larger than 10,000 requires raising index.max_result_window.)