I have a table with 8 columns and roughly 16.7 million records. I need to run a set of if/else equations on the columns. I wrote a script using the UpdateCursor module, but after a few million records it runs out of memory. I'm wondering whether there is a better way to process these 16.7 million records.
import arcpy, time

arcpy.TableToTable_conversion("combine_2013", "D:/mosaic.gdb", "combo_table")
c_table = "D:/mosaic.gdb/combo_table"
fields = ['dev_agg', 'herb_agg', 'forest_agg', 'wat_agg', 'cate_2']

start_time = time.time()
print "Script Started"
with arcpy.da.UpdateCursor(c_table, fields) as cursor:
    for row in cursor:
        # row's 0,1,2,3,4 = dev, herb, forest, water, category
        # classification: water = 1; herb = 2; dev = 3; forest = 4
        if (row[3] >= 0 and row[3] > row[2]):
            row[4] = 1
        elif (row[2] >= 0 and row[2] > row[3]):
            row[4] = 4
        elif (row[1] > 180):
            row[4] = 2
        elif (row[0] > 1):
            row[4] = 3
        cursor.updateRow(row)
end_time = time.time() - start_time
print "Script Complete - " + str(end_time) + " seconds"
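As an aside, the branch logic above can be pulled out into a plain function, which makes it easy to sanity-check against a handful of records before committing to a 16.7-million-row run. A minimal sketch mirroring the script's branches (the `classify` name and argument order are mine, not from the original script):

```python
def classify(dev, herb, forest, water):
    """Return the category code for one record, or None if no rule matches.

    Classification: water = 1; herb = 2; dev = 3; forest = 4.
    Mirrors the elif chain in the UpdateCursor script above.
    """
    if water >= 0 and water > forest:
        return 1
    if forest >= 0 and forest > water:
        return 4
    if herb > 180:
        return 2
    if dev > 1:
        return 3
    return None  # no rule matched; the cursor script leaves cate_2 unchanged
```

Note that the chain is order-dependent: a record with both `herb > 180` and `water > forest` is classified as water, never herb.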
Update 1

I ran the same script on a machine with 40 GB of RAM (the original machine had only 12 GB). It finished successfully after about 16 hours. 16 hours feels too long, but I had never worked with a dataset this large, so I didn't know what to expect. The only addition to the script was arcpy.env.parallelProcessingFactor = "100%". I am now trying two suggested approaches: (1) processing the records in batches of 1 million, and (2) using a SearchCursor and writing the output to a CSV. I will report back on progress shortly.
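For approach (1), one common pattern is to run the cursor repeatedly over OBJECTID ranges so each pass only holds a bounded slice of the table. A hypothetical helper for generating the range filters (`batch_where_clauses` is my own name; it assumes OBJECTIDs run densely from 1 to `max_oid`):

```python
def batch_where_clauses(max_oid, batch_size, oid_field="OBJECTID"):
    """Yield SQL where-clause strings covering 1..max_oid in fixed-size batches.

    Each yielded clause selects one contiguous OBJECTID range; the last
    batch is truncated to max_oid.
    """
    start = 1
    while start <= max_oid:
        end = min(start + batch_size - 1, max_oid)
        yield "%s >= %d AND %s <= %d" % (oid_field, start, oid_field, end)
        start = end + 1
```

Each clause could then be passed as the `where_clause` argument of `arcpy.da.UpdateCursor`, so that a crash or memory spike in one batch does not lose the whole run.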
Update #2

The SearchCursor-and-CSV approach worked brilliantly! I don't have the exact runtime yet; I'll update the post when I'm at work tomorrow, but a runtime of roughly 5-6 minutes is very impressive. I wasn't expecting that. I'm sharing my unpolished code; any comments and improvements are welcome:
import arcpy, csv, time
from arcpy import env

arcpy.env.parallelProcessingFactor = "100%"
arcpy.TableToTable_conversion("D:/mosaic.gdb/combine_2013", "D:/mosaic.gdb", "combo_table")
arcpy.AddField_management("D:/mosaic.gdb/combo_table", "category", "SHORT")

# Table
c_table = "D:/mosaic.gdb/combo_table"
fields = ['wat_agg', 'dev_agg', 'herb_agg', 'forest_agg', 'category', 'OBJECTID']

# CSV
c_csv = open("D:/combine.csv", "w")
c_writer = csv.writer(c_csv, delimiter=';', lineterminator='\n')
c_writer.writerow(['OID', 'CATEGORY'])

start_time = time.time()
with arcpy.da.SearchCursor(c_table, fields) as cursor:
    for row in cursor:
        # row's 0,1,2,3,4,5 = water, dev, herb, forest, category, oid
        # classification: water = 1; dev = 2; herb = 3; forest = 4
        # (SearchCursor rows carry no header, so no rows need skipping)
        if (row[0] >= 0 and row[0] > row[3]):
            c_writer.writerow([row[5], 1])
        elif (row[1] > 1):
            c_writer.writerow([row[5], 2])
        elif (row[2] > 180):
            c_writer.writerow([row[5], 3])
        elif (row[3] >= 0 and row[3] > row[0]):
            c_writer.writerow([row[5], 4])
c_csv.close()
end_time = time.time() - start_time
print str(end_time) + " - Seconds"
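Before joining the CSV back to the table, it can be worth a quick sanity check that every category is represented. The semicolon-delimited output reads back with the same csv module; a sketch using an in-memory stand-in for D:/combine.csv (the sample rows and `category_counts` helper are hypothetical):

```python
import csv
import io

def category_counts(csv_text):
    """Count how many exported rows fell into each category code."""
    counts = {}
    for rec in csv.DictReader(io.StringIO(csv_text), delimiter=';'):
        counts[rec['CATEGORY']] = counts.get(rec['CATEGORY'], 0) + 1
    return counts

# Hypothetical sample standing in for the real D:/combine.csv contents
sample = "OID;CATEGORY\n1;1\n2;4\n3;1\n"
```

An empty or missing category code in the counts would mean some records matched none of the elif branches and were silently skipped.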
Update #3

Final update. The script's total runtime was approximately 199.6 seconds (3.2 minutes).