Grega's answer helped me solve my problem. I am running Spark locally from a Python script inside a Docker container. Initially, I hit a Java out-of-memory error when processing some data in Spark, but I was able to allocate more memory by adding the following lines to my script:
conf = SparkConf()
conf.set("spark.driver.memory", "4g")
Here is a complete example of the Python script I use to start Spark:
import os
import sys
import glob
spark_home = '<DIRECTORY WHERE SPARK FILES EXIST>/spark-2.0.0-bin-hadoop2.7/'
driver_home = '<DIRECTORY WHERE DRIVERS EXIST>'
# Point SPARK_HOME at the local Spark installation if it is not already set
if 'SPARK_HOME' not in os.environ:
    os.environ['SPARK_HOME'] = spark_home
SPARK_HOME = os.environ['SPARK_HOME']
# Make the PySpark package and its bundled zip libraries importable
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
for lib in glob.glob(os.path.join(SPARK_HOME, "python", "lib", "*.zip")):
    sys.path.insert(0, lib)
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.sql import SQLContext
# Allocate more memory so Spark does not hit Java out-of-memory errors
conf = SparkConf()
conf.set("spark.executor.memory", "4g")
conf.set("spark.driver.memory", "4g")
conf.set("spark.cores.max", "2")
conf.set("spark.driver.extraClassPath",
driver_home+'/jdbc/postgresql-9.4-1201-jdbc41.jar:'\
+driver_home+'/jdbc/clickhouse-jdbc-0.1.52.jar:'\
+driver_home+'/mongo/mongo-spark-connector_2.11-2.2.3.jar:'\
+driver_home+'/mongo/mongo-java-driver-3.8.0.jar')
# Create (or reuse) the SparkContext with this configuration and wrap it in an SQLContext
sc = SparkContext.getOrCreate(conf)
spark = SQLContext(sc)
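With the context in place, the SQLContext can load data through one of the JDBC drivers on that classpath. Here is a minimal sketch, assuming a hypothetical PostgreSQL host, database, table, and credentials (replace them with your own):
# Hypothetical connection details -- not part of the original script
url = "jdbc:postgresql://db-host:5432/mydb"
df = spark.read \
    .format("jdbc") \
    .option("url", url) \
    .option("dbtable", "public.my_table") \
    .option("user", "myuser") \
    .option("password", "mypassword") \
    .option("driver", "org.postgresql.Driver") \
    .load()
df.show(5)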