Pandas UDFs and pyarrow 0.15.0



Recently I started getting a bunch of errors in a number of pyspark jobs running on EMR clusters. The error is

java.lang.IllegalArgumentException
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
    at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
    at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58)
    at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132)
    at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181)
    at org.apache.arrow.vector.ipc.ArrowReader.ensureInitialized(ArrowReader.java:172)
    at org.apache.arrow.vector.ipc.ArrowReader.getVectorSchemaRoot(ArrowReader.java:65)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:162)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.<init>(ArrowEvalPythonExec.scala:98)
    at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:96)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127)...

They all seem to happen in the apply step of a pandas Series inside a pandas UDF. The only change I can find is that pyarrow was updated on Saturday (05/10/2019). Tests seem to work fine with 0.14.1.

So my question is: does anyone know whether this is a bug in the newly released pyarrow, or whether there is a breaking change that will make pandas UDFs harder to use going forward?
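
For reference, here is a minimal sketch of the pattern my jobs use (the column and function names are simplified placeholders): a scalar pandas UDF, which is the path that streams record batches between the JVM and the Python workers over Arrow. With pyarrow 0.14.1 on the workers this kind of job runs fine; after the 0.15.0 update it fails with the exception above.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf, PandasUDFType

    spark = SparkSession.builder.getOrCreate()

    # Scalar pandas UDF: each batch of the input column arrives in the Python
    # worker as a pandas Series via Arrow, and the result Series is sent back
    # the same way.
    @pandas_udf('double', PandasUDFType.SCALAR)
    def plus_one(v):
        # Placeholder for the real per-element logic (my jobs use Series.apply here)
        return (v + 1).astype(float)

    df = spark.range(10)
    df.withColumn('out', plus_one(df['id'])).show()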

Answers:



This is not a bug. We made an important protocol change in 0.15.0 that makes the default behavior of pyarrow incompatible with older versions of Arrow in Java, and your Spark environment appears to be using one of those older versions.

Your options are:

  • Set the environment variable ARROW_PRE_0_15_IPC_FORMAT=1 wherever Python is being used (see the sketch after this list)
  • Downgrade to pyarrow < 0.15.0 for now.
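
For the first option, the variable has to be visible to the Python processes that actually produce or consume the Arrow stream, so on a cluster that means the executors' Python workers (and the driver's Python if you use Arrow-backed toPandas/createDataFrame). A sketch of one way to do this from a PySpark script follows; spark.executorEnv.* is Spark's standard mechanism for passing environment variables to executors, and the equivalent --conf flags on spark-submit work as well.

    import os
    from pyspark.sql import SparkSession

    # Driver-side Python (relevant for Arrow-backed toPandas/createDataFrame)
    os.environ['ARROW_PRE_0_15_IPC_FORMAT'] = '1'

    spark = (
        SparkSession.builder
        # Executor-side Python workers, which run the pandas UDFs. Note this
        # only takes effect if set before the session/executors are created.
        .config('spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT', '1')
        .getOrCreate()
    )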

Hopefully the Spark community will upgrade to 0.15.0 on the Java side soon, at which point this problem will go away.

This is discussed at http://arrow.apache.org/blog/2019/10/06/0.15.0-release/
