【Title】: PSQLException: ERROR: syntax error at or near "WITH" [duplicate]
【Posted】: 2020-01-03 14:24:14
【Question】:

I have a larger query that builds several window functions via a WITH clause. It runs perfectly fine against amazon-rds and amazon-redshift databases, whether executed from a Python script through the Pandas SQL connector or from any SQL browser. However, if I run the same query through the Spark (PySpark) JDBC connector, it fails, and I cannot find any hint as to why Spark won't accept it. Any pointers are welcome. Thanks, Alex

I tried Pandas' read_sql and several SQL browsers -> works fine. I also tried the Spark JDBC connector with other SQL statements that contain no WITH clause -> works fine.

Below is a simplified code example.

The simplified SQL query:

mysql_test="""
WITH my_raw_table AS

(
    SELECT 
        created_utc || '@' || sub_order_nr AS order_column, 
        operation_type, 
        id_in,
        id_type_in,
        created_utc
    FROM sample.table
)

SELECT DISTINCT 
    operation_type
    ,ROW_NUMBER() OVER window_desc AS row_number
    ,FIRST_VALUE(created_utc) OVER window_asc AS created_utc_first
    ,FIRST_VALUE(created_utc) OVER window_desc AS created_utc_last
    ,FIRST_VALUE(order_column) OVER window_asc AS order_column_first
    ,FIRST_VALUE(order_column) OVER window_desc AS order_column_last
FROM my_raw_table
WINDOW
    window_desc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column DESC
        ),
    window_asc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column ASC
        )
ORDER BY 
    operation_type
    ,order_column_last
"""

Code that works with Pandas:

# Open a database connection via the project's helper module
conn = my_modul.get_my_connection()

# Execute the query through Pandas and fetch the result as a DataFrame
my_result = pd.read_sql(mysql_test, conn)

conn.close()

my_result.head()

Code that fails with the Spark JDBC reader:

conn = my_modul.get_my_connection()

# Passing the raw SQL string as the `table` argument fails with the WITH error
my_result = spark.read.jdbc(url=conn['url'], table=mysql_test, properties=conn['properties'])

my_result.show()

The main problem is that it reports a syntax error at WITH:

Py4JJavaError: An error occurred while calling o551.jdbc.
: org.postgresql.util.PSQLException: ERROR: syntax error at or near "WITH"

I don't understand why.

The full error message is:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-40-353e32a024e8> in <module>
     11 
     12 
---> 13 verbauwege_spark_sql = spark.read.jdbc(url=conn['url'], table=mysql_test, properties= conn['properties'])
     14 
     15 row_count=verbauwege_spark_sql.count()

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/pyspark/sql/readwriter.py in jdbc(self, url, table, column, lowerBound, upperBound, numPartitions, predicates, properties)
    554             jpredicates = utils.toJArray(gateway, gateway.jvm.java.lang.String, predicates)
    555             return self._df(self._jreader.jdbc(url, table, jpredicates, jprop))
--> 556         return self._df(self._jreader.jdbc(url, table, jprop))
    557 
    558 

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o551.jdbc.
: org.postgresql.util.PSQLException: ERROR: syntax error at or near "WITH"
  Position: 15
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2468)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2211)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:309)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:446)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:370)
    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:149)
    at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:108)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:238)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:745)

【Comments】:

  • Spark inserts the table name literally into a SELECT ... FROM $table ... statement, so you have to wrap the query in parentheses and turn it into a valid subquery that can stand in for a table name (see the sketch below).
  • @HristoIliev Ah, thanks, this hint solved the problem.
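
To illustrate the first comment, here is a minimal sketch of what the JDBC reader effectively executes, inferred from the JDBCRDD$.resolveTable frame in the stack trace; the exact probe statement can differ between Spark versions, and probe / wrapped / my_subquery are illustrative names, not Spark API:

# Spark resolves the schema by substituting the `table` argument into a SELECT.
table_arg = mysql_test  # the raw "WITH ..." string from the question

probe = "SELECT * FROM " + table_arg + " WHERE 1=0"
# -> "SELECT * FROM WITH my_raw_table AS (...) ...": invalid SQL, which lines
#    up with the reported 'syntax error at or near "WITH"', just past the
#    14 characters of "SELECT * FROM ".

wrapped = "(" + table_arg + ") AS my_subquery"
probe_ok = "SELECT * FROM " + wrapped + " WHERE 1=0"
# -> "SELECT * FROM (WITH ...) AS my_subquery WHERE 1=0": valid SQL.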

Tags: apache-spark pyspark apache-spark-sql amazon-redshift amazon-rds


【Solution 1】:

The solution is to wrap the entire SQL statement in parentheses and give it an alias, so that the Spark JDBC reader can treat it as a subquery:

mysql_test="""
(
WITH my_raw_table AS

(
    SELECT 
        created_utc || '@' || sub_order_nr AS order_column, 
        operation_type, 
        id_in,
        id_type_in,
        created_utc
    FROM sample.table
)

SELECT DISTINCT 
    operation_type
    ,ROW_NUMBER() OVER window_desc AS row_number
    ,FIRST_VALUE(created_utc) OVER window_asc AS created_utc_first
    ,FIRST_VALUE(created_utc) OVER window_desc AS created_utc_last
    ,FIRST_VALUE(order_column) OVER window_asc AS order_column_first
    ,FIRST_VALUE(order_column) OVER window_desc AS order_column_last
FROM my_raw_table
WINDOW
    window_desc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column DESC
        ),
    window_asc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column ASC
        )
ORDER BY 
    operation_type
    ,order_column_last
) as my_redshift_result
"""

【Discussion】:
