【Title】: org.apache.spark.sql.AnalysisException: cannot resolve '`sub_tot`' given input columns in pyspark
【Posted】: 2020-05-23 19:27:54
【Problem Description】:

I am unable to use select to pick the columns I need from a DataFrame. If I select one column from the df_ord DataFrame, the result shows that one column from df_ord together with the incorrectly renamed columns of the df_ord_item DataFrame. Please refer to the attached screenshot.

Also, when I select multiple columns from the two DataFrames, I get an error. Please help.
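The analyzed plan in the traceback below shows how df_ord was built: a header-less CSV whose four columns (_c0.._c3) were renamed to ord_id, ord_dt, cust_id and ord_status, so sub_tot is not among them. A minimal sketch of that presumed setup (the file paths and the df_ord_item column names are assumptions for illustration, not from the original post):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders").getOrCreate()

    # df_ord: four columns, matching the Project node in the analyzed plan below
    df_ord = (spark.read.csv("/data/retail_db/orders")            # hypothetical path
              .toDF("ord_id", "ord_dt", "cust_id", "ord_status"))

    # df_ord_item: assumed 6-column layout holding the item details, including sub_tot
    df_ord_item = (spark.read.csv("/data/retail_db/order_items")  # hypothetical path
                   .toDF("ord_item_id", "ord_item_ord_id", "ord_item_prod_id",
                         "ord_item_qty", "sub_tot", "ord_item_prod_price"))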

Py4JJavaError                             Traceback (most recent call last)
/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py in deco(*a, **kw)
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:

/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:

Py4JJavaError: An error occurred while calling o79.select.
: org.apache.spark.sql.AnalysisException: cannot resolve '`sub_tot`' given input columns: [ord_id, ord_dt, cust_id, ord_status];;
'Project [ord_id#122, 'sub_tot]
+- AnalysisBarrier
      +- Project [_c0#114 AS ord_id#122, _c1#115 AS ord_dt#123, _c2#116 AS cust_id#124, _c3#117 AS ord_status#125]
         +- Relation[_c0#114,_c1#115,_c2#116,_c3#117] csv

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:88)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:106)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:118)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$1.apply(QueryPlan.scala:122)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:122)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:80)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:80)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3295)
    at org.apache.spark.sql.Dataset.select(Dataset.scala:1307)
    at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)


During handling of the above exception, another exception occurred:

AnalysisException                         Traceback (most recent call last)
<ipython-input-33-20400ec965ec> in <module>
----> 1 df_ord.select("ord_id","sub_tot"). \
      2         where("ord_status in ('COMPLETE','CLOSED')"). \
      3         join(df_ord_item,df_ord.ord_id == df_ord_item.ord_item_ord_id).show()

/usr/hdp/current/spark2-client/python/pyspark/sql/dataframe.py in select(self, *cols)
   1200         [Row(name=u'Alice', age=12), Row(name=u'Bob', age=15)]
   1201         """
-> 1202         jdf = self._jdf.select(self._jcols(*cols))
   1203         return DataFrame(jdf, self.sql_ctx)
   1204 

/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161 
   1162         for temp_arg in temp_args:

/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: "cannot resolve '`sub_tot`' given input columns: [ord_id, ord_dt, cust_id, ord_status];;\n'Project [ord_id#122, 'sub_tot]\n+- AnalysisBarrier\n      +- Project [_c0#114 AS ord_id#122, _c1#115 AS ord_dt#123, _c2#116 AS cust_id#124, _c3#117 AS ord_status#125]\n         +- Relation[_c0#114,_c1#115,_c2#116,_c3#117] csv\n"

【Question Discussion】:

    Tags: pyspark


    【Solution 1】:

    You are mixing up which methods apply to which DataFrames.

    This statement selects the ord_id column from df_ord, plus all of the columns from the df_ord_item DataFrame:

    (df_ord
     .select("ord_id")   # <- keep only the ord_id column of df_ord
     .join(df_ord_item,
           df_ord.ord_id == df_ord_item.ord_item_ord_id)  # <- join this 1-column frame with the 6-column df_ord_item
     .show())            # <- show the resulting 7-column DataFrame

    This statement selects only the ord_id column after the join:

    (df_ord              # <- start from all 4 columns of df_ord
     .join(df_ord_item,
           df_ord.ord_id == df_ord_item.ord_item_ord_id)  # <- join the 4-column frame with the 6-column df_ord_item
     .select("ord_id")   # <- keep only ord_id from the 10-column joined result
     .show())            # <- show the resulting 1-column DataFrame

    Think of fluent interfaces as pipelines: methods called later in the chain operate on the result of the methods called earlier.
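
    Applied to the question's failing statement, this means joining first and selecting afterwards, so that sub_tot is in scope when it is referenced. A minimal sketch, assuming sub_tot is a column of df_ord_item (the error message confirms it is not in df_ord):

    # Quick sanity check: list each frame's columns before selecting
    print(df_ord.columns)        # ['ord_id', 'ord_dt', 'cust_id', 'ord_status']
    print(df_ord_item.columns)   # should contain 'sub_tot'

    # Filter and join first; after the join, sub_tot resolves normally
    (df_ord
     .where("ord_status in ('COMPLETE','CLOSED')")
     .join(df_ord_item, df_ord.ord_id == df_ord_item.ord_item_ord_id)
     .select("ord_id", "sub_tot")
     .show())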

    【Discussion】:

    • Thank you very much, Dave, for the clarification. I did not know about the second approach; it solves my problem. Thanks again for the prompt reply.
    • Happy to help. Consider upvoting the answer you accepted. That helps it rank higher in search results, so people with the same problem can find your question faster.