【Posted】: 2024-04-27 22:10:02
【Question】:
In Azure Synapse I have a database table with a single column of data type datetime2(7). In Azure Databricks, I have a table with the following schema:
df.schema
StructType(List(StructField(dates_tst,TimestampType,true)))
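For reference, a minimal sketch of how a DataFrame with this exact schema can be built (this assumes a Databricks notebook where a SparkSession named spark is already available; the sample timestamp value is made up):

import datetime
from pyspark.sql.types import StructType, StructField, TimestampType

# One nullable timestamp column, matching the schema dump above.
schema = StructType([StructField("dates_tst", TimestampType(), True)])
df = spark.createDataFrame([(datetime.datetime(2024, 4, 27, 22, 10, 2),)], schema)
df.printSchema()  # dates_tst: timestamp (nullable = true)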
When I try to save it to Synapse, I get the following error message:
Py4JJavaError: An error occurred while calling o535.save.: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 15.0 failed 4 times, most recent failure: Lost task 3.3 in stage 15.0 (TID 46) (10.139.64.5 executor 0): com.microsoft.sqlserver.jdbc.SQLServerException: 110802;An internal DMS error occurred that caused this operation to fail
SqlNativeBufferBufferBulkCopy.WriteTdsDataToServer, error in OdbcDone: SqlState: 42000, NativeError: 4816, 'Error calling: bcp_done(this->GetHdbc()) | SQL Error Info: SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column type from bcp client for colid 1. | Error calling: pConn->Done() | state: FFFF, number: 75205, active connections: 35', Connection String: Driver={pdwodbc17e};app=TypeD00-DmsNativeWriter:DB2\mpdwsvc (56768)-ODBC;autotranslate=no;trusted_connection=yes;server=\\.\pipe\DB.2-e2f5d1c1f0ba-0\sql\query;database=Distribution_24
Edit: Runtime version 9.1 LTS (includes Apache Spark 3.1.2, Scala 2.12).
Edit 2: Solved. The errors were:
- An incorrect format in the write options: I was using "com.microsoft.sqlserver.jdbc.spark" and changed it to "com.databricks.spark.sqldw" (a sketch of the corrected write follows this list).
- There was also an error in the scoped credentials.
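For context, a minimal sketch of what the corrected write call can look like. Only the format string "com.databricks.spark.sqldw" comes from the fix above; the JDBC URL, staging container, and target table name are hypothetical placeholders for a typical Databricks-to-Synapse write:

# Write the DataFrame to Synapse using the Databricks Synapse connector.
# The format string is the fix described above; url, tempDir, and dbTable
# are assumed placeholder values, not taken from the question.
(df.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://<server>.sql.azuresynapse.net:1433;database=<db>")
    .option("tempDir", "abfss://<container>@<account>.dfs.core.windows.net/synapse-staging")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.dates_tst")
    .mode("append")
    .save())

This connector stages the data in the tempDir location and bulk-loads it into Synapse, which sidesteps the bcp code path that produced the "Invalid column type from bcp client" error above.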
【Discussion】:
Tags: azure apache-spark azure-databricks azure-synapse