【Question Title】: Cassandra error "Batch too large" using Python
【Posted】: 2019-12-21 04:48:48
【Question Description】:

I am trying to insert a JSON file into a Cassandra table by using the json module to retrieve the columns from the JSON, then inserting them into Cassandra with a prepared statement. The file is exactly 2778 KB. I don't know how to insert it. Please help!!!

query = """
        INSERT INTO profile9 (id,profilelegacy,profilealternative,aboutlegacy,skills,recommendations,accomplishments,peoplealsoviewed,volunteerExperience,profile)
          VALUES (?,?,?,?,?,?,?,?,?,?);

          """

        insert_user = session.prepare(query)
        batch = BatchStatement(consistency_level=ConsistencyLevel.ONE)
        batch.add(insert_user, (idd, profileLegacy, profilealternative, aboutlegacy, skills,
                                recommendations, accomplishments, peopleAlsoviewed, volunteerExperience, profile,))
        log = logging.getLogger()
        log.info('Batch Insert Completed')
        session.execute(batch)

I get this error:

line 64, in parsing
    session.execute(batch)
  File "C:\Python\Python37\lib\site-packages\cassandra\cluster.py", line 2240, in execute
    timeout, execution_profile, paging_state, host).result()
  File "C:\Python\Python37\lib\site-packages\cassandra\cluster.py", line 4198, in result
    raise self._final_exception
cassandra.InvalidRequest: Error from server: code=2200 [Invalid query] message="Batch too large"

【Question Discussion】:

Tags: python json cassandra cqlsh


【Solution 1】:

There is no reason to use a batch here; it only slows things down and imposes a size limit. Just change it to:

# only prepare this once
prepared = session.prepare(query)
...
session.execute(prepared.bind((idd, profileLegacy, profilealternative, aboutlegacy, skills,
                               recommendations, accomplishments, peopleAlsoviewed,
                               volunteerExperience, profile)))
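As a sketch of how the per-row insert could be driven from the parsed JSON, here is a small helper that maps one decoded document to the bound-value tuple in the column order of the question's INSERT statement. The JSON key names are assumptions (they are taken from the column list in the question; the asker's actual file layout may differ), and `session`/`prepared` are assumed to come from an open cassandra-driver session as in the snippet above:

```python
import json

# Column order matching the INSERT statement in the question.
COLUMNS = ("id", "profilelegacy", "profilealternative", "aboutlegacy",
           "skills", "recommendations", "accomplishments",
           "peoplealsoviewed", "volunteerExperience", "profile")

def row_from_profile(doc):
    """Extract the bound values from one parsed JSON document,
    defaulting to None for any missing key."""
    return tuple(doc.get(col) for col in COLUMNS)

# Hypothetical usage against a cassandra-driver session:
#   prepared = session.prepare(query)          # prepare once, outside any loop
#   with open("profiles.json") as fh:          # file name is an assumption
#       doc = json.load(fh)
#       session.execute(prepared.bind(row_from_profile(doc)))
```

Because each document becomes a single-row INSERT, the 2778 KB file never has to fit inside one batch, so the server-side batch size limit no longer applies.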
    

【Discussion】:
