[Posted]: 2017-02-09 08:00:46
[Problem description]:
I have a question about Cassandra.
When I run nodetool -h 10.169.20.8 cfstats name.name -H
I get the following statistics:
Read Count: 0
Read Latency: NaN ms.
Write Count: 739812
Write Latency: 0.038670616318740435 ms.
Pending Flushes: 0
Table: name
SSTable count: 10
Space used (live): 1.48 GB
Space used (total): 1.48 GB
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 3.04 MB
SSTable Compression Ratio: 0.5047407001982581
Number of keys (estimate): 701190
Memtable cell count: 22562
Memtable data size: 14.12 MB
Memtable off heap memory used: 0 bytes
Memtable switch count: 7
Local read count: 0
Local read latency: NaN ms
Local write count: 739812
Local write latency: 0.043 ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 2.39 MB
Bloom filter off heap memory used: 2.39 MB
Index summary off heap memory used: 302.03 KB
Compression metadata off heap memory used: 366.3 KB
Compacted partition minimum bytes: 87 bytes
Compacted partition maximum bytes: 3.22 MB
Compacted partition mean bytes: 2.99 KB
Average live cells per slice (last five minutes): 1101.2357892212697
Maximum live cells per slice (last five minutes): 1109
Average tombstones per slice (last five minutes): 271.6848030693603
Maximum tombstones per slice (last five minutes): 1109
Dropped Mutations: 0 bytes
Why are the tombstone statistics not 0? We only ever write to this Cassandra cluster; nobody deletes records. We don't use TTLs either; everything is left at the defaults.
A second question (possibly related): the table's row count changes at random, and we don't understand why.
[Discussion]:
-
Without your code or schema this is a very hard question to figure out. Many people end up using collections for things, and if you do overwrites, those collections implicitly create tombstones. This is just a guess.
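
The implicit-tombstone behavior the comment alludes to can be sketched in CQL. The keyspace, table, and column names below are assumptions for illustration only, not taken from the original schema:

```sql
-- Hypothetical table; names are assumptions for illustration.
CREATE TABLE ks.demo (id int PRIMARY KEY, val text, tags set<text>);

-- Writing null (e.g. an unset field bound from application code) creates
-- a cell tombstone even though no DELETE was ever issued:
INSERT INTO ks.demo (id, val) VALUES (1, null);

-- Overwriting a whole collection first writes a range tombstone covering
-- the old contents, then inserts the new elements:
UPDATE ks.demo SET tags = {'a', 'b'} WHERE id = 1;
```

Tombstones written this way show up in the "tombstones per slice" metrics until compaction purges them after gc_grace_seconds, which is one hedged explanation for seeing nonzero tombstone counts in a write-only workload.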
Tags: cassandra datastax datastax-enterprise spark-cassandra-connector