I'm running the spark-cassandra-connector and hitting a strange problem. I launch spark-shell as:

bin/spark-shell --packages datastax:spark-cassandra-connector:2.0.0-M2-s_2.1

Then I run the following commands:
import com.datastax.spark.connector._
val rdd = sc.cassandraTable("test_spark", "test")
println(rdd.first)
# CassandraRow{id: 2, name: john, age: 29}

The problem is that the following command gives an error:
rdd.take(1).foreach(println)
# CassandraRow{id: 2, name: john, age: 29}
rdd.take(2).foreach(println)
# Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive)
# at com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:128)
# at com.datastax.driver.core.Responses$Error.asException(Responses.java:114)
# at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:467)
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012)
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935)
# at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)

And the following command hangs:
println(rdd.count)

My Cassandra keyspace appears to have the correct replication factor:
describe test_spark;
CREATE KEYSPACE test_spark WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;

How do I fix both of the errors above?
Posted on 2017-02-25 04:11:09
I assume you are hitting a problem with SimpleStrategy and multiple data centers while using LOCAL_ONE (the Spark connector's default) consistency. The driver will look for a node in the local DC to send requests to, but it is possible that all of the replicas exist in a different DC, so the request cannot be satisfied. (CASSANDRA-12053)

If you change your consistency level (set input.consistency.level to ONE), I think it will be resolved. You should also seriously consider switching to NetworkTopologyStrategy.
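As a sketch of the first fix: the connector's input.consistency.level setting mentioned above is exposed under the `spark.cassandra.` property prefix, so it can be passed at launch (the package coordinate is copied from the question):

```shell
# Lower the read consistency from the default LOCAL_ONE to ONE,
# so any live replica in any DC can serve the request.
bin/spark-shell \
  --packages datastax:spark-cassandra-connector:2.0.0-M2-s_2.1 \
  --conf spark.cassandra.input.consistency.level=ONE
```

The same property can alternatively be set on the SparkConf before the SparkContext is created.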
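For the second fix, the keyspace can be moved to NetworkTopologyStrategy with an ALTER KEYSPACE in cqlsh. This is only a sketch: the data-center name `dc1` below is a placeholder, and you should substitute the names shown by `nodetool status` for your cluster.

```sql
-- Replace SimpleStrategy with a per-DC replication map.
-- 'dc1' is a hypothetical DC name; use your actual data-center names.
ALTER KEYSPACE test_spark
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
```

After altering the strategy, run a repair (e.g. `nodetool repair test_spark` on each node) so existing data is streamed to the replica placements implied by the new strategy.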
https://stackoverflow.com/questions/42446887