When running CQL commands in a CQL script, is there a way to pass a variable into the command, e.g.:
select * from "Column Family Name" where "ColumnName"='A variable which takes different values'; Any suggestions are welcome.
Posted on 2015-05-21 23:26:00
No, CQL really has no way to define variables, run loops, or perform updates/queries based on those variables.
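If you need to stay with cqlsh, one common workaround is to substitute the value into the statement text before it ever reaches cqlsh, for example from a small wrapper script. This is only a sketch: the keyspace, table, and column names below are placeholders, and the quoting is done in Python precisely because CQL itself cannot bind a variable.

```python
# Hypothetical value the query should filter on.
myvalue = "A123"

# Build the statement outside cqlsh; the substitution happens here,
# not inside CQL, because CQL has no variables of its own.
stmt = "SELECT * FROM my_keyspace.my_table WHERE colname = '%s';" % myvalue

# The command a wrapper script would run, e.g. via subprocess.run(cmd);
# `cqlsh -e` executes the given statement and exits.
cmd = ["cqlsh", "-e", stmt]
print(cmd[2])
```

Note that this is plain text substitution, so it is only safe for values you control; for anything user-supplied, use a driver with bound parameters as shown below.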
As an alternative, I typically use the DataStax Python driver for simple tasks/scripts like this. Here is an excerpt from a Python script I have used to populate product colors from a CSV file.
# connect to Cassandra
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import csv

auth_provider = PlainTextAuthProvider(username='username', password='currentHorseBatteryStaple')
cluster = Cluster(['127.0.0.1'], auth_provider=auth_provider)
session = cluster.connect('products')

# prepare statements
preparedUpdate = session.prepare(
    """
    UPDATE products.productsByItemID SET color=? WHERE itemid=? AND productid=?;
    """
)
# end prepare statements

counter = 0

# read csv file (csvfilename is the path to the CSV file, defined elsewhere in the script)
with open(csvfilename) as csvfile:
    dataFile = csv.DictReader(csvfile, delimiter=',')
    for csvRow in dataFile:
        itemid = csvRow['itemid']
        color = csvRow['customcolor']
        productid = csvRow['productid']
        # update product color; the bound values fill the ? placeholders in order
        session.execute(preparedUpdate, [color, itemid, productid])
        counter = counter + 1

# close Cassandra connection (shutting down the cluster also closes its sessions)
cluster.shutdown()

print("updated %d colors" % counter)

For more information, check out the DataStax tutorial Getting Started with Apache Cassandra and Python.
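The part of that script that actually carries the variables is the list passed to session.execute. As a self-contained sketch (the inline CSV text below is hypothetical but mirrors the column headers in the excerpt above), extracting the bind parameters looks like this:

```python
import csv
import io

# Inline CSV standing in for the file the script reads
# (same headers as the excerpt above).
CSV_TEXT = "itemid,customcolor,productid\n7,red,100\n8,blue,200\n"

def bind_params(csv_file):
    """Yield [color, itemid, productid] lists in the order the
    prepared UPDATE's ? placeholders expect them."""
    for row in csv.DictReader(csv_file, delimiter=','):
        yield [row['customcolor'], row['itemid'], row['productid']]

params = list(bind_params(io.StringIO(CSV_TEXT)))
# Each entry would be executed as session.execute(preparedUpdate, entry).
print(params)
```

Keeping the values as a separate list (rather than splicing them into the CQL string) is what lets the driver handle quoting and typing for you.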
Posted on 2016-04-21 16:13:20
Yes, you can pass a variable this way (using Spark SQL via the Spark Cassandra Connector, rather than plain CQL):
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, Row, SQLContext}
import org.apache.spark.sql.cassandra._
val myvar = 1
csc.setKeyspace("test_keyspace")
val query = """select a.col1, c.col4, b.col2
  from test_keyspace.table1 a
  inner join test_keyspace.table2 b on a.col1 = b.col2
  inner join test_keyspace.table3 c on b.col3 = c.col4
  where a.col1 = """ + myvar.toString
val results = csc.sql(query)
results.show()
https://stackoverflow.com/questions/30373893
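The Scala answer splices the variable into the SQL text before the engine ever sees it, so the same pattern works from any driver language as plain string building. A minimal Python mirror of just the query-construction step (the table and column names simply copy the Scala example; running it would still require a real SQLContext):

```python
# Hypothetical: reproduce the query-building step of the Scala answer.
# Only the string construction is shown; csc.sql(query) would run it.
myvar = 1
query = ("select a.col1, c.col4, b.col2 "
         "from test_keyspace.table1 a "
         "inner join test_keyspace.table2 b on a.col1=b.col2 "
         "inner join test_keyspace.table3 c on b.col3=c.col4 "
         "where a.col1=" + str(myvar))
print(query)
```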