I got garbled text from Debezium:
"doulist_name": "2013 豆瓣电影ã€�å�£ç¢‘榜】" mysql数据库中有中文单词,我用debezium把数据发送给kafka。我发现在消费消息时,中文字变成了乱码,我该如何解决这个问题呢?有什么我可以使用的配置吗?
When I produce Chinese words with Flume and a plain Kafka producer, everything works fine.
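For reference, the sample above looks like the classic mojibake where UTF-8 bytes are decoded with a single-byte charset such as Windows-1252. A small round-trip check illustrates the pattern (a diagnostic sketch with a made-up string, not part of the original setup):

import java.nio.charset.StandardCharsets

// Hypothetical diagnostic: encode a Chinese string as UTF-8, misread the bytes
// as Windows-1252 (producing mojibake like the sample above), then undo the
// mistake. Characters already rendered as the replacement char � cannot be
// recovered this way, because their original bytes were discarded.
val original = "中文"
val garbled  = new String(original.getBytes(StandardCharsets.UTF_8), "windows-1252")
val repaired = new String(garbled.getBytes("windows-1252"), StandardCharsets.UTF_8)
println(garbled)              // prints something like "ä¸­æ–‡"
assert(repaired == original)  // the bytes themselves were never wrong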
Part of my configuration:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
connector.class=io.debezium.connector.mysql.MySqlConnector
database.server.id=18405
database.server.name=mysqlfullfillment
database.whitelist=test
database.history.kafka.bootstrap.servers=192.168.0.100:9092
database.history.kafka.topic=dbhistory.fullfillment-local
include.schema.changes=true
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.UnwrapFromEnvelope

MySQL character set: utf8 (per the MySQL config screenshot)
Versions: Debezium v0.7.5, Kafka v1.1.1
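(One way to verify what the database side actually uses, beyond the connector config above, is to query MySQL's charset variables over JDBC. A sketch with placeholder credentials; only the host comes from the config above:)

import java.sql.DriverManager

// Hypothetical check: list MySQL's charset-related variables. Note that MySQL's
// "utf8" is the 3-byte subset, and the client/connection variables matter as
// much as the server one. User and password here are placeholders.
val conn = DriverManager.getConnection(
  "jdbc:mysql://192.168.0.100:3306/test", "user", "password")
val rs = conn.createStatement().executeQuery("SHOW VARIABLES LIKE 'character_set%'")
while (rs.next()) println(rs.getString(1) + " = " + rs.getString(2))
conn.close()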
Update:
When I test with the console consumer, ./kafka-console-consumer.sh --zookeeper 192.168.0.100:2181 --topic mysqlfullfillment.test.doulist, I get the same garbled text:
"doulist_name": "2013 豆瓣电影ã€�å�£ç¢‘榜】"在我的spark代码中,我得到了同样混乱的代码:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferBrokers

object KafkaWordCount {

  def main(args: Array[String]) {
    val spark = SparkSession
      .builder()
      .master("local")
      .appName("KafkaWordCount")
      .config("spark.streaming.stopGracefullyOnShutdown", "true")
      .getOrCreate()
    simpleTestCode(spark)
  }

  def simpleTestCode(spark: SparkSession): Unit = {
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "KafkaWordCountgroup",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean)
    )
    val topics = Array("mysqlfullfillment.test.doulist")

    val ssc = new StreamingContext(spark.sparkContext, Seconds(2))
    ssc.checkpoint("/home/feng/software/code/bigdata/spark-warehouse")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferBrokers,
      Subscribe[String, String](topics, kafkaParams)
    )

    // Print every (key, value) pair; the values arrive garbled.
    stream.map(record => (record.key, record.value)).foreachRDD(
      r => r.collect().foreach(t => print("message:" + t)))

    ssc.start()
    ssc.awaitTermination()
  }
}
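(One way to narrow this down, not shown above: consume the value as raw bytes and decode it by hand, which separates a wire-format problem from a terminal/display problem. This sketch reuses ssc and topics from the code above and swaps in ByteArrayDeserializer:)

import java.nio.charset.StandardCharsets
import org.apache.kafka.common.serialization.ByteArrayDeserializer

// Hypothetical variant of the consumer above: take the value as raw bytes and
// decode it as UTF-8 explicitly. If this still prints mojibake, the bytes on
// the wire are not plain UTF-8 text and the problem is upstream of the console.
val byteParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[ByteArrayDeserializer],
  "value.deserializer" -> classOf[ByteArrayDeserializer],
  "group.id" -> "KafkaWordCountgroup-bytes",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (true: java.lang.Boolean)
)
val byteStream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte]](
  ssc, PreferBrokers, Subscribe[Array[Byte], Array[Byte]](topics, byteParams))
byteStream.foreachRDD(_.collect().foreach { rec =>
  println("message:" + new String(rec.value, StandardCharsets.UTF_8))
})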
Posted on 2018-10-27 19:55:32

I solved this problem.
When I use the JsonConverter in Debezium:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

it serializes the data with a JsonSerializer, so I have to use a JsonDeserializer in my Kafka consumer:
// JsonDeserializer from Kafka's connect-json artifact
import org.apache.kafka.connect.json.JsonDeserializer

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> CommonUtil.getKafkaServers,
  "key.deserializer" -> classOf[JsonDeserializer],
  "value.deserializer" -> classOf[JsonDeserializer],
  "group.id" -> groupId,
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
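With Connect's JsonDeserializer, the record value arrives as a Jackson JsonNode rather than a String, so the field has to be pulled out of the JSON tree. A minimal sketch of reading it (reusing ssc, topics, and kafkaParams from above; the "payload" wrapper assumes value.converter.schemas.enable=true, and the field name is taken from the sample at the top):

import com.fasterxml.jackson.databind.JsonNode

// Sketch, not the exact production code: with schemas.enable=true the message
// is {"schema": ..., "payload": ...}, so the column value sits under "payload".
val jsonStream = KafkaUtils.createDirectStream[JsonNode, JsonNode](
  ssc, PreferBrokers, Subscribe[JsonNode, JsonNode](topics, kafkaParams))
jsonStream.foreachRDD(_.collect().foreach { rec =>
  val name = rec.value.get("payload").get("doulist_name").asText()
  println("doulist_name: " + name)  // the Chinese characters now decode correctly
})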
Source: https://stackoverflow.com/questions/52837776