Checkpoint

# Trigger a savepoint for the running job: flink savepoint <jobId> <targetDirectory>
/export/server/flink/bin/flink savepoint 702b872ef80f08854c946a544f2ee1a5 hdfs://node1:8020/flink-checkpoint

# Restart the job, manually loading the savepoint data
/export/server/flink/bin/flink run -s hdfs://node1:8020/flink-checkpoint
Execute a checkpoint every 1 s:

env.enableCheckpointing(1000);
// State backend
env.setStateBackend(new FsStateBackend("hdfs://hadoop10:8020/flink-checkpoint"));

Cancel the job, then re-enter the launch command:

[root@hadoop10 app]# flink run -c day160616.CheckPointTest -s hdfs://hadoop10:8020/flink-checkpoint
if (/* running locally */) {
    env.setStateBackend(new FsStateBackend("file:///D:/ckp"));
} else {
    env.setStateBackend(new FsStateBackend("hdfs://node1:8020/flink-checkpoint"));
}

Run cn.itcast.checkpoint.CheckpointDemo01, then:
5. Cancel the task.
6. Restart the task and specify where to recover from: cn.itcast.checkpoint.CheckpointDemo01, hdfs://node1:8020/flink-checkpoint
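The fragments above can be assembled into a minimal checkpoint-enabled job skeleton. A hedged sketch: the class name, checkpoint interval, and backend paths come from the text, while the `SystemUtils.IS_OS_WINDOWS` switch, the socket source, and the host/port are illustrative assumptions, not the original job body.

```java
import org.apache.commons.lang3.SystemUtils;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointDemo01 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 1 s (the interval argument is in milliseconds)
        env.enableCheckpointing(1000);

        // Local file backend for development, HDFS backend on the cluster
        // (the OS check is an assumption; any local/remote switch works)
        if (SystemUtils.IS_OS_WINDOWS) {
            env.setStateBackend(new FsStateBackend("file:///D:/ckp"));
        } else {
            env.setStateBackend(new FsStateBackend("hdfs://node1:8020/flink-checkpoint"));
        }

        // Illustrative pipeline; the source text does not show the job body
        env.socketTextStream("node1", 9999)
           .print();

        env.execute("CheckpointDemo01");
    }
}
```

After cancelling this job, restarting it with `flink run -s <savepoint-or-checkpoint-path>` restores the operator state from the backend configured above.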
The Kafka permissions were already obtained in section 1.1. In addition, make sure you have permission to use YARN resources, and obtain permissions on the HDFS /flink and /flink-checkpoint directories, guaranteeing read, write, and execute access.
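The HDFS permissions above can be granted with the standard HDFS shell; a sketch, assuming the directories may not yet exist and that a permissive 777 mode is acceptable (tighten the mode and owner to match your security policy):

```shell
# Create the directories if needed, then grant read/write/execute recursively
# (the 777 mode is illustrative, not a recommendation)
hdfs dfs -mkdir -p /flink /flink-checkpoint
hdfs dfs -chmod -R 777 /flink /flink-checkpoint
```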
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStateBackend(new FsStateBackend("hdfs://hadoop01:9000/flink-checkpoint"));