After installation, running scala -version shows the installed Scala version; I am on 2.11, installed under /usr/share/scala-2.11. Next, download Spark. Copy spark-env.sh.template to spark-env.sh, then edit the file with vi spark-env.sh and append the following at the end: export SCALA_HOME=/usr/share/scala
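Putting the pieces together, a minimal spark-env.sh after this edit might look like the sketch below. Only SCALA_HOME comes from the text above; the JAVA_HOME path, master host, and worker memory are illustrative assumptions, not values from this document:

```shell
# Sketch of a minimal spark-env.sh (only SCALA_HOME is from the text above;
# the remaining paths and values are illustrative assumptions)
export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/lib/jvm/java          # assumed JDK install location
export SPARK_MASTER_HOST=$(hostname)        # assumed: master runs on this host
export SPARK_WORKER_MEMORY=1g               # assumed per-worker memory
```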
Submit the Consumer application with the Kafka streaming package: --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.0.0 --master spark://$(hostname):7077 --class ConsumerApp target/scala-2.11/kafka-sample-app_2.11-1.0.jar. Then run the Producer with the Kafka libraries on the classpath: target/scala-2.11/kafka-sample-app_2.11-1.0.jar:$KAFKA_HOME/libs/* ProducerApp # or # $KAFKA_HOME/bin target/scala-2.11/kafka-sample-app_2.11-1.0.jar:$KAFKA_HOME/libs/* ProducerApp. Then check whether the Consumer application received the messages.
Run the Server application: $SPARK_HOME/bin/spark-submit --master spark://$(hostname):7077 --class ServerApp target/scala-2.11/akka-sample-app_2.11-1.0.jar. Run the Client with the Akka and Scala libraries on the classpath: target/scala-2.11/akka-sample-app_2.11-1.0.jar:$AKKA_HOME/lib/akka/*:$SCALA_HOME/lib/* ClientApp # or # $SPARK_HOME/bin/spark-submit --master spark://$(hostname):7077 --class ClientApp target/scala-2.11
target/scala-2.11/simple-application-project_2.11-1.0.jar ... [info] Done packaging. Run the project locally with 4 threads: $SPARK_HOME/bin/spark-submit --master local[4] --class SimpleApp target/scala-2.11/simple-application-project_2.11-1.0.jar, or submit it to the cluster: $SPARK_HOME/bin/spark-submit --master spark://$(hostname):7077 --class SimpleApp target/scala-2.11/simple-application-project_2.11-1.0.jar
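For context, the SimpleApp submitted above is usually the Spark quickstart program that counts lines containing the letters "a" and "b". The core counting logic can be sketched in plain Scala without a SparkContext; the object name, method, and sample input below are illustrative assumptions, not the project's actual code:

```scala
object SimpleAppSketch {
  // Count lines containing a given substring; mirrors the quickstart's
  // logData.filter(line => line.contains("a")).count() pattern (assumed).
  def countLinesWith(lines: Seq[String], s: String): Long =
    lines.count(_.contains(s)).toLong

  def main(args: Array[String]): Unit = {
    // Illustrative stand-in for the lines of a real input file
    val lines = Seq("apache spark", "big data", "scala build tool")
    println(s"Lines with a: ${countLinesWith(lines, "a")}, " +
            s"Lines with b: ${countLinesWith(lines, "b")}")
  }
}
```

In the real application the Seq would be an RDD or Dataset loaded via the SparkContext, and spark-submit (as shown above) would provide that runtime.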
change-scala-version.sh 2.11 mvn -Pyarn -Phadoop-2.7.1 -Dscala-2.11 -DskipTests clean package. After the build completes, copy assembly/target/scala
Create the following directory under the Spark2 HOME directory: [root@cdh02 ~]# mkdir -p /opt/cloudera/parcels/SPARK2/lib/spark2/launcher/target/scala-2.11 Note: when deploying the spark-sql client, the $SPARK_HOME/launcher/target/scala-2.11 directory must be created first; otherwise startup fails with "java.lang.IllegalStateException
target/scala-2.11/simple-project_2.11-1.0.jar # Use spark-submit to run your application. Submit the job jar via spark-submit: $ YOUR_SPARK_HOME/bin/spark-submit \ --class "SimpleApp" \ --master local[4] \ target/scala-2.11/simple-project_2.11-1.0.jar
apache/hudi: git clone https://github.com/apache/hudi.git && cd hudi, then mvn clean package -DskipTests. Note: by default the build uses Scala
val jarPaths="target/scala-2.11/spark-hello_2.11-1.0.jar" /** Mapping Spark SQL results onto entity classes **/ def mapSQL2()
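Since mapSQL2() is only named here and its body is not shown, the plain-Scala sketch below illustrates the general idea of mapping one row of SQL results onto an entity case class. The Person class, the column names, and the Map-based stand-in for a Row are all assumptions for illustration; in real Spark SQL this would typically be a Dataset obtained via as[Person] with an implicit Encoder:

```scala
// Hypothetical entity case class; in Spark SQL it would back
// spark.sql(...).as[Person] via an Encoder (sketch, not the author's code).
case class Person(name: String, age: Int)

object MapSQLSketch {
  // Illustrative stand-in for a Spark Row: column name -> value.
  def toPerson(row: Map[String, Any]): Person =
    Person(row("name").toString, row("age").toString.toInt)

  def main(args: Array[String]): Unit = {
    val p = toPerson(Map("name" -> "Ann", "age" -> 30))
    println(p) // prints Person(Ann,30)
  }
}
```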
>2.3</version> </dependency> </dependencies> </profile> <profile> <id>scala
The version I installed is spark-1.6.1-bin-hadoop2.6.tgz; this version requires JDK 1.7 or above. Installing Spark requires Scala 2.11 for support; the version I installed is scala
Under the flink-1.12.4/flink-dist/target/flink-1.12.4-bin/ directory: tar -zcf flink-1.12.4-bin-scala_2.11.tgz flink-1.12.4/. At this point, the Scala-dependent
/lib:${JRE_HOME}/lib export PATH=$PATH:${JAVA_HOME}/bin - - - Scala needs no installation: just extract it and configure the environment variables, but the JDK must be installed first. Taking scala-2.11 as an example, modify the environment variables by adding the following lines at the end (after configuring, remember to run source to make the variables take effect): export SCALA_HOME=/usr/lib/scala/scala-2.11 (scala
--driver-memory 1g \ --executor-memory 1g \ --executor-cores 1 \ --queue thequeue \ examples/target/scala
compile [info] Compiling 1 Scala source to /Users/tiger-macpro/Scala/IntelliJ/learn-macro/macros/target/scala 7.876 s [info] Compiling 1 Scala source to /Users/tiger-macpro/Scala/IntelliJ/learn-macro/demos/target/scala
The main features of Flink on Zeppelin are as follows: Feature | Description. Multi-version Flink support | Supports all 6 major versions from Flink 1.10 through 1.15 at the same time, and also supports Scala
1 [StreamPark] build info: package mode @ mixed, scala-2.11, now build starting...