I am basically a Java developer, and now I have a chance to work on Spark. I have learned the basics of Spark, such as SparkConf, SparkContext, RDD, SQLContext, DataFrame, and Dataset, and I am able to perform some simple transformations using RDDs and SQL. However, when I try to develop a sample GraphFrames application in Java, I cannot get it to work. I have gone through many YouTube tutorials, forums, and Stack Overflow threads, but nowhere did I find any direct suggestion or solution. The problem occurs when I try to create an object of the GraphFrame class; I have downloaded the required GraphFrames jar as well, but I still face the same issue. I am posting my analysis up to the point I have reached, because I am very new to Spark and cannot get any further, so any help would be genuinely appreciated. Thanks in advance. The exception I am facing is: the constructor GraphFrame(DataFrame, DataFrame) is undefined.
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.storage.StorageLevel;
import org.graphframes.GraphFrame;
import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.databind.JsonMappingException;
public class SparkJavaGraphFrameOne {
    public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException {
        SparkConf conf = new SparkConf().setAppName("test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
        JavaRDD<Row> verRow = sc.parallelize(Arrays.asList(RowFactory.create(1, "A"), RowFactory.create(2, "B")));
        JavaRDD<Row> edgRow = sc.parallelize(Arrays.asList(RowFactory.create(1, 2, "Edge")));
        List<StructField> verFields = new ArrayList<StructField>();
        verFields.add(DataTypes.createStructField("id", DataTypes.IntegerType, true));
        verFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
        List<StructField> edgFields = new ArrayList<StructField>();
        edgFields.add(DataTypes.createStructField("fromId", DataTypes.IntegerType, true));
        edgFields.add(DataTypes.createStructField("toId", DataTypes.IntegerType, true));
        edgFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
        StructType verSchema = DataTypes.createStructType(verFields);
        StructType edgSchema = DataTypes.createStructType(edgFields);
        DataFrame verDF = sqlContext.createDataFrame(verRow, verSchema);
        DataFrame edgDF = sqlContext.createDataFrame(edgRow, edgSchema);
        GraphFrame g = new GraphFrame(verDF, edgDF);
        g.vertices().show();
        g.edges().show();
        g.persist(StorageLevel.MEMORY_AND_DISK());
    }
}

Posted on 2016-08-26 10:06:48
I have written a sample program in Java using Spark 2.0.0 and GraphFrames 0.2.0. The program is based on the example provided at http://graphframes.github.io/quick-start.html#start-using-graphframes. I hope this helps.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.abaghel.examples.spark</groupId>
    <artifactId>spark-graphframe</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-graphx_2.11</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.0.0</version>
        </dependency>
        <dependency>
            <groupId>graphframes</groupId>
            <artifactId>graphframes</artifactId>
            <version>0.2.0-spark2.0-s_2.11</version>
        </dependency>
    </dependencies>
    <repositories>
        <!-- list of other repositories -->
        <repository>
            <id>SparkPackagesRepo</id>
            <url>http://dl.bintray.com/spark-packages/maven</url>
        </repository>
    </repositories>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

SparkGraphFrameSample.java
package com.abaghel.examples.spark.graphframe;
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.graphframes.GraphFrame;
import org.graphframes.lib.PageRank;
/**
* Sample application shows how to create a GraphFrame, query it, and run the PageRank algorithm.
*
* @author abaghel
*
*/
public class SparkGraphFrameSample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("SparkGraphFrameSample")
                .config("spark.sql.warehouse.dir", "/file:C:/temp")
                .master("local[2]")
                .getOrCreate();
        // Create a vertex DataFrame with unique ID column "id"
        List<User> uList = new ArrayList<User>() {
            {
                add(new User("a", "Alice", 34));
                add(new User("b", "Bob", 36));
                add(new User("c", "Charlie", 30));
            }
        };
        Dataset<Row> verDF = spark.createDataFrame(uList, User.class);
        // Create an edge DataFrame with "src" and "dst" columns
        List<Relation> rList = new ArrayList<Relation>() {
            {
                add(new Relation("a", "b", "friend"));
                add(new Relation("b", "c", "follow"));
                add(new Relation("c", "b", "follow"));
            }
        };
        Dataset<Row> edgDF = spark.createDataFrame(rList, Relation.class);
        // Create a GraphFrame
        GraphFrame gFrame = new GraphFrame(verDF, edgDF);
        // Get the in-degree of each vertex
        gFrame.inDegrees().show();
        // Count the number of "follow" connections in the graph
        long count = gFrame.edges().filter("relationship = 'follow'").count();
        // Run the PageRank algorithm and show the results
        PageRank pRank = gFrame.pageRank().resetProbability(0.01).maxIter(5);
        pRank.run().vertices().select("id", "pagerank").show();
        // Stop the session
        spark.stop();
    }
}

User.java
package com.abaghel.examples.spark.graphframe;
/**
* User class
*
* @author abaghel
*
*/
public class User {
    private String id;
    private String name;
    private int age;

    public User() {
    }

    public User(String id, String name, int age) {
        super();
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}

Relation.java
package com.abaghel.examples.spark.graphframe;
/**
* Relation class
*
* @author abaghel
*
*/
public class Relation {
    private String src;
    private String dst;
    private String relationship;

    public Relation() {
    }

    public Relation(String src, String dst, String relationship) {
        super();
        this.src = src;
        this.dst = dst;
        this.relationship = relationship;
    }

    public String getSrc() {
        return src;
    }

    public void setSrc(String src) {
        this.src = src;
    }

    public String getDst() {
        return dst;
    }

    public void setDst(String dst) {
        this.dst = dst;
    }

    public String getRelationship() {
        return relationship;
    }

    public void setRelationship(String relationship) {
        this.relationship = relationship;
    }
}

Console output
16/08/27 22:34:45 INFO DAGScheduler: Job 10 finished: show at SparkGraphFrameSample.java:56, took 0.938910 s
16/08/27 22:34:45 INFO CodeGenerator: Code generated in 6.599005 ms
+---+-------------------+
| id| pagerank|
+---+-------------------+
| a| 0.01|
| b|0.08763274109799998|
| c| 0.077926810699|
+---+-------------------+

Posted on 2016-09-15 19:52:46
I don't know whether you have already solved your problem; I just saw your question. I think that to fix the `Exception in thread "main" java.lang.NoClassDefFoundError: com/typesafe/scalalogging/slf4j/LazyLogging`, you need to add the following dependency to your pom.xml:
<dependency>
    <groupId>com.typesafe.scala-logging</groupId>
    <artifactId>scala-logging-slf4j_2.10</artifactId>
    <version>2.1.2</version>
</dependency>

I ran into the same issue, and adding this dependency resolved it for me.
Posted on 2017-09-18 23:34:07
I was able to reproduce this issue (on consecutive runs) with 0.5.0-spark2.1-s_2.11; it works fine with 0.4.0-spark2.1-s_2.11.
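If you see the same behavior, one workaround is to pin GraphFrames back to the 0.4.0 build mentioned above. The fragment below is a sketch of that pin; the exact version string is the one quoted in this answer, and it assumes the same Spark Packages repository declared in the accepted answer's pom.xml:

```xml
<!-- Pin GraphFrames to the 0.4.0 build for Spark 2.1 / Scala 2.11 -->
<dependency>
    <groupId>graphframes</groupId>
    <artifactId>graphframes</artifactId>
    <version>0.4.0-spark2.1-s_2.11</version>
</dependency>
```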
https://stackoverflow.com/questions/39158954