I am using an RDF crawler, and I have a class that contains the following imports:

import edu.unika.aifb.rdf.crawler.*;
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.util.FileManager;

These imports are flagged as errors. I tried attaching the Jena package, but that makes no difference.
Update:
Contents of the SampleCrawl.java class:
import java.util.*;
import edu.unika.aifb.rdf.crawler.*;

/**
 * Call this class with 3 arguments - URL to crawl to,
 * depth and time in seconds
 */
public class SampleCrawl {

    /**
     * @param uRI
     * @param depth
     * @param time
     */
    @SuppressWarnings("rawtypes")
    public SampleCrawl(Vector uRI, Vector hf, int depth, int time) {
        // Initialize crawling parameters
        CrawlConsole c = new CrawlConsole(uRI, hf, depth, time);
        // Get an ontology file from its local location (OPTIONAL)
        c.setLocalNamespace("http://www.daml.org/2000/10/daml-ont",
                "c:\\temp\\rdf\\schemas\\daml-ont.rdf");
        // Set all the paths for the result files
        c.setLogPath("c:\\temp\\crawllog.xml");
        c.setCachePath("c:\\temp\\crawlcache.txt");
        c.setModelPath("c:\\temp\\crawlmodel.rdf");
        try {
            // Crawl and get the RDF model
            c.start();
            // This writes all three result files out
            c.writeResults();
        } catch (Exception e) {
            // Don't swallow failures silently
            e.printStackTrace();
        }
    }

    /**
     * @param args
     * @throws Exception
     */
    @SuppressWarnings({ "rawtypes", "unchecked" })
    public static void main(String[] args) throws Exception {
        if (args.length != 3) {
            System.err.println("Usage: java -cp [JARs] SampleCrawl [URL] [depth:int] [time:int]");
            System.exit(0);
        }
        Vector uris = new Vector();
        uris.add(args[0]);
        // No host filtering - crawl to all hosts
        Vector hostfilter = null;
        /* To enable host filtering instead:
         * Vector hostfilter = new Vector();
         * hostfilter.add("http://www.w3.org");
         */
        int depth = 2;
        int time = 60;
        try {
            depth = Integer.parseInt(args[1]);
            time = Integer.parseInt(args[2]);
        } catch (Exception e) {
            System.err.println("Illegal argument types:");
            System.err.println("Argument list: URI:String depth:int time(s):int");
            System.exit(0);
        }
        new SampleCrawl(uris, hostfilter, depth, time);
    }
}

Question: How do I resolve the import edu.unika.aifb.rdf.crawler.*; error?
Posted on 2011-05-08 12:35:33
I googled the package you are trying to import, and it looks like you are using KAON. If so, you have made a mistake in the import declaration. You have:

import edu.unika.aifb.rdf.crawler.*;

whereas the download on SourceForge requires:

import edu.unika.aifb.rdf.rdfcrawler.*;

As an aside, it would be helpful if you could include information such as "I am trying to use KAON's rdfcrawler ..." in your question. Otherwise, we have to try to guess the important details of your setup.
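When an import is flagged as an error, it helps to distinguish "the jar is not on the classpath" from "the package path is misspelled". A generic way to check is to probe for a fully qualified class name at runtime; the sketch below assumes nothing about KAON itself, and the `CrawlConsole` class name in the second probe is only an example of the package path in question:

```java
public class ClasspathProbe {

    // Returns true if the fully qualified class name can be
    // loaded from the current classpath, false otherwise.
    static boolean onClasspath(String fqcn) {
        try {
            Class.forName(fqcn);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.util.Vector ships with the JDK, so this prints true
        System.out.println(onClasspath("java.util.Vector"));
        // A misspelled package (or a missing jar) prints false
        System.out.println(onClasspath("edu.unika.aifb.rdf.crawler.CrawlConsole"));
    }
}
```

Run it with the crawler jar on the classpath (`java -cp .;rdfcrawler.jar ClasspathProbe`, jar name here hypothetical): if the correct package spelling still prints false, the jar is not being picked up; if it prints true only for `rdfcrawler`, the import statement is the problem.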
https://stackoverflow.com/questions/5902374