
Resolving coreference with Stanford CoreNLP - unable to load parser model

Stack Overflow user
Asked on 2012-05-21 16:10:59
1 answer · 7.7K views · 0 followers · 9 votes

I want to do a very simple job: given a string containing pronouns, I want to resolve them.

For example, I want to turn the sentence "Mary has a little lamb. She is very cute." into "Mary has a little lamb. Mary is very cute."

I have tried to use Stanford CoreNLP. However, I can't seem to get the parser to start. I have imported all the jars included in the release into my Eclipse project, and I have allocated 3GB to the JVM (-Xmx3g).

The error is very awkward:

Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;

I don't understand where that L comes from; I think it is the root of my problem. It's quite weird. I tried looking into the source files, but there is no wrong reference there.

Code:
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefGraphAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.IntTuple;
import edu.stanford.nlp.util.Pair;
import edu.stanford.nlp.util.Timing;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import java.util.Properties;

public class Coref {

/**
 * @param args the command line arguments
 */
public static void main(String[] args) throws IOException, ClassNotFoundException {
    // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution 
    Properties props = new Properties();
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // read some text in the text variable
    String text = "Mary has a little lamb. She is very cute."; // Add your text here!

    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);

    // run all Annotators on this text
    pipeline.annotate(document);

    // these are all the sentences in this document
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);

    for(CoreMap sentence: sentences) {
      // traversing the words in the current sentence
      // a CoreLabel is a CoreMap with additional token-specific methods
      for (CoreLabel token: sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);       
      }

      // this is the parse tree of the current sentence
      Tree tree = sentence.get(TreeAnnotation.class);
      System.out.println(tree);

      // this is the Stanford dependency graph of the current sentence
      SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    }

    // This is the coreference link graph
    // Each chain stores a set of mentions that link to each other,
    // along with a method for getting the most representative mention
    // Both sentence and token offsets start at 1!
    Map<Integer, CorefChain> graph = 
      document.get(CorefChainAnnotation.class);
    System.out.println(graph);
  }
}

Full stack trace:

Loading default properties from trained tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.1 sec].
done [2.2 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [4.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.distsim.crf.ser.gz ... done [3.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.distsim.crf.ser.gz ... done [3.3 sec].
Adding annotator parse
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
	at edu.stanford.nlp.pipeline.ParserAnnotator.loadModel(ParserAnnotator.java:115)
	at edu.stanford.nlp.pipeline.ParserAnnotator.<init>(ParserAnnotator.java:64)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:603)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:585)
	at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:62)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:329)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:196)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:186)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:178)
	at Coref.main(Coref.java:41)


1 Answer

Stack Overflow user

Accepted answer

Answered on 2012-05-23 18:56:07

Yes, the L is just a weird Sun thing going back to Java 1.0.
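(Concretely, it is the JVM's internal type-descriptor notation: `Lsome/package/Name;` denotes an object reference of that type, so the message is just the signature of `loadModel(String, String[])` returning a `LexicalizedParser`. As a small illustration, using nothing but the JDK, array class names expose the very same notation:)

```java
public class DescriptorDemo {
    public static void main(String[] args) {
        // The JVM encodes object types in descriptors as "L<internal name>;".
        // Array classes report their name in exactly this notation:
        System.out.println(String[].class.getName()); // prints "[Ljava.lang.String;"
        // Primitive element types use single-letter codes, e.g. "I" for int:
        System.out.println(int[].class.getName());    // prints "[I"
    }
}
```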

LexicalizedParser.loadModel(String, String ...) is a new method added to the parser, and it is not being found. I suspect this means you have another, older version of the parser on your classpath, and that copy is being used instead.
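(One way to confirm such a classpath conflict is to ask the classloader where it actually finds the class file; whichever jar the printed URL points at is the copy that wins. The class name `WhichJar` and its `locate` helper below are just an illustration, not part of CoreNLP:)

```java
public class WhichJar {
    // Returns the URL the classloader would load a resource from, or null if absent.
    static java.net.URL locate(String resource) {
        return WhichJar.class.getClassLoader().getResource(resource);
    }

    public static void main(String[] args) {
        // With two parser jars on the classpath, the URL reveals which copy is used:
        String r = "edu/stanford/nlp/parser/lexparser/LexicalizedParser.class";
        System.out.println(r + " -> " + locate(r));
    }
}
```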

Try the following: in a shell, outside of any IDE, run these commands (giving the path to the Stanford release as appropriate, and on Windows changing the : to ;):
javac -cp ".:stanford-corenlp-2012-04-09/*" Coref.java
java -mx3g -cp ".:stanford-corenlp-2012-04-09/*" Coref

The parser loads, and your code then runs correctly for me; I just added a few print statements so you can see what it is doing :-).
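(For completeness, the substitution step the question is ultimately after can be sketched independently of CoreNLP. Here the coreference output is stood in for by a toy mention-to-representative map; in real use you would build that map from the CorefChain objects the dcoref annotator produces:)

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PronounSubstitution {
    // Replace each resolved mention with its chain's representative mention.
    // The map here is a toy stand-in for coreference output, not the dcoref API.
    static String substitute(String text, Map<String, String> mentionToRepresentative) {
        String result = text;
        for (Map.Entry<String, String> e : mentionToRepresentative.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> chain = new LinkedHashMap<>();
        chain.put("She", "Mary"); // pretend dcoref linked "She" to "Mary"
        System.out.println(substitute("Mary has a little lamb. She is very cute.", chain));
        // prints "Mary has a little lamb. Mary is very cute."
    }
}
```

Note that a real implementation would replace mentions by token offsets from the CorefChain rather than by string matching, which would wrongly touch other occurrences of the same word.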

9 votes
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/10688739