I'm using the nltk module in Python 2.7. Here is my code:
from nltk.corpus import wordnet as wn

listsyn1 = []
listsyn2 = []
for synset in wn.synsets('dog', pos=wn.NOUN):
    print synset.name()
    for lemma in synset.lemmas():
        listsyn1.append(lemma.name())
for synset in wn.synsets('paw', pos=wn.NOUN):
    print synset.name()
    for lemma in synset.lemmas():
        listsyn2.append(lemma.name())

countsyn1 = len(listsyn1)
countsyn2 = len(listsyn2)
sumofsimilarity = 0
for firstgroup in listsyn1:
    for secondgroup in listsyn2:
        print(firstgroup.wup_similarity(secondgroup))
        sumofsimilarity = sumofsimilarity + firstgroup.wup_similarity(secondgroup)
averageofsimilarity = sumofsimilarity/(countsyn1*countsyn2)

When I try to run this code, I get the error "AttributeError: 'unicode' object has no attribute 'wup_similarity'". Thanks for your help.
Posted on 2018-01-19 07:24:27
The similarity measures are only accessible through Synset objects, not through Lemma objects or lemma_names (which are of type str).
dog = wn.synsets('dog', 'n')[0]
paw = wn.synsets('paw', 'n')[0]
print(type(dog), type(paw), dog.wup_similarity(paw))

Output:

<class 'nltk.corpus.reader.wordnet.Synset'> <class 'nltk.corpus.reader.wordnet.Synset'> 0.21052631578947367

When you take .lemmas() from a Synset object and access a lemma's .name() attribute, you get a str:
dog = wn.synsets('dog', 'n')[0]
print(type(dog), dog)
print(type(dog.lemmas()[0]), dog.lemmas()[0])
print(type(dog.lemmas()[0].name()), dog.lemmas()[0].name())

Output:

<class 'nltk.corpus.reader.wordnet.Synset'> Synset('dog.n.01')
<class 'nltk.corpus.reader.wordnet.Lemma'> Lemma('dog.n.01.dog')
<class 'str'> dog

You can use the hasattr function to check which objects/types have access to a given function or attribute:
dog = wn.synsets('dog', 'n')[0]
print(hasattr(dog, 'wup_similarity'))
print(hasattr(dog.lemmas()[0], 'wup_similarity'))
print(hasattr(dog.lemmas()[0].name(), 'wup_similarity'))

Output:
True
False
False

Most likely what you want is a function like https://github.com/alvations/pywsd/blob/master/pywsd/similarity.py#L76, which maximizes the wup_similarity between two sets of synsets; but note that there are several necessary caveats, such as pre-lemmatization.
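As a rough illustration of that idea, here is a minimal, generic sketch (not pywsd's actual implementation) that picks the best-scoring pair from two lists. The similarity function is passed in as a callable so the helper itself needs nothing from NLTK:

```python
from itertools import product

def max_similarity(synsets1, synsets2, sim):
    """Return the highest-scoring (a, b) pair and its score.

    `sim` is any pairwise similarity callable; pairs for which it
    returns None (as wup_similarity can) are skipped.
    """
    best_pair, best_score = None, float('-inf')
    for a, b in product(synsets1, synsets2):
        score = sim(a, b)
        if score is not None and score > best_score:
            best_pair, best_score = (a, b), score
    return best_pair, best_score
```

With NLTK you would call it as, e.g., `max_similarity(wn.synsets('dog', 'n'), wn.synsets('paw', 'n'), lambda a, b: a.wup_similarity(b))`.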
So I think that is what you were trying to get around by using .lemma_names(). Perhaps you could do something like this:
from itertools import chain, product
from nltk.corpus import wordnet as wn

def ss_lnames(word):
    return set(chain(*[ss.lemma_names() for ss in wn.synsets(word, 'n')]))

dog_lnames = ss_lnames('dog')
paw_lnames = ss_lnames('paw')

for dog_name, paw_name in product(dog_lnames, paw_lnames):
    for dog_ss, paw_ss in product(wn.synsets(dog_name, 'n'), wn.synsets(paw_name, 'n')):
        print(dog_ss, paw_ss, dog_ss.wup_similarity(paw_ss))

But most likely the results will be uninterpretable and unreliable, since no word sense disambiguation takes place before the synset lookups in the outer and inner loops.
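If you still want the average-over-all-pairs number from the original question, here is a hypothetical helper (not from the original post) that works over Synset objects directly and skips None scores, since wup_similarity can return None when two synsets share no common hypernym path:

```python
from itertools import product

def average_similarity(synsets1, synsets2, sim):
    """Average pairwise similarity over all (a, b) pairs, ignoring None scores."""
    scores = [sim(a, b) for a, b in product(synsets1, synsets2)]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else 0.0
```

With NLTK this would be called as `average_similarity(wn.synsets('dog', 'n'), wn.synsets('paw', 'n'), lambda a, b: a.wup_similarity(b))`.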
https://stackoverflow.com/questions/48323393