I'm trying to implement the exercise on sequence models and LSTM networks in PyTorch. The idea is to augment an LSTM part-of-speech tagger with character-level features, but I can't seem to get it to work. The hint says it should involve two LSTMs: one that outputs a character-level representation of each word, and another that predicts the part-of-speech tags. I just can't figure out how to loop over the words (in the sentence) and over the characters (in each word of the sentence) and implement that in the forward function. Does anyone know how to do this, or has run into something similar?
Here is my code:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import autograd

class LSTMTaggerAug(nn.Module):
    def __init__(self, embedding_dim_words, embedding_dim_chars,
                 hidden_dim_words, hidden_dim_chars,
                 vocab_size, tagset_size, charset_size):
        super(LSTMTaggerAug, self).__init__()
        self.hidden_dim_words = hidden_dim_words
        self.hidden_dim_chars = hidden_dim_chars
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim_words)
        self.char_embeddings = nn.Embedding(charset_size, embedding_dim_chars)
        # character-level LSTM: builds a representation of each word
        self.lstm_char = nn.LSTM(embedding_dim_chars, hidden_dim_chars)
        # word-level LSTM: consumes [word embedding; char representation]
        self.lstm_words = nn.LSTM(embedding_dim_words + hidden_dim_chars,
                                  hidden_dim_words)
        self.hidden2tag = nn.Linear(hidden_dim_words, tagset_size)
        self.hidden_char = self.init_hidden(c=False)
        self.hidden_words = self.init_hidden(c=True)

    def init_hidden(self, c=True):
        dim = self.hidden_dim_words if c else self.hidden_dim_chars
        return (autograd.Variable(torch.zeros(1, 1, dim)),
                autograd.Variable(torch.zeros(1, 1, dim)))

    def forward(self, sentence, words):
        for ix, word in enumerate(sentence):
            chars = words[ix]
            char_embeds = self.char_embeddings(chars)
            lstm_char_out, self.hidden_char = self.lstm_char(
                char_embeds.view(len(chars), 1, -1), self.hidden_char)
            char_rep = lstm_char_out[-1]                     # (1, hidden_dim_chars)
            embeds = self.word_embeddings(word).view(1, -1)  # (1, embedding_dim_words)
            embeds_cat = torch.cat((embeds, char_rep), dim=1)
            # the word-level LSTM expects a 3-D (seq_len, batch, features) input
            lstm_out, self.hidden_words = self.lstm_words(
                embeds_cat.view(1, 1, -1), self.hidden_words)
            tag_space = self.hidden2tag(lstm_out.view(1, -1))
            tag_score = F.log_softmax(tag_space, dim=1)
            if ix == 0:
                tag_scores = tag_score
            else:
                tag_scores = torch.cat((tag_scores, tag_score), 0)
        return tag_scores

Posted 2018-02-19 21:24:39
Given your description, the most naive approach is to take a sentence s, strip the punctuation, and split it into words:

words = s.split()

Then take your first, character-level LSTM, LSTMc, and apply it to each word separately to encode the word (use the last output state of the LSTM as the word's encoding):
encoded_words = []
for word in words:
state = state_0
for char in word:
h, state = LSTMc(one_hot_encoding(char), state)
    encoded_words.append(h)

After the words are encoded, run the part-of-speech tagger LSTM, LSTMw, over the encoded words:
state = statew_0
parts_of_speech = []
for enc_word in encoded_words:
pos, state = LSTMw(enc_word, state)
    parts_of_speech.append(pos)

https://stackoverflow.com/questions/48705162
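The pseudocode above can be turned into a small runnable PyTorch sketch. The dimensions, vocabulary sizes, and the `tag_sentence` helper below are made up for illustration, and characters are looked up in an `nn.Embedding` rather than one-hot encoded; it is a sketch of the two-LSTM idea, not the exact solution to the exercise:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical sizes for illustration
CHAR_EMB, CHAR_HID, WORD_EMB, WORD_HID = 8, 16, 10, 20
charset_size, vocab_size, tagset_size = 30, 50, 5

char_emb = nn.Embedding(charset_size, CHAR_EMB)
word_emb = nn.Embedding(vocab_size, WORD_EMB)
lstm_c = nn.LSTM(CHAR_EMB, CHAR_HID)             # LSTMc: encodes characters
lstm_w = nn.LSTM(WORD_EMB + CHAR_HID, WORD_HID)  # LSTMw: runs over words
hidden2tag = nn.Linear(WORD_HID, tagset_size)

def tag_sentence(word_ixs, char_ixs_per_word):
    # 1) encode each word from its characters: keep the last output state
    encoded = []
    for chars in char_ixs_per_word:
        out, _ = lstm_c(char_emb(chars).view(len(chars), 1, -1))
        encoded.append(out[-1])                  # (1, CHAR_HID)
    char_reps = torch.cat(encoded, dim=0)        # (num_words, CHAR_HID)
    # 2) concatenate word embedding + char representation, run the word LSTM
    feats = torch.cat([word_emb(word_ixs), char_reps], dim=1)
    out, _ = lstm_w(feats.view(len(word_ixs), 1, -1))
    return F.log_softmax(hidden2tag(out.view(len(word_ixs), -1)), dim=1)

# toy input: 3 words, with variable-length character index tensors
words = torch.tensor([4, 7, 2])
chars = [torch.tensor([1, 5]), torch.tensor([3]), torch.tensor([2, 6, 8])]
scores = tag_sentence(words, chars)
print(scores.shape)  # torch.Size([3, 5]) — one log-probability row per word
```

Note that the character LSTM handles variable-length words naturally because each word is fed through it as its own sequence; only the last output state is kept as that word's encoding.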