
How to extract the activations of a dense layer

Stack Overflow user
Asked on 2020-05-08 21:07:36
1 answer · 280 views · 0 followers · 2 votes

I am trying to implement the preprocessing code for this paper (the code is in its repository). The paper describes the preprocessing as follows:

"Text features are extracted from the transcripts of the utterances using a convolutional neural network (Kim, 2014). We use a single convolutional layer followed by max-pooling and a fully connected layer to obtain the feature representations of the utterances. The input to this network is the 300-dimensional pretrained 840B GloVe vectors (Pennington et al., 2014). We use filters of size 3, 4 and 5 with 50 feature maps each. The convolved features are then max-pooled with a window size of 2, followed by a ReLU activation (Nair and Hinton, 2010). **These are then concatenated and fed to a 100-dimensional fully connected layer, whose activations form the representation of the utterance.** This network is trained at utterance level with the emotion labels."

The paper's authors state that the CNN feature-extraction code can be found in this repository. However, that code is the full model for performing sequence classification: it does everything in the quote above except the part in bold (and then goes further to do the classification). I want to edit the code so that it builds the concatenation and the input into the 100-d layer, and then extracts the activations. The data to train on can be found in the repo (it is the IMDB dataset).

The output for each sequence should be a (100,) tensor.

Here is the code for the CNN model:

Code language: python
import tensorflow as tf
import numpy as np


class TextCNN(object):
    """
    A CNN for text classification.
    Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.
    """
    def __init__(
      self, sequence_length, num_classes, vocab_size,
      embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):

        # Placeholders for input, output and dropout
        self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
        self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
        self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")

        # Keeping track of l2 regularization loss (optional)
        l2_loss = tf.constant(0.0)

        # Embedding layer
        with tf.device('/cpu:0'), tf.name_scope("embedding"):
            self.W = tf.Variable(
                tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
                name="W")
            self.embedded_chars = tf.nn.embedding_lookup(self.W, self.input_x)
            self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)

        # Create a convolution + maxpool layer for each filter size
        pooled_outputs = []
        for i, filter_size in enumerate(filter_sizes):
            with tf.name_scope("conv-maxpool-%s" % filter_size):
                # Convolution Layer
                filter_shape = [filter_size, embedding_size, 1, num_filters]
                W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
                b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
                conv = tf.nn.conv2d(
                    self.embedded_chars_expanded,
                    W,
                    strides=[1, 1, 1, 1],
                    padding="VALID",
                    name="conv")
                # Apply nonlinearity
                h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
                # Maxpooling over the outputs
                pooled = tf.nn.max_pool(
                    h,
                    ksize=[1, sequence_length - filter_size + 1, 1, 1],
                    strides=[1, 1, 1, 1],
                    padding='VALID',
                    name="pool")
                pooled_outputs.append(pooled)

        # Combine all the pooled features
        num_filters_total = num_filters * len(filter_sizes)
        self.h_pool = tf.concat(pooled_outputs, 3)
        self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

        # Add dropout
        with tf.name_scope("dropout"):
            self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)

        # Final (unnormalized) scores and predictions
        with tf.name_scope("output"):
            W = tf.get_variable(
                "W",
                shape=[num_filters_total, num_classes],
                initializer=tf.contrib.layers.xavier_initializer())
            b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
            l2_loss += tf.nn.l2_loss(W)
            l2_loss += tf.nn.l2_loss(b)
            self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
            self.predictions = tf.argmax(self.scores, 1, name="predictions")

        # Calculate mean cross-entropy loss
        with tf.name_scope("loss"):
            losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
            self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss

        # Accuracy
        with tf.name_scope("accuracy"):
            correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
            self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")

I want to feed the concatenation into a 100-d layer and get its activations. I would do this at line 59 (just before the # Add dropout section near the bottom, and then comment out everything below it). How do I do this?
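
Concretely, this is a minimal sketch of the kind of edit I have in mind, keeping the repository's TF1 style (untested; the W_fc/b_fc names and the self.features attribute are my own, not from the repo):

Code language: python

        # Sketch: replace the dropout/output/loss sections with a 100-d fully
        # connected layer and expose its activations as self.features
        with tf.name_scope("fc"):
            W_fc = tf.get_variable(
                "W_fc",
                shape=[num_filters_total, 100],
                initializer=tf.contrib.layers.xavier_initializer())
            b_fc = tf.Variable(tf.constant(0.1, shape=[100]), name="b_fc")
            self.features = tf.nn.relu(
                tf.nn.xw_plus_b(self.h_pool_flat, W_fc, b_fc), name="features")

After training, the per-sequence representation could then be fetched with something like sess.run(cnn.features, feed_dict={cnn.input_x: x_batch}).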


1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-05-11 19:35:50

The convolutional neural network you want to implement is a good baseline in the NLP field. It was first introduced in (Kim, 2014).

I find the code you linked very useful, but probably more complex than we need. I have tried to rewrite the network in plain Keras (the only thing I leave out is the regularization).

Code language: python
from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     Concatenate, Flatten, Dropout, Dense)
from tensorflow.keras.models import Model


def TextCNN(sequence_length, num_classes, vocab_size,
            embedding_size, filter_sizes, num_filters,
            embedding_matrix):

    sequence_input = Input(shape=(sequence_length,), dtype='int32')

    embedding_layer = Embedding(vocab_size,
                                embedding_size,
                                weights=[embedding_matrix],
                                input_length=sequence_length,
                                trainable=False)

    embedded_sequences = embedding_layer(sequence_input)

    convs = []
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(embedded_sequences)
        x = MaxPooling1D(pool_size=2)(x)
        convs.append(x)

    x = Concatenate(axis=-1)(convs)
    x = Flatten()(x)
    x = Dropout(0.5)(x)
    output = Dense(num_classes, activation='softmax')(x)

    model = Model(sequence_input, output)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    return model

The initial embeddings are set with the weights learned by GloVe. You can load vectors produced with other techniques (Word2Vec or FastText) instead, or learn a new embedding representation from scratch and plug that in. The fit is then computed as usual.
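
For example, here is a minimal sketch of how embedding_matrix could be built from a pretrained GloVe text file (the file name and the word_index mapping, e.g. from a fitted Keras Tokenizer, are assumptions of mine):

Code language: python

import numpy as np

# Hypothetical setup: word_index maps each token to an integer id < vocab_size
embedding_size = 300
embedding_matrix = np.zeros((vocab_size, embedding_size))
with open('glove.840B.300d.txt', encoding='utf-8') as f:
    for line in f:
        # rsplit keeps multi-word tokens (present in the 840B file) intact
        parts = line.rstrip().rsplit(' ', embedding_size)
        word, vector = parts[0], np.asarray(parts[1:], dtype='float32')
        idx = word_index.get(word)
        if idx is not None and idx < vocab_size:
            embedding_matrix[idx] = vector

Rows for words missing from GloVe simply stay at zero.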

I stress that the above is the original formulation of the network. If you want to insert a 100-dimensional dense layer before the output, it can be modified as simply as this (here is a code reference):

Code language: python
def TextCNN(sequence_length, num_classes, vocab_size, 
            embedding_size, filter_sizes, num_filters, 
            embedding_matrix):

    sequence_input = Input(shape=(sequence_length,), dtype='int32')

    embedding_layer = Embedding(vocab_size,
                                embedding_size,
                                weights=[embedding_matrix],
                                input_length=sequence_length,
                                trainable=False)

    embedded_sequences = embedding_layer(sequence_input)

    convs = []
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(embedded_sequences)
        x = MaxPooling1D(pool_size=2)(x)
        convs.append(x)

    x = Concatenate(axis=-1)(convs)
    x = Flatten()(x)
    x = Dense(100, activation='relu', name='extractor')(x)
    x = Dropout(0.5)(x)
    output = Dense(num_classes, activation='softmax')(x)

    model = Model(sequence_input, output)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    return model

model = TextCNN(sequence_length=50, num_classes=10, vocab_size=3333,
                embedding_size=100, filter_sizes=[3, 4, 5], num_filters=50,
                embedding_matrix=embedding_matrix)

model.fit(....)

To extract the features we are interested in, we need the output of our Dense 100 layer (which we named 'extractor'). I also suggest this tutorial on filtering and feature extraction.

Code language: python
import numpy as np

# Build a sub-model that stops at the 'extractor' layer and run it on data
extractor = Model(model.input, model.get_layer('extractor').output)
representation = extractor.predict(np.random.randint(0, 200, (1000, 50)))

representation will be an array of shape (n_samples, 100).
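
As a usage note, here is a sketch of feeding real tokenized and padded sequences instead of random ids (the tokenizer and the example texts are assumptions; tokenizer would be a Keras Tokenizer fitted on the training corpus):

Code language: python

from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = tokenizer.texts_to_sequences(["this movie was great",
                                     "a terrible, boring plot"])
padded = pad_sequences(seqs, maxlen=50)  # matches sequence_length=50 above
features = extractor.predict(padded)     # one (100,) vector per sequence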

0 votes
Original link:

https://stackoverflow.com/questions/61688104
