
Implementing a custom Min_MAX_Pooling layer in TensorFlow

Stack Overflow user
Asked 2022-08-15 10:42:11
1 answer · 60 views · 0 followers · Score 0

Hi, I'm trying to implement noise reduction for time-series data in TensorFlow using a Lambda layer. Below is the function that uses min-max pooling.

def min_max_pooling(sequence, window=5):

    output = tf.constant([],dtype='float64')
    max_ = tf.Variable(0,dtype = 'float64')
    min_ = tf.Variable(0,dtype = 'float64')

    # loop over the sequence in chunks, get the min/max values of each chunk,
    # and concat all of them into a single tensor to return as the output

    for i in range(window, len(sequence) + window, window):
        chunk = sequence[i - window:i]
        print(i)
        
        # get the max and min values from chunk

        max_.assign(chunk[tf.argmax(chunk)])
        min_.assign(chunk[tf.argmin(chunk)])

        # get the index of max and min values from chunk

        max_index = tf.argmax(chunk)
        min_index = tf.argmin(chunk)

        
        # append values to output tensor according to the original sequence
        # if min was first in sequence than max i,e.  tf.greater(max_index , min_index) == True,  
        # append min first and then max else vice versa

        if tf.greater(max_index , min_index):
            output = tf.concat([output, [min_]],-1)
            output = tf.concat([output, [max_]],-1) 

        else:
            output = tf.concat([output, [max_]],-1)
            output = tf.concat([output, [min_]],-1)

    return tf.convert_to_tensor(output)

# print(tf.autograph.to_code(min_max_pooling))

# min_max_pooling = tf.autograph.to_graph(min_max_pooling)

The function takes two arguments: a 1-D tensor of time-series data (with values between 0 and 1) and a window size. It computes the output sequence with respect to the window size and returns a tensor. Essentially, it works like MaxPooling1D and helps reduce noise (downsample the data), but it also accounts for the minima, which is why I want to implement it. Below is a test output of this function.

tf.Tensor(
[0.99941323 0.98313041 0.97799619 0.98533079 0.98635764 0.99457239
 0.99413232 0.99105178 0.99193193 0.98753117 0.98489071 0.98371718
 0.98459733 0.98445064 0.98386387 0.98547748 0.99163855 0.99061171
 0.99735954 1.        ], shape=(20,), dtype=float64)

(figure: min max pooling)
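To make the intended per-window behaviour easy to check, here is a rough plain-Python sketch of the same logic (`min_max_pool_py` is a made-up name, not from the question): for each window it keeps the max and the min, in the order they occur in the window.

```python
def min_max_pool_py(sequence, window=5):
    """For each window, emit its max and min in their order of occurrence."""
    output = []
    for i in range(0, len(sequence), window):
        chunk = sequence[i:i + window]
        # index of the first occurrence of the max / min in this chunk
        max_index = max(range(len(chunk)), key=chunk.__getitem__)
        min_index = min(range(len(chunk)), key=chunk.__getitem__)
        pair = [chunk[max_index], chunk[min_index]]
        if max_index > min_index:  # the min occurred first in this chunk
            pair.reverse()
        output.extend(pair)
    return output
```

Applied with window=5 to the 20-value tensor above, it yields the 8-value downsampled sequence shown further down in the question.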

Now the problem arises when I use it as a Lambda layer in a TensorFlow model. I get all kinds of errors and have tried to resolve them all, but still can't fix the issue. I can't get it to work with TensorFlow. Here is the code for it.

input_layer = tf.keras.layers.Input(shape=(1000,), name="input_layer")

output_layer = tf.keras.layers.Lambda(min_max_pooling, name="lambda_layer")(input_layer)

model = tf.keras.models.Model(input_layer, output_layer, name="model")

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005), loss="categorical_crossentropy", run_eagerly=True)

All I want to build is a layer that takes a sequence of time-series data and removes noise from it like MaxPooling1D does, except that it also takes into account the importance of the minima in the sequence.

The end result should be: passing a tensor into the layer,

tf.Tensor(
[0.99941323 0.98313041 0.97799619 0.98533079 0.98635764 0.99457239
 0.99413232 0.99105178 0.99193193 0.98753117 0.98489071 0.98371718
 0.98459733 0.98445064 0.98386387 0.98547748 0.99163855 0.99061171
 0.99735954 1.        ], shape=(20,), dtype=float64)

and getting the downsampled output as

tf.Tensor(
[0.99941323 0.97799619 0.99457239 0.98753117 0.98489071 0.98371718
 0.98547748 1.        ], shape=(8,), dtype=float64)

Now, I know I don't understand TensorFlow very well, but could anyone help me with complete working code for this? I also don't know how to pass the window parameter to the Lambda layer. Any help is appreciated.

ValueError: Exception encountered when calling layer "lambda_layer" (type Lambda).

New error:

The following Variables were created within a Lambda layer (lambda_layer)
but are not tracked by said layer:
  <tf.Variable 'lambda_layer/map/while/Variable:0' shape=() dtype=float32>
  <tf.Variable 'lambda_layer/map/while/Variable:0' shape=() dtype=float32>
The layer cannot safely ensure proper Variable reuse across multiple
calls, and consequently this behavior is disallowed for safety. Lambda
layers are not well suited to stateful computation; instead, writing a
subclassed Layer is the recommend way to define layers with
Variables.

Second error:

data = pd.read_csv('/content/Stock_data.csv', parse_dates=False,
                   index_col=1)

tensor = data.close.head(10).to_numpy(dtype='float64')
tensor = tensor / max(tensor)
print(tensor)
# tensor = tf.convert_to_tensor(tensor)

print(tensor)
p = model.predict(tensor)
print(p)

Error:

[1.         0.98370762 0.97857038 0.98590929 0.98693674 0.99515632
 0.99471598 0.99163364 0.99251431 0.98811096]
[1.         0.98370762 0.97857038 0.98590929 0.98693674 0.99515632
 0.99471598 0.99163364 0.99251431 0.98811096]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-59-5028b0f38a02> in <module>()
      8 
      9 print(tensor)
---> 10 p = model.predict(tensor)
     11 print(p)

1 frames
<ipython-input-54-a3aac9d10348> in multiple_min_max_pooling(sequences)
      1 def multiple_min_max_pooling(sequences):
----> 2     return tf.map_fn(min_max_pooling, sequences)

ValueError: Exception encountered when calling layer "lambda_layer" (type Lambda).

in user code:

    File "<ipython-input-53-495c7ee6064b>", line 10, in min_max_pooling  *
        for i in range(window, len(sequence) + window, window):

    ValueError: len requires a non-scalar tensor, got one of shape []


Call arguments received:
  • inputs=tf.Tensor(shape=(10,), dtype=float32)
  • mask=None
  • training=False
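A side note on this traceback (my reading, not from the original post): Keras treats the first axis of whatever is passed to predict as the batch axis, so a shape-(10,) array is interpreted as ten scalar samples, and each scalar reaches the Lambda as a shape-() tensor; that is why len() complains about a scalar. Wrapping the data as a batch of one sequence fixes the shapes (sketch with stand-in data instead of the CSV):

```python
import numpy as np

# stand-in for the normalized `close` column from the CSV
tensor = np.linspace(0.9, 1.0, 10)

# shape (10,)  -> Keras would see 10 scalar samples
# shape (1,10) -> Keras sees 1 sample, i.e. one sequence of length 10
batch = tensor.reshape(1, -1)
print(batch.shape)  # (1, 10)
```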

1 Answer

Stack Overflow user
Answered 2022-08-15 11:26:17

Aside from the fact that, in my experience, TF usually works in float32 (float64 takes twice the memory, and the extra precision/range is usually not useful), your problem is that you are not taking into account that TF works with batched data.

In other words, your layer receives a batch of sequences, not a single sequence. Your code works fine for a single sequence, so it can easily (though not efficiently) be fixed with tf.map_fn:

def multiple_min_max_pooling(sequences):
    return tf.map_fn(min_max_pooling, sequences)

...
tf.keras.layers.Lambda(multiple_min_max_pooling, name="lambda_layer")(input_layer)

Regarding the window, you have two options: either define a custom layer (more elegant and readable) or define a function that returns a function (this "design pattern" is called a higher-order function, if you want to read more online about how it works):

def multiple_min_max_pooling(window=5):
    def fn(sequences):
        return tf.map_fn(min_max_pooling(window), sequences)
    return fn
def min_max_pooling(window=5):
    def fn(sequence):
        output = tf.constant([],dtype='float32')
        max_ = tf.Variable(0,dtype = 'float32')
        min_ = tf.Variable(0,dtype = 'float32')

        # loop over the sequence in chunks, get the min/max values,
        # and concat all of them into a single tensor

        for i in range(window, len(sequence) + window, window):
            chunk = sequence[i - window:i]
            print(i)

            # get the max and min values from chunk

            max_.assign(chunk[tf.argmax(chunk)])
            min_.assign(chunk[tf.argmin(chunk)])

            # get the index of max and min values from chunk

            max_index = tf.argmax(chunk)
            min_index = tf.argmin(chunk)


            # append values to output tensor according to the original sequence
            # if min was first in sequence than max i,e.  tf.greater(max_index , min_index) == True,
            # append min first and then max else vice versa

            if tf.greater(max_index , min_index):
                output = tf.concat([output, [min_]],-1)
                output = tf.concat([output, [max_]],-1)

            else:
                output = tf.concat([output, [max_]],-1)
                output = tf.concat([output, [min_]],-1)

        return tf.convert_to_tensor(output)
    return fn

# and now you can do this:
tf.keras.layers.Lambda(multiple_min_max_pooling(window=2), name="lambda_layer")(input_layer)
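Note that the fn above still creates tf.Variables inside the layer, which is what triggered the "Variables were created within a Lambda layer" error earlier. A rough Variable-free alternative (`min_max_pooling_vec` is a made-up name; it assumes the sequence length is divisible by the window) reshapes the sequence into windows and uses only stateless reductions:

```python
import tensorflow as tf

def min_max_pooling_vec(sequence, window=5):
    # (n_windows, window); assumes len(sequence) is a multiple of window
    chunks = tf.reshape(sequence, (-1, window))
    mx = tf.reduce_max(chunks, axis=1)
    mn = tf.reduce_min(chunks, axis=1)
    # True where the max comes before the min inside the window
    max_first = tf.argmax(chunks, axis=1) <= tf.argmin(chunks, axis=1)
    first = tf.where(max_first, mx, mn)
    second = tf.where(max_first, mn, mx)
    # interleave the pairs: (n_windows, 2) -> (2 * n_windows,)
    return tf.reshape(tf.stack([first, second], axis=1), [-1])
```

Because it has no Python loop and no Variables, it can be used inside a Lambda layer (with tf.map_fn over the batch, or by reshaping the batch to (batch, -1, window) and reducing over the last axis). As an aside, tf.keras.layers.Lambda also accepts an `arguments` dict, e.g. `Lambda(fn, arguments={"window": 2})`, which is another way to pass the window without a closure.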
Score: 1
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/73359855
