
Shallow vs. deep copies in the __getitem__ function

Stack Overflow user
Asked 2021-12-14 15:15:55
1 answer · 546 views · 0 followers · 0 votes

I ran into a problem with a custom PyTorch Dataset that I believe is related to shallow vs. deep copies inside the __getitem__() function. However, there is some behavior I don't understand, and I'm not sure whether it comes from the class or from somewhere else.

I created a minimal working example based on my more complex use case. Originally, I save a dataset as .hdf5 and load it in __init__(). For the NN, I want the elements normalized to 1 (divided by their sum), with the sum returned separately:

# imports
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
# create dataset with fixed seed
np.random.seed(1234)
data = np.random.rand(20, 4)
print(data)
# create custom dataset class

class TestDataset(Dataset):
    """ Test dataset to illustrate bug in get_item """

    def __init__(self, data_array, transform=None, apply_logit=True, with_noise=False):
        """
        Args:
            data_array (np.array): representing data loaded from hdf5 file or so
            transform (None, callable or 'norm'): if data should be transformed
            apply_logit (bool): if logit transform should be applied at the end
            with_noise (bool): if noise should be applied in each call
        """

        self.data = data_array

        self.transform = transform
        self.apply_logit = apply_logit
        self.with_noise = with_noise


    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()

        data = self.data[idx]

        if self.with_noise:
            data = add_noise(data)

        data_sum = data.sum(axis=(-1), keepdims=True)

        if self.transform:
            if self.transform == 'norm':
                data /= (data_sum + 1e-16) # this should be avoided
            else:
                data = self.transform(data)

        if self.apply_logit:
            data = logit_trafo(data)

        sample = {'data': data, 'data_sum': data_sum.squeeze()}

        return sample

def get_dataloader(data_array, device, batch_size=2, apply_logit=True, with_noise=False, normed=False):

    # use '==' for string comparison; 'is' checks identity and is unreliable here
    kwargs = {'num_workers': 2, 'pin_memory': True} if device.type == 'cuda' else {}

    dataset = TestDataset(data_array, transform='norm' if normed else None, apply_logit=apply_logit,
                              with_noise=with_noise)
    return DataLoader(dataset, batch_size=batch_size, shuffle=False, **kwargs)

def add_noise(input_tensor):
    noise = np.random.rand(*input_tensor.shape)*1e-6
    return input_tensor+noise

ALPHA = 1e-6
def logit(x):
    return np.log(x / (1.0 - x))

def logit_trafo(x):
    local_x = ALPHA + (1. - 2.*ALPHA) * x
    return logit(local_x)
# with_noise=False will print just [1. 1.] after one epoch (due to the /= operation above)
# with_noise=True will remove this issue. Why?

mydata = get_dataloader(data, torch.device('cpu'), apply_logit=False, with_noise=False, normed=True)
with torch.no_grad():
    for n in range(3):
        print("epoch: ", n)
        for i, elem in enumerate(mydata):
            print('batch: ', i, #elem['data'].numpy(), 
                  elem['data_sum'].numpy())

I get the following output:

[[0.19151945 0.62210877 0.43772774 0.78535858]
 [0.77997581 0.27259261 0.27646426 0.80187218]
 [0.95813935 0.87593263 0.35781727 0.50099513]
 [0.68346294 0.71270203 0.37025075 0.56119619]
 [0.50308317 0.01376845 0.77282662 0.88264119]
 [0.36488598 0.61539618 0.07538124 0.36882401]
 [0.9331401  0.65137814 0.39720258 0.78873014]
 [0.31683612 0.56809865 0.86912739 0.43617342]
 [0.80214764 0.14376682 0.70426097 0.70458131]
 [0.21879211 0.92486763 0.44214076 0.90931596]
 [0.05980922 0.18428708 0.04735528 0.67488094]
 [0.59462478 0.53331016 0.04332406 0.56143308]
 [0.32966845 0.50296683 0.11189432 0.60719371]
 [0.56594464 0.00676406 0.61744171 0.91212289]
 [0.79052413 0.99208147 0.95880176 0.79196414]
 [0.28525096 0.62491671 0.4780938  0.19567518]
 [0.38231745 0.05387369 0.45164841 0.98200474]
 [0.1239427  0.1193809  0.73852306 0.58730363]
 [0.47163253 0.10712682 0.22921857 0.89996519]
 [0.41675354 0.53585166 0.00620852 0.30064171]]

epoch:  0
batch:  0 [2.03671454 2.13090485]
batch:  1 [2.69288438 2.3276119 ]
batch:  2 [2.17231943 1.42448741]
batch:  3 [2.77045097 2.19023559]
batch:  4 [2.35475675 2.49511645]
batch:  5 [0.96633253 1.73269209]
batch:  6 [1.5517233 2.1022733]
batch:  7 [3.5333715  1.58393664]
batch:  8 [1.86984429 1.56915029]
batch:  9 [1.70794311 1.25945542]
epoch:  1
batch:  0 [1. 1.]
batch:  1 [1. 1.]
batch:  2 [1. 1.]
batch:  3 [1. 1.]
batch:  4 [1. 1.]
batch:  5 [1. 1.]
batch:  6 [1. 1.]
batch:  7 [1. 1.]
batch:  8 [1. 1.]
batch:  9 [1. 1.]
epoch:  2
batch:  0 [1. 1.]
batch:  1 [1. 1.]
batch:  2 [1. 1.]
batch:  3 [1. 1.]
batch:  4 [1. 1.]
batch:  5 [1. 1.]
batch:  6 [1. 1.]
batch:  7 [1. 1.]
batch:  8 [1. 1.]
batch:  9 [1. 1.]

After the first epoch, the entries that should give the sum of each input vector all return 1. As I understand it, the reason is that the data /= (data_sum + 1e-16) operation in __getitem__() overwrites the original array (because data is only a shallow copy of it). However, when I create the dataloader with with_noise=True, the output becomes
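The aliasing at work here can be reproduced with plain NumPy, independent of PyTorch (a minimal sketch; the array values are made up for illustration):

```python
import numpy as np

a = np.array([[1.0, 3.0], [2.0, 2.0]])

row = a[0]        # basic indexing returns a *view*, not a copy
row /= row.sum()  # in-place division writes through to `a`

print(a[0])       # [0.25 0.75] -- the stored array itself was normalized
print(a[0].sum()) # 1.0 -- exactly the symptom seen from epoch 1 onwards
```

Each call to such a `__getitem__` normalizes the stored row again, which is why every sum collapses to 1 after the first pass over the data.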

epoch:  0
batch:  0 [2.03671714 2.13090728]
batch:  1 [2.69288618 2.32761437]
batch:  2 [2.17232151 1.42449024]
batch:  3 [2.7704527  2.19023717]
batch:  4 [2.35475926 2.49511859]
batch:  5 [0.96633553 1.73269352]
batch:  6 [1.55172434 2.10227475]
batch:  7 [3.53337356 1.58393908]
batch:  8 [1.86984558 1.56915276]
batch:  9 [1.70794503 1.25945833]
epoch:  1
batch:  0 [2.03671729 2.13090765]
batch:  1 [2.69288721 2.32761405]
batch:  2 [2.17232208 1.42449008]
batch:  3 [2.77045253 2.19023718]
batch:  4 [2.35475815 2.4951189 ]
batch:  5 [0.96633595 1.73269401]
batch:  6 [1.55172476 2.10227547]
batch:  7 [3.53337382 1.58393882]
batch:  8 [1.86984584 1.56915165]
batch:  9 [1.70794547 1.25945795]
epoch:  2
batch:  0 [2.03671533 2.13090593]
batch:  1 [2.69288633 2.32761373]
batch:  2 [2.17232158 1.42448975]
batch:  3 [2.77045371 2.19023796]
batch:  4 [2.3547586  2.49511857]
batch:  5 [0.96633348 1.73269476]
batch:  6 [1.55172544 2.10227616]
batch:  7 [3.53337367 1.58393892]
batch:  8 [1.86984568 1.56915256]
batch:  9 [1.70794379 1.25945825]

The same happens if the noise I add is multiplied by 0.

Why is that? Why does it suddenly become a deep copy?
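The with_noise behavior follows from how add_noise() is written: input_tensor + noise allocates a fresh array regardless of the noise values, so even all-zero noise severs the link to the stored data. A minimal sketch of that effect:

```python
import numpy as np

base = np.array([2.0, 6.0])
noise = np.zeros_like(base)  # even all-zero noise behaves the same

out = base + noise           # `+` always allocates a brand-new array
out /= out.sum()             # the in-place op touches only the new array

print(out)                   # [0.25 0.75]
print(base)                  # [2. 6.] -- the original is left untouched
```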


1 Answer

Stack Overflow user

Answered 2021-12-14 15:57:30

Thanks, Mad Physicist! I had to read your comment and the code a few times to see the problem:

Without the call to add_noise(), the line data /= (data_sum + 1e-16) changes the original input array in place, so every subsequent call returns the already-normalized data. The call to add_noise(), as coded, creates a new array; the in-place operation then changes only that new array and never touches the original (that was the step I missed). Subsequent calls therefore start again from the original, unnormalized array.
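One way to avoid the mutation entirely (a hypothetical fix sketch, not from the original post; the class and names are illustrative) is to copy the slice, or divide out-of-place, inside __getitem__:

```python
import numpy as np

class SafeDataset:
    """Minimal stand-in for the Dataset above; normalizes without mutating."""
    def __init__(self, data_array):
        self.data = data_array

    def __getitem__(self, idx):
        data = self.data[idx].copy()         # explicit copy breaks the view
        data_sum = data.sum(axis=-1, keepdims=True)
        data = data / (data_sum + 1e-16)     # out-of-place division: also safe
        return {'data': data, 'data_sum': data_sum.squeeze()}

ds = SafeDataset(np.array([[1.0, 3.0]]))
ds[0]; ds[0]                                 # repeated calls, as in an epoch loop
print(ds.data)                               # [[1. 3.]] -- stored data unchanged
```

Either the .copy() or the out-of-place `data / ...` alone is enough; using both just makes the intent explicit.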

Page content originally provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/70351242