
Training a sub-model instead of the full model in TensorFlow Federated

Stack Overflow user
Asked on 2021-10-29 10:02:33
1 answer · 170 views · 0 followers · 3 votes

I am trying to modify the TensorFlow Federated examples. I want to create a sub-model from the original model, use the newly created sub-model during the training phase, and then send its weights to the server so that it can update the original model.

I know this should not be done inside client_update; the server should instead send the correct sub-model directly to each client, but for now I prefer to do it this way.

Right now I have two problems:

  1. It seems I cannot create a new model inside the client_update function, as shown below:
    @tf.function
    def client_update(model, dataset, server_message, client_optimizer):
        """Performs client local training of `model` on `dataset`.
        Args:
          model: A `tff.learning.Model`.
          dataset: A `tf.data.Dataset`.
          server_message: A `BroadcastMessage` from server.
          client_optimizer: A `tf.keras.optimizers.Optimizer`.
        Returns:
          A `ClientOutput`.
        """

        model_weights = model.weights

        # Building a new model here creates variables inside the tf.function,
        # which triggers the error shown below.
        import dropout_model
        dropout_model = dropout_model.get_dropoutmodel(model)

        initial_weights = server_message.model_weights
        tf.nest.map_structure(lambda v, t: v.assign(t), model_weights,
                              initial_weights)
        .....

The error is this:

ValueError: tf.function-decorated function tried to create variables on non-first call.
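
For reference, this TensorFlow constraint is reproducible outside TFF with a minimal sketch (`make_and_use` is a hypothetical stand-in for any function that builds variables inside a `tf.function`):

    import tensorflow as tf

    @tf.function
    def make_and_use(x):
        # A fresh variable is created on every trace, which tf.function
        # forbids beyond the first call.
        v = tf.Variable(1.0)
        return v.assign_add(x)

    # Raises: ValueError: tf.function-decorated function tried to create
    # variables on non-first call.
    make_and_use(tf.constant(1.0))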

The model is created as follows:

    def from_original_to_submodel(only_digits=True):
        """The CNN model used in https://arxiv.org/abs/1602.05629.
        Args:
          only_digits: If True, uses a final layer with 10 outputs, for use with the
            digits only EMNIST dataset. If False, uses 62 outputs for the larger
            dataset.
        Returns:
          An uncompiled `tf.keras.Model`.
        """
        data_format = 'channels_last'
        input_shape = [28, 28, 1]
        max_pool = functools.partial(
            tf.keras.layers.MaxPooling2D,
            pool_size=(2, 2),
            padding='same',
            data_format=data_format)
        conv2d = functools.partial(
            tf.keras.layers.Conv2D,
            kernel_size=5,
            padding='same',
            data_format=data_format,
            activation=tf.nn.relu)
        model = tf.keras.models.Sequential([
            conv2d(filters=32, input_shape=input_shape),
            max_pool(),
            conv2d(filters=64),
            max_pool(),
            tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(410, activation=tf.nn.relu),  # 20% dropout
            tf.keras.layers.Dense(10 if only_digits else 62),
        ])
        return model
    
    def get_dropoutmodel(model):
        keras_model = from_original_to_submodel(only_digits=False)
        loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        return tff.learning.from_keras_model(keras_model, loss=loss, input_spec=model.input_spec)
  2. This is more of a theoretical question. I want to train a sub-model as I said, so I will take the original model weights sent by the server in initial_weights and, for each layer, assign a random sub-list of those weights to the sub-model. For example, if initial_weights for layer 6 contains 100 elements and the same layer of the new sub-model has only 40, I would pick 40 elements using a random seed, train them, and then send the seed to the server so that it can reconstruct the same indices and update only those (see the sketch below). Is that right? My second version would be to still create 100 elements (40 random ones and 60 set to zero), but I think that would hurt model performance during server-side aggregation.
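
A minimal sketch of the seed idea from point 2, assuming a single flat layer of 100 weights (`select_indices` is a hypothetical helper, not a TFF API):

    import numpy as np

    def select_indices(seed, full_size=100, sub_size=40):
        # Client and server derive the same subset from the shared seed,
        # so only the seed needs to travel over the network.
        rng = np.random.default_rng(seed)
        return np.sort(rng.choice(full_size, size=sub_size, replace=False))

    client_indices = select_indices(seed=42)  # chosen on the client
    server_indices = select_indices(seed=42)  # reconstructed on the server
    assert (client_indices == server_indices).all()
    # The server can then scatter the 40 returned values back into the
    # 100-element layer at exactly these indices.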

Edit:

I modified the client_update_fn function as follows:

@tff.tf_computation(tf_dataset_type, server_message_type)
def client_update_fn(tf_dataset, server_message):
    # Both models are built here, outside the @tf.function, so their
    # variables are created before client_update is traced.
    model = model_fn()
    submodel = submodel_fn()
    client_optimizer = client_optimizer_fn()
    return client_update(model, submodel, tf_dataset, server_message, client_optimizer)

I also added a new parameter to the build_federated_averaging_process function, as follows:

def build_federated_averaging_process(
        model_fn, submodel_fn,
        server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
        client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.1)):

In main.py, I did this:

def tff_submodel_fn():
    keras_model = create_submodel_dropout(only_digits=False)
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    return tff.learning.from_keras_model(keras_model, loss=loss, input_spec=train_data.element_type_structure)

iterative_process = simple_fedavg_tff.build_federated_averaging_process(
    tff_model_fn, tff_submodel_fn, server_optimizer_fn, client_optimizer_fn)
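
For completeness, the process is then driven by the usual initialize/next loop (a sketch assuming `train_data` is a preprocessed `tff.simulation.datasets.ClientData`, as the use of `element_type_structure` above suggests):

    state = iterative_process.initialize()
    for round_num in range(10):
        # Sample a couple of client datasets per round (hypothetical sampling).
        sampled = [train_data.create_tf_dataset_for_client(cid)
                   for cid in train_data.client_ids[:2]]
        state, train_metrics = iterative_process.next(state, sampled)
        print(round_num, train_metrics)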

Now, inside client_update, I can use the sub-model:

@tf.function
def client_update(model, submodel, dataset, server_message, client_optimizer):
    """Performs client local training of `model` on `dataset`.
    Args:
      model: A `tff.learning.Model`.
      submodel: A `tff.learning.Model` wrapping the sub-model.
      dataset: A `tf.data.Dataset`.
      server_message: A `BroadcastMessage` from server.
      client_optimizer: A `tf.keras.optimizers.Optimizer`.
    Returns:
      A `ClientOutput`.
    """

    model_weights = model.weights
    initial_weights = server_message.model_weights
    submodel_weights = submodel.weights
    tf.nest.map_structure(lambda v, t: v.assign(t), submodel_weights,
                          initial_weights)
    num_examples = tf.constant(0, dtype=tf.int32)
    loss_sum = tf.constant(0, dtype=tf.float32)

    # Explicit use of `iter` for the dataset is a trick that makes TFF more
    # robust in GPU simulation and slightly more performant in the
    # unconventional usage of a large number of small datasets.
    weights_delta = []
    testing = False
    if not testing:
        for batch in iter(dataset):
            with tf.GradientTape() as tape:
                # Note: the forward pass runs on `model`, while the gradients
                # below are requested for `submodel`'s variables.
                outputs = model.forward_pass(batch)
            grads = tape.gradient(outputs.loss, submodel_weights.trainable)
            client_optimizer.apply_gradients(zip(grads, submodel_weights.trainable))
            batch_size = tf.shape(batch['x'])[0]
            num_examples += batch_size
            loss_sum += outputs.loss * tf.cast(batch_size, tf.float32)

        weights_delta = tf.nest.map_structure(lambda a, b: a - b,
                                              submodel_weights.trainable,
                                              initial_weights.trainable)
    client_weight = tf.cast(num_examples, tf.float32)
    return ClientOutput(weights_delta, client_weight, loss_sum / client_weight)

I get this error:

    ValueError: No gradients provided for any variable: ['conv2d_2/kernel:0', 'conv2d_2/bias:0', 'conv2d_3/kernel:0', 'conv2d_3/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0', 'dense_3/kernel:0', 'dense_3/bias:0'].

Fatal Python error: Segmentation fault

Current thread 0x00007f27af18b740 (most recent call first):
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1853 in _create_c_op
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 2041 in __init__
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3557 in _create_op_internal
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 599 in _create_op_internal
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 748 in _apply_op_helper
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 1276 in delete_iterator
  File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 549 in __del__

Process finished with exit code 11

For now the sub-model is identical to the original model (create_submodel_dropout is a copy of the original model-building function), so I don't know what's wrong.
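
One way to see where the gradients are lost, independent of TFF: tape.gradient returns None for variables that never entered the forward pass, and above the loss comes from `model` while gradients are requested for `submodel`'s distinct variables. A minimal sketch:

    import tensorflow as tf

    a = tf.Variable(2.0)  # stands in for a weight of `model`
    b = tf.Variable(2.0)  # stands in for the copied weight of `submodel`

    with tf.GradientTape() as tape:
        loss = a * 3.0  # the forward pass only touches `a`

    # Prints [None] -> the optimizer then raises
    # "No gradients provided for any variable"
    print(tape.gradient(loss, [b]))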


1 Answer

Stack Overflow user
Answered on 2021-10-31 18:44:57

Generally, we cannot create variables inside a tf.function, because that method will be reused across TFF computations; technically, variables inside a tf.function can only be created on the first call. In most TFF library code you can see that the model is actually created outside the tf.function and passed in as an argument (e.g. tff.py#L101). Another possibility to look into is the tf.init_scope context, but be sure to read all of its documentation on caveats and behaviors.
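
A minimal sketch of that pattern (create the variables outside, only use them inside):

    import tensorflow as tf

    def build_model():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
        model.build(input_shape=(None, 5))  # variables created here, eagerly
        return model

    @tf.function
    def train_step(model, x):
        # The tf.function only *uses* existing variables, so retracing is safe.
        return model(x)

    model = build_model()  # built once, outside any tf.function
    train_step(model, tf.ones([1, 5]))
    train_step(model, tf.ones([2, 5]))  # retraces without a variable error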

TFF has a newer communication primitive called tff.federated_select that could be very helpful here. It comes with two tutorials:

  1. tff.federated_select, which specifically discusses the communication primitive.
  2. federated_select and sparse aggregation, which demonstrates federated learning for linear regression using federated_select, and shows the need for "sparse aggregation", i.e. the difficulty created by zero-padding (illustrated by the sketch after this list).
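
To illustrate that zero-padding difficulty: if two clients train disjoint 40-element subsets and pad the rest with zeros, a naive mean divides every slot by the total number of clients rather than by the number of clients that actually updated it. A small sketch (the indices and deltas are hypothetical):

    import tensorflow as tf

    full_size = 100
    indices = tf.constant([[3], [7], [42]])  # slots one client trained
    deltas = tf.constant([0.5, -0.2, 1.0])   # its trained deltas
    padded = tf.scatter_nd(indices, deltas, [full_size])  # other slots are zero

    other = tf.roll(padded, shift=1, axis=0)  # a second client, disjoint slots
    naive_mean = (padded + other) / 2.0
    print(tf.reduce_max(naive_mean).numpy())  # 0.5, though the trained delta was 1.0
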
3 votes
The original content of this page was provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/69767043
