
Porting TensorFlow 1 code to TensorFlow 2 (a model training loop without sess.run)

Stack Overflow user
Asked on 2020-10-30 15:19:16
1 answer · 152 views · 0 followers · 0 votes

I have this TF1 code, taken from S. Nikolenko's good book "Deep Learning".

It is a simple linear regression that learns k and b towards 2 and 1, respectively.

Code (Python):
%tensorflow_version 1.x

import numpy as np,tensorflow as tf
import pandas as pd

n_samples, batch_size, num_steps = 1000, 100, 20000 #set learning constants
X_data = np.random.uniform(1, 10, (n_samples, 1)) #generate array x from 1 to 10 of shape (1000,1)
print(X_data.shape)
y_data = 2 * X_data + 1 + np.random.normal(0, 2, (n_samples, 1)) #generate right answer and add noise to it (to make it scatter)

X = tf.placeholder(tf.float32, shape=(batch_size, 1)) #defining placeholders to put into session.run
y = tf.placeholder(tf.float32, shape=(batch_size, 1))

with tf.variable_scope('linear-regression'):
  k = tf.Variable(tf.random_normal((1, 1)), name='slope') #defining 2 variables with shape (1,1)
  b = tf.Variable(tf.zeros((1,)), name='bias') # and (1,)
  print(k.shape,b.shape)

y_pred = tf.matmul(X, k) + b # all predicted y in batch, represents linear formula k*x + b
loss = tf.reduce_sum((y - y_pred) ** 2)  # sum of squared errors over the batch
optimizer = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)
display_step = 100

with tf.Session() as sess:
  sess.run(tf.initialize_variables([k,b]))
  for i in range(num_steps):
    indices = np.random.choice(n_samples, batch_size) # taking random indices
    X_batch, y_batch = X_data[indices], y_data[indices] # taking x and y from generated examples
    _, loss_val, k_val, b_val = sess.run([optimizer, loss, k, b ],
      feed_dict = { X : X_batch, y : y_batch })
    if (i+1) % display_step == 0:
      print('Epoch %d: %.8f, k=%.4f, b=%.4f' %
        (i+1, loss_val, k_val, b_val))

I am trying to port it to TensorFlow 2.

For a long time I could not figure out what to use instead of sess.run() and feed_dict, which do their magic behind the scenes. The official documentation covers this in detail by writing model classes and so on, but I want to keep the code as close to the original as possible.

It has also been suggested to use tf.GradientTape to compute the derivatives, but I am struggling to apply it to my example (my attempt follows the sketch below).
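For reference, here is a minimal sketch of what tf.GradientTape does, using a toy function that is not part of the original post:

Code (Python):

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
  y = x ** 2                  # operations on watched variables are recorded on the tape
dy_dx = tape.gradient(y, x)   # derivative of x**2 at x=3 is 2*x = 6.0
print(dy_dx.numpy())          # 6.0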

Code (Python):
%tensorflow_version 2.x

import numpy as np,tensorflow as tf
import pandas as pd

n_samples, batch_size, num_steps = 1000, 100, 200
X_data = np.random.uniform(1, 10, (n_samples, 1))
y_data = 2 * X_data + 1 + np.random.normal(0, 2, (n_samples, 1))

X = tf.Variable(tf.zeros((batch_size, 1)), dtype=tf.float32)  # variables standing in for the old placeholders
y = tf.Variable(tf.zeros((batch_size, 1)), dtype=tf.float32)

k = tf.Variable(tf.random.normal((1, 1)), name='slope')
b = tf.Variable(tf.zeros((1,)), name='bias')

loss = lambda: tf.reduce_sum((y - (tf.matmul(X, k) + b)) ** 2)  # loss as a callable, as TF2 optimizers expect
optimizer = tf.keras.optimizers.SGD(0.01).minimize(loss, [k, b, X, y])  # this executes only one step, right here at definition time
display_step = 100


for i in range(num_steps):
  indices = np.random.choice(n_samples, batch_size)
  X_batch, y_batch = X_data[indices], y_data[indices]

I need the SGD optimizer to minimize the given loss function and learn the k and b values. How do I implement that from this point?
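One possible way to finish the loop from this point (a sketch, not from the original post; it assumes the TF 2.x Keras optimizer API of that era, where SGD.minimize accepts a callable loss and a var_list) is to rebuild the loss closure over the current batch inside the loop and train only k and b:

Code (Python):

optimizer = tf.keras.optimizers.SGD(0.0001)

for i in range(num_steps):
  indices = np.random.choice(n_samples, batch_size)
  X_batch = tf.constant(X_data[indices], dtype=tf.float32)
  y_batch = tf.constant(y_data[indices], dtype=tf.float32)
  # the closure captures the current batch; minimize() evaluates it under a GradientTape
  batch_loss = lambda: tf.reduce_sum((y_batch - (tf.matmul(X_batch, k) + b)) ** 2)
  optimizer.minimize(batch_loss, var_list=[k, b])
  if (i + 1) % display_step == 0:
    print('Epoch %d: %.8f, k=%.4f, b=%.4f' %
          (i + 1, float(batch_loss()), float(k.numpy()), float(b.numpy())))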


1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-11-21 11:50:53

After working through a lot of manuals, I figured out how to do by hand what TF1 hides inside sess.run, but without an optimizer.

For each training step, with respect to the trained variables:

  1. Compute the loss.
  2. Take the gradient of the loss function with respect to each trained variable (the rate at which the loss grows along it), scaled by the learning rate.
  3. Assign the adjusted new values to k and b.

Code (Python):
learn_rate = 0.0001

for i in range(num_steps):
  indices = np.random.choice(n_samples, batch_size)
  X_batch, y_batch = X_data[indices], y_data[indices]
  X.assign(tf.convert_to_tensor(X_batch, dtype=tf.float32))  # feed the batch into the X and y variables
  y.assign(tf.convert_to_tensor(y_batch, dtype=tf.float32))
  with tf.GradientTape(persistent=True) as tape:  # persistent, since gradient() is called twice
    loss_val = loss()

  dy_dk = tape.gradient(loss_val, k)  # d(loss)/dk
  dy_db = tape.gradient(loss_val, b)  # d(loss)/db

  k.assign_sub(dy_dk * learn_rate)  # gradient-descent step: k -= learn_rate * d(loss)/dk
  b.assign_sub(dy_db * learn_rate)
  if (i+1) % display_step == 0:
    print('Epoch %d: %.8f, k=%.4f, b=%.4f' %
          (i+1, float(loss_val), float(k.numpy()), float(b.numpy())))
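As a side note (not part of the original answer), both gradients can also be taken in a single call, which avoids the persistent tape:

Code (Python):

with tf.GradientTape() as tape:
  loss_val = loss()
dy_dk, dy_db = tape.gradient(loss_val, [k, b])  # a list of sources yields a list of gradients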
Votes: 0
Original content on this page is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/64611137
