We build a linear model from the data and define a loss function for it. In the linear model y = W×x + b, the goal is to find the values of W and b that minimize the loss. The gradient descent (Gradient Descent) optimization algorithm does this by repeatedly adjusting the model's variables in the direction that reduces the loss.
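The idea can be sketched without TensorFlow at all. Below is a minimal, dependency-free gradient-descent loop for y = W*x + b with a sum-of-squared-errors loss, using the same training data, initial values, learning rate, and iteration count as Example 1 below; the hand-derived analytic gradients are my addition, not part of the original post:

```python
# Training data (same as Example 1)
x_train = [1, 2, 3, 6, 8]
y_train = [4.8, 8.5, 10.4, 21.0, 25.3]

# Initial values and learning rate match Example 1
W, b = 0.1, -0.1
lr = 0.001

for _ in range(10000):
    # Analytic gradients of loss = sum((W*x + b - y)^2)
    grad_W = sum(2 * (W * xi + b - yi) * xi for xi, yi in zip(x_train, y_train))
    grad_b = sum(2 * (W * xi + b - yi) for xi, yi in zip(x_train, y_train))
    # Step against the gradient
    W -= lr * grad_W
    b -= lr * grad_b

loss = sum((W * xi + b - yi) ** 2 for xi, yi in zip(x_train, y_train))
print('W:%.4f  b:%.4f  loss:%.4f' % (W, b, loss))
```

This is exactly what the TensorFlow optimizer automates: TensorFlow derives the gradients for us instead of requiring them by hand.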

1. Example 1

# Import the TensorFlow module (TensorFlow 1.x style code)
import tensorflow as tf

# Create variables for W and b and give them initial values
W = tf.Variable([0.1], dtype=tf.float32)
b = tf.Variable([-0.1], dtype=tf.float32)

# Define a placeholder node x for the input data
x = tf.placeholder(tf.float32)

# Define the linear model
linear_model = W * x + b

# Define a placeholder node y for the target data
y = tf.placeholder(tf.float32)

# Define the loss function (sum of squared errors)
loss = tf.reduce_sum(tf.square(linear_model - y))

# Variable initializer
init = tf.global_variables_initializer()

# Create the session
sess = tf.Session()

# Training data
x_train = [1, 2, 3, 6, 8]
y_train = [4.8, 8.5, 10.4, 21.0, 25.3]

sess.run(init)

# Define the optimizer
opti = tf.train.GradientDescentOptimizer(0.001)
train = opti.minimize(loss)

# Iterate
for i in range(10000):
    sess.run(train, {x: x_train, y: y_train})

# Print the result
print('W:%s  b:%s  loss:%s' % (sess.run(W), sess.run(b), sess.run(loss, {x: x_train, y: y_train})))

The result is as follows (the output screenshot is omitted here):
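As a quick cross-check (my addition, not part of the original post), the W and b that gradient descent converges to can be compared against the closed-form ordinary-least-squares solution for the same five training points:

```python
x_train = [1, 2, 3, 6, 8]
y_train = [4.8, 8.5, 10.4, 21.0, 25.3]

n = len(x_train)
x_mean = sum(x_train) / n
y_mean = sum(y_train) / n

# Ordinary least squares: W = Sxy / Sxx, b = y_mean - W * x_mean
Sxy = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x_train, y_train))
Sxx = sum((xi - x_mean) ** 2 for xi in x_train)
W = Sxy / Sxx
b = y_mean - W * x_mean
print('W:%.4f  b:%.4f' % (W, b))  # → W:2.9824  b:2.0706
```

After 10000 iterations, the gradient-descent result printed by Example 1 should be very close to these values.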

2. Example 2

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Prepare train data
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

# Define the model
X = tf.placeholder("float")
Y = tf.placeholder("float")
w = tf.Variable(0.0, name="weight")
b = tf.Variable(0.0, name="bias")
loss = tf.square(Y - X*w - b)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# Create session to run
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(1, 11):
        for (x, y) in zip(train_X, train_Y):
            _, w_value, b_value = sess.run([train_op, w, b], feed_dict={X: x, Y: y})
        print("Epoch: {}, w: {}, b: {}".format(epoch, w_value, b_value))


# Draw the training data and the fitted line
plt.plot(train_X, train_Y, "+")
plt.plot(train_X, train_X * w_value + b_value)
plt.show()

The result is as follows (the output plot is omitted here):
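Note the difference from Example 1: there, every optimizer step is fed the whole training set (batch gradient descent), whereas Example 2 updates w and b once per sample, i.e. stochastic gradient descent. A dependency-free sketch of the same per-sample loop (the synthetic data and the fixed seed are my assumptions, standing in for the np.random.randn noise above):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Synthetic data like Example 2: y = 2x + 10 plus Gaussian noise
train_X = [-1 + 2 * i / 99 for i in range(100)]
train_Y = [2 * xi + 10 + random.gauss(0, 0.33) for xi in train_X]

w, b = 0.0, 0.0
lr = 0.01

for epoch in range(1, 11):
    # One weight update per sample, sweeping the data in order (no shuffling),
    # exactly as Example 2's inner zip loop does
    for xi, yi in zip(train_X, train_Y):
        err = w * xi + b - yi
        w -= lr * 2 * err * xi
        b -= lr * 2 * err
    print("Epoch: {}, w: {:.3f}, b: {:.3f}".format(epoch, w, b))
```

After 10 epochs, w and b land close to the true slope 2 and intercept 10, up to the noise in the data.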
