【Question Title】: Multivariable Linear Regression using TensorFlow
【Posted】: 2017-08-15 22:18:25
【Question Description】:

I reused TensorFlow code for multivariable linear regression and am trying to minimize the cost, but the problem is that after a few iterations the cost, along with the values of W and b, becomes inf and soon after nan. Can someone tell me where the problem is? I have about 100,000 values; I have trimmed it down to 10,000 values for testing. The dataset is here.

Here is the code:

import numpy as np
import tensorflow as tf



def computeX():

    all_xs = np.loadtxt("test.csv", delimiter=',', skiprows=1, usecols=range(4,260)) # reads feature columns 4..259 (skips the timestamp, symbol, category and target columns)


    timestamps = np.loadtxt("test.csv", delimiter=',', skiprows=1, usecols=(0),dtype =str)
    symbols = np.loadtxt("test.csv", delimiter=',', skiprows=1, usecols=(1),dtype =float)
    categories = np.loadtxt("test.csv", delimiter=',', skiprows=1, usecols=(2),dtype =str)

    tempList = []
    BOW = {"M1": 1.0, "M5": 2.0, "M15": 3.0, "M30": 4.0, "H1": 5.0, "H4": 6.0, "D1": 7.0}

    #explode dates and make them features.. 2016/11/1 01:54 becomes [2016, 11, 1, 01, 54]
    for i, v in enumerate(timestamps):
        splitted = v.split()
        dateVal = splitted[0]
        timeVal = splitted[1]
        ar = dateVal.split("/")
        splittedTime = timeVal.split(":")

        ar = ar + splittedTime

        Features = np.asarray(ar)
        Features = Features.astype(float)

        # append symbols

        Features = np.append(Features,symbols[i])

        #append categories from BOW

        Features = np.append(Features, BOW[categories[i]] )
        row = np.append(Features,all_xs[i])
        row = row.tolist()
        tempList.append(row)

    all_xs = np.array(tempList)
    del tempList[:]
    return all_xs


if __name__ == "__main__":
    print ("Starting....")


    learn_rate = 0.5

    all_ys = np.loadtxt("test.csv", delimiter=',', skiprows=1, usecols=3) # reads only the fourth column (the target)

    all_xs = computeX()

    datapoint_size= int(all_xs.shape[0])

    print(datapoint_size)
    x = tf.placeholder(tf.float32, [None, 263], name="x")
    W = tf.Variable(tf.ones([263,1]), name="W")
    b = tf.Variable(tf.ones([1]), name="b")

    product = tf.matmul(x,W)
    y = product + b

    y_ = tf.placeholder(tf.float32, [datapoint_size])

    cost = tf.reduce_mean(tf.square(y_-y))/ (2*datapoint_size)

    train_step = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)

    sess = tf.Session()


    init = tf.global_variables_initializer()
    sess.run(init)

    batch_size = 10000
    steps =10
    for i in range(steps):
      print("Entering Loop")
      if datapoint_size == batch_size:
         batch_start_idx = 0
      elif datapoint_size < batch_size:
         raise ValueError("datapoint_size: %d, must be greater than batch_size: %d" % (datapoint_size, batch_size))
      else:
         batch_start_idx = (i * batch_size) % (datapoint_size - batch_size)
      batch_end_idx = batch_start_idx + batch_size
      batch_xs = all_xs[batch_start_idx:batch_end_idx]
      batch_ys = all_ys[batch_start_idx:batch_end_idx]
      xs = np.array(batch_xs)
      ys = np.array(batch_ys)

      feed = { x: xs, y_: ys }

      sess.run(train_step, feed_dict=feed)  
      print("W: %s" % sess.run(W))
      print("b: %f" % sess.run(b))
      print("cost: %f" % sess.run(cost, feed_dict=feed))

【Question Comments】:

    Tags: python machine-learning tensorflow linear-regression


    【Solution 1】:

    Look at your data:

    id8         id9         id10    id11    id12
    1451865600  1451865600  -19.8   87.1    0.5701
    1451865600  1451865600  -1.6    3.6     0.57192
    1451865600  1451865600  -5.3    23.9    0.57155
    

    You also initialize the weights to 1. If you multiply all inputs by 1 and sum them up, all the "heavy" columns (id8, id9, etc. — the ones holding large numbers) drown out the data from the smaller columns. You also have columns filled with zeros:

    id236   id237   id238   id239   id240
    0       0       0       0       0
    0       0       0       0       0
    0       0       0       0       0 
    

    These things cannot play together. The large values lead to very high predictions, which lead to an exploding loss and overflow. Even lowering the learning rate by a factor of a billion hardly helps.
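    A minimal NumPy sketch makes the blow-up visible. The feature value and the learning rate below are taken from the question (epoch-second timestamps around 1.45e9, lr = 0.5, weights initialized to 1); the three-row toy data is an assumption modeled on the table above:

    ```python
    import numpy as np

    np.seterr(over="ignore", invalid="ignore")  # silence the expected overflow warnings

    # One raw feature at timestamp scale, tiny targets — as in the question's data.
    x = np.array([1.451865600e9, 1.451865600e9, 1.451865600e9])
    y = np.array([0.5701, 0.57192, 0.57155])

    w, b, lr = 1.0, 1.0, 0.5           # same init and learning rate as the question
    for _ in range(30):
        pred = w * x + b               # first prediction is already ~1.45e9
        err = pred - y
        w -= lr * 2 * np.mean(err * x) # gradient ~4e18: the first update overshoots wildly
        b -= lr * 2 * np.mean(err)

    print(w)                           # non-finite: w alternates sign and explodes each step
    ```

    Each step multiplies the weight's magnitude by roughly x², so within a couple of dozen iterations it overflows float64 and the inf turns into nan — exactly the progression reported in the question.
    
    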

    So the suggestions are:

    • Inspect your data and drop everything meaningless (the columns filled with zeros)
    • Normalize your input data
    • Check the magnitude of the loss function at that point, and then try tuning the learning rate.
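    The first two suggestions can be sketched with plain NumPy — this is one possible preprocessing step, not the answerer's exact code; the `preprocess` helper and the tiny demo matrix are assumptions:

    ```python
    import numpy as np

    def preprocess(all_xs):
        """Drop all-zero columns, then z-score normalize the rest."""
        # 1. Remove columns that are entirely zero (they carry no information).
        nonzero = ~np.all(all_xs == 0, axis=0)
        xs = all_xs[:, nonzero]

        # 2. Standardize each remaining column so large-magnitude features
        #    (e.g. epoch timestamps ~1.45e9) no longer dominate the others.
        mean = xs.mean(axis=0)
        std = xs.std(axis=0)
        std[std == 0] = 1.0            # guard: constant columns map to zero
        return (xs - mean) / std

    # Tiny demo: one huge constant column, one small column, one all-zero column.
    demo = np.array([[1.451865600e9, -19.8, 0.0],
                     [1.451865600e9,  -1.6, 0.0],
                     [1.451865600e9,  -5.3, 0.0]])
    scaled = preprocess(demo)
    print(scaled.shape)   # (3, 2): the all-zero column is gone
    ```

    Note that dropping columns changes the feature count, so the hard-coded 263 in the placeholder and weight shapes would need to follow `all_xs.shape[1]` instead.
    
    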

    【Discussion】:
