【Question Title】: InvalidArgumentError: logits and labels must be same size: logits_size=[128,1] labels_size=[1,128]
【Posted】: 2016-11-20 01:05:31
【Question Description】:

I am trying to use a recurrent neural network in TensorFlow to predict stock market data. The data file has 5 features and more than 5000 rows. The label is the adjusted close price.

After editing sentdex's rnn code for my input file:

import tensorflow as tf
import numpy as np
from preprocess import create_feature_sets_and_labels
from tensorflow.python.ops import rnn, rnn_cell

train_x,train_y,test_x,test_y = create_feature_sets_and_labels()

hm_epochs = 10
n_classes = 1
batch_size = 128
chunk_size = 5
n_chunks = 1
rnn_size = 128

x = tf.placeholder('float', [None, n_chunks, chunk_size])
y = tf.placeholder('float')


def recurrent_neural_network(x):

    layer = {'weights':tf.Variable(tf.random_normal([rnn_size, n_classes])),
             'biases':tf.Variable(tf.random_normal([n_classes]))}

    x = tf.transpose(x, [1,0,2])
    x = tf.reshape(x, [-1, chunk_size])
    x = tf.split(0, n_chunks, x)


    lstm_cell = rnn_cell.BasicLSTMCell(rnn_size)
    outputs, states = rnn.rnn(lstm_cell, x, dtype = tf.float32)

    output = tf.add(tf.matmul(outputs[-1], layer['weights']), layer['biases'])
    return output



def train_neural_network(x):
    prediction = recurrent_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i+batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                batch_x = batch_x.reshape((batch_size, n_chunks, chunk_size))


                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                              y: batch_y})
                epoch_loss += c
                i += batch_size
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))

        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))

        print('Accuracy:', accuracy.eval({x: test_x, y: test_y}))

train_neural_network(x)

The traceback shows:

Traceback (most recent call last):
  File "rnn.py", line 70, in <module>
    train_neural_network(x)
  File "rnn.py", line 60, in train_neural_network
    y: batch_y})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 915, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 985, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: logits and labels must be same size: logits_size=[128,1] labels_size=[1,128]
     [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Reshape_2)]]

Caused by op u'SoftmaxCrossEntropyWithLogits', defined at:
  File "rnn.py", line 70, in <module>
    train_neural_network(x)
  File "rnn.py", line 42, in train_neural_network
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 676, in softmax_cross_entropy_with_logits
    precise_logits, labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 1744, in _softmax_cross_entropy_with_logits
    features=features, labels=labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[128,1] labels_size=[1,128]
     [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Reshape_2)]]

I don't know what the logits size or the label dimensions should be, so I can't resolve this error. Please help!!

【Question Comments】:

  • Did you ever solve this? I'm running into the same problem.

Tags: python numpy tensorflow deep-learning


【Solution 1】:

The error is in this line:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))

There are two problems here:

  1. The shapes are wrong, which can be fixed by reshaping y to [128]:

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        logits=prediction, labels=tf.reshape(y, [batch_size])))
    
  2. The code uses a softmax cross-entropy loss with a single output class. The softmax of a single value is 1, so every prediction will be 1.0 and your model will not learn anything. Consider changing your model to predict between 2 or more classes, or compute a regression instead.
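Both issues can be seen concretely in a small NumPy sketch (the shapes below are taken from the error message; the `softmax` helper is illustrative, not part of the original code):

```python
import numpy as np

# Shapes from the error message: logits [128, 1], labels [1, 128]
batch_size = 128
logits = np.random.randn(batch_size, 1).astype(np.float32)
labels = np.random.randn(1, batch_size).astype(np.float32)

# Issue 1: the labels arrive as a [1, 128] row vector; reshaping puts
# the batch dimension first so each label lines up with its logit row.
labels_fixed = labels.reshape(batch_size)
print(logits.shape, labels_fixed.shape)  # (128, 1) (128,)

# Issue 2: a softmax over a single class always normalizes to exactly 1,
# so the "probabilities" carry no information and gradients vanish.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

probs = softmax(logits, axis=1)  # softmax over the single-class axis
print(np.allclose(probs, 1.0))   # True: every prediction is 1.0
```

This is why the second point matters even after the shapes are fixed: with `n_classes = 1` the loss is constant, so either add a second class or switch to a regression loss such as mean squared error.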

【Discussion】:

【Solution 2】:

Change the last line of `def recurrent_neural_network(x):` to:

    output = tf.transpose(tf.add(tf.matmul(outputs[-1], layer['weights']), layer['biases']))
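
For context, transposing only swaps the logits' shape from [128, 1] to [1, 128] so that it matches the label tensor numerically. A quick NumPy shape check (illustrative only, using zero tensors with the shapes from the error message):

```python
import numpy as np

logits = np.zeros((128, 1), dtype=np.float32)  # shape from the error message
labels = np.zeros((1, 128), dtype=np.float32)

# Transposing the logits makes the two tensors the same size...
print(logits.T.shape == labels.shape)  # True: both are (1, 128)
```

Note, however, that this turns the batch of 128 one-class examples into what the loss op sees as a single example with 128 classes, so while the size check passes, the loss no longer matches the model's intent; Solution 1's shape fix plus a multi-class or regression objective is the more principled route.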
    

【Discussion】:
