【Question Title】: Tensorflow: Check failed: NDIMS == new_sizes.size() (2 vs. 1)
【Posted】: 2018-10-23 11:45:28
【Problem Description】:

I am completely new to tensorflow. I am working on a project and I get this error message: 2018-05-13 20:50:57.669722: F T:\src\github\tensorflow\tensorflow/core/framework/tensor.h:630] Check failed: NDIMS == new_sizes.size() (2 vs. 1). PyCharm says: Process finished with exit code -1073740791 (0xC0000409)

I have no idea what that means. I am running Windows and Python 3.6.

Here is my code:

import tensorflow as tf
import gym
import numpy as np

env = gym.make("MountainCar-v0").env

n_inputs = 2
n_hidden = 3
n_output = 3

initializer = tf.contrib.layers.variance_scaling_initializer()

learning_rate = 0.1

X = tf.placeholder(tf.float32, shape=[None,n_inputs])

hidden = tf.layers.dense(X,n_hidden,activation=tf.nn.elu,kernel_initializer=initializer)
logits = tf.layers.dense(hidden,n_output,kernel_initializer=initializer)
outputs = tf.nn.softmax(logits)

index,action = tf.nn.top_k(logits,1)
y = tf.to_float(action)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y,logits=logits)
optimizer = tf.train.AdamOptimizer(learning_rate)

grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
    gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
    gradient_placeholders.append(gradient_placeholder)
    grads_and_vars_feed.append((gradient_placeholder,variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)

# Initialize variables and saver
init = tf.global_variables_initializer()
saver = tf.train.Saver()

# Discount the rewards over the individual steps
def discount_rewards(rewards, discount_rate):
    discounted_rewards = np.empty(len(rewards))
    cumulative_rewards = 0
    for step in reversed(range(len(rewards))):
        cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
        discounted_rewards[step] = cumulative_rewards
    return discounted_rewards


def discount_and_normalize_rewards(all_rewards, discount_rate):
    all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards]
    # Concatenate all rewards into a single array
    flat_rewards = np.concatenate(all_discounted_rewards)
    reward_mean = flat_rewards.mean()
    reward_std = flat_rewards.std()
    return [(discounted_rewards - reward_mean) / reward_std for discounted_rewards in all_discounted_rewards]

n_iterations = 25
n_max_steps = 10000
n_games_per_update = 10
save_iteration = 10
discount_rate = 0.95

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        all_rewards = []
        my_rewards = []
        all_gradients = []

        for game in range(n_games_per_update):
            current_rewards = []
            current_gradients = []
            #env.render()
            obs = env.reset()
            for step in range(n_max_steps):
                action_val,gradient_val = sess.run([action,gradients], feed_dict={X: obs.reshape(1, n_inputs)})
                obs, reward, done, info = env.step(action_val)
                current_rewards.append(reward)
                current_gradients.append(gradient_val)
                if done:
                    break
            my_rewards.append(sum(current_rewards))
            print(iteration,": ", sum(current_rewards))
            all_rewards.append(current_rewards)
            all_gradients.append(current_gradients)
        all_rewards = discount_and_normalize_rewards(all_rewards,discount_rate)
        feed_dict = {}
        for var_index, grad_placeholder in enumerate(gradient_placeholders):
            mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index] for game_index,rewards in enumerate(all_rewards) for step,reward in enumerate(rewards)],axis=0)
            feed_dict[grad_placeholder] = mean_gradients
        sess.run(training_op, feed_dict=feed_dict)
        if iteration % save_iteration == 0:
            saver.save(sess, "./my_policy_net_pg.ckpt")

    print("Average: ", sum(my_rewards) / len(my_rewards))
    print("Maximum: ", max(my_rewards))

【Question Comments】:

    Tags: python python-3.x tensorflow


    【Solution 1】:

    These lines seem to contain multiple errors:

    index, action = tf.nn.top_k(logits, 1)
    y = tf.to_float(action)
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
    

    First of all, tf.nn.top_k() returns the values first and the indices second. So action will hold the indices, not index. y then becomes that index (as a float) and is passed as the labels to tf.nn.softmax_cross_entropy_with_logits_v2().
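
    To illustrate the return order, here is a minimal, self-contained sketch against the TF 1.x API the question uses:

    # Minimal sketch: tf.nn.top_k returns (values, indices), in that order.
    values, indices = tf.nn.top_k(tf.constant([[0.1, 2.0, 0.3]]), k=1)
    with tf.Session() as sess:
        vals, idx = sess.run([values, indices])
        print(vals)  # [[2.]] -- the largest logit value
        print(idx)   # [[1]]  -- its position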

    This has two major problems. First, you should pass the labels as one-hot vectors, not as indices. I suspect this is where your error comes from: you are passing a 1-D tensor where a 2-D one is expected, which presumably is what the "(2 vs. 1)" in the check failure refers to.
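
    As a hedged sketch of fixing just the shape (reusing action and n_output from the question's code; whether an action-derived target makes sense at all is the second problem below), the indices could be converted to one-hot vectors:

    # Sketch only: convert integer indices of shape [batch, 1] into one-hot
    # labels of shape [batch, n_output], the 2-D shape that
    # softmax_cross_entropy_with_logits_v2 expects for `labels`.
    y = tf.one_hot(tf.squeeze(action, axis=1), depth=n_output)
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)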

    The second problem is theoretical (unrelated to your error, but I want to point it out): since logits is your prediction and you derive y from it, you are basically comparing your logits against themselves. No learning will happen that way. You need to supply actual labels and base the learning on those.
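
    For a policy-gradient setup like the one in the question, a common pattern (this is an assumption about the intent, not something confirmed by the asker) is to sample the action from the softmax distribution instead of taking the argmax, use the sampled action as the one-hot target, and let the reward-weighted gradient averaging in the training loop supply the actual feedback:

    # Hedged sketch of a standard REINFORCE-style head (TF 1.x API), not
    # something stated in the original answer. The sampled action serves as
    # the pseudo-label; the actual learning signal comes from scaling the
    # gradients by the discounted rewards in the training loop.
    action = tf.multinomial(logits, num_samples=1)              # shape [batch, 1]
    y = tf.one_hot(tf.squeeze(action, axis=1), depth=n_output)  # shape [batch, n_output]
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=tf.stop_gradient(y), logits=logits)

    Note that action then has shape [1, 1] at run time, so the environment step would need the scalar, e.g. env.step(action_val[0][0]).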

    Just a side note: it is usually helpful to post the whole error traceback, not only the last line, because right now I am only guessing where the error is and cannot be sure.

    【Discussion】:

    • How would you correct the logits mistake that keeps the model from learning? Just delete them?
    • @MasonChoi In the case above, he is also using the prediction as the label. You need to use the original labels from the training set as the labels (the ground truth) so that the network can learn (see the sketch below). Just deleting them would remove any feedback from the system, so that would not fix it.
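
    For the supervised case described in this comment, a minimal sketch (the placeholder name y_true is illustrative, not from the original code) would feed ground-truth class ids from the training data instead of anything derived from logits:

    # Sketch only: labels come from the dataset, not from the model's own output.
    y_true = tf.placeholder(tf.int32, shape=[None])  # ground-truth class indices
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true, logits=logits)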