【Question Title】: Tensorflow: AttributeError: 'dict' object has no attribute 'train'
【Posted】: 2021-08-05 11:32:30
【Question Description】:

I'm taking a deep learning course, and when I try to run the code I get this error: `AttributeError: 'dict' object has no attribute 'train'`. I have a feeling this is a TensorFlow version-handling problem, compounded by my currently limited knowledge of the library. I'd like some help cleaning it up so the algorithm runs smoothly. Here are my imports, followed by the actual algorithm:

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior() 
#deprecated - from tensorflow.examples.tutorials.mnist import input_data

import tensorflow_datasets as tfds
mnist = tfds.load('mnist')
#deprecated - mnist = input_data.read_data_sets("MNIST_data/", one_hot = True, with_info=True, as_supervised=True)

input_size = 784
output_size = 10
hidden_layer_size = 50

#Clear memory of variables from previous runs
#deprecated - tf.reset_default_graph()
tf.compat.v1.reset_default_graph()
#tf.disable_v2_behavior() 

#Declare the placeholders
inputs = tf.placeholder(tf.float32, [None, input_size])
targets = tf.placeholder(tf.float32, [None, output_size])

#Declare the weights and biases
weights_1 = tf.get_variable("weights_1", [input_size,hidden_layer_size])
biases_1 = tf.get_variable("biases_1", [hidden_layer_size])
#Declare the output nodes for the 1st hidden layer using the desired activation function
outputs_1 = tf.nn.relu(tf.matmul(inputs,weights_1) + biases_1)

#Declare the weights and biases for the second hidden layer
weights_2 = tf.get_variable("weights_2", [hidden_layer_size, hidden_layer_size])
biases_2 = tf.get_variable("biases_2", [hidden_layer_size])
#Declare the output nodes for hidden layer_2
outputs_2 = tf.nn.relu(tf.matmul(outputs_1, weights_2) + biases_2)

#Declare the weights & biases for the output layer
weights_3 = tf.get_variable("weights_3", [hidden_layer_size,output_size])
biases_3 = tf.get_variable("biases_3", [output_size])
#Declare the final output nodes (nb: you can add a transformation to the final output with a desired optimizer)
outputs = tf.matmul(outputs_2, weights_3) + biases_3

#Next we need an activation function - we'll use the softmax activation with logits - the values before the activation occurs
loss = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets)
#we'll use the mean loss function as that gives a great performance boost to our algorithm
mean_loss = tf.reduce_mean(loss)

#Next, let's choose our optimization algorithm
optimize = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mean_loss)

# Let's measure the accuracy of our model - using tf.argmax - which returns the index of the largest value
out_equals_target = tf.equal(tf.argmax(outputs,1),tf.argmax(targets,1))

accuracy = tf.reduce_mean(tf.cast(out_equals_target, tf.float32))

#Now, let's set early stopping & Batching Mechanisms
sess = tf.InteractiveSession()
initializer = tf.global_variables_initializer()
sess.run(initializer)

batch_size = 100
# batches = #samples/batch_size
batches_number = mnist.train._num_examples // batch_size

#Create the optimization loop
#Set a maximum number of epochs and initialise the early-stopping baseline
max_epochs = 15
prev_validation_loss = 9999999.

for epoch_counter in range(max_epochs):
    curr_epoch_loss = 0.
    for batch_counter in range(batches_number):
        input_batch, target_batch = mnist.train.next_batch(batch_size)
    
        _, batch_loss = sess.run([optimize, mean_loss],
                                feed_dict = {inputs: input_batch, targets: target_batch})
        curr_epoch_loss += batch_loss
    #Let's set the avg. loss over all batches - n.b: it's outside the batches for loop
    curr_epoch_loss /= batches_number

    #The validation loss
    input_batch, target_batch = mnist.validation.next_batch(mnist.validation._num_examples)
    validation_loss, validation_accuracy = sess.run([mean_loss, accuracy],
                                                    feed_dict={inputs: input_batch, targets: target_batch})

    #Finally print the results you've obtained - n.b: inside the for loop
    print('Epoch ' + str(epoch_counter + 1) +
          '. Training loss: ' + '{0:.3f}'.format(curr_epoch_loss) +
          '. Validation loss: ' + '{0:.3f}'.format(validation_loss) +
          '. Validation accuracy: ' + '{0:.2f}'.format(validation_accuracy * 100.) + '%')

    #Add the early stopping mechanism
    if validation_loss > prev_validation_loss:
        break
    prev_validation_loss = validation_loss

print('End of training.')

【Question Comments】:

  • Hi Alusine, welcome to the community. I think the error is raised in the `mnist.train._num_examples` part of your code, but could you share the full error message, including line numbers, so we can help you better?
  • Hi Merve, thanks for the prompt reply; here is the error: `AttributeError Traceback (most recent call last) <ipython-input-2-7aab5166f997> in <module> 70 71 # Calculate the number of batches per epoch for the training set. ---> 72 batches_number = mnist.train._num_examples // batch_size 73 74 # Basic early stopping. Set a maximum number of epochs. AttributeError: 'dict' object has no attribute 'train'`
  • Looks like a version issue to me. Are you following a tutorial/post for the code above?

Tags: python tensorflow machine-learning deep-learning neural-network


【Solution 1】:

You can avoid the error by loading the dataset in the following, slightly different way (code ref).

ds, info = tfds.load('mnist', with_info=True)
# instead of...
# batches_number = mnist.train._num_examples // batch_size
batches_number = info.splits['train'].num_examples // batch_size

However, my best guess is that this time your program will crash again at `mnist.train.next_batch(batch_size)`, mainly because you can no longer use the call below.

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot = True, with_info=True, as_supervised=True)

Dataset attributes and methods such as `.train`, `.validation`, and `.next_batch` belong to the TF-1 API, and the TF-2-to-TF-1 compatibility bridge does not appear to support them. For more information, see here.
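In TF-2 the replacement for `next_batch` is a `tf.data` pipeline: you batch the dataset once and simply iterate it. Here is a minimal sketch using synthetic arrays as a stand-in for the MNIST training split (the sample count, shapes, and batch size below are illustrative assumptions, not values from the original course):

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the MNIST training split: 600 samples,
# 784 features each, labels in [0, 10).
images = np.random.rand(600, 784).astype(np.float32)
labels = np.random.randint(0, 10, size=600)

batch_size = 100
train_ds = (tf.data.Dataset.from_tensor_slices((images, labels))
            .shuffle(buffer_size=600)
            .batch(batch_size))

# Iterating the dataset replaces mnist.train.next_batch(batch_size):
# each loop step yields one (images, labels) batch.
batches_number = 0
for batch_images, batch_labels in train_ds:
    batches_number += 1  # run one training step per batch here

print(batches_number)  # 600 // 100 = 6 batches per epoch
```

With the real data you would build `train_ds` from `tfds.load('mnist')['train']` instead of `from_tensor_slices`; the iteration pattern stays the same.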

You can either use TF-1.x directly or update your code to the new version of TF. See here for dataset usage in TF-2, or here for the Keras version.
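For reference, the question's graph (784 → 50 → 50 → 10, ReLU activations, Adam with learning rate 0.001, softmax cross-entropy on logits) maps to a few lines of Keras in TF-2. A sketch, assuming a current TF install:

```python
import tensorflow as tf

# Same architecture as the TF-1 graph in the question: 784 -> 50 -> 50 -> 10.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(10),  # raw logits; softmax is folded into the loss below
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    # from_logits=True mirrors tf.nn.softmax_cross_entropy_with_logits
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

print(model.count_params())  # 39250 + 2550 + 510 = 42310
```

`model.fit(...)` with a `tf.keras.callbacks.EarlyStopping` callback then replaces the manual epoch loop, validation pass, and early-stopping bookkeeping in the question.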

Overall, I strongly recommend starting your journey on the latest version of TF, especially if you are still at an early stage.

【Discussion】:
