Posted: 2021-08-05 11:32:30
Question:
I am taking a deep learning course, and when I try to run the code I get this error: `AttributeError: 'dict' object has no attribute 'train'`. I have a feeling this is a TensorFlow version-handling issue, plus the fact that my knowledge of it is currently limited. I would like some help on how to clean it up and get the algorithm running smoothly. Here are my imports, followed by the actual algorithm:
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
#deprecated - from tensorflow.examples.tutorials.mnist import input_data
import tensorflow_datasets as tfds
mnist = tfds.load('mnist')
#deprecated - mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, with_info=True, as_supervised=True)
input_size = 784
output_size = 10
hidden_layer_size = 50
#Clear memory of variables from previous runs
#deprecated - tf.reset_default_graph()
tf.compat.v1.reset_default_graph()
#tf.disable_v2_behavior()
#Declare the placeholders
inputs = tf.placeholder(tf.float32, [None, input_size])
targets = tf.placeholder(tf.float32, [None, output_size])
#Declare the weights and biases
weights_1 = tf.get_variable("weights_1", [input_size,hidden_layer_size])
biases_1 = tf.get_variable("biases_1", [hidden_layer_size])
#Declare the output nodes for the 1st hidden layer using the desired activation function
outputs_1 = tf.nn.relu(tf.matmul(inputs,weights_1) + biases_1)
#Declare the weights and biases for the second hidden layer
weights_2 = tf.get_variable("weights_2", [hidden_layer_size, hidden_layer_size])
biases_2 = tf.get_variable("biases_2", [hidden_layer_size])
#Declare the output nodes for hidden layer_2
outputs_2 = tf.nn.relu(tf.matmul(outputs_1, weights_2) + biases_2)
#Declare the weights & biases for the output layer
weights_3 = tf.get_variable("weights_3", [hidden_layer_size,output_size])
biases_3 = tf.get_variable("biases_3", [output_size])
#Declare the final output nodes (n.b.: you can add a transformation to the final output with a desired optimizer)
outputs = tf.matmul(outputs_2, weights_3) + biases_3
#Next we need an activation function - we'll use the softmax activation with logits, the values before the activation occurs
loss = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets)
#we'll use the mean loss function as that give great performance boost to our algorithm
mean_loss = tf.reduce_mean(loss)
#Next, let's choose our optimization algorithm
optimize = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mean_loss)
# Let's measure the accuracy of our model - using tf.argmax, which returns the index of the largest value
out_equals_target = tf.equal(tf.argmax(outputs,1),tf.argmax(targets,1))
accuracy = tf.reduce_mean(tf.cast(out_equals_target, tf.float32))
#Now, let's set early stopping & Batching Mechanisms
sess = tf.InteractiveSession()
initializer = tf.global_variables_initializer()
sess.run(initializer)
batch_size = 100
# batches = #samples/batch_size
batches_number = mnist.train._num_examples // batch_size
#Create the optimization loop
max_epochs = 15                  # maximum number of epochs (basic early-stopping cap)
prev_validation_loss = 9999999.  # initialise the early-stopping comparison

for epoch_counter in range(max_epochs):
    curr_epoch_loss = 0.
    for batch_counter in range(batches_number):
        input_batch, target_batch = mnist.train.next_batch(batch_size)
        _, batch_loss = sess.run([optimize, mean_loss],
                                 feed_dict={inputs: input_batch, targets: target_batch})
        curr_epoch_loss += batch_loss
    #Average the loss over all batches - n.b.: this sits outside the batch for loop
    curr_epoch_loss /= batches_number
    #The validation loss
    input_batch, target_batch = mnist.validation.next_batch(mnist.validation._num_examples)
    validation_loss, validation_accuracy = sess.run([mean_loss, accuracy],
                                                    feed_dict={inputs: input_batch, targets: target_batch})
    #Finally print the results obtained - n.b.: inside the epoch for loop
    print('Epoch ' + str(epoch_counter + 1) +
          '. Training loss: ' + '{0:.3f}'.format(curr_epoch_loss) +
          '. Validation loss: ' + '{0:.3f}'.format(validation_loss) +
          '. Validation accuracy: ' + '{0:.2f}'.format(validation_accuracy * 100.) + '%')
    #Early stopping: break as soon as validation loss starts rising
    if validation_loss > prev_validation_loss:
        break
    prev_validation_loss = validation_loss

print('End of training.')
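When the script runs, it stops at the `batches_number` line. The failure can be reproduced without TensorFlow at all; here is a minimal sketch (the string values are just placeholders standing in for the real dataset objects):

```python
# Minimal reproduction, no TensorFlow needed: tfds.load returns a plain dict,
# and a dict exposes keys, not attributes. (The strings below are placeholders
# standing in for the real tf.data.Dataset objects.)
mnist = {'train': 'train-split-placeholder', 'test': 'test-split-placeholder'}

print(mnist['train'])   # key lookup works
try:
    mnist.train         # attribute access does not
except AttributeError as e:
    print(e)            # 'dict' object has no attribute 'train'
```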
Comments:
- Hi Alusine, welcome to the community. I think the error is raised at the `mnist.train._num_examples` part of the code, but could you share the whole error message, including line numbers etc., so we can help better?
- Hi Merve, thanks for the prompt reply; here is the error output: `AttributeError Traceback (most recent call last) <ipython-input-2-7aab5166f997> in <module> 70 71 # Calculate the number of batches per epoch for the training set. ---> 72 batches_number = mnist.train._num_examples // batch_size 73 74 # Basic early stopping. Set a maximum number of epochs. AttributeError: 'dict' object has no attribute 'train'`
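- That traceback points straight at the split lookup. A hedged sketch of the tfds-style replacement (the `tfds` calls are shown as comments because they download data; 60000 is MNIST's standard training-split size):

```python
# With tensorflow_datasets, splits are dict entries and example counts live on
# the DatasetInfo object - roughly (assuming with_info=True was passed):
#
#   mnist, info = tfds.load('mnist', with_info=True, as_supervised=True)
#   train_ds = mnist['train']                       # not mnist.train
#   num_train = info.splits['train'].num_examples   # not mnist.train._num_examples
#
# The batch count then follows exactly as before:
batch_size = 100
num_train = 60000        # MNIST's standard training-split size
batches_number = num_train // batch_size
print(batches_number)    # 600
```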
- Seems like a version issue to me. Are you following any tutorial/post for the above code?
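- Agreed, it looks like a version mismatch: the old `input_data` reader's `next_batch()` helper and its `one_hot=True` option are both gone in the tfds world. A minimal NumPy stand-in for the two (batch slicing plus one-hot encoding; the names and shapes are illustrative, not the course's API):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Turn integer labels into one-hot rows (stand-in for one_hot=True)."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

def iterate_batches(images, labels, batch_size):
    """Yield successive (input_batch, target_batch) pairs (stand-in for next_batch)."""
    for start in range(0, len(images), batch_size):
        end = start + batch_size
        yield images[start:end], one_hot(labels[start:end])

# Tiny fake data shaped like flattened MNIST, just to show the pattern:
images = np.random.rand(300, 784).astype(np.float32)
labels = np.random.randint(0, 10, size=300)
for input_batch, target_batch in iterate_batches(images, labels, 100):
    pass  # each input_batch is (100, 784), each target_batch is (100, 10)
```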
Tags: python tensorflow machine-learning deep-learning neural-network