【Question Title】: ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss
【Posted】: 2018-08-11 13:05:26
【Problem Description】:

I tried to replace the training and evaluation data with my own local images, but when I run the training code I get this error:

ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [100,3].

I don't know how to fix it. I can't see anything obviously wrong in the model-definition code. The code is adapted from the TensorFlow CNN tutorial, and the images are JPEG files.

Here is the full error output:

INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_log_step_count_steps': 100, '_is_chief': True, '_model_dir': '/tmp/mnist_convnet_model', '_tf_random_seed': None, '_session_config': None, '_save_checkpoints_secs': 600, '_num_worker_replicas': 1, '_save_checkpoints_steps': None, '_service': None, '_keep_checkpoint_max': 5, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x00000288088D50F0>, '_keep_checkpoint_every_n_hours': 10000, '_task_type': 'worker', '_master': '', '_save_summary_steps': 100, '_num_ps_replicas': 0, '_task_id': 0}
Traceback (most recent call last):
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 686, in _call_cpp_shape_fn_impl
    input_tensors_as_shapes, status)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [100,3].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\tf_exe_5_make_image_lables\cnn_mnist.py", line 214, in <module>
    tf.app.run()
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run
    _sys.exit(main(argv))
  File "D:\tf_exe_5_make_image_lables\cnn_mnist.py", line 203, in main
    hooks=[logging_hook])
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\estimator\estimator.py", line 314, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\estimator\estimator.py", line 743, in _train_model
    features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\estimator\estimator.py", line 725, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "D:\tf_exe_5_make_image_lables\cnn_mnist.py", line 67, in cnn_model_fn
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py", line 790, in sparse_softmax_cross_entropy
    labels, logits, weights, expected_rank_diff=1)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py", line 720, in _remove_squeezable_dimensions
    labels, predictions, expected_rank_diff=expected_rank_diff)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\confusion_matrix.py", line 76, in remove_squeezable_dimensions
    labels = array_ops.squeeze(labels, [-1])
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2490, in squeeze
    return gen_array_ops._squeeze(input, axis, name)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 7049, in _squeeze
    "Squeeze", input=input, squeeze_dims=axis, name=name)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3162, in create_op
    compute_device=compute_device)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3208, in _create_op_helper
    set_shapes_for_outputs(op)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2427, in set_shapes_for_outputs
    return _set_shapes_for_outputs(op)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2400, in _set_shapes_for_outputs
    shapes = shape_func(op)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2330, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "C:\Users\ASUS\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [100,3].
>>> 

Here is my code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

#imports
import numpy as np
import tensorflow as tf
import glob
import cv2
import random
import matplotlib.pylab as plt
import pandas as pd
import sys as system
from mlxtend.preprocessing import one_hot
from sklearn import preprocessing
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder


tf.logging.set_verbosity(tf.logging.INFO)

def cnn_model_fn(features, labels, mode):
    """Model function for CNN"""
    #Input Layer
    input_layer = tf.reshape(features["x"], [-1,320,320,3])
    #Convolutional Layer #1
    conv1 = tf.layers.conv2d(
        inputs = input_layer,
        filters = 32,
        kernel_size=[5,5],
        padding = "same",
        activation=tf.nn.relu)

    #Pooling Layer #1
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2,2], strides=2)

    #Convolutional Layer #2 and Pooling Layer #2
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5,5],
        padding="same",
        activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2,2], strides=2)

    #Dense Layer
    pool2_flat = tf.reshape(pool2, [-1,80*80*64])
    dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
    dropout = tf.layers.dropout(
        inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)

    #Logits Layer
    logits = tf.layers.dense(inputs=dropout, units=3)

    predictions = {
        #Generate predictions (for PREDICT and EVAL mode)
        "classes": tf.argmax(input=logits, axis=1),
        #Add 'softmax_tensor' to the graph. It is used for PREDICT and by the
        #'logging_hook'
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
    }

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    # Calculate Loss (for both TRAIN and EVAL modes)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)


    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(
            loss=loss,
            global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    # Add evaluation metrics (for EVAL mode)
    eval_metric_ops = {
        "accuracy": tf.metrics.accuracy(
            labels=labels, predictions=predictions["classes"])}
    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def main(unused_argv):
    '''
    #Load training and eval data
    mnist = tf.contrib.learn.datasets.load_dataset("mnist")
    train_data = mnist.train.images
    train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
    eval_data = mnist.test.images
    eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
    '''
    #Load cats, dogs and cars image in local folder
    X_data = []
    files = glob.glob("data/cats/*.jpg")
    for myFile in files:
        image = cv2.imread (myFile)
        imgR = cv2.resize(image, (320, 320))
        imgNR = imgR/255
        X_data.append(imgNR)

    files = glob.glob("data/dogs/*.jpg")
    for myFile in files:
        image = cv2.imread (myFile)
        imgR = cv2.resize(image, (320, 320))
        imgNR = imgR/255
        X_data.append(imgNR)

    files = glob.glob ("data/cars/*.jpg")
    for myFile in files:
        image = cv2.imread (myFile)
        imgR = cv2.resize(image, (320, 320))
        imgNR = imgR/255
        X_data.append (imgNR)
    #print('X_data count:', len(X_data))

    X_data_Val = []
    files = glob.glob ("data/Validation/cats/*.jpg")
    for myFile in files:
        image = cv2.imread (myFile)
        imgR = cv2.resize(image, (320, 320))
        imgNR = imgR/255
        X_data_Val.append (imgNR)

    files = glob.glob ("data/Validation/dogs/*.jpg")
    for myFile in files:
        image = cv2.imread (myFile)
        imgR = cv2.resize(image, (320, 320))
        imgNR = imgR/255
        X_data_Val.append (imgNR)

    files = glob.glob ("data/Validation/cars/*.jpg")
    for myFile in files:
        image = cv2.imread (myFile)
        imgR = cv2.resize(image, (320, 320))
        imgNR = imgR/255
        X_data_Val.append (imgNR)


    #Feed one-hot labels
    Y_Label = np.zeros(shape=(300,1))
    for el in range(0,100):
        Y_Label[el]=[0]
    for el in range(101,200):
        Y_Label[el]=[1]
    for el in range(201,300):
        Y_Label[el]=[2]
    onehot_encoder = OneHotEncoder(sparse=False)
    #Y_Label_RS = Y_Label.reshape(len(Y_Label), 1)
    Y_Label_Encode = onehot_encoder.fit_transform(Y_Label)

    #print('Y_Label_Encode shape:', Y_Label_Encode.shape)


    Y_Label_Val = np.zeros(shape=(30,1))
    for el in range(0, 10):
        Y_Label_Val[el]=[0]
    for el in range(11, 20):
        Y_Label_Val[el]=[1]
    for el in range(21, 30):
        Y_Label_Val[el]=[2]

    #Y_Label_Val_RS = Y_Label_Val.reshape(len(Y_Label_Val), 1)
    Y_Label_Val_Encode = onehot_encoder.fit_transform(Y_Label_Val)

    #print('Y_Label_Val_Encode shape:', Y_Label_Val_Encode.shape)

    train_data = np.array(X_data)
    train_data = train_data.astype(np.float32)
    train_labels = np.asarray(Y_Label_Encode, dtype=np.int32)

    eval_data = np.array(X_data_Val)
    eval_data = eval_data.astype(np.float32)
    eval_labels = np.asarray(Y_Label_Val_Encode, dtype=np.int32)

    print(train_data.shape)
    print(train_labels.shape)

    #Create the Estimator
    mnist_classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")
    # Set up logging for predictions
    tensor_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
        tensors=tensor_to_log, every_n_iter=50)


    # Train the model
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data},
        y=train_labels,
        batch_size=100,
        num_epochs=None,
        shuffle=True)



    mnist_classifier.train(
        input_fn=train_input_fn,
        #original steps are 20000
        steps=1, 
        hooks=[logging_hook])
    # Evaluate the model and print results
    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": eval_data},
        y=eval_labels,
        num_epochs=1,
        shuffle=False)
    eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()

【Question Comments】:

Tags: python tensorflow


【Solution 1】:

The error here comes from tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits).

The TensorFlow documentation clearly states that the labels vector "must provide a single specific index for the true class for each row of logits". So your labels vector must contain only class indices, such as 0, 1, 2, and not their respective one-hot encodings such as [1,0,0], [0,1,0], [0,0,1].

Reproducing the error to explain further:

import numpy as np
import tensorflow as tf

# Create random-array and assign as logits tensor
np.random.seed(12345)
logits = tf.convert_to_tensor(np.random.sample((4,4)))
print(logits.get_shape())  # (4, 4)

# Create random-labels (Assuming only 4 classes)
labels = tf.convert_to_tensor(np.array([2, 2, 0, 1]))

loss_1 = tf.losses.sparse_softmax_cross_entropy(labels, logits)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

print('Loss: {}'.format(sess.run(loss_1)))  # 1.44836854

# Now giving one-hot-encodings in place of class-indices for labels
wrong_labels = tf.convert_to_tensor(np.array([[0,0,1,0], [0,0,1,0], [1,0,0,0],[0,1,0,0]]))
loss_2 = tf.losses.sparse_softmax_cross_entropy(wrong_labels, logits)

# This should give you a similar error as soon as you define it

So please try providing class indices in your Y_Label vectors instead of one-hot encodings. Hope this clears up your doubt.
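
For the code in the question, a minimal sketch of that change (assuming 100 training images and 10 validation images per class, as in the original label loops) would be to build the label arrays directly as flat class indices and drop the OneHotEncoder step entirely:

import numpy as np

# Hypothetical replacement for the label-building block in main():
# class 0 = cats, class 1 = dogs, class 2 = cars
train_labels = np.concatenate([np.full(100, 0),
                               np.full(100, 1),
                               np.full(100, 2)]).astype(np.int32)   # shape (300,)

eval_labels = np.concatenate([np.full(10, 0),
                              np.full(10, 1),
                              np.full(10, 2)]).astype(np.int32)     # shape (30,)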

【Comments】:

  • Oh my god, thank you! Honestly, I don't think the documentation is clear enough, but your answer makes it very clear.
  • For anyone looking for an easy way to get the class indices back from the one-hot/categorical version: use numpy.argmax() (see the short sketch after these comments).
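
A one-line sketch of that numpy.argmax() conversion (the array values here are just illustrative):

import numpy as np

one_hot = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
class_indices = np.argmax(one_hot, axis=1)   # array([0, 1, 2])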

【Solution 2】:

I have solved this error. My labels were one-hot encoded, so their shape was [,10] rather than [,1]. So I used tf.argmax() on them.
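
A minimal sketch of that conversion inside the model function (assuming labels is a one-hot tensor of shape [batch_size, num_classes]):

import tensorflow as tf

# Convert one-hot rows to class indices of shape [batch_size]
# before feeding them to the sparse loss.
class_indices = tf.argmax(labels, axis=1)
loss = tf.losses.sparse_softmax_cross_entropy(labels=class_indices, logits=logits)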

【Comments】:

【Solution 3】:

If you are using Keras's ImageDataGenerator, you can add class_mode="sparse" to get the labels in the right format:

import keras  # or: from tensorflow import keras

train_datagen = keras.preprocessing.image.ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)
train_generator = train_datagen.flow_from_directory(
        'data/train',
        target_size=(150, 150),
        batch_size=32,
        class_mode="sparse")

Alternatively, you can also use softmax_cross_entropy, which expects one-hot encoded labels.
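
A minimal sketch of that alternative, keeping one-hot labels of shape [batch_size, num_classes] as in the question:

import tensorflow as tf

# onehot_labels and logits both have shape [batch_size, num_classes]
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)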

【Comments】:

【Solution 4】:

I wrote some code that converts [1,0,0], [0,1,0], [0,0,1] into 0, 1, 2.

import numpy as np
import tensorflow as tf

def change_to_right(wrong_labels):
    # Map each one-hot row to the index of its single 1 entry.
    right_labels = []
    for x in wrong_labels:
        for i in range(0, len(wrong_labels[0])):
            if x[i] == 1:
                right_labels.append(i)
    return right_labels

wrong_labels = np.array([[0,0,1,0], [0,0,1,0], [1,0,0,0], [0,1,0,0]])
right_labels = tf.convert_to_tensor(np.array(change_to_right(wrong_labels)))

【Comments】:

【Solution 5】:

Changing

loss='sparse_categorical_crossentropy'

to

loss='categorical_crossentropy'

worked for me.
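
In a Keras model.compile() call, a small sketch of the two options (model here is just a placeholder for an already-built Keras model):

# Labels are one-hot vectors, e.g. [1, 0, 0]
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Labels are integer class indices, e.g. 0, 1, 2
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])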

【Comments】:

【Solution 6】:

Put simply: if you have applied a LabelBinarizer (i.e. one-hot encoding) to your labels, your loss function should be categorical_crossentropy. If you have not one-hot encoded your labels, you should use 'sparse_categorical_crossentropy'.
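
A small sketch of the two label formats with scikit-learn (the example array is just illustrative):

import numpy as np
from sklearn.preprocessing import LabelBinarizer

y = np.array([0, 1, 2, 1])                     # integer labels -> 'sparse_categorical_crossentropy'
y_onehot = LabelBinarizer().fit_transform(y)   # one-hot labels -> 'categorical_crossentropy'
print(y_onehot)
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]
#  [0 1 0]]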

【Comments】:

【Solution 7】:

You can either keep the one-hot encoding and switch to loss='categorical_crossentropy', or, as mentioned earlier, the other option is tf.losses.sparse_softmax_cross_entropy(labels, logits) with class-index labels.

【Comments】:
