【Question Title】: 'PrefetchDataset' object has no attribute 'ndim'
【Posted】: 2020-12-11 17:14:47
【Question Description】:

I am using the following code to predict the next character with a GRU.

import numpy as np
import tensorflow as tf
from tensorflow import keras

shakespeare_url = "https://homl.info/shakespeare"
filepath = keras.utils.get_file("shakespeare.txt", shakespeare_url)

with open(filepath) as f:
    shakespeare_txt = f.read()
    
tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
tokenizer.fit_on_texts(shakespeare_txt)
max_id = len(tokenizer.word_index) ## number of distinct characters
dataset_size = tokenizer.document_count ## total number of characters
[encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_txt])) - 1
train_size = (dataset_size * 90) // 100
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
n_steps = 100
window_length = n_steps + 1
dataset = dataset.window(window_length, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
batch_size = 32
dataset = dataset.shuffle(10000).batch(batch_size)
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
dataset = dataset.map(lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
model = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id], dropout=0.2, recurrent_dropout=0.2),
    keras.layers.GRU(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2),
    keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation='softmax'))
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
history = model.fit(dataset, epochs=20)
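To see what the windowing pipeline above actually feeds the model, here is a minimal sketch on a toy integer sequence (the data is a placeholder, not the Shakespeare text; only the `window`/`flat_map`/`map` steps from the question are reproduced):

```python
import tensorflow as tf

# Toy integer sequence standing in for the encoded Shakespeare text.
encoded = list(range(10))
n_steps = 3
window_length = n_steps + 1  # n_steps input chars + 1 target char

ds = tf.data.Dataset.from_tensor_slices(encoded)
# Sliding windows of length window_length, shifted by one character.
ds = ds.window(window_length, shift=1, drop_remainder=True)
# Each window is itself a small Dataset; batch it into a single tensor.
ds = ds.flat_map(lambda w: w.batch(window_length))
# Split every window into (inputs, targets shifted by one position).
ds = ds.map(lambda w: (w[:-1], w[1:]))

first_x, first_y = next(iter(ds))
print(first_x.numpy().tolist())  # [0, 1, 2]
print(first_y.numpy().tolist())  # [1, 2, 3]
```

Each element is a pair where the target sequence is the input sequence shifted one step ahead, which is what the character-level language model is trained to predict.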

The exception below is raised. Please help me fix this.

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 history = model.fit(dataset, epochs=20)

c:\users\dixit\appdata\local\programs\python\python38\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1148
   1149         # Case 2: Symbolic tensors or Numpy array-like.
-> 1150         x, y, sample_weights = self._standardize_user_data(
   1151             ...
   1152             ...

c:\users\dixit\appdata\local\programs\python\python38\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    572
    573         # Standardize the inputs.
--> 574         x = training_utils.standardize_input_data(
    575             x,
    576             ...

c:\users\dixit\appdata\local\programs\python\python38\lib\site-packages\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
     97         data = data.values if data.__class__.__name__ == 'DataFrame' else data
     98         data = [data]
---> 99     data = [standardize_single_array(x) for x in data]
    100
    101     if len(data) != len(names):

c:\users\dixit\appdata\local\programs\python\python38\lib\site-packages\keras\engine\training_utils.py in <listcomp>(.0)
     97         data = data.values if data.__class__.__name__ == 'DataFrame' else data
     98         data = [data]
---> 99     data = [standardize_single_array(x) for x in data]
    100
    101     if len(data) != len(names):

c:\users\dixit\appdata\local\programs\python\python38\lib\site-packages\keras\engine\training_utils.py in standardize_single_array(x)
     32                              'got tensor with shape: %s' % str(shape))
     33         return x
---> 34     elif x.ndim == 1:
     35         x = np.expand_dims(x, 1)
     36     return x

【Question Comments】:

    Tags: python tensorflow keras


    【Solution 1】:

    Make sure you import keras from tensorflow and that it reports version 2.2.4-tf. I got the same error, and this fixed it for me.

    from tensorflow import keras
    keras.__version__
    

    2.2.4-tf
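    The reason this works: the standalone `keras` package's `fit()` treats its argument as an array and checks `.ndim`, while `tf.keras` accepts `tf.data` datasets directly. As a minimal sketch (the tiny random dataset and one-layer model here are placeholders, not the Shakespeare model):

```python
import tensorflow as tf
from tensorflow import keras  # tf.keras, not the standalone keras package

# Dummy dataset and model, only to show that tf.keras's Model.fit
# consumes a tf.data.Dataset directly without an .ndim check.
ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 4]), tf.zeros([8], dtype=tf.int32))
).batch(4)

model = keras.Sequential([
    keras.layers.Dense(2, activation="softmax", input_shape=[4]),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
history = model.fit(ds, epochs=1, verbose=0)
print(len(history.history["loss"]))  # 1
```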

    【Comments】:
