【Title】: Google Colab is not training on the full dataset [duplicate]
【Posted】: 2020-08-05 05:34:37
【Description】:

I am having trouble training a neural network in Google Colab. Even though I uploaded the data to Drive and gave the correct path, my model does not seem to train on the full training dataset. Here is the code I wrote:

import tensorflow as tf
from tensorflow import keras
# Import layers and the model class from tf.keras (mixing the standalone
# `keras` package with `tensorflow.keras` can cause compatibility errors)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Activation, Dropout
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import mean_squared_error, mean_absolute_error, max_error, r2_score
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

X = pd.read_csv('/content/drive/My Drive/ML Data/prob_232_full.dat', sep=r"\s+", header=None)
y = pd.read_csv('/content/drive/My Drive/ML Data/pGuess_232_full.dat', sep=r"\s+", header=None)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X.astype(np.float64), y.astype(np.float64), test_size = 0.25, random_state = 1)

X_train = np.array(X_train)
X_test = np.array(X_test)

# Keras wants the labels as one-dimensional vectors
y_train = np.array(y_train).reshape((-1,))
y_test = np.array(y_test).reshape((-1,))

ncols=X_train.shape[1]

model = Sequential()

model.add(Dense(activation="relu", input_dim=ncols, units=64, kernel_initializer="uniform"))
model.add(Dense(activation="relu", units=128, kernel_initializer="uniform"))
model.add(Dense(activation="relu", units=256, kernel_initializer="uniform"))
model.add(Dense(activation="relu", units=64, kernel_initializer="uniform"))
model.add(Dense(activation="relu", units=1, kernel_initializer="uniform"))

opt=keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer = opt, loss='mean_squared_error', metrics=['mean_absolute_error'])
history=model.fit(X_train, y_train, validation_data=(X_test, y_test), 
                  batch_size = 32, epochs = 40, verbose=1)

Although the training set has 457500 samples, the output suggests the model is only training on 14297 of them.

【Comments】:

    Tags: tensorflow keras google-colaboratory


    【Answer 1】:

    Welcome to Stack Overflow.

    Your dataset has 457500 samples and you are using a batch size of 32 (in model.fit). The number Keras prints in the progress bar is the number of batches (steps) per epoch, not the number of samples. 457500 / 32 = 14296.875, which rounds up to 14297 steps; the final batch is simply smaller (28 samples) and is still used. So the model is training on the full dataset, and the display is correct. It is just a matter of interpreting the output.
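
    The arithmetic above can be checked directly. This is a minimal sketch using the sample count and batch size from the question; the variable names are illustrative:

    ```python
    import math

    num_samples = 457500   # training-set size from the question
    batch_size = 32        # batch_size passed to model.fit

    # Keras's progress bar counts batches (steps) per epoch, not samples.
    steps_per_epoch = math.ceil(num_samples / batch_size)

    # The final batch is partial but still trained on.
    last_batch_size = num_samples % batch_size or batch_size

    print(steps_per_epoch)   # 14297 -- the number shown in the progress bar
    print(last_batch_size)   # 28 -- samples in the final, partial batch
    ```

    Multiplying back confirms nothing is dropped: 14296 full batches of 32 plus one batch of 28 is exactly 457500 samples.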

    【Discussion】:

    • +1 Had the same "problem" / everything in Google Colab was actually fine. The video I was following used a Jupyter notebook on a local machine; neither of us specified batch_size, his showed 50,000 and mine showed 1563. 1563 * 32 = 50,016. Good to know everything is working correctly.