【Question Title】: Kaggle Titanic - Machine Learning from Disaster with TensorFlow: model training unable to get loss values
【Posted】: 2021-10-18 21:31:45
【Question】:

I have just started learning machine learning with TensorFlow, and I thought a good way to test my underdeveloped skills would be to enter the Titanic - Machine Learning from Disaster competition on Kaggle. The data for this competition can be found here.

For simplicity, I dropped all string-valued columns except Sex, which I mapped to 1 for male and 0 for female.

But during model training, the loss value is nan for every epoch. I don't know why this happens; it would be great if someone could tell me where the problem is.

My current code:

import numpy as np
import pandas as pd

train_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')

train_data['Sex'] = train_data['Sex'].map({'male':1,'female':0})

# Drop the unused string-valued and identifier columns from both sets
cols_to_drop = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'Embarked', 'Fare']
train_data = train_data.drop(cols_to_drop, axis=1)
test_data = test_data.drop(cols_to_drop, axis=1)

X = train_data.drop('Survived',axis=1).values
y = train_data['Survived'].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import max_norm

model = Sequential()
model.add(Dense(6, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')

model.fit(x=X_train,
          y=y_train,
          epochs=25,
          batch_size=256,
          validation_data=(X_test, y_test),
          )

Output:

Epoch 1/25
3/3 [==============================] - 1s 102ms/step - loss: nan - val_loss: nan
Epoch 2/25
3/3 [==============================] - 0s 15ms/step - loss: nan - val_loss: nan
Epoch 3/25
3/3 [==============================] - 0s 14ms/step - loss: nan - val_loss: nan
...
Epoch 25/25
3/3 [==============================] - 0s 18ms/step - loss: nan - val_loss: nan
<tensorflow.python.keras.callbacks.History at 0x18bc9160dc0>

【Comments】:

    Tags: python pandas dataframe tensorflow machine-learning


    【Solution 1】:

    The Age column in this dataset has some null values. That is why you are getting nan as the loss.

    You can either drop the Age column or clean the data so that it contains no null values.
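    As a minimal sketch of both options (the Age column name is from the dataset; the small DataFrame and the median fill value are illustrative choices, not from the original post):

```python
import pandas as pd

# Toy frame standing in for train_data, with nulls in Age
df = pd.DataFrame({'Age': [22.0, None, 35.0, None], 'Sex': [1, 0, 0, 1]})

# Check which columns contain nulls; any NaN fed to the network
# propagates into the loss
print(df.isnull().sum())

# Option 1: drop the Age column entirely
dropped = df.drop('Age', axis=1)

# Option 2: keep the column but fill the nulls, e.g. with the median
filled = df.copy()
filled['Age'] = filled['Age'].fillna(filled['Age'].median())
```

    Either way the arrays passed to model.fit no longer contain NaN, so the loss can be computed.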

    【Comments】:

    • Thanks for pointing that out! I couldn't see the full dataset when looking at it with train_data.head().
    【Solution 2】:

    Here is the correction. This happens because you omitted one very important argument when compiling: the metrics parameter.

    Your code:

    model.compile(loss='binary_crossentropy', optimizer='adam')
    

    Correction:

    model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['accuracy'])
    

    You can pass different kinds of metrics depending on what suits your problem best.

    【Comments】:

    • The loss is still nan. I'll look into that afterwards, but I do get the accuracy now. This half-solved my problem! :D
    • I looked closely and observed that your training dataset contains null values. To handle them you have three options: 1. drop the records if a column has only a few nulls, 2. drop the column if it has too many nulls, 3. replace the nulls with a placeholder value. pandas provides a method for each option. Try them all and pick whichever works best for your case.
    • Will do! Thanks!
    【Solution 3】:

    I think the problem might be caused by validation_data=(X_test, y_test).

    I believe you need to split the training data into three sets to use that option. It might also help to do this right after your initial split and apply the same scaling to the validation set:

    X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2)
    

    Then change it to validation_data=(X_valid, y_valid).

    【Comments】:

    • I split it right after the original split, then scaled X_valid with scaler.transform(X_valid) and changed validation_data to X_valid and y_valid, but it gives the same result.