Posted: 2021-10-18 21:31:45
Question:
I've just started learning machine learning with TensorFlow, and I figured a good way to test my underdeveloped skills would be to enter the Titanic - Machine Learning from Disaster competition on Kaggle. The data for the competition can be found here.
For simplicity, I dropped every string-valued column except Sex, which I mapped to 1 for male and 0 for female.
However, during training the loss is nan for every epoch. I don't know why this is happening, and it would be great if someone could tell me where the problem is.
My current code:
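(A side note on the mapping step, not confirmed as the cause here: `Series.map` with a dict returns NaN for any value that is missing from the mapping, so even a capitalization mismatch would silently introduce NaNs. A quick illustration:)

```python
import pandas as pd

# 'Male' (capitalized) is deliberately absent from the mapping dict.
s = pd.Series(['male', 'female', 'Male'])
mapped = s.map({'male': 1, 'female': 0})
print(mapped.tolist())  # [1.0, 0.0, nan] -- unmapped values become NaN
```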
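(One thing worth checking first, as an assumption the question doesn't rule out: the Titanic training data contains missing values, most famously in the Age column, and a single NaN feature fed to the network turns the loss into NaN. A quick check, shown here on a toy frame standing in for `train.csv`; the column names mirror the real file but the values are made up:)

```python
import numpy as np
import pandas as pd

# Toy stand-in for train.csv with deliberately missing Age entries.
df = pd.DataFrame({
    'Survived': [0, 1, 1, 0],
    'Pclass':   [3, 1, 3, 2],
    'Sex':      ['male', 'female', 'female', 'male'],
    'Age':      [22.0, np.nan, 26.0, np.nan],
    'SibSp':    [1, 1, 0, 0],
    'Parch':    [0, 0, 0, 0],
})

# Count missing values per column; any nonzero count here becomes a NaN
# input to the network, and the loss goes NaN with it.
missing = df.isna().sum()
print(missing['Age'])  # 2 missing ages in this toy frame
```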
import numpy as np
import pandas as pd
train_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')
train_data['Sex'] = train_data['Sex'].map({'male': 1, 'female': 0})
train_data = train_data.drop(['PassengerId', 'Name', 'Ticket', 'Cabin', 'Embarked', 'Fare'], axis=1)
test_data = test_data.drop(['PassengerId', 'Name', 'Ticket', 'Cabin', 'Embarked', 'Fare'], axis=1)
X = train_data.drop('Survived',axis=1).values
y = train_data['Survived'].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import max_norm
model = Sequential()
model.add(Dense(6, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(x=X_train,
y=y_train,
epochs=25,
batch_size=256,
validation_data=(X_test, y_test),
)
Output:
Epoch 1/25
3/3 [==============================] - 1s 102ms/step - loss: nan - val_loss: nan
Epoch 2/25
3/3 [==============================] - 0s 15ms/step - loss: nan - val_loss: nan
[epochs 3-24 omitted; every epoch reports loss: nan - val_loss: nan]
Epoch 25/25
3/3 [==============================] - 0s 18ms/step - loss: nan - val_loss: nan
<tensorflow.python.keras.callbacks.History at 0x18bc9160dc0>
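(For completeness, a minimal sketch of one common remedy, under the assumption that missing Age values are what poisons the loss; the frame below is illustrative, not the real data:)

```python
import numpy as np
import pandas as pd

# Stand-in for the numeric feature matrix built in the question.
train = pd.DataFrame({
    'Pclass': [3, 1, 2],
    'Sex':    [1, 0, 0],
    'Age':    [22.0, np.nan, 26.0],
})

# Fill missing ages with the median computed on the training split;
# the same statistic should be reused on the test split to avoid leakage.
age_median = train['Age'].median()
train['Age'] = train['Age'].fillna(age_median)

# No NaNs remain to propagate through the network.
assert not train.isna().any().any()
```

An alternative with the same effect is `sklearn.impute.SimpleImputer(strategy='median')`, fitted on the training split only.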
Tags: python pandas dataframe tensorflow machine-learning