【Posted】: 2018-08-31 15:08:43
【Question】:
I'm running into a problem using an LSTM with Keras.
I'm trying to predict normal vs. fake domain names.
My dataset looks like this:
domain,fake
google, 0
bezqcuoqzcjloc,1
...
with 50% normal domains and 50% fake domains.
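(For context, a minimal sketch of how such a CSV could be read into the (label, domain) tuples that data.get_data() appears to return below; the file name domains.csv, the get_data signature, and the 'dga' label string are assumptions, not the poster's actual code.)

import csv

def get_data(path='domains.csv'):
    """Hypothetical loader: reads a 'domain,fake' CSV into (label, domain) tuples."""
    rows = []
    with open(path, newline='') as f:
        for row in csv.DictReader(f):  # expects the header: domain,fake
            label = 'benign' if row['fake'].strip() == '0' else 'dga'
            rows.append((label, row['domain'].strip()))
    return rows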
Here is my LSTM model:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense, Activation
from keras import optimizers

def build_model(max_features, maxlen):
    """Build LSTM model"""
    model = Sequential()
    model.add(Embedding(max_features, 128, input_length=maxlen))
    model.add(LSTM(64))
    model.add(Dropout(0.5))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['acc'])
    return model
Then I preprocess my text data to convert it to numbers:
"""Run train/test on logistic regression model"""
indata = data.get_data()
# Extract data and labels
X = [x[1] for x in indata]
labels = [x[0] for x in indata]
# Generate a dictionary of valid characters
valid_chars = {x:idx+1 for idx, x in enumerate(set(''.join(X)))}
max_features = len(valid_chars) + 1
maxlen = 100
# Convert characters to int and pad
X = [[valid_chars[y] for y in x] for x in X]
X = sequence.pad_sequences(X, maxlen=maxlen)
# Convert labels to 0-1
y = [0 if x == 'benign' else 1 for x in labels]
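(For illustration, a small self-contained sketch of what this encoding and padding step produces for a couple of domains; the concrete integer values depend on the set() ordering, so they are only an example.)

from keras.preprocessing import sequence

# Illustrative only: a tiny char-to-int map built the same way as above.
sample = ['google', 'bezqcuoqzcjloc']
valid_chars = {c: idx + 1 for idx, c in enumerate(set(''.join(sample)))}
encoded = [[valid_chars[c] for c in d] for d in sample]
padded = sequence.pad_sequences(encoded, maxlen=100)
print(padded.shape)  # (2, 100): each domain is zero-padded on the left up to maxlen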
Then I split the data into training, test, and validation sets:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print("Build model...")
model = build_model(max_features, maxlen)
print("Train...")
X_train, X_holdout, y_train, y_holdout = train_test_split(X_train, y_train, test_size=0.2)
Then I train the model on the training data (validating on the holdout data) and evaluate it on the test data:
history = model.fit(X_train, y_train, epochs=max_epoch, validation_data=(X_holdout, y_holdout), shuffle=False)
scores = model.evaluate(X_test, y_test, batch_size=batch_size)
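(The question does not show it, but since the model is reloaded from 'LSTMmodel_64_sgd.h5' further down, it is presumably saved at this point. A minimal sketch assuming that file name; persisting the training-time valid_chars alongside it is an addition of mine, not part of the original code.)

import pickle

# Save the trained model under the file name that is loaded later for prediction.
model.save('LSTMmodel_64_sgd.h5')

# Assumption (not in the original code): also persist the training-time character
# mapping so exactly the same encoding can be reused at prediction time.
with open('valid_chars.pkl', 'wb') as f:
    pickle.dump(valid_chars, f)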
At the end of training/testing, evaluating on the test dataset gives these scores:
loss = 0.060554939906234596
accuracy = 0.978109902033532
However, when I make predictions on a sample of the dataset like this:
import pickle
from keras.models import load_model
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split

LSTM_model = load_model('LSTMmodel_64_sgd.h5')
data = pickle.load(open('traindata.pkl', 'rb'))

#### LSTM ####
"""Run train/test on logistic regression model"""

# Extract data and labels
X = [x[1] for x in data]
labels = [x[0] for x in data]

X1, _, labels1, _ = train_test_split(X, labels, test_size=0.9)

# Generate a dictionary of valid characters
valid_chars = {x: idx + 1 for idx, x in enumerate(set(''.join(X1)))}
max_features = len(valid_chars) + 1
maxlen = 100

# Convert characters to int and pad
X1 = [[valid_chars[y] for y in x] for x in X1]
X1 = sequence.pad_sequences(X1, maxlen=maxlen)

# Convert labels to 0-1
y = [0 if x == 'benign' else 1 for x in labels1]

y_pred = LSTM_model.predict(X1)
the performance is very poor:
accuracy = 0.5934741842730341
confusion matrix = [[25201 14929]
                    [17589 22271]]
F1-score = 0.5780171295094731
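(For reference, a minimal sketch of how these numbers are presumably computed from the sigmoid outputs; the 0.5 threshold is an assumption, since that step is not shown in the question.)

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Assumption: the sigmoid outputs in y_pred are thresholded at 0.5 to get class labels.
y_pred_labels = (np.asarray(y_pred) > 0.5).astype(int).ravel()

print('accuracy =', accuracy_score(y, y_pred_labels))
print('confusion matrix =', confusion_matrix(y, y_pred_labels))
print('F1-score =', f1_score(y, y_pred_labels))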
Can anyone explain why? I have tried 64 instead of 128 for the LSTM units, adam and rmsprop as the optimizer, and increasing the batch_size, but the performance is still low.
【Comments】:
- Could you share the value of max_features at the point it is computed (i.e. before fitting the model and before predicting)?
- @desertnaut I have max_features = 39 before fitting and max_features = 38 before predicting.
- Thanks; and roughly how many samples do you have in X (before splitting)?
- What is your current batch size? If the in-sample accuracy is very good but the out-of-sample accuracy is poor, it may be due to overfitting, in which case you need to reduce the batch size.
- @desertnaut I have about 800,000 samples before splitting.
Tags: python machine-learning keras lstm