【Posted】: 2019-09-16 01:09:40
【Problem description】:
For a binary classification problem, I get different model accuracies from Keras evaluate_generator() and predict_generator():
def evaluate_model(model, generator, nBatches):
    score = model.evaluate_generator(generator=generator,       # generator yielding tuples
                                     steps=generator.samples//nBatches,  # number of steps (batches of samples) to yield from the generator before stopping
                                     max_queue_size=10,         # maximum size of the generator queue
                                     workers=1,                 # maximum number of processes to spin up when using process-based threading
                                     use_multiprocessing=False, # whether to use process-based threading
                                     verbose=0)
    print("loss: %.3f - acc: %.3f" % (score[0], score[1]))
With evaluate_generator(), I get an acc value of 0.7.
import numpy as np
from sklearn.metrics import confusion_matrix

def evaluate_predictions(model, generator, nBatches):
    predictions = model.predict_generator(generator=generator,
                                          steps=generator.samples//nBatches,
                                          max_queue_size=10,
                                          workers=1,
                                          use_multiprocessing=False,
                                          verbose=0)
    # Evaluate predictions
    predictedClass = np.argmax(predictions, axis=1)
    trueClass = generator.classes
    classLabels = list(generator.class_indices.keys())
    # Create confusion matrix
    confusionMatrix = confusion_matrix(
        y_true=trueClass,       # ground truth (correct) target values
        y_pred=predictedClass)  # estimated targets as returned by a classifier
    print(confusionMatrix)
With predict_generator(), I get an acc value of 0.5.
I compute acc from the confusion matrix as (TP+TN)/(TP+TN+FP+FN).

- Am I right that the acc from evaluate_generator() is also based on (TP+TN)/(TP+TN+FP+FN)?
- Why does acc differ when I use the same data and the same generator?
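As a sanity check on the formula itself, here is a minimal sketch (with made-up labels, not the questioner's data) showing that (TP+TN)/(TP+TN+FP+FN) computed from sklearn's confusion matrix matches sklearn's own accuracy_score:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical ground-truth and predicted labels for a binary problem
trueClass = np.array([0, 0, 1, 1, 1, 0, 1, 0])
predictedClass = np.array([0, 1, 1, 0, 1, 0, 1, 1])

cm = confusion_matrix(y_true=trueClass, y_pred=predictedClass)
# sklearn lays out the 2x2 matrix as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = cm.ravel()

acc_from_cm = (tp + tn) / (tp + tn + fp + fn)
print(acc_from_cm)                                # 0.625
print(accuracy_score(trueClass, predictedClass))  # 0.625
```

So the formula is the standard accuracy; if the two methods report different values on identical data, the discrepancy must come from how the predictions are paired with generator.classes, not from the formula.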
【Discussion】:
Tags: python tensorflow keras deep-learning