【Question Title】: What does this .n method do?
【Posted】: 2021-12-11 03:21:29
【Question】:

What does the .n method do in trainGen.n (inside the train_and_evaluate_model() function)?

Here is the full code I'm studying:

Thanks for your help.

# Imports assumed from context -- they are not part of the original snippet
import os

from sklearn.metrics import classification_report
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.optimizers import Adadelta
from tensorflow.keras.preprocessing.image import ImageDataGenerator


def build_data_generators(train_folder, test_folder, labels=None,
                          image_size=(100, 100), batch_size=50):
    train_datagen = ImageDataGenerator(
        width_shift_range=0.0,
        height_shift_range=0.0,
        zoom_range=0.0,
        horizontal_flip=True,
        vertical_flip=True,  # randomly flip images
        preprocessing_function=augment_image)  # augmentation is done only on
    # the train set (and optionally validation)

    test_datagen = ImageDataGenerator()

    train_gen = train_datagen.flow_from_directory(
        train_folder, target_size=image_size, class_mode='sparse',
        batch_size=batch_size, shuffle=True, subset='training', classes=labels)
    test_gen = test_datagen.flow_from_directory(
        test_folder, target_size=image_size, class_mode='sparse',
        batch_size=batch_size, shuffle=False, subset=None, classes=labels)
    return train_gen, test_gen


def train_and_evaluate_model(model, name="", epochs=2, batch_size=50, verbose=verbose,
                             useCkpt=False):
    print(model.summary())
    model_out_dir = os.path.join(output_dir, name)
    if not os.path.exists(model_out_dir):
        os.makedirs(model_out_dir)
    if useCkpt:
        model.load_weights(model_out_dir + "/Model.h5")

    trainGen, testGen = build_data_generators(
        train_dir, test_dir, labels=labels, image_size=image_size, batch_size=batch_size)
    optimizer = Adadelta(lr=learning_rate)
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy",
                  metrics=["acc"])
    learning_rate_reduction = ReduceLROnPlateau(
        monitor='loss', patience=patience, verbose=verbose,
        factor=learning_rate_reduction_factor, min_lr=min_learning_rate)
    save_model = ModelCheckpoint(
        filepath=model_out_dir + "/Model.h5", monitor='loss', verbose=verbose,
        save_best_only=True, save_weights_only=False, mode='min', save_freq='epoch')

    history = model.fit(trainGen,
                        epochs=epochs,
                        steps_per_epoch=(trainGen.n // batch_size) + 1,
                        verbose=verbose,
                        callbacks=[learning_rate_reduction, save_model])

    model.load_weights(model_out_dir + "/Model.h5")

    trainGen.reset()
    loss_t, accuracy_t = model.evaluate(trainGen, steps=(trainGen.n // batch_size) + 1,
                                        verbose=verbose)
    loss, accuracy = model.evaluate(testGen, steps=(testGen.n // batch_size) + 1,
                                    verbose=verbose)
    print("Train: accuracy = %f  ;  loss_v = %f" % (accuracy_t, loss_t))
    print("Test: accuracy = %f  ;  loss_v = %f" % (accuracy, loss))
    plot_model_history(history, out_path=model_out_dir)
    testGen.reset()
    y_pred = model.predict(testGen, steps=(testGen.n // batch_size) + 1, verbose=verbose)
    y_true = testGen.classes[testGen.index_array]
    plot_confusion_matrix(y_true, y_pred.argmax(axis=-1), labels, out_path=model_out_dir)
    class_report = classification_report(y_true, y_pred.argmax(axis=-1), target_names=labels)

    with open(model_out_dir + "/Classification_report.txt", "w") as text_file:
        text_file.write("%s" % class_report)

Since I'm mostly posting code, the site kept asking me to add more text, so this sentence is just filler. Thank you, and my sincerest apologies for the 16.5 seconds you wasted reading it.

【Comments】:

  • This is by no means a minimal reproducible example, and looking up the documentation after identifying the right class would have taken less time than posting on SO.
  • n is an attribute, not a method, since it isn't callable. It is most likely just the number of elements in the dataset.
  • Could you post the part of the code that initializes and updates the trainGen variable?
  • Yes, I just updated it as requested; thanks for sharing your knowledge.
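To illustrate the attribute-vs-method distinction from the comment above, here is a quick sketch using a made-up stand-in class (not the real Keras generator):

```python
class Gen:
    """Hypothetical stand-in with one attribute and one method."""

    def __init__(self):
        self.n = 500          # attribute: plain data, read as gen.n (no parentheses)

    def reset(self):          # method: callable, invoked as gen.reset()
        pass


gen = Gen()
print(gen.n)                  # 500 -- reading an attribute, nothing is called
print(callable(gen.n))        # False: an int is not callable
print(callable(gen.reset))    # True: bound methods are callable
```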

Tags: python conv-neural-network


【Answer 1】:

These are Keras types. trainGen is a DirectoryIterator, which subclasses Iterator, the class where this attribute is defined. See the constructor here.

n: Integer, total number of samples in the dataset to loop over.
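To make the role of n concrete, here is a minimal sketch of how it feeds the steps_per_epoch expression in the question's code. A hypothetical stand-in class is used in place of a real DirectoryIterator, so no image folder or Keras install is needed:

```python
class FakeDirectoryIterator:
    """Hypothetical stand-in for keras DirectoryIterator.

    Only mimics the `n` attribute: the total number of samples
    found on disk, set once when the generator is built.
    """

    def __init__(self, num_samples):
        self.n = num_samples  # attribute, not a method: no parentheses to read it


trainGen = FakeDirectoryIterator(num_samples=117)
batch_size = 50

# Same expression as in train_and_evaluate_model(): enough steps per epoch
# to cover every sample, rounding up past a partial final batch.
steps_per_epoch = (trainGen.n // batch_size) + 1
print(steps_per_epoch)  # 117 // 50 + 1 = 3, i.e. batches of 50, 50 and 17
```

Note that when n is an exact multiple of batch_size, this formula adds one extra step; math.ceil(trainGen.n / batch_size) would avoid that off-by-one.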

【Comments】:

  • Thank you, Cory. I guess I still have a lot to learn, because after reading that I still have zero idea T.T (not sure how it affects the program)