[Posted]: 2018-11-06 18:13:12
[Question]:
I am saving my model's weights in Keras with model.save_weights(), which writes them to a file with the .h5 extension. I opened this h5 file in HDFView 2.9. My model summary looks like this:
Layer (type) Output Shape Param #
=================================================================
conv2d_37 (Conv2D) (None, 49, 49, 32) 160
_________________________________________________________________
conv2d_38 (Conv2D) (None, 48, 48, 32) 4128
_________________________________________________________________
max_pooling2d_19 (MaxPooling (None, 24, 24, 32) 0
_________________________________________________________________
dropout_28 (Dropout) (None, 24, 24, 32) 0
_________________________________________________________________
conv2d_39 (Conv2D) (None, 23, 23, 64) 8256
_________________________________________________________________
conv2d_40 (Conv2D) (None, 22, 22, 64) 16448
_________________________________________________________________
max_pooling2d_20 (MaxPooling (None, 11, 11, 64) 0
_________________________________________________________________
dropout_29 (Dropout) (None, 11, 11, 64) 0
_________________________________________________________________
flatten_10 (Flatten) (None, 7744) 0
_________________________________________________________________
dense_19 (Dense) (None, 256) 1982720
_________________________________________________________________
dropout_30 (Dropout) (None, 256) 0
_________________________________________________________________
dense_20 (Dense) (None, 2) 514
=================================================================
Total params: 2,012,226
Trainable params: 2,012,226
Non-trainable params: 0
So the first layer of my h5 file should contain 32 filters of size (2x2). But when I inspect it in HDFView, it shows only 1 filter instead of 32, as shown below:

However, when I load the weights with load_weights, they are loaded correctly. So how can I view the weights properly in HDFView? Also, the weights do not appear to be saved in order: the first layer shows only 1 filter, then 32, then 32, then 64. The next 64 are missing.
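A likely explanation is that a Conv2D kernel is stored as a single 4-D dataset of shape (kernel_height, kernel_width, input_channels, filters) — here (2, 2, 1, 32) — and HDFView's default table view only shows a 2-D slice of it, which looks like "1 filter". One way to see all the filters is to walk the file with h5py and print each dataset's full shape. The sketch below is a minimal, self-contained illustration: it first writes a tiny h5 file mimicking the group/dataset layout Keras uses (the file name and group names are made up for the demo; with a real file you would skip the writing step and open the file produced by model.save_weights()):

```python
import h5py
import numpy as np

# Hypothetical file name for this demo; in practice, open the file
# written by model.save_weights() instead of creating one here.
fname = "weights_demo.h5"

# Simulate the layout Keras uses: a group per layer containing a
# "kernel:0" dataset of shape (kh, kw, in_channels, filters) and a
# "bias:0" dataset of shape (filters,).
with h5py.File(fname, "w") as f:
    g = f.create_group("conv2d_37/conv2d_37")
    g.create_dataset("kernel:0", data=np.zeros((2, 2, 1, 32), dtype="float32"))
    g.create_dataset("bias:0", data=np.zeros((32,), dtype="float32"))

# Walk the file and print every dataset's full shape. The kernel's
# last axis (32) is the filter count, even if a 2-D viewer hides it.
with h5py.File(fname, "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape)
    f.visititems(show)
```

In HDFView itself, the remaining filters can usually be reached by changing which dimensions are mapped to the table rows/columns in the dataset's "Open As" dialog, rather than relying on the default 2-D slice.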
[Discussion]:
Tags: python keras hdf5 h5py hdf