【Posted】: 2020-09-05 16:13:37
【Problem description】:
I want to understand the purpose of embedding_dim compared to using a one-hot vector over the whole vocab_size. Is it simply a dimensionality reduction, from a vocab_size-dimensional one-hot vector down to embedding_dim dimensions, or is there some other intuition or practical benefit? Also, how should the value of embedding_dim be chosen?
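To make concrete what I mean by "dimensionality reduction", here is a small sanity check I put together (the tf.one_hot comparison below is my own sketch, not something from the code I'm asking about):

```python
import numpy as np
import tensorflow as tf

vocab_size = 10000
embedding_dim = 16

# An Embedding layer is a trainable lookup table of shape
# (vocab_size, embedding_dim).
emb = tf.keras.layers.Embedding(vocab_size, embedding_dim)

token_id = 42
# Lookup: selects row 42 of the table (this call also builds the weights).
via_lookup = emb(tf.constant([token_id])).numpy()[0]      # shape (16,)

# Equivalent view: a one-hot vector of length vocab_size multiplied by
# the embedding matrix, i.e. a linear map from 10000 dims down to 16.
W = emb.get_weights()[0]                                  # shape (10000, 16)
one_hot = tf.one_hot(token_id, depth=vocab_size).numpy()  # shape (10000,)
via_matmul = one_hot @ W                                  # shape (16,)

print(np.allclose(via_lookup, via_matmul))  # True
```

If that equivalence holds, the Embedding layer is effectively learning the projection matrix directly, without ever materializing the one-hot vectors.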
Code:
```python
import tensorflow as tf

vocab_size = 10000
embedding_dim = 16
max_length = 120

model = tf.keras.Sequential([
    # Maps each of the 10,000 token ids to a dense 16-dim vector
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
```
Output:
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 120, 16)           160000
_________________________________________________________________
flatten (Flatten)            (None, 1920)              0
_________________________________________________________________
dense (Dense)                (None, 6)                 11526
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 7
=================================================================
Total params: 171,533
Trainable params: 171,533
Non-trainable params: 0
_________________________________________________________________
```
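For reference, the parameter counts above match the layer shapes: the Embedding table has 10,000 × 16 = 160,000 weights; Flatten yields 120 × 16 = 1,920 features with no parameters; the first Dense layer has 1,920 × 6 + 6 = 11,526; and the final Dense layer has 6 + 1 = 7, for 171,533 in total.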
【Discussion】:
Tags: tensorflow keras deep-learning nlp word-embedding