【Posted at】: 2019-07-17 02:31:10
【Problem description】:
I got the following code from a page:
from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']
# define class labels
labels = array([1,1,1,1,1,0,0,0,0,0])
# integer encode the documents
vocab_size = 50
encoded_docs = [one_hot(d, vocab_size) for d in docs]
print(encoded_docs)
# pad documents to a max length of 4 words
max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
print(padded_docs)
# define the model
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))
- I looked at encoded_docs and noticed that the words "done" and "work" both get the one_hot encoding 2. Why? Is it because the unicity of the word-to-index mapping is not guaranteed, as that page says? (A quick check of this is sketched after this list.)
- I obtained the embeddings with embeddings = model.layers[0].get_weights()[0]. In that case, why do we get an embedding object of size 50? Even if two words share the same one_hot number, do they have different embeddings?
- How can I tell which embedding belongs to which word, i.e. "done" vs "work"?
- I also found the code below on that page, which helps to look up the embedding of each word, but I don't know how to create word_to_index. word_to_index is a mapping (i.e. a dict) from words to their indices, e.g. love: 69 (one way to build it is sketched after this list): words_embeddings = {w: embeddings[idx] for w, idx in word_to_index.items()}
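For the first and last points, a minimal sketch of what could be checked, assuming docs from the code above is reused: one_hot is a wrapper around a hashing trick, so two distinct words can legitimately land on the same integer when vocab_size is small, while a Tokenizer fitted on the same documents gives a deterministic, collision-free word_to_index dict. (That dict only lines up with the rows of the trained embedding matrix if the model is trained on these tokenizer indices, as in the updated code further down.)

from keras.preprocessing.text import one_hot, Tokenizer

vocab_size = 50

# one_hot hashes each word into [1, vocab_size), so distinct words may collide
print(one_hot('done', vocab_size))    # e.g. [2]
print(one_hot('work', vocab_size))    # may also print [2]: the mapping is not unique

# a Tokenizer assigns each word its own integer, most frequent word first
tokenizer = Tokenizer()
tokenizer.fit_on_texts(docs)          # docs is the list of documents defined above
word_to_index = tokenizer.word_index  # e.g. {'work': 1, 'done': ..., ...}
print(word_to_index)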
Please also confirm that my understanding of the Param # column below is correct (a numeric check is sketched after the summary):
- The first layer has 400 parameters because the vocabulary size is 50 and the embedding has 8 dimensions, so 50*8 = 400.
- The last layer has 33 parameters because each sentence has at most 4 words, so 4*8 = 32 weights from the embedding dimensions plus 1 bias, 33 in total.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_3 (Embedding)      (None, 4, 8)              400
_________________________________________________________________
flatten_3 (Flatten)          (None, 32)                0
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 33
=================================================================
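To double-check that reasoning, the counts in the summary can be reproduced by hand; a small sketch, assuming the model above with vocab_size = 50, an 8-dimensional embedding and max_length = 4:

vocab_size, embedding_dim, max_length = 50, 8, 4

# Embedding layer: one 8-dimensional vector per vocabulary entry
embedding_params = vocab_size * embedding_dim   # 50 * 8 = 400

# Flatten has no parameters; it just reshapes (4, 8) into a vector of 32 values
flat_units = max_length * embedding_dim         # 4 * 8 = 32

# Dense(1): one weight per flattened input plus a single bias
dense_params = flat_units * 1 + 1               # 32 + 1 = 33

print(embedding_params, dense_params)           # 400 33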
- Finally, if point 1 above is correct, is there a better way to feed the embedding layer model.add(Embedding(vocab_size, 8, input_length=max_length)) without going through the one-hot encoding encoded_docs = [one_hot(d, vocab_size) for d in docs]?
++++++++++++++++++++++++++++++++++ update - updated code provided below
from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']
# define class labels
labels = array([1,1,1,1,1,0,0,0,0,0])
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer()
#this creates the dictionary
#IMPORTANT: MUST HAVE ALL DATA - including Test data
#IMPORTANT2: This method should be called only once!!!
tokenizer.fit_on_texts(docs)
#this transforms the texts in to sequences of indices
encoded_docs2 = tokenizer.texts_to_sequences(docs)
print(encoded_docs2)
max_length = 4
padded_docs2 = pad_sequences(encoded_docs2, maxlen=max_length, padding='post')
max_index = array(padded_docs2).reshape((-1,)).max()
# define the model
model = Sequential()
model.add(Embedding(max_index+1, 8, input_length=max_length))  # you cannot use just max_index
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs2, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs2, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))
embeddings = model.layers[0].get_weights()[0]  # learned embedding matrix, shape (max_index+1, 8)
embedding_for_word_7 = embeddings[14]  # look up one embedding row by its integer index
index = tokenizer.texts_to_sequences([['well']])[0][0]  # index the tokenizer assigned to 'well'
print(tokenizer.document_count)  # number of documents the tokenizer was fit on
print(tokenizer.word_index)  # dict mapping each word to its integer index
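With the updated code, the earlier question of which embedding belongs to which word can be answered directly, because tokenizer.word_index holds exactly the indices the model was trained on. A minimal sketch, reusing model and tokenizer from above (index 0 is reserved for padding and has no word):

# learned embedding matrix of the updated model: shape (max_index + 1, 8)
embeddings = model.layers[0].get_weights()[0]

# map every word to its own embedding row via the tokenizer's word -> index dict
words_embeddings = {w: embeddings[idx] for w, idx in tokenizer.word_index.items()}

print(words_embeddings['work'])  # 8-dimensional vector for 'work'
print(words_embeddings['done'])  # a different row: 'done' has its own index here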
【Question discussion】:
Tags: python tensorflow keras word-embedding