【Posted at】: 2019-07-12 05:46:06
【Problem description】:
I am trying to replace the Word2Vec word embeddings in a siamese LSTM network (https://github.com/eliorc/Medium/blob/master/MaLSTM.ipynb) with BERT sentence embeddings. However, my BERT embeddings are matrices of shape (1, 768) rather than tensors that can be fed into a Keras layer, and I would like to know whether they can be converted.
I found an approach that replaces the word embeddings with Universal Sentence Encoder embeddings (http://hunterheidenreich.com/blog/google-universal-sentence-encoder-in-keras/), and I tried to modify the LSTM code to use BERT sentence embeddings from bert-as-service (https://github.com/hanxiao/bert-as-service#what-is-it) instead.
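As a side note on the shape question: a (1, 768) matrix can be squeezed into a plain 768-dimensional vector, and many such vectors can be stacked into an (n, 768) batch, which is exactly what a Keras `Input(shape=(768,))` layer accepts. The sketch below uses a hypothetical `fake_encode` stand-in for bert-as-service's `bc.encode` (which returns an (n, 768) float array for a list of strings), so it runs without a BERT server:

```python
import numpy as np

# Hypothetical stand-in for bert-as-service: bc.encode(list_of_strings)
# returns an (n, 768) float32 array; here we simulate it with random data.
def fake_encode(sentences):
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(sentences), 768)).astype(np.float32)

sentences = ["How old are you?", "What is your age?"]
embeddings = fake_encode(sentences)   # shape (2, 768)

# A single (1, 768) matrix can be squeezed to a 768-vector...
single = embeddings[:1]               # shape (1, 768)
vector = np.squeeze(single, axis=0)   # shape (768,)

# ...and several such matrices stacked into one (n, 768) batch,
# which a Keras Input(shape=(768,)) layer would accept directly.
batch = np.vstack([embeddings[i:i + 1] for i in range(len(sentences))])
print(batch.shape)  # (2, 768)
```

The key point is that the embeddings are computed outside the Keras graph as plain NumPy arrays, then fed in as ordinary numeric input, instead of calling the encoder on a symbolic tensor inside the model.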
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, LSTM
from bert_serving.client import BertClient

bc = BertClient()

# Model variables for the LSTM
n_hidden = 50
gradient_clipping_norm = 1.25
batch_size = 64
n_epoch = 25

def BERTEmbedding(x):
    # x is an input tensor of dtype string
    encoded = bc.encode(tf.squeeze(tf.cast(x, tf.string)))
    return encoded

def exponent_neg_manhattan_distance(left, right):
    '''Helper function for the similarity estimate of the LSTMs' outputs'''
    return K.exp(-K.sum(K.abs(left - right), axis=1, keepdims=True))

left_input_text = Input(shape=(1,), dtype=tf.string)
right_input_text = Input(shape=(1,), dtype=tf.string)

encoded_left = Lambda(BERTEmbedding, output_shape=(768,))(left_input_text)
encoded_right = Lambda(BERTEmbedding, output_shape=(768,))(right_input_text)

# Since this is a siamese network, both sides share the same LSTM
shared_lstm = LSTM(n_hidden)
left_output = shared_lstm(encoded_left)
right_output = shared_lstm(encoded_right)
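For intuition, the `exponent_neg_manhattan_distance` helper computes exp(-‖left − right‖₁) row-wise, mapping identical vectors to a similarity of 1 and increasingly distant vectors toward 0. A pure-NumPy equivalent of the same formula (independent of Keras, names chosen here for illustration):

```python
import numpy as np

def exponent_neg_manhattan_distance_np(left, right):
    """NumPy equivalent of the Keras helper: exp(-L1 distance),
    computed row-wise over a batch of vectors."""
    return np.exp(-np.sum(np.abs(left - right), axis=1, keepdims=True))

left = np.array([[1.0, 2.0, 3.0]])
right = np.array([[1.0, 2.0, 3.0]])
# identical vectors -> L1 distance 0 -> similarity exp(0) = 1
print(exponent_neg_manhattan_distance_np(left, right))  # [[1.]]

far = np.array([[10.0, 2.0, 3.0]])
# L1 distance 9 -> similarity exp(-9), close to 0
print(exponent_neg_manhattan_distance_np(left, far))
```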
I get the following error message:
TypeError: "Tensor("lambda_3/Squeeze:0", dtype=string)" must be <class 'list'>, but received <class 'tensorflow.python.framework.ops.Tensor'>
【Discussion】:
Tags: keras deep-learning nlp lstm word-embedding