【Question Title】: Use Tf-Idf within a Keras Model
【Posted】: 2020-05-28 10:44:16
【Question Description】:

I have read my training, test, and validation sentences into train_sentences, test_sentences, and val_sentences.

Then I applied a Tf-IDF vectorizer to them:

vectorizer = TfidfVectorizer(max_features=300)
vectorizer = vectorizer.fit(train_sentences)

X_train = vectorizer.transform(train_sentences)
X_val = vectorizer.transform(val_sentences)
X_test = vectorizer.transform(test_sentences)

My model looks like this:

model = Sequential()

model.add(Input(????))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(8, activation='sigmoid'))

model.summary()

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Normally, in the word2vec case, we pass an embedding matrix to the Embedding layer.

How should I use Tf-IDF with a Keras model? Please give me a usage example.

Thanks.

【Question Discussion】:

  • Why do you want to use TF/IDF values in an embedding layer?
  • Actually, my plan is to use two different kinds of inputs: 1) Tf-IDF (300) and 2) Word2vec embeddings (300), concatenate them into one vector, and pass it through dense layers. I haven't seen any example that shows this.
  • Could you clarify whether 1) you want to use the TF/IDF values as the input to an embedding layer, or 2) you want to concatenate the TF/IDF vector with the embedding vectors (the output of the embedding layer)? Thanks.
  • I want to concatenate the Tf-IDF vector with the embedding vectors. Sorry for the confusion.
  • Every word in the input sentence gets its own embedding vector, so the shape (sequence_length, embedding_size) is not compatible with the single TF/IDF vector for the sentence. How would you combine them?

Tags: python tensorflow keras scikit-learn tfidfvectorizer


【Solution 1】:

I can't think of a compelling reason to combine TF/IDF values with embedding vectors, but here is a possible solution: use the functional API, multiple Input layers, and the concatenate function.

To concatenate layer outputs, their shapes must match along every axis except the one being concatenated. One way to achieve this is to average the embeddings over the sequence dimension, which reduces them to the same (batch, 300) shape as the TF/IDF vector before concatenation.
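
As a quick sanity check on the shapes (a standalone numpy sketch, not part of the pipeline below; the numbers mirror the code that follows, with maxlen = 50 and 300 TF/IDF features):

import numpy as np

tfidf_vec = np.zeros((1, 300))        # one TF/IDF vector per sentence
embeddings = np.zeros((1, 50, 300))   # one 300-d embedding per token
mean_emb = embeddings.mean(axis=1)    # (1, 300) after averaging over tokens
combined = np.concatenate([tfidf_vec, mean_emb], axis=1)
print(combined.shape)                 # (1, 600)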

Setup and some sample data

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

from sklearn.datasets import fetch_20newsgroups

import numpy as np

import keras

from keras.models import Model
from keras.layers import Dense, Activation, concatenate, Embedding, Input

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# some sample training data
bunch = fetch_20newsgroups()
all_sentences = []

for document in bunch.data:
  sentences = document.split("\n")
  all_sentences.extend(sentences)

all_sentences = all_sentences[:1000]

X_train, X_test = train_test_split(all_sentences, test_size=0.1)
print(len(X_train), len(X_test))

# fit the TF-IDF vectorizer on the training sentences only
vectorizer = TfidfVectorizer(max_features=300)
vectorizer = vectorizer.fit(X_train)

df_train = vectorizer.transform(X_train)

# integer-encode the sentences for the Embedding layer
tokenizer = Tokenizer()
tokenizer.fit_on_texts(X_train)

maxlen = 50

# pad/truncate every sentence to a fixed length of 50 tokens
sequences_train = tokenizer.texts_to_sequences(X_train)
sequences_train = pad_sequences(sequences_train, maxlen=maxlen)
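
One caveat not shown in the original code: TfidfVectorizer.transform returns a scipy sparse matrix, while Keras' fit() generally expects dense numpy arrays, so you would typically densify the TF/IDF features before training. A minimal sketch:

# TfidfVectorizer produces a scipy.sparse matrix; convert it to a dense
# numpy array before passing it to model.fit()
df_train_dense = df_train.toarray()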

Model definition

vocab_size = len(tokenizer.word_index) + 1
embedding_size = 300

input_tfidf = Input(shape=(300,))
input_text = Input(shape=(maxlen,))

embedding = Embedding(vocab_size, embedding_size, input_length=maxlen)(input_text)

# this averaging method taken from:
# https://stackoverflow.com/a/54217709/1987598

# average over the sequence dimension: (batch, maxlen, 300) -> (batch, 300)
mean_embedding = keras.layers.Lambda(lambda x: keras.backend.mean(x, axis=1))(embedding)

concatenated = concatenate([input_tfidf, mean_embedding])

dense1 = Dense(256, activation='relu')(concatenated)
dense2 = Dense(32, activation='relu')(dense1)
dense3 = Dense(8, activation='sigmoid')(dense2)

model = Model(inputs=[input_tfidf, input_text], outputs=dense3)

model.summary()

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Model summary output

Model: "model_2"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_11 (InputLayer)           (None, 50)           0                                            
__________________________________________________________________________________________________
embedding_5 (Embedding)         (None, 50, 300)      633900      input_11[0][0]                   
__________________________________________________________________________________________________
input_10 (InputLayer)           (None, 300)          0                                            
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 300)          0           embedding_5[0][0]                
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 600)          0           input_10[0][0]                   
                                                                 lambda_1[0][0]                   
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 256)          153856      concatenate_4[0][0]              
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 32)           8224        dense_5[0][0]                    
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 8)            264         dense_6[0][0]                    
==================================================================================================
Total params: 796,244
Trainable params: 796,244
Non-trainable params: 0
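
For completeness, training would pass both inputs in the same order as in Model(inputs=[...]). A hedged sketch, assuming a hypothetical label array y_train of shape (n_samples, 8) that is not built in the code above:

# y_train is a hypothetical (n_samples, 8) multi-label target array;
# df_train is densified because TfidfVectorizer returns a sparse matrix.
model.fit(
    [df_train.toarray(), sequences_train],  # same order as Model(inputs=[...])
    y_train,
    batch_size=32,
    epochs=5,
)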

【Discussion】:

  • This is exactly what I needed. Thank you very much.