[Posted at]: 2019-11-23 13:41:15
[Problem description]:
I'm trying to concatenate an embedding layer with other features. The model raises no errors, but it also doesn't train at all. What's wrong with this model definition, and how can I debug it?
Note: the last column of my X (the features) holds word2ix indices (a single word per row). Note: the network works fine without the embedding feature/layer.
Originally posted on the pytorch forum.
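For context, word2ix here refers to a plain word-to-index mapping; a minimal sketch (the vocabulary is made up for illustration):

```python
# A minimal word2ix mapping of the kind the question refers to
# (this vocabulary is hypothetical).
vocab = ["the", "cat", "sat", "on", "mat"]
word2ix = {word: ix for ix, word in enumerate(vocab)}

print(word2ix["cat"])  # 1
```

The resulting integer index is what gets stored in the last column of X.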
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self, n_features, h_sizes, num_words, embed_dim, out_size, dropout=None):
        super().__init__()
        self.num_layers = len(h_sizes)  # hidden + input
        self.embedding = torch.nn.Embedding(num_words, embed_dim)
        self.hidden = torch.nn.ModuleList()
        self.bnorm = torch.nn.ModuleList()
        if dropout is not None:
            self.dropout = torch.nn.ModuleList()
        else:
            self.dropout = None
        for k in range(len(h_sizes)):
            if k == 0:
                self.hidden.append(torch.nn.Linear(n_features, h_sizes[0]))
                self.bnorm.append(torch.nn.BatchNorm1d(h_sizes[0]))
                if self.dropout is not None:
                    self.dropout.append(torch.nn.Dropout(p=dropout))
            else:
                if k == 1:
                    input_dim = h_sizes[0] + embed_dim
                else:
                    input_dim = h_sizes[k - 1]
                self.hidden.append(torch.nn.Linear(input_dim, h_sizes[k]))
                self.bnorm.append(torch.nn.BatchNorm1d(h_sizes[k]))
                if self.dropout is not None:
                    self.dropout.append(torch.nn.Dropout(p=dropout))
        # Output layer
        self.out = torch.nn.Linear(h_sizes[-1], out_size)

    def forward(self, inputs):
        # Feedforward
        for l in range(self.num_layers):
            if l == 0:
                x = self.hidden[l](inputs[:, :-1])
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)
                embeds = self.embedding(inputs[:, -1])  # .view((1, -1))
                x = torch.cat((embeds, x), dim=1)
            else:
                x = self.hidden[l](x)
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)
            x = F.relu(x)
        output = self.out(x)
        return output
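For reference, the core operation the question is after, looking up an embedding for the index column and concatenating it with the remaining dense features, can be sketched in isolation (all sizes below are made up):

```python
import torch

num_words, embed_dim, n_dense, batch = 10, 4, 3, 5  # hypothetical sizes
embedding = torch.nn.Embedding(num_words, embed_dim)

dense = torch.randn(batch, n_dense)          # the "other features"
idx = torch.randint(0, num_words, (batch,))  # word2ix column; must be an integer tensor
embeds = embedding(idx)                      # shape (batch, embed_dim)
x = torch.cat((embeds, dense), dim=1)        # shape (batch, embed_dim + n_dense)
print(x.shape)  # torch.Size([5, 7])
```

One common pitfall here: if the index column is stored inside a float feature matrix, `inputs[:, -1]` is a FloatTensor and must be cast with `.long()` before the embedding lookup.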
[Discussion]:
-
Are the inputs integers or words (i.e. strings)? You could print the value of the last feature and check the embedding (maybe self.embedding always returns the placeholder value 0 in your case?). And what do you mean by "no training": does the loss not change at all, or does it diverge or hang? -
Embedding takes integer indices. I apply word2ix and append the result as the last column of the input. I'll try to update the question with a small example.
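One concrete way to approach the "how to debug" part for the embedding specifically: compare the embedding weights before and after a single optimizer step; if they never change, no gradient is reaching the layer. A minimal sketch (sizes and learning rate are arbitrary):

```python
import torch

# Check whether gradients actually reach the embedding:
# take one SGD step on a dummy loss and see if the weights move.
embedding = torch.nn.Embedding(10, 4)
opt = torch.optim.SGD(embedding.parameters(), lr=0.1)

before = embedding.weight.detach().clone()
idx = torch.tensor([1, 2, 3])        # Embedding expects integer (Long) indices
loss = embedding(idx).sum()          # dummy loss touching rows 1, 2, 3
loss.backward()
opt.step()

changed = not torch.equal(before, embedding.weight.detach())
print(changed)  # True if gradients reached the embedding
```

The same before/after comparison can be applied to any layer of the real model to narrow down where training stalls.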
-
Please add all the relevant parts (e.g. the word2ix function) and how they are called. Yes, having an MCVE would be great.
Tags: deep-learning concatenation pytorch embedding