【Title】: Dimension out of range (expected to be in range of [-4, 3], but got 64)
【Posted】: 2021-03-28 03:24:41
【Description】:

I am new to PyTorch and have been working on training an MLP model on the MNIST dataset. Basically, I feed the images and labels to the model as input and train it on the dataset. I use CrossEntropyLoss() as the loss function, but every time I run the model I get a dimension error.

IndexError                                Traceback (most recent call last)
<ipython-input-37-04f8cfc1d3b6> in <module>()
     47 
     48         # Forward
---> 49         outputs = model(images)
     50 

5 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/flatten.py in forward(self, input)
     38 
     39     def forward(self, input: Tensor) -> Tensor:
---> 40         return input.flatten(self.start_dim, self.end_dim)
     41 
     42     def extra_repr(self) -> str:

IndexError: Dimension out of range (expected to be in range of [-4, 3], but got 64)
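For reference, the error can be reproduced with a minimal snippet (assuming a standard `(64, 1, 28, 28)` MNIST batch): `flatten`'s arguments are dimension indices, so asking a 4-D tensor to flatten from dimension 64 raises exactly this IndexError.

```python
import torch

x = torch.randn(64, 1, 28, 28)  # a typical MNIST batch: (N, C, H, W)

# The argument to flatten() is a dimension index, not the batch size.
print(x.flatten(1).shape)  # torch.Size([64, 784])

try:
    x.flatten(64)  # there is no dimension 64 in a 4-D tensor
except IndexError as e:
    print(e)  # Dimension out of range (expected to be in range of [-4, 3], but got 64)
```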

Here is the MLP class I created:

class MLP(nn.Module):
    def __init__(self, device, input_size = 1*28*28, output_size = 10):
        super().__init__()
        
        self.seq = nn.Sequential(nn.Flatten(BATCH=64, input_size),
                                 nn.Linear(input_size, 32),
                                 nn.ReLU(),
                                 nn.Linear(32, output_size))
        
        self.to(device)
        
    def forward(self, x):
        return self.seq(x)

The rest of the training code is:

from tqdm.notebook import tqdm
from datetime import datetime

from torch.utils.tensorboard import SummaryWriter
import torch.optim as optim

exp_name = "MLP version 1"

# log_name = "logs/" + exp_name + f" {datetime.now()}"
# print("Tensorboard logs will be written to:", log_name)
# writer = SummaryWriter(log_name)

criterion = nn.CrossEntropyLoss()
model = MLP(device)

optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)
num_epochs = 10

for epoch in tqdm(range(num_epochs)):
    epoch_train_loss = 0.0
    epoch_accuracy = 0.0
    
    for data in train_loader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        images = images.permute(0, 3, 1, 2)
        
        
        optimizer.zero_grad()
        print("hello")
        
        outputs = model(images)
    
        loss = criterion(outputs, labels)
        epoch_train_loss += loss.item()
        
        loss.backward()
        optimizer.step()
        
        accuracy = compute_accuracy(outputs, labels)
        epoch_accuracy += accuracy
    writer.add_scalar("Loss/training", epoch_train_loss, epoch)
    writer.add_scalar("Accuracy/training", epoch_accuracy / len(train_loader), epoch)
    
    print('epoch: %d loss: %.3f' % (epoch + 1, epoch_train_loss / len(train_loader)))
    print('epoch: %d accuracy: %.3f' % (epoch + 1, epoch_accuracy / len(train_loader)))
    
    epoch_accuracy = 0.0
    # The code below computes the validation results
    for data in val_loader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        images = images.permute(0, 3, 1, 2)
        
        model.eval()
        with torch.no_grad():
            outputs = model(images)
            
        accuracy = compute_accuracy(outputs, labels)
        epoch_accuracy += accuracy
    writer.add_scalar("Accuracy/validation", epoch_accuracy / len(val_loader), epoch)
print("finished training")

Any help would be greatly appreciated. Thank you.

【Comments】:

    Tags: python deep-learning pytorch mlp


    【Solution 1】:

    Use nn.Flatten() instead of nn.Flatten(BATCH=64, input_size). The parameters of nn.Flatten are start_dim and end_dim — dimension indices that default to 1 and -1 — not the batch size, so passing 64 asks PyTorch to flatten starting at dimension 64 of a 4-D tensor.

    https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html
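    A corrected sketch of the model from the question (the `device` argument is omitted here for brevity; `nn.Flatten()` with its defaults flattens everything except the batch dimension):

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, input_size=1 * 28 * 28, output_size=10):
        super().__init__()
        # nn.Flatten() defaults to start_dim=1, end_dim=-1:
        # it turns (N, 1, 28, 28) into (N, 784), leaving the batch dim alone.
        self.seq = nn.Sequential(nn.Flatten(),
                                 nn.Linear(input_size, 32),
                                 nn.ReLU(),
                                 nn.Linear(32, output_size))

    def forward(self, x):
        return self.seq(x)

model = MLP()
out = model(torch.randn(64, 1, 28, 28))
print(out.shape)  # torch.Size([64, 10])
```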

    【Discussion】:
