[Question Title]: Input layer type "ImageData" in Windows Caffe C++ giving blank output
[Posted]: 2017-03-27 14:41:49
[Question]:

I am working on an image segmentation problem in Caffe on Windows, using C++. I train the network with the "ImageData" input type, but at test time I get a blank output. Can anyone help me analyze this problem?

**********  solver.prototxt  ***************

test_initialization: false
base_lr: 0.01
display: 51
max_iter: 50000
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0001
stepsize: 4069
snapshot: 10000
snapshot_prefix: "snapshot"
solver_mode: GPU
net: "train.prototxt"
solver_type: SGD

File_Triangle.txt and File_label_triangle.txt contain the absolute path of each image plus a dummy label, e.g. D:\00000032.png 0

****************  train.prototxt   ********************

layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "xx"
  include {
    phase: TRAIN
  }
  image_data_param {
    source: "File_triangle.txt"
    batch_size: 1
    new_height: 32
    new_width: 32
    is_color: false
  }
}

layer {
  name: "label"
  type: "ImageData"
  top: "label"
  top: "yy"
  image_data_param {
    source: "File_label_triangle.txt"
    batch_size: 1
    new_height: 32
    new_width: 32
    is_color: false
  }
  include {
    phase: TRAIN
  }
}


layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 0.10000000149
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.0010000000475
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 0.10000000149
  }
  convolution_param {
    num_output: 1024
    pad: 0
    kernel_size: 16
    stride: 16
    weight_filler {
      type: "gaussian"
      std: 0.0010000000475
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "upsample"
  type: "Deconvolution"
  bottom: "conv2"
  top: "upsample"
  param {
    lr_mult: 1.0
  }
  convolution_param {
    num_output: 1
    pad: 0
    kernel_size: 16
    stride: 16
    bias_filler {
      type: "constant"
      value: 128.0
    }
  }
}
layer {
  name: "lossL1"
  type: "SmoothL1Loss"
  bottom: "upsample"
  bottom: "label"
  top: "lossL1"
  loss_weight: 1.0
}

Code snippet used for training in C++:

Caffe::set_mode(Caffe::GPU);
// Note: the solver builds its own training net from the "net" field of
// solver.prototxt, so constructing a separate Net here is not required.
caffe::SolverParameter solver_param;
caffe::ReadSolverParamsFromTextFileOrDie("solver.prototxt", &solver_param);
boost::shared_ptr<caffe::Solver<float> > solver(
    caffe::SolverRegistry<float>::CreateSolver(solver_param));
solver->Solve();

After training I use the resulting .caffemodel to test the network.

********************  test.prototxt  **********************

layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 1 dim: 32 dim: 32 } }
}

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 0.10000000149
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.0010000000475
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 0.10000000149
  }
  convolution_param {
    num_output: 1024
    pad: 0
    kernel_size: 16
    stride: 16
    weight_filler {
      type: "gaussian"
      std: 0.0010000000475
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "upsample"
  type: "Deconvolution"
  bottom: "conv2"
  top: "upsample"
  param {
    lr_mult: 1.0
  }
  convolution_param {
    num_output: 1
    pad: 0
    kernel_size: 16
    stride: 16
    bias_filler {
      type: "constant"
      value: 128.0
    }
  }
}

Code snippet used for testing:

Caffe::set_mode(Caffe::GPU);

boost::shared_ptr<caffe::Net<float> > net_;
net_.reset(new Net<float>("test.prototxt", caffe::TEST));

net_->CopyTrainedLayersFrom("snapshot_iter_50000.caffemodel");

// Read as grayscale: the network expects a single-channel 32x32 input
// (imread defaults to 3-channel BGR, which would not match the input blob).
cv::Mat matInput = cv::imread("input image path", cv::IMREAD_GRAYSCALE);

matInput.convertTo(matInput, CV_32F);
int height = matInput.rows;
int width = matInput.cols;

Blob<float>* input_layer = net_->input_blobs()[0];
float* input_data = input_layer->mutable_cpu_data();
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++)
    {
        input_data[i*width + j] = matInput.at<float>(i, j);
    }
}

net_->Forward();

const shared_ptr<Blob<float> >& concat_blob = net_->blob_by_name("upsample");
const float* concat_out = concat_blob->cpu_data();

cv::Mat matout(height, width, CV_8UC1);
for (size_t i = 0; i < height*width; i++)
{
    matout.data[i] = concat_out[i];
}

cv::imwrite(output_str, matout);

[Discussion]:

    Tags: c++ windows deep-learning caffe


    [Solution 1]:

    I found the problem. The network was producing the correct output; the error was in how I dumped it. The network's output (at the upsample layer) is floating point and not normalized. The modification below gives the correct result.

    const shared_ptr<Blob<float> >& concat_blob = net_->blob_by_name("upsample");
    const float* concat_out = concat_blob->cpu_data();
    
    cv::Mat matout(height, width, CV_32FC1);
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
             matout.at<float>(i, j) = (float)(concat_out[i*width + j]);
        }
    }
    cv::normalize(matout, matout, 0, 255, cv::NORM_MINMAX);
    cv::imwrite("output image path", matout);
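    For illustration, the min-max mapping that cv::normalize performs here can be sketched in plain C++ (`minmax_normalize` is a hypothetical helper, not an OpenCV function):

```cpp
#include <algorithm>
#include <vector>

// Linearly rescale values so that min(v) maps to lo and max(v) maps to hi,
// which is what cv::normalize(..., lo, hi, NORM_MINMAX) does.
std::vector<float> minmax_normalize(std::vector<float> v, float lo, float hi) {
    auto mm = std::minmax_element(v.begin(), v.end());
    float vmin = *mm.first, vmax = *mm.second;
    float range = vmax - vmin;
    for (float& x : v)
        x = (range > 0.0f) ? lo + (x - vmin) * (hi - lo) / range : lo;
    return v;
}
```

    Without this step, the raw float activations are truncated when cast to 8-bit, which is why the dumped image looked blank.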
    

    [Comments]:
