【Posted】: 2013-06-12 15:15:54
【Question】:
In Dalal and Triggs' paper on HOG, multi-scale detection appears to work by scanning an image pyramid. But I cannot find which part of modules/objdetect/src/hog.cpp performs the pyramid scan/loop. Am I misunderstanding the algorithm, or am I reading the wrong source file?
【Question discussion】:
Tags: c++ opencv image-processing object-detection
If you look at the source of this function:
void HOGCache::init(const HOGDescriptor* _descriptor,
const Mat& _img, Size _paddingTL, Size _paddingBR,
bool _useCache, Size _cacheStride)
you will see the following comments:
// Initialize 2 lookup tables, pixData & blockData.
// Here is why:
//
// The detection algorithm runs in 4 nested loops (at each pyramid layer):
// loop over the windows within the input image
// loop over the blocks within each window
// loop over the cells within each block
// loop over the pixels in each cell
//
// As each of the loops runs over a 2-dimensional array,
// we could get 8(!) nested loops in total, which is very-very slow.
//
// To speed the things up, we do the following:
// 1. loop over windows is unrolled in the HOGDescriptor::{compute|detect} methods;
// inside we compute the current search window using getWindow() method.
// Yes, it involves some overhead (function call + couple of divisions),
// but it's tiny in fact.
// 2. loop over the blocks is also unrolled. Inside we use pre-computed blockData[j]
// to set up gradient and histogram pointers.
// 3. loops over cells and pixels in each cell are merged
// (since there is no overlap between cells, each pixel in the block is processed once)
// and also unrolled. Inside we use PixData[k] to access the gradient values and
// update the histogram
//
As the comments explain, the loops are unrolled for optimization purposes, which is probably why they are hard to spot when skimming the source.
【Discussion】: