[Question title]: Find SIFT feature matches between two videos
[Posted]: 2014-03-20 05:36:34
[Question description]:

I have extracted SIFT features from two videos in order to match them. I need to compare each feature of the second video against the features stored, frame by frame, for the first video. I am having trouble setting up the code so that, when I find a correspondence, I can recover the frame that the feature belongs to. How can I do that? Can anyone give me a code example?

Here is my code:

obj = VideoReader('video2.avi');
lastFrame = read(obj, inf);
numFrames = obj.NumberOfFrames;

%frame extraction and SIFT
for k = 1 : 3 % numFrames / 5
    disp(['Processing frame #', num2str(k)]);
    this_frame = read(obj, k * 5); % read only one frame in every 5 to speed things up
    this_frame = imresize(this_frame, 0.5); % shrink it for efficiency!
    I = single(rgb2gray(this_frame)) ;
    [f,d] = vl_sift(I);  % feature extraction

    features{k} = f;     % store the features and their descriptors in cell arrays
    descriptors{k} = d;
end
save('feature_input', 'features');
save('descrittori_input', 'descriptors');

%%% an example of how to retrieve the data...
pippo = load('feature_input');
newfeat = pippo.features;
pippo = load('descrittori_input');
newdesc = pippo.descriptors;

for k = 1 : 3
    disp(['The features of frame ', num2str(k), ' are: ']);
    f = cell2mat( newfeat(k) );
    f(:, 1:10) % show only a small piece... the positions of the first 10 features
end


obj2 = VideoReader('video2u.avi');
lastFrame = read(obj2, inf);
numFrames = obj2.NumberOfFrames;



%frame extraction and SIFT, video 2
for k2 = 1 : 3 % numFrames / 5
    disp(['Processing frame #', num2str(k2)]);
    this_frame2 = read(obj2, k2 * 5); % read only one frame in every 5 to speed things up
    this_frame2 = imresize(this_frame2, 0.5); % shrink it for efficiency!
    K = single(rgb2gray(this_frame2)) ;
    [f2,d2] = vl_sift(K);  % feature extraction

    features2{k2} = f2;     % store the features and their descriptors in cell arrays
    descriptors2{k2} = d2;
end




save('feature2_input', 'features2');
save('descrittori2_input', 'descriptors2');

%%% an example of how to retrieve the data...
pippo2 = load('feature2_input');
newfeat2 = pippo2.features2;
pippo2 = load('descrittori2_input');
newdesc2 = pippo2.descriptors2;

for k2 = 1 : 3
    disp(['The features of frame ', num2str(k2), ' are: ']);
    f2 = cell2mat( newfeat2(k2) );
    f2(:, 1:10) % show only a small piece... the positions of the first 10 features


end


[matches, scores] = vl_ubcmatch(d, d2, 1.5) ;

% sift points plot

    subplot(1,2,1);
    imshow(uint8(I));
    hold on;
    plot(f(1,matches(1,:)),f(2,matches(1,:)),'b*');


    subplot(1,2,2);
    imshow(uint8(K));
    hold on;
    plot(f2(1,matches(2,:)),f2(2,matches(2,:)),'r*');


    figure;

     %-------------  

 % RANSAC

X1 = f(1:2,matches(1,:)) ; X1(3,:) = 1 ;
X2 = f2(1:2,matches(2,:)) ; X2(3,:) = 1 ;


numMatches = size(matches,2) ;

for t = 1:100
  % estimate homography
  subset = vl_colsubset(1:numMatches, 4) ;
  A = [] ;
  for i = subset
    A = cat(1, A, kron(X1(:,i)', vl_hat(X2(:,i)))) ;
  end


  [U,S,V] = svd(A) ;


H{t} = reshape(V(:,9),3,3) ;

  % score homography
  X2_ = H{t} * X1 ;
  du = X2_(1,:)./X2_(3,:) - X2(1,:)./X2(3,:) ;
  dv = X2_(2,:)./X2_(3,:) - X2(2,:)./X2(3,:) ;
  ok{t} = (du.*du + dv.*dv) < 6*6 ;
  score(t) = sum(ok{t}) ;
end



[score, best] = max(score) ;
H = H{best};
ok = ok{best};


% sift feature matching 

   dh1 = max(size(K,1)-size(I,1),0) ;
   dh2 = max(size(I,1)-size(K,1),0) ;


subplot(2,1,1) ;
imagesc([padarray(I,dh1,'post') padarray(K,dh2,'post')]) ;
 colormap (gray);
o = size(I,2) ;
line([f(1,matches(1,:));f2(1,matches(2,:))+o], ...
     [f(2,matches(1,:));f2(2,matches(2,:))]) ;


axis image off ;

subplot(2,1,2) ;
imagesc([padarray(I,dh1,'post') padarray(K,dh2,'post')]) ;
 colormap (gray);
o = size(I,2) ;
line([f(1,matches(1,ok));f2(1,matches(2,ok))+o], ...
     [f(2,matches(1,ok));f2(2,matches(2,ok))]) ;
title(sprintf('%d (%.2f%%) inlier matches out of %d', ...
              sum(ok), ...
              100*sum(ok)/numMatches, ...
              numMatches)) ;
axis image off ;

drawnow ;


[Comments]:

    Tags: arrays matlab video matching feature-detection


    [Solution 1]:

    You are doing the matching with [matches, scores] = vl_ubcmatch(d, d2, 1.5) ;. But d only contains the descriptors of the most recent frame. You should do this instead:

    for nFrames=1:3
       [matches{nFrames}, scores{nFrames}] = vl_ubcmatch(descriptors{nFrames}, descriptors2{nFrames}, 1.5);
    end
    

    From here you should be able to obtain the matches between corresponding frames.
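
    Since the matches are now stored per frame, the cell index itself answers the original question of which frame a correspondence belongs to. A minimal sketch, assuming the features/features2 and descriptors/descriptors2 cell arrays built in the question's loops are in the workspace:

    ```matlab
    % Per-frame matching: keep matches and scores indexed by frame.
    matches = cell(1, 3);
    scores  = cell(1, 3);
    for k = 1:3
        [matches{k}, scores{k}] = vl_ubcmatch(descriptors{k}, descriptors2{k}, 1.5);
    end

    % The index k tells you where each correspondence lives: frame k*5 of
    % the original videos, given the read(obj, k*5) subsampling above.
    for k = 1:3
        f1  = features{k};
        f2k = features2{k};
        xy1 = f1(1:2,  matches{k}(1,:));   % matched keypoint positions, video 1
        xy2 = f2k(1:2, matches{k}(2,:));   % matched keypoint positions, video 2
        fprintf('Frame %d: %d matches\n', k*5, size(matches{k}, 2));
    end
    ```

    The RANSAC/plotting code would then go inside the same loop, using matches{k} in place of matches.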

    [Discussion]:

    • It works! But now I have a problem with the homography... "Assigning cell contents to non-cell array object." Error in tesi_frame (line 114): H{t} = reshape(V(:,9),3,3) ; Here is the new code: X1 = f(1:2,matches{nFrames}(1,:)) ; X1(3,:) = 1 ; X2 = f2(1:2,matches{nFrames}(2,:)) ; X2(3,:) = 1 ; numMatches = size(matches,2) ; for t = 1:100 % estimate homography subset = vl_colsubset(1:numMatches, 4) ; A = [] ; for i = subset A = cat(1, A, kron(X1(:,i)', vl_hat(X2(:,i)))) ; end [U,S,V] = svd(A) ; H{t} = reshape(V(:,9),3,3) ;
    • @Sere_na That is hard to say. Post the line where you get the error. This is actually a very common error, so you could also search for the solution yourself.