Robust Visual Tracking Based on Dynamic Group Sparsity
Introduction
Based on the observations that a target can be reconstructed from a small number of templates, and that only a few discriminative features are needed to separate the target from the background, we propose a novel online tracking algorithm with a two-stage sparse optimization that jointly minimizes the target reconstruction error and maximizes the discriminative power. Because the target templates and the discriminative features usually have temporal and spatial relationships, dynamic group sparsity (DGS) is exploited in our algorithm. The proposed method is compared with three state-of-the-art trackers on five challenging public sequences that exhibit appearance changes, heavy occlusions, and pose variations, and it outperforms these methods.
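For readers who want the flavor of the DGS prior before downloading the MATLAB package, the short Python sketch below shows one common way a dynamic-group-sparse recovery step can be approximated: iterative hard thresholding in which each coefficient's pruning score also credits the energy of its neighbors, so the surviving coefficients tend to form clusters. This is an illustration only, not the released implementation; the function names, the 1-D neighborhood, the weight tau, and the step-size rule are assumptions made for the sketch.

import numpy as np

def dgs_hard_threshold(x, k, tau=0.5):
    # Score each coefficient by its own energy plus a weighted share of its
    # neighbors' energy (1-D neighborhood), then keep the k best entries.
    # The neighbor term is what favors clustered (grouped) supports.
    score = x ** 2
    score[:-1] += tau * x[1:] ** 2
    score[1:] += tau * x[:-1] ** 2
    keep = np.argsort(score)[-k:]
    pruned = np.zeros_like(x)
    pruned[keep] = x[keep]
    return pruned

def dgs_recover(A, y, k, n_iter=200):
    # Iterative hard thresholding with the grouped pruning rule above:
    # approximately solves  min ||y - A x||^2  s.t. x is k-sparse with a
    # clustered support. The step size is taken from the spectral norm of A
    # so the gradient step stays non-expansive.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = dgs_hard_threshold(x + step * (A.T @ (y - A @ x)), k)
    return x

# Toy example: recover a signal whose nonzeros form one contiguous group.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[40:46] = rng.standard_normal(6)
y = A @ x_true
x_hat = dgs_recover(A, y, k=6)
print("recovered support:", np.flatnonzero(x_hat))

In the tracker, sparse solves of this kind roughly correspond to the two stages above: one reconstructs a candidate from the target templates, the other selects the small set of discriminative features; both supports tend to be grouped in time and space, which is why the DGS prior fits.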
References
Baiyang Liu, Junzhou Huang, Casimir Kulikowski, Lin Yang, "Robust Visual Tracking Using Local Sparse Appearance Model and K-Selection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 12, pp. 2968-2981, December 2013.
Baiyang Liu, Lin Yang, Junzhou Huang, Peter Meer, Leiguang Gong, Casimir Kulikowski, "Robust and Fast Collaborative Tracking with Two Stage Sparse Optimization", The 11th European Conference on Computer Vision (ECCV), Crete, Greece, September 2010. [PDF]
Junzhou Huang, Xiaolei Huang, Dimitris Metaxas, "Learning with Dynamic Group Sparsity", The 12th International Conference on Computer Vision (ICCV), Kyoto, Japan, October 2009. [SLIDES] [CODE]
Notice: The code was tested on Windows with MATLAB 2008. If you have any suggestions or find a bug, please contact us via email at jzhuang@uta.edu
Figure 1. Tracking results on a car sequence recorded in an open-road environment. The vehicle is driven beneath a bridge, which causes large illumination changes. Results from our algorithm, MIL, L1, and IVT are given in the first, second, third, and fourth rows, respectively.
Figure 2. Tracking results on a moving-face sequence with large pose variation, scaling, and illumination changes. Results from our algorithm, MIL, L1, and IVT are given in the first, second, third, and fourth rows, respectively.
Figure 3. Tracking results on a plush toy moving under different pose and illumination conditions. Results from our algorithm, MIL, L1, and IVT are given in the first, second, third, and fourth rows, respectively.
Figure 4. Tracking results on a face sequence with many pose variations and partial or full occlusions. Results from our algorithm, MIL, L1, and IVT are given in the first, second, third, and fourth rows, respectively.
Related Sources
Incremental Learning for Visual Tracking
Tracking with Multiple Instance Learning
L1 Tracker