I’m happy to announce that our newest paper, “Tracking-by-Trackers with a Distilled and Reinforced Model”, has been accepted for publication at the Asian Conference on Computer Vision (ACCV) 2020, which will be held next December. In this work, we propose a new tracking-by-trackers framework that unifies visual tracking goals which have previously been pursued independently.

Here is the abstract:

Visual object tracking has generally been tackled by reasoning independently about fast processing algorithms, accurate online adaptation methods, and the fusion of trackers. In this paper, we unify these goals by proposing a novel tracking methodology that takes advantage of other visual trackers, both offline and online. A compact student model is trained via the marriage of knowledge distillation and reinforcement learning. The first allows the transfer and compression of the tracking knowledge of other trackers. The second enables the learning of evaluation measures which are then exploited online. After learning, the student can ultimately be used to build (i) a very fast single-shot tracker, (ii) a tracker with a simple and effective online adaptation mechanism, and (iii) a tracker that performs fusion of other trackers. Extensive validation shows that the proposed algorithms compete with real-time state-of-the-art trackers.
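To give an intuition for the distillation side of the approach, here is a minimal, illustrative sketch of how a student tracker could be supervised by the bounding-box outputs of teacher trackers. All names and the simple L1 objective below are assumptions for illustration only, not the loss or the architecture used in the paper:

```python
import numpy as np

def distillation_loss(student_boxes, teacher_boxes):
    """Mean L1 distance between student and teacher box predictions.

    Boxes are (N, 4) arrays of [x, y, w, h]. This is a generic
    distillation objective for illustration, not the paper's exact loss.
    """
    return np.abs(student_boxes - teacher_boxes).mean()

# Toy example: the student imitates a fused signal from two teacher trackers.
teacher_a = np.array([[10.0, 12.0, 50.0, 40.0]])
teacher_b = np.array([[12.0, 14.0, 48.0, 42.0]])
target = (teacher_a + teacher_b) / 2            # fused teacher boxes
student = np.array([[11.5, 13.0, 49.0, 41.0]])  # student prediction
loss = distillation_loss(student, target)       # small penalty for the x offset
```

Minimizing such a loss over many frames is what would let a compact student compress the knowledge of several teachers; the reinforcement-learning component of the paper additionally learns evaluation measures used at inference time.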

Below you can find some related resources (preprint paper, code, and spotlight presentation).

[arXiv preprint]