UCL  IRIS
Institutional Research Information Service
Please report any queries concerning the funding data grouped in the sections named "Externally Awarded" or "Internally Disbursed" (shown on the profile page) to your Research Finance Administrator. You can find your Research Finance Administrator at https://www.ucl.ac.uk/finance/research/rs-contacts.php by entering your department.
Please report any queries concerning the student data shown on the profile page to:

Email: portico-services@ucl.ac.uk

Help Desk: http://www.ucl.ac.uk/ras/portico/helpdesk
Publication Detail
Tracking by animation: Unsupervised learning of multi-object attentive trackers
  • Publication Type:
    Conference
  • Authors:
    He Z, Li J, Liu D, He H, Barber D
  • Publisher:
    IEEE
  • Publication date:
    09/01/2020
  • Pagination:
    1318–1327
  • Published proceedings:
    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • Volume:
    2019-June
  • ISBN-13:
    9781728132938
  • Status:
    Published
  • Name of conference:
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • Conference place:
    Long Beach, CA, USA
  • Conference start date:
    15/06/2019
  • Conference finish date:
    20/06/2019
  • Print ISSN:
    1063-6919
Abstract
© 2019 IEEE. Online Multi-Object Tracking (MOT) from videos is a challenging computer vision task which has been extensively studied for decades. Most of the existing MOT algorithms are based on the Tracking-by-Detection (TBD) paradigm combined with popular machine learning approaches which largely reduce the human effort to tune algorithm parameters. However, the commonly used supervised learning approaches require the labeled data (e.g., bounding boxes), which is expensive for videos. Also, the TBD framework is usually suboptimal since it is not end-to-end, i.e., it considers the task as detection and tracking, but not jointly. To achieve both label-free and end-to-end learning of MOT, we propose a Tracking-by-Animation framework, where a differentiable neural model first tracks objects from input frames and then animates these objects into reconstructed frames. Learning is then driven by the reconstruction error through backpropagation. We further propose a Reprioritized Attentive Tracking to improve the robustness of data association. Experiments conducted on both synthetic and real video datasets show the potential of the proposed model. Our project page is publicly available at: https://github.com/zhen-he/tracking-by-animation.
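The core idea in the abstract — infer object states from a frame without labels, "animate" those states back into a reconstructed frame, and use the reconstruction error as the training signal — can be illustrated with a deliberately simplified toy sketch. The code below is not the paper's differentiable model: `make_frame`, `track`, and the exhaustive window search are hypothetical stand-ins for the neural renderer and the attentive tracker, on a one-object synthetic frame.

```python
import numpy as np

def make_frame(pos, size=16, obj=3):
    """Renderer / 'animation' step: draw a bright square at `pos` on a blank canvas."""
    frame = np.zeros((size, size))
    r, c = pos
    frame[r:r + obj, c:c + obj] = 1.0
    return frame

def track(frame, obj=3):
    """A crude attentive tracker: attend to the window with the largest pixel mass."""
    size = frame.shape[0]
    best_mass, best_pos = -1.0, (0, 0)
    for r in range(size - obj + 1):
        for c in range(size - obj + 1):
            mass = frame[r:r + obj, c:c + obj].sum()
            if mass > best_mass:
                best_mass, best_pos = mass, (r, c)
    return best_pos

# Label-free loop: track an object state, animate it back, score the reconstruction.
frame = make_frame((5, 7))                    # observed input frame
state = track(frame)                          # inferred state, no bounding-box labels used
recon = make_frame(state)                     # reconstructed frame from the state
loss = float(((frame - recon) ** 2).mean())   # reconstruction error drives learning
```

In the actual framework both the tracker and the renderer are differentiable networks, so this reconstruction loss can be backpropagated end-to-end; here the loop only demonstrates that a perfect state estimate yields zero reconstruction error.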
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Author
Dept of Computer Science