UCL  IRIS
Institutional Research Information Service
Publication Detail
Challenges and Perspectives in Neuromorphic-based Visual IoT Systems and Networks
  • Publication Type:
    Conference
  • Authors:
    Martini M, Khan N, Bi Y, Andreopoulos Y, Saki H, Shikh-Bahaei M
  • Publication date:
    14/05/2020
  • Pagination:
    8539–8543
  • Published proceedings:
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
  • Volume:
    2020-May
  • ISBN-13:
    9781509066315
  • Status:
    Published
  • Name of conference:
    ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • Conference start date:
    04/05/2020
  • Conference finish date:
    08/05/2020
  • Print ISSN:
    1520-6149
Abstract
© 2020 IEEE. Neuromorphic sensors, also known as dynamic vision sensors (DVS) or silicon retinas, do not capture full images (frames) at a fixed rate; instead, following the principles of biological vision and perception in mammals, they asynchronously capture spikes indicating changes of brightness in the scene. DVS sensing and processing produce a data representation in which the scene is captured with very high time resolution using a limited number of bits (an inherent data compression is performed at acquisition time). Such a representation can be used locally to derive actionable responses, and selected parts can be transmitted and processed at another network location. Owing to these features, such sensors are an excellent choice as the visual sensing technology for the next-generation Internet of Things, e.g. in surveillance, drone technology, and robotics. It is becoming evident that, in this framework, acquiring, processing, and transmitting frame-based video is inefficient in terms of energy consumption and reaction times, particularly in some scenarios. We therefore explore the feasibility of advanced machine-to-machine (M2M) communication systems that directly capture, compress, and transmit spike-based visual information to cloud computing services in order to produce content classification or retrieval results with extremely low power and low latency.
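To illustrate the compactness the abstract alludes to, here is a minimal sketch (not the authors' implementation) of a spike/event record and its serialisation. The field layout and 9-byte packing are assumptions for illustration; real DVS pipelines typically use an address-event representation with sensor-specific encodings.

```python
import struct
from dataclasses import dataclass

# Hypothetical minimal event record: a DVS-style sensor emits
# (x, y, timestamp, polarity) tuples instead of full frames.
@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 brightness increase, -1 decrease

# Little-endian, no padding: 2 + 2 + 4 + 1 = 9 bytes per event.
FMT = "<HHIb"

def pack_events(events):
    """Serialise a list of events into a compact byte stream."""
    return b"".join(struct.pack(FMT, e.x, e.y, e.t_us, e.polarity)
                    for e in events)

def unpack_events(blob):
    """Recover the event list from a packed byte stream."""
    size = struct.calcsize(FMT)
    return [Event(*struct.unpack(FMT, blob[i:i + size]))
            for i in range(0, len(blob), size)]
```

Under this assumed layout, a sparse scene producing 1,000 events over a frame interval occupies 9,000 bytes, versus 307,200 bytes for a single 640x480 8-bit greyscale frame, which is the kind of inherent compression the abstract describes.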
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Author
Dept of Electronic & Electrical Eng