Institutional Research Information Service
Please report any queries concerning the funding data grouped in the sections named "Externally Awarded" or "Internally Disbursed" (shown on the profile page) to your Research Finance Administrator. You can find your Research Finance Administrator at https://www.ucl.ac.uk/finance/research/rs-contacts.php by entering your department.
Please report any queries concerning the student data shown on the profile page to:

Email: portico-services@ucl.ac.uk

Help Desk: http://www.ucl.ac.uk/ras/portico/helpdesk
Publication Detail
Self-Attentive Hawkes Process
  • Publication Type:
  • Authors:
    Zhang Q, Lipani A, Kirnap O, Yilmaz E
  • Publication date:
  • Pagination:
    11117–11127
  • Published proceedings:
    37th International Conference on Machine Learning, ICML 2020
  • Volume:
  • ISBN-13:
  • Status:
  • Name of conference:
    37th International Conference on Machine Learning, ICML 2020
Capturing occurrence dynamics is crucial for predicting which type of event will happen next, and when. A common method for doing so is the Hawkes process. To enhance their capacity, recurrent neural networks (RNNs) have been incorporated into Hawkes processes, owing to RNNs' success in processing sequential data such as language. Recent evidence suggests that self-attention is more competent than RNNs at handling language. However, the effectiveness of self-attention in the context of Hawkes processes has not yet been studied. This work fills that gap by designing a Self-Attentive Hawkes Process (SAHP). SAHP employs self-attention to summarise the influence of historical events and to compute the probability of the next event. One deficit of conventional self-attention, when applied to event sequences, is that its positional encoding considers only the order of a sequence, ignoring the time intervals between events. To overcome this deficit, we modify the encoding by translating time intervals into phase shifts of sinusoidal functions. Experiments on goodness-of-fit and prediction tasks show the improved capability of SAHP. Furthermore, SAHP is more interpretable than RNN-based counterparts, because the learnt attention weights reveal the contribution of one event type to the occurrence of another. To the best of our knowledge, this is the first work to study the effectiveness of self-attention in Hawkes processes.
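The abstract's key modification, translating inter-event time intervals into phase shifts of the sinusoidal positional encoding, can be sketched as follows. This is an illustrative sketch only: the function name, dimensionality, and the scaling of the phase shift are assumptions, not the paper's exact formulation.

```python
import numpy as np

def time_shifted_encoding(positions, intervals, d_model=8):
    """Sinusoidal positional encoding where each event's inter-event
    time interval shifts the phase of the sinusoids.

    A sketch of the idea described in the abstract; how the interval
    is scaled into a phase shift is an assumption here.
    """
    enc = np.zeros((len(positions), d_model))
    for i, (pos, dt) in enumerate(zip(positions, intervals)):
        for k in range(0, d_model, 2):
            freq = 1.0 / (10000 ** (k / d_model))
            phase = dt * freq  # the time interval becomes a phase shift
            enc[i, k] = np.sin(pos * freq + phase)
            enc[i, k + 1] = np.cos(pos * freq + phase)
    return enc

# Four events in sequence order 0..3 with varying inter-event times:
# events separated by different time gaps get different encodings
# even though their ordinal positions are evenly spaced.
enc = time_shifted_encoding([0, 1, 2, 3], [0.0, 0.5, 2.0, 0.1])
print(enc.shape)  # (4, 8)
```

With all intervals set to zero this reduces to the standard order-only sinusoidal encoding, which is exactly the deficit the abstract describes.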
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Dept of Computer Science
University College London - Gower Street - London - WC1E 6BT Tel:+44 (0)20 7679 2000

© UCL 1999–2011