UCL  IRIS
Institutional Research Information Service
Publication Detail
Law breaking trading algorithms: Emergence and deterrence
  • Publication Type:
    Thesis/Dissertation
  • Authors:
    Ashton H
  • Date awarded:
    2022
  • Awarding institution:
    UCL (University College London)
  • Language:
    English
Abstract
This thesis demonstrates that trading algorithms trained through Reinforcement Learning will learn to manipulate prices (thereby breaking the law) through a process called spoofing in a fully functioning limit order book environment. The regulatory definition of spoofing requires establishing intent on the part of the accused: the US CFTC defines it as the placement of orders with the intent to cancel them. Intent needs to be defined for auto-didactic algorithms, whose behaviour emerges somewhat independently of the programmer. I propose a high-level definition informed by current law, then test whether it matches laypeople's natural understanding of the concept. Finally, I implement a constrained learning method in Reinforcement Learning using an appropriate definition of intent to cancel, which allows auto-didactic trading algorithms to be trained and deployed safely without the risk of spoofing behaviour emerging. This subject is important because algorithmic trading leads other areas in the degree of agency that algorithmic actors are permitted. The simple pursuit of high-level objectives like profit maximisation can result in behaviour that contradicts the law. Without a method of encoding laws within the training and testing process, algorithms will likely learn to break laws when it is rational to do so.
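The constrained-learning idea in the abstract can be illustrated with a minimal sketch. This is not the thesis implementation: the action names, the resting-time threshold, and the rule that treats a quick cancellation as evidence of intent to cancel are all assumptions made for illustration. The sketch masks out spoofing-like actions so a greedy policy can never select them, however attractive their value estimates are.

```python
# A toy sketch (not the thesis implementation) of constraining an RL
# trading agent so that spoofing-like actions are never available to
# the policy. All names and thresholds here are illustrative assumptions.

ACTIONS = ["place_bid", "cancel_bid", "hold"]
MIN_RESTING_STEPS = 5  # hypothetical minimum time an order must rest


def legal_actions(order_open: bool, order_age: int) -> list[str]:
    """Return the actions the constrained agent may take.

    Cancelling an order that has rested for fewer than MIN_RESTING_STEPS
    steps is treated here as a proxy for intent to cancel at placement
    time, so the constraint layer masks that action out.
    """
    acts = ["hold"]
    if not order_open:
        acts.append("place_bid")
    elif order_age >= MIN_RESTING_STEPS:
        acts.append("cancel_bid")
    return acts


def constrained_policy(q_values: dict[str, float],
                       order_open: bool, order_age: int) -> str:
    """Greedy policy restricted to the legal action set."""
    legal = legal_actions(order_open, order_age)
    return max(legal, key=lambda a: q_values[a])
```

Even if the learned values strongly favour a quick cancellation, the constrained policy cannot emit it while the order is young, so spoofing behaviour cannot emerge from training under this mask.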
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Author
Dept of Computer Science
University College London - Gower Street - London - WC1E 6BT Tel:+44 (0)20 7679 2000

