Publication Detail
State2vec: Off-Policy Successor Features Approximators
Publication Type: Journal article
Authors: Madjiheurem S, Toni L
Keywords: cs.LG, stat.ML
Abstract
A major challenge in reinforcement learning (RL) is the design of agents that are able to generalize across tasks that share common dynamics. A viable solution is meta-reinforcement learning, which identifies common structures among past tasks that can then be generalized to new tasks (meta-test). In meta-training, the RL agent learns state representations that encode prior information from a set of tasks, which are used to generalize the value function approximation. This has been proposed in the literature as successor representation approximators. While promising, these methods do not generalize well across optimal policies, leading to sample inefficiency during the meta-test phase. In this paper, we propose state2vec, an efficient and low-complexity framework for learning successor features that (i) generalize across policies and (ii) ensure sample efficiency during meta-test. We extend the well-known node2vec framework to learn state embeddings that account for the discounted future state transitions in RL. The proposed off-policy state2vec captures the geometry of the underlying state space, yielding good basis functions for linear value function approximation.
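The abstract's claims that successor features "account for the discounted future state transitions" and make "good basis functions for linear value function approximation" rest on the classical successor representation (Dayan, 1993). The sketch below is illustrative only and is not the paper's state2vec algorithm: it computes the tabular successor representation in closed form and shows how the value function then decomposes linearly into features times reward weights. All names and the toy MDP are hypothetical.

import numpy as np

# For a fixed policy pi with state-transition matrix P_pi, the successor
# representation (SR) is M = (I - gamma * P_pi)^{-1}. Row M[s] holds the
# expected discounted future occupancy of every state starting from s.

def successor_representation(P_pi: np.ndarray, gamma: float) -> np.ndarray:
    """Closed-form tabular SR for the policy-induced transition matrix."""
    n = P_pi.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P_pi)

# Toy 3-state cyclic chain under some fixed policy (hypothetical example).
P_pi = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0]])
gamma = 0.9
M = successor_representation(P_pi, gamma)

# Linear value-function approximation: using the SR rows as basis features
# phi(s) = M[s], the value function is exactly V = M @ r for reward vector r,
# so the features separate dynamics (M) from rewards (r).
r = np.array([0.0, 0.0, 1.0])  # reward for reaching the third state
V = M @ r
print(V)  # expected discounted return from each start state

state2vec, as the abstract describes it, replaces this tabular matrix with low-dimensional embeddings learned off-policy in the style of node2vec; the same linear decomposition of value into dynamics-dependent features and reward weights is what makes such embeddings useful as basis functions.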