Please report any queries concerning the funding data grouped in the sections named "Externally Awarded" or "Internally Disbursed" (shown on the profile page) to your Research Finance Administrator. You can find your Research Finance Administrator at https://www.ucl.ac.uk/finance/research/rs-contacts.php by entering your department.
Please report any queries concerning the student data shown on the profile page to:
Email: portico-services@ucl.ac.uk
Help Desk: http://www.ucl.ac.uk/ras/portico/helpdesk
Publication Detail
Learning-Based Symbol Level Precoding: A Memory-Efficient Unsupervised Learning Approach
Publication Type: Conference
Authors: Mohammad A, Masouros C, Andreopoulos Y
Publication date: 16/05/2022
Pagination: 429-434
Published proceedings: IEEE Wireless Communications and Networking Conference, WCNC
Volume: 2022-April
ISBN-13: 9781665442664
Status: Published
Name of conference: 2022 IEEE Wireless Communications and Networking Conference (WCNC)
Print ISSN: 1525-3511
Abstract
Symbol level precoding (SLP) has been proven to be an effective means of managing interference in multiuser downlink transmission while also enhancing the received signal power. This paper proposes an unsupervised learning-based SLP approach that applies to quantized deep neural networks (DNNs). Rather than simply training a DNN in a supervised mode, our proposal unfolds a power minimization SLP formulation in an imperfect channel scenario using the interior point method (IPM) proximal 'log' barrier function. We use binary and ternary quantization to compress the DNN's weight values. The results show substantial memory savings compared to the existing full-precision SLP-DNet, with model compression of ~21× and ~13× for the binary DNN-based SLP (RSLP-BDNet) and the ternary DNN-based SLP (RSLP-TDNets), respectively.
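As background for the quantization step mentioned in the abstract, the sketch below illustrates generic binary and ternary weight quantization rules with a per-layer scaling factor, in the style of binary weight networks and ternary weight networks. It is a minimal illustration only: the function names, the 0.7 threshold factor, and the use of NumPy are assumptions for exposition and do not reproduce the paper's exact RSLP-BDNet/RSLP-TDNets training scheme.

```python
import numpy as np

def binarize_weights(W):
    """Illustrative binary quantization: map each weight to {-alpha, +alpha},
    where alpha is the mean absolute value of the layer's weights."""
    alpha = np.mean(np.abs(W))
    return alpha * np.sign(W)

def ternarize_weights(W, delta_scale=0.7):
    """Illustrative ternary quantization: weights with magnitude below a
    threshold are zeroed, the rest are mapped to {-alpha, +alpha}.
    The 0.7 threshold factor is an assumed heuristic, not taken from the paper."""
    delta = delta_scale * np.mean(np.abs(W))
    mask = np.abs(W) > delta
    alpha = np.mean(np.abs(W[mask])) if mask.any() else 0.0
    return alpha * np.sign(W) * mask

if __name__ == "__main__":
    # Quantize a random weight matrix and inspect the resulting weight levels.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 8)).astype(np.float32)
    print("binary levels:", np.unique(binarize_weights(W)))
    print("ternary levels:", np.unique(ternarize_weights(W)))
```

Restricting each layer to two or three weight levels is what yields the memory savings the abstract reports, since a full-precision weight can be replaced by one or two bits plus a single per-layer scale.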