Publication Detail
A Memory-Efficient Learning Framework for Symbol-Level Precoding with Quantized NN Weights
- Publication Type: Journal article
- Authors: Mohammad A, Masouros C, Andreopoulos Y
- Publication date: 13/10/2021
- Keywords: eess.SP
- Notes: 13 pages, 10 figures, Journal
Abstract
This paper proposes a memory-efficient deep neural network (DNN) framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain its quantized counterpart, called SLP-SQDNet. The proposed scheme offers a scalable performance-versus-memory tradeoff by quantizing an adjustable percentage of the DNN weights, and we explore binary and ternary quantizations. Our results show that while SLP-DNet provides near-optimal performance, its SQ-quantized versions yield 3.46x and 2.64x model compression for the binary-based and ternary-based SLP-SQDNets, respectively. We also find that our proposals offer 20x and 10x reductions in computational complexity compared to the optimization-based SLP and SLP-DNet, respectively.
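For intuition only, the sketch below shows one common form of stochastic weight quantization in the binary and ternary cases. It is a hypothetical NumPy example written under our own assumptions (BinaryConnect-style probabilities and a per-tensor mean-absolute-value scale), not the authors' SLP-SQDNet implementation; the function names and scaling choice are illustrative.

# Hypothetical sketch of stochastic weight quantization (binary and ternary).
# Not the paper's exact SQ scheme; a common recipe shown for illustration.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(w):
    """Quantize weights to {-alpha, +alpha} stochastically.

    Each weight is mapped to +alpha with probability proportional to how
    close it is to +max|w|, and to -alpha otherwise. alpha is a per-tensor
    scale that keeps the quantized tensor's magnitude comparable to w.
    """
    alpha = np.mean(np.abs(w))  # assumed per-tensor scaling factor
    p = np.clip((w / (np.max(np.abs(w)) + 1e-12) + 1.0) / 2.0, 0.0, 1.0)
    return alpha * np.where(rng.random(w.shape) < p, 1.0, -1.0)

def stochastic_ternarize(w):
    """Quantize weights to {-alpha, 0, +alpha} stochastically.

    sign(w) is kept with probability |w| / max|w|; otherwise the weight is
    zeroed, which gives ternary models their extra sparsity.
    """
    alpha = np.mean(np.abs(w))
    p = np.abs(w) / (np.max(np.abs(w)) + 1e-12)
    keep = rng.random(w.shape) < p
    return alpha * np.sign(w) * keep

# Example: quantize a random weight matrix.
w = rng.standard_normal((4, 4)).astype(np.float32)
print(stochastic_binarize(w))   # entries in {-alpha, +alpha}, ~1 bit per weight
print(stochastic_ternarize(w))  # entries in {-alpha, 0, +alpha}, ~1.6 bits per weight

Storing only the sign (or sign plus a zero mask) and one floating-point scale per tensor is what drives the memory savings reported in the abstract; quantizing only a chosen fraction of the layers in this way would trade accuracy against compression.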