Using Deep-Learning for 5G End-to-End Delay Estimation Based on Gaussian Mixture Models

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning is used in many applications due to its advantages over traditional Machine Learning (ML) approaches, including complex pattern learning, automatic feature extraction, scalability, adaptability, and overall performance. This paper proposes an end-to-end (E2E) delay estimation method for 5G networks that applies deep learning (DL) techniques to Gaussian Mixture Models (GMM). In a first step, the components of a GMM are estimated through the Expectation-Maximization (EM) algorithm and are subsequently used as labeled data in a supervised deep learning stage. A multi-layer neural network is then trained on the labeled data, assuming different numbers of E2E delay observations per training sample. The accuracy and computation time of the proposed deep learning estimator based on the Gaussian Mixture Model (DLEGMM) are evaluated for different 5G network scenarios. The simulation results show that the DLEGMM outperforms the GMM method based on the EM algorithm in terms of the accuracy of the E2E delay estimates, albeit at the cost of a higher computation time. The estimation method is characterized for different 5G scenarios, and compared to GMM, DLEGMM reduces the mean squared error (MSE) by a factor of 1.7 to 2.6.
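The first stage of the pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it fits a one-dimensional, two-component GMM to synthetic E2E delay samples via EM, producing the component parameters that would then serve as labels for the supervised deep learning stage (omitted here). All numeric values (delay means, variances, mixture weights) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic E2E delay samples (ms) drawn from a 2-component mixture
# (hypothetical parameters, for illustration only).
n = 4000
z = rng.random(n) < 0.6
delays = np.where(z, rng.normal(5.0, 0.5, n), rng.normal(12.0, 1.5, n))

def em_gmm_1d(x, k=2, iters=200):
    """Fit a 1-D Gaussian mixture with the EM algorithm."""
    # Initialize from data quantiles so the sketch is deterministic.
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# These estimated parameters are what the paper's supervised stage
# would use as labels when training the neural network.
w, mu, var = em_gmm_1d(delays)
order = np.argsort(mu)
print(np.round(mu[order], 1))  # component means near 5.0 and 12.0 ms
```

In the paper's second stage, a multi-layer neural network is trained to map windows of raw delay observations to such mixture parameters, trading extra training time for more accurate E2E delay estimates.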
Original language: English
Article number: 648
Number of pages: 12
Journal: Information (Switzerland)
Volume: 14
Issue number: 12
Publication status: Published - 5 Dec 2023

Keywords

  • end-to-end delay
  • estimation
  • heterogeneous networks
  • machine learning
  • quality of service
