Machine Learning-Based Resource Allocation Algorithm to Mitigate Interference in D2D-Enabled Cellular Networks
aut.relation.endpage | 408
aut.relation.issue | 11
aut.relation.journal | Future Internet
aut.relation.startpage | 408
aut.relation.volume | 16
dc.contributor.author | Kamruzzaman, Md
dc.contributor.author | Sarkar, Nurul I
dc.contributor.author | Gutierrez, Jairo
dc.date.accessioned | 2024-11-11T23:16:11Z
dc.date.available | 2024-11-11T23:16:11Z
dc.date.issued | 2024-11-06
dc.description.abstract | Mobile communications have experienced exponential growth in both connectivity and multimedia traffic in recent years. To support this growth, device-to-device (D2D) communications play a significant role in 5G and beyond-5G networks. However, enabling D2D communications as an underlay in a heterogeneous cellular network poses two major challenges. First, interference between D2D and cellular users directly affects system performance and must be managed. Second, an acceptable level of link quality must be maintained for both D2D and cellular links. Optimum resource allocation is therefore required to mitigate interference and improve system performance. In this paper, we provide a solution to interference management with an acceptable quality of service (QoS). To this end, we propose a machine learning-based resource allocation method that maximizes throughput while meeting the minimum QoS requirements of all active D2D pairs and cellular users. We first formulate a resource optimization problem that allocates spectrum resources and controls transmission power on demand. Because this optimization is an integer nonlinear programming problem, we solve it with a deep Q-network-based deep reinforcement learning (DRL) algorithm. The proposed DRL algorithm is trained with a decision-making policy to obtain the best solution in terms of spectrum efficiency, computational time, and throughput. The system's performance is validated by simulation, and the results show that the proposed method outperforms existing ones.
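To make the abstract's approach concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a DQN-style resource allocation loop of the kind described: an agent observes per-resource-block interference, picks a joint (resource block, power level) action, and is rewarded with the Shannon rate when an SINR-based QoS threshold is met. The toy environment, the threshold SINR_MIN, the power levels POWERS, the network sizes, and all training hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of DQN-based joint spectrum/power allocation for one
# D2D pair. Environment and parameters are illustrative placeholders.
import random
import numpy as np
import torch
import torch.nn as nn

N_RB, N_PWR = 4, 3                      # resource blocks and discrete power levels
N_ACTIONS = N_RB * N_PWR                # joint action: pick an RB and a power level
STATE_DIM = N_RB                        # toy state: per-RB interference level
SINR_MIN = 1.0                          # illustrative QoS threshold (linear scale)
POWERS = np.array([0.1, 0.5, 1.0])      # illustrative transmit powers (W)
NOISE = 0.1                             # illustrative noise power

class QNet(nn.Module):
    """Small MLP mapping the interference state to Q-values over joint actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, x):
        return self.net(x)

def step(state, action):
    """Toy environment: Shannon-rate reward if the QoS constraint holds, else a penalty."""
    rb, p_idx = divmod(action, N_PWR)
    gain = np.random.rayleigh(1.0)                      # random D2D channel gain
    sinr = POWERS[p_idx] * gain / (state[rb] + NOISE)
    reward = np.log2(1.0 + sinr) if sinr >= SINR_MIN else -1.0
    next_state = np.random.uniform(0.0, 1.0, N_RB).astype(np.float32)
    return next_state, float(reward)

q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, eps = [], 0.95, 0.2

state = np.random.uniform(0.0, 1.0, N_RB).astype(np.float32)
for t in range(2000):
    # Epsilon-greedy selection over the joint (RB, power) action space.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    replay = replay[-5000:]                             # bounded replay buffer
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s = torch.tensor(np.array([b[0] for b in batch]))
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.tensor(np.array([b[3] for b in batch]))
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                           # one-step TD target
            target = r + gamma * q_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Flattening the (RB, power) choice into one discrete action space keeps the Q-network output small for a single pair; scaling to many simultaneous D2D pairs and cellular users, as in the paper, would require a larger state/action design or a multi-agent formulation.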
dc.identifier.citation | Future Internet, ISSN: 1999-5903 (Online), MDPI AG, 16(11), 408. doi: 10.3390/fi16110408
dc.identifier.doi | 10.3390/fi16110408
dc.identifier.issn | 1999-5903
dc.identifier.uri | http://hdl.handle.net/10292/18263
dc.language | en
dc.publisher | MDPI AG
dc.relation.uri | https://www.mdpi.com/1999-5903/16/11/408
dc.rights | © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.rights.accessrights | OpenAccess
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/
dc.subject | 46 Information and computing sciences
dc.title | Machine Learning-Based Resource Allocation Algorithm to Mitigate Interference in D2D-Enabled Cellular Networks
dc.type | Journal Article
pubs.elements-id | 574254
Files
Original bundle
- Name: Zaman-Sarkar futureinternet-16-00408.pdf
- Size: 1.42 MB
- Format: Adobe Portable Document Format
- Description: Journal article