Deep Reinforcement Learning for UAV-Based SDWSN Data Collection
| aut.relation.endpage | 398 | |
| aut.relation.issue | 11 | |
| aut.relation.journal | Future Internet | |
| aut.relation.startpage | 398 | |
| aut.relation.volume | 16 | |
| dc.contributor.author | Karegar, Pejman A | |
| dc.contributor.author | Al-Hamid, Duaa Zuhair | |
| dc.contributor.author | Chong, Peter Han Joo | |
| dc.date.accessioned | 2024-11-19T23:46:02Z | |
| dc.date.available | 2024-11-19T23:46:02Z | |
| dc.date.issued | 2024-10-30 | |
| dc.description.abstract | Recent advancements in Unmanned Aerial Vehicle (UAV) technology have made UAVs effective platforms for data capture in applications like environmental monitoring. UAVs, acting as mobile data ferries, can significantly improve ground network performance by involving ground network representatives in data collection. These representatives communicate opportunistically with accessible UAVs. Emerging technologies such as Software Defined Wireless Sensor Networks (SDWSN), wherein the role/function of sensor nodes is defined via software, can offer flexible operation for UAV data-gathering approaches. In this paper, we introduce the “UAV Fuzzy Travel Path”, a novel approach that utilizes Deep Reinforcement Learning (DRL), a subfield of machine learning, for optimal UAV trajectory planning. The approach also integrates the UAV with the SDWSN, wherein nodes acting as gateways (GWs) receive data from group members formed flexibly via software definition. A UAV is then dispatched to capture data from the GWs along a planned trajectory within a fuzzy span. Our dual objectives are to minimize the total energy consumption of the UAV system during each data collection round and to enhance the communication bit rate of the UAV-to-ground links. We formulate this problem as a constrained combinatorial optimization problem, jointly planning the UAV path and improving communication performance. To tackle the NP-hard nature of this problem, we propose a novel DRL technique based on Deep Q-Learning. By learning from UAV path policy experiences, our approach efficiently reduces energy consumption while maximizing packet delivery. | |
| dc.identifier.citation | Future Internet, ISSN: 1999-5903 (Print); 1999-5903 (Online), MDPI AG, 16(11), 398-398. doi: 10.3390/fi16110398 | |
| dc.identifier.doi | 10.3390/fi16110398 | |
| dc.identifier.issn | 1999-5903 | |
| dc.identifier.uri | http://hdl.handle.net/10292/18361 | |
| dc.language | en | |
| dc.publisher | MDPI AG | |
| dc.relation.uri | https://www.mdpi.com/1999-5903/16/11/398 | |
| dc.rights | © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | |
| dc.rights.accessrights | OpenAccess | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | 4605 Data Management and Data Science | |
| dc.subject | 4606 Distributed Computing and Systems Software | |
| dc.subject | 46 Information and Computing Sciences | |
| dc.subject | 4602 Artificial Intelligence | |
| dc.subject | Machine Learning and Artificial Intelligence | |
| dc.title | Deep Reinforcement Learning for UAV-Based SDWSN Data Collection | |
| dc.type | Journal Article | |
| pubs.elements-id | 574868 |
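The abstract's core idea, using Q-learning to plan a gateway-visiting UAV tour that minimizes flight energy, can be illustrated in spirit with a toy tabular sketch. The article itself uses a Deep Q-Network with neural function approximation; the gateway coordinates, the distance-based energy proxy, and all hyperparameters below are hypothetical and not taken from the paper.

```python
import random

# Hypothetical 2D gateway (GW) waypoints; the UAV starts at GW 0.
GATEWAYS = [(0, 0), (3, 4), (6, 0), (3, -4)]

def energy_cost(a, b):
    # Toy proxy for flight energy: Euclidean distance between waypoints.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over states (current GW, set of visited GWs)."""
    rng = random.Random(seed)
    n = len(GATEWAYS)
    Q = {}  # (cur, visited) -> {next_gw: q_value}
    for _ in range(episodes):
        cur, visited = 0, frozenset([0])
        while len(visited) < n:
            actions = [g for g in range(n) if g not in visited]
            qs = Q.setdefault((cur, visited), {g: 0.0 for g in actions})
            # Epsilon-greedy action selection.
            a = rng.choice(actions) if rng.random() < eps else max(qs, key=qs.get)
            r = -energy_cost(GATEWAYS[cur], GATEWAYS[a])  # reward = negative energy
            nxt_visited = visited | {a}
            nxt_actions = [g for g in range(n) if g not in nxt_visited]
            nxt_q = max(Q.get((a, nxt_visited),
                              {g: 0.0 for g in nxt_actions}).values(), default=0.0)
            qs[a] += alpha * (r + gamma * nxt_q - qs[a])  # Q-learning update
            cur, visited = a, nxt_visited
    return Q

def greedy_tour(Q):
    """Follow the learned greedy policy from GW 0 until all GWs are visited."""
    cur, visited, tour = 0, frozenset([0]), [0]
    while len(visited) < len(GATEWAYS):
        actions = [g for g in range(len(GATEWAYS)) if g not in visited]
        qs = Q.get((cur, visited)) or {g: 0.0 for g in actions}
        cur = max(qs, key=qs.get)
        visited |= {cur}
        tour.append(cur)
    return tour
```

In the paper's setting the state space (UAV position within a fuzzy span, channel conditions) is far too large for a table, which is why a Deep Q-Network replaces the dictionary above with a learned value function.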
Files
- Name: Karegar et al_2024_Deep reinforcement learning.pdf
- Size: 1.89 MB
- Format: Adobe Portable Document Format
- Description: Journal article