Deep Reinforcement Learning for UAV-Based SDWSN Data Collection

aut.relation.endpage: 398
aut.relation.issue: 11
aut.relation.journal: Future Internet
aut.relation.startpage: 398
aut.relation.volume: 16
dc.contributor.author: Karegar, Pejman A
dc.contributor.author: Al-Hamid, Duaa Zuhair
dc.contributor.author: Chong, Peter Han Joo
dc.date.accessioned: 2024-11-19T23:46:02Z
dc.date.available: 2024-11-19T23:46:02Z
dc.date.issued: 2024-10-30
dc.description.abstract: Recent advancements in Unmanned Aerial Vehicle (UAV) technology have made UAVs effective platforms for data capture in applications such as environmental monitoring. Acting as mobile data ferries, UAVs can significantly improve ground network performance by involving ground network representatives in data collection; these representatives communicate opportunistically with accessible UAVs. Emerging technologies such as Software Defined Wireless Sensor Networks (SDWSNs), in which the role/function of sensor nodes is defined via software, offer flexible operation for UAV data-gathering approaches. In this paper, we introduce the “UAV Fuzzy Travel Path”, a novel approach that uses Deep Reinforcement Learning (DRL), a subfield of machine learning, for optimal UAV trajectory planning. The approach also integrates the UAV with the SDWSN: nodes acting as gateways (GWs) receive data from the flexibly formulated group members via software definition, and a UAV is then dispatched to capture data from the GWs along a planned trajectory within a fuzzy span. Our dual objectives are to minimize the total energy consumption of the UAV system during each data collection round and to enhance the communication bit rate of the UAV-to-ground links. We formulate this problem as a constrained combinatorial optimization problem that jointly plans the UAV path and improves communication performance. To tackle the NP-hard nature of this problem, we propose a novel DRL technique based on Deep Q-Learning. By learning from UAV path policy experiences, our approach efficiently reduces energy consumption while maximizing packet delivery.
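The abstract describes planning the UAV's gateway-visiting trajectory with Deep Q-Learning. As a minimal sketch of that idea (not the authors' implementation: the environment, the reward shaped as negative hop distance plus a revisit penalty, the network sizes, and all hyperparameters below are illustrative assumptions), the following Python/PyTorch snippet trains a small Q-network to pick the next gateway to visit:

import random
import numpy as np
import torch
import torch.nn as nn

NUM_GW = 5                                  # hypothetical number of gateways
GW_POS = np.random.rand(NUM_GW, 2) * 100.0  # random GW coordinates in metres

class QNet(nn.Module):
    """Q-network: (current-GW one-hot + visited mask) -> Q-value per next GW."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * NUM_GW, 64), nn.ReLU(), nn.Linear(64, NUM_GW))
    def forward(self, x):
        return self.net(x)

def state_vec(cur, visited):
    s = np.zeros(2 * NUM_GW, dtype=np.float32)
    s[cur] = 1.0
    s[NUM_GW:] = visited
    return torch.from_numpy(s)

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
eps = 1.0                                    # epsilon-greedy exploration rate
for episode in range(500):
    cur = 0                                  # UAV launches at gateway 0
    visited = np.zeros(NUM_GW, dtype=np.float32)
    visited[cur] = 1.0
    while visited.sum() < NUM_GW:
        s = state_vec(cur, visited)
        if random.random() < eps:
            a = random.randrange(NUM_GW)
        else:
            with torch.no_grad():
                a = int(qnet(s).argmax())
        # Reward: negative hop distance as a crude proxy for propulsion
        # energy, plus a penalty for revisiting an already-served gateway.
        r = float(-np.linalg.norm(GW_POS[cur] - GW_POS[a]) - 50.0 * visited[a])
        visited2 = visited.copy()
        visited2[a] = 1.0
        with torch.no_grad():                # one-step TD target
            not_done = float(visited2.sum() < NUM_GW)
            target = r + 0.9 * not_done * qnet(state_vec(a, visited2)).max()
        loss = (qnet(s)[a] - target) ** 2    # squared TD error
        opt.zero_grad()
        loss.backward()
        opt.step()
        cur, visited = a, visited2
    eps = max(0.05, eps * 0.99)              # decay exploration over episodes

The paper's formulation additionally accounts for the UAV-to-ground bit rate and the fuzzy span around the planned path; this sketch keeps only the energy term for brevity.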
dc.identifier.citation: Future Internet, ISSN: 1999-5903 (Print); 1999-5903 (Online), MDPI AG, 16(11), 398-398. doi: 10.3390/fi16110398
dc.identifier.doi: 10.3390/fi16110398
dc.identifier.issn: 1999-5903
dc.identifier.uri: http://hdl.handle.net/10292/18361
dc.language: en
dc.publisher: MDPI AG
dc.relation.uri: https://www.mdpi.com/1999-5903/16/11/398
dc.rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.rights.accessrights: OpenAccess
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: 4605 Data Management and Data Science
dc.subject: 4606 Distributed Computing and Systems Software
dc.subject: 46 Information and Computing Sciences
dc.subject: 4602 Artificial Intelligence
dc.subject: Machine Learning and Artificial Intelligence
dc.title: Deep Reinforcement Learning for UAV-Based SDWSN Data Collection
dc.type: Journal Article
pubs.elements-id: 574868

Files

Original bundle

Name: Karegar et al_2024_Deep reinforcement learning.pdf
Size: 1.89 MB
Format: Adobe Portable Document Format
Description: Journal article