MDEU-A2C: A Mobility, Deadline, Energy and Utilization Aware Multi-Agent A2C Scheduling Approach to Support Fog and Edge Computing in IoT Applications
Future Research on AI and IoT
Article 6, Volume 1, Issue 1, September 2025, Pages 42-56; Full text PDF (1.46 MB)
Article Type: Research Article
DOI: 10.22080/frai.2025.29341.1018
Authors
Armin Mohammadi Ghaleh 1; Sayed Gholam Hassan Tabatabaei* 2
1 K. N. Toosi University of Technology, Tehran, Iran
2 Department of Electrical and Computer Engineering, Malek-e-Ashtar University of Technology, Tehran, Iran
Received: 05 Khordad 1404; Revised: 12 Khordad 1404; Accepted: 11 Khordad 1404
Abstract
Mobile Edge Computing reduces latency and response time by bringing computational resources closer to end users. However, user mobility poses a significant challenge, as users continuously move between the coverage areas of different edge nodes with limited range. This dynamic environment demands efficient scheduling mechanisms that can adapt to user movement while meeting application deadlines and optimizing edge resource utilization. This paper proposes a scheduling approach based on Deep Reinforcement Learning, specifically an Advantage Actor-Critic architecture within a Fog and Edge computing framework for IoT applications. The method enables distributed decision-making by deploying actor agents at the edge nodes and a centralized critic at the fog node, facilitating continuous adaptation through system-wide feedback. User mobility is addressed through location prediction with RNN models embedded at each edge node, allowing proactive and informed offloading decisions. Experimental results demonstrate that the proposed approach improves the task completion rate by 50%, reduces the failure rate by 26%, and cuts response latency by 60%, while also adapting well to dynamic environments and outperforming state-of-the-art methods in real-world-inspired scenarios.
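The structure described in the abstract, with lightweight actor networks at the edge nodes and a single critic at the fog node supplying a system-wide advantage signal, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: PyTorch, the class names (EdgeActor, FogCritic), the layer sizes, the action space, and the shared-advantage update are all illustrative assumptions, and the RNN-based location predictor mentioned in the abstract is omitted.

import torch
import torch.nn as nn


class EdgeActor(nn.Module):
    """Actor deployed at an edge node: maps the node's local observation to an offloading action."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        # Categorical policy over candidate targets (e.g., local execution, a neighbouring edge node, the fog node).
        return torch.distributions.Categorical(logits=self.net(obs))


class FogCritic(nn.Module):
    """Centralized critic at the fog node: estimates the value of the joint system state."""

    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)


def a2c_update(actors, actor_opts, critic, critic_opt,
               local_obs, actions, joint_state, reward, next_joint_state,
               gamma: float = 0.99):
    """One synchronous A2C step over a batch of transitions shared by all edge agents (illustrative)."""
    # Bootstrapped target and advantage from the centralized critic (system-wide feedback).
    with torch.no_grad():
        target = reward + gamma * critic(next_joint_state)
    value = critic(joint_state)
    advantage = target - value

    # Critic regression toward the bootstrapped target.
    critic_opt.zero_grad()
    advantage.pow(2).mean().backward()
    critic_opt.step()

    # Each edge actor is updated with the shared advantage signal.
    for actor, opt, obs, act in zip(actors, actor_opts, local_obs, actions):
        dist = actor(obs)
        loss = -(dist.log_prob(act) * advantage.detach()).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Under this assumed split, each edge node keeps only its own small actor and local observation, while the fog node holds the joint state and the value estimate, so the system-wide feedback loop described in the abstract remains centralized without centralizing the per-task offloading decisions.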
Keywords
Mobile Edge Computing (MEC); Fog and Edge Computing (FEC); Multi-Agent Reinforcement Learning; Advantage Actor-Critic (A2C); Decentralized Scheduling