Optimizing Vehicle-to-Vehicle (V2V) Charging in Electric Vehicles by Adaptive Q-learning: Implication for Smart Tourism

Authors

Pitchaya Jamjuntr, Chanchai Techawatcharapaikul & Pannee Suanpang

DOI:

https://doi.org/10.31181/dmame7220241330

Keywords:

Electric vehicles; Adaptive Q-learning; Vehicle-to-Vehicle charging; Optimization; Smart tourism

Abstract

This study investigates the use of adaptive Q-learning to optimise Vehicle-to-Vehicle (V2V) charging among electric vehicles (EVs) in dynamic smart tourism destinations within developing nations. V2V charging offers a viable way to extend EV range and improve operational efficiency by enabling direct energy transfer between vehicles. However, optimising this process in volatile, high-demand environments requires complex decision-making to ensure both energy efficiency and system integrity. To address these issues, this research introduces an adaptive Q-learning approach that evaluates the current state and adjusts its learning parameters accordingly. A bespoke simulation environment was developed to model a fleet of EVs capable of charging one another, incorporating real-world variables such as each vehicle's state of charge (SOC), spatial position, and variable energy demand. The reward function favours an even and efficient energy flow, matching the specific needs of smart tourism destinations. Simulation results show that the adaptive Q-learning algorithm significantly outperforms rule-based methods, achieving a 20% increase in energy efficiency, a 25% improvement in average SOC, better transfer efficiency, and greater system robustness. These findings underscore the potential of adaptive Q-learning as a scalable and effective solution for intelligent energy management in V2V charging systems. Future research should explore its integration with real-time traffic and vehicle movement patterns to further enhance its applicability in smart tourism ecosystems.
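
The abstract describes the method at a high level: a Q-learning agent whose learning rate and exploration rate adapt to the observed state, a state built from each vehicle's SOC, position, and energy demand, and a reward favouring an even and efficient energy flow. Since the paper's code is not reproduced on this page, the following Python sketch is only a minimal, hypothetical illustration of that recipe; the class name, state discretisation, visit-count learning-rate schedule, epsilon decay, and all numeric values are assumptions rather than the authors' implementation.

```python
import random
from collections import defaultdict

class AdaptiveQLearningV2V:
    """Tabular Q-learning for choosing a V2V energy-transfer partner.

    'Adaptive' here means: the step size alpha shrinks with the visit
    count of each state-action pair, and epsilon decays over time.
    Both schedules are illustrative assumptions, not the paper's.
    """

    def __init__(self, n_actions, alpha0=0.5, eps0=1.0, eps_min=0.05, eps_decay=0.995):
        self.q = defaultdict(lambda: [0.0] * n_actions)  # Q-table: state -> action values
        self.visits = defaultdict(int)                   # visit counts per (state, action)
        self.n_actions = n_actions
        self.alpha0 = alpha0
        self.eps = eps0
        self.eps_min = eps_min
        self.eps_decay = eps_decay

    def act(self, state):
        # Epsilon-greedy selection over candidate partner vehicles.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        qs = self.q[state]
        return max(range(self.n_actions), key=qs.__getitem__)

    def update(self, state, action, reward, next_state, gamma=0.9):
        # Adaptive step size: frequently visited pairs get smaller updates,
        # rarely visited ones stay plastic.
        self.visits[(state, action)] += 1
        alpha = self.alpha0 / (1.0 + 0.1 * self.visits[(state, action)])
        target = reward + gamma * max(self.q[next_state])
        self.q[state][action] += alpha * (target - self.q[state][action])
        self.eps = max(self.eps_min, self.eps * self.eps_decay)

def v2v_reward(soc_giver, soc_receiver, distance_km, kwh_transferred):
    """Toy reward echoing the abstract's 'even and efficient energy flow':
    reward useful transfer, penalise SOC imbalance and long-range losses."""
    balance_penalty = abs(soc_giver - soc_receiver)             # evenness of SOC
    efficiency = kwh_transferred / (1.0 + 0.05 * distance_km)   # losses grow with distance
    return efficiency - 0.5 * balance_penalty

# Hypothetical usage with a discretised state (SOC bucket, grid cell, demand level):
agent = AdaptiveQLearningV2V(n_actions=5)
s, s_next = (3, 7, 1), (4, 7, 0)
a = agent.act(s)
r = v2v_reward(soc_giver=0.8, soc_receiver=0.3, distance_km=2.0, kwh_transferred=5.0)
agent.update(s, a, r, next_state=s_next)
```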

Published

2024-12-29

How to Cite

Pitchaya Jamjuntr, Chanchai Techawatcharapaikul, & Pannee Suanpang. (2024). Optimizing Vehicle-to-Vehicle (V2V) Charging in Electric Vehicles by Adaptive Q-learning: Implication for Smart Tourism. Decision Making: Applications in Management and Engineering, 7(2), 608–635. https://doi.org/10.31181/dmame7220241330