Robotics and Automation in Industry 4.0

Robot Path Planning in a Dynamic Environment Using Deep Q-Learning

Author(s): Rifaqat Ali* and Preeti Chandrakar

Pp: 9-33 (25)

DOI: 10.2174/9789815223491124010004


Abstract

Robot path planning is a core requirement of today's autonomous industry, as robots have become a crucial part of it. Planning a path in a dynamic environment that changes over time is a difficult challenge for mobile robots: the robot must continuously avoid all obstacles in its way and plan a suitable trajectory from a given source point to a target point while the environment changes around it. In this study, we use Deep Q-Learning (Q-learning with a neural-network function approximator) to avoid obstacles that the user creates dynamically in the environment. The robot's main aim is to plan a path without colliding with any obstacle. The environment is simulated as a grid that initially contains the robot's starting and target locations, and the robot must plan an obstacle-free path between them. The user may introduce obstacles at any time during the simulation, making the environment dynamic. Performance is evaluated by the accuracy of the path the robot plans. Several neural-network architectures are compared, and simulation results are analyzed to assess path optimality; the robot is able to plan a collision-free path in the dynamic environment.
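The setup the abstract describes — a grid environment holding start and target cells, obstacles the user can add at any time, and a Q-network trained by temporal-difference updates — can be sketched as below. This is a minimal illustration under stated assumptions, not the chapter's implementation: the grid size, reward values, one-hot state encoding, and the tiny one-hidden-layer NumPy network are all choices made here for brevity.

```python
import numpy as np

class GridWorld:
    """Grid with a start, a goal, and obstacles that can be added at any time."""
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, size=5, start=(0, 0), goal=(4, 4)):
        self.size, self.start, self.goal = size, start, goal
        self.obstacles = set()
        self.pos = start

    def add_obstacle(self, cell):
        # The user makes the environment dynamic by calling this mid-simulation.
        if cell not in (self.start, self.goal):
            self.obstacles.add(cell)

    def reset(self):
        self.pos = self.start
        return self.state()

    def state(self):
        # One-hot encoding of the agent's cell (assumed encoding, for brevity).
        v = np.zeros(self.size * self.size)
        v[self.pos[0] * self.size + self.pos[1]] = 1.0
        return v

    def step(self, a):
        r, c = self.pos
        dr, dc = self.ACTIONS[a]
        nr, nc = r + dr, c + dc
        if not (0 <= nr < self.size and 0 <= nc < self.size) \
                or (nr, nc) in self.obstacles:
            return self.state(), -1.0, False   # blocked move: stay, penalty
        self.pos = (nr, nc)
        if self.pos == self.goal:
            return self.state(), 10.0, True    # reached the target
        return self.state(), -0.1, False       # step cost encourages short paths

class TinyDQN:
    """One-hidden-layer Q-network trained with single-sample TD updates."""
    def __init__(self, n_in, n_hidden=32, n_out=4, lr=0.05, gamma=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr, self.gamma = lr, gamma

    def q(self, s):
        h = np.maximum(0.0, s @ self.W1)       # ReLU hidden layer
        return h, h @ self.W2                  # Q-value per action

    def update(self, s, a, r, s2, done):
        h, q = self.q(s)
        target = r if done else r + self.gamma * np.max(self.q(s2)[1])
        td = target - q[a]
        dq = np.zeros_like(q)
        dq[a] = -td                            # grad of 0.5 * td^2 w.r.t. q[a]
        dh = (dq @ self.W2.T) * (h > 0)        # backprop through ReLU
        self.W2 -= self.lr * np.outer(h, dq)
        self.W1 -= self.lr * np.outer(s, dh)

def train(env, net, episodes=300, eps=0.2, max_steps=50):
    """Epsilon-greedy training loop; obstacles may appear between episodes."""
    for _ in range(episodes):
        s = env.reset()
        for _ in range(max_steps):
            _, q = net.q(s)
            a = np.random.randint(4) if np.random.rand() < eps else int(np.argmax(q))
            s2, r, done = env.step(a)
            net.update(s, a, r, s2, done)
            s = s2
            if done:
                break
```

A chapter-scale implementation would replace the single-sample update with an experience-replay buffer and a target network, which is what stabilizes Deep Q-Learning in practice; the sketch keeps only the core mechanics.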


Keywords: Artificial neural network, Deep learning, Neural networks, Reinforcement learning, Robot.

© 2024 Bentham Science Publishers