TY - GEN
T1 - Learning Soft Robotic Arm Control
T2 - 7th EAI International Conference on Robotic Sensor Networks, ROSENET 2023
AU - Alkhodary, Abdelrahman
AU - Gur, Berke
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - The nonlinear and intricate dynamics of soft robotic arms make developing analytical models and learning their control a significant challenge, compounded by the often unpredictable variability of the mechanical properties inherent in these systems. Recent efforts have explored neural network-based, data-driven methods as a promising approach to controlling these manipulators. This paper introduces a learning framework that acquires a control policy for a soft robotic arm through reinforcement learning. The framework learns the forward dynamics of the arm directly from data collected on the soft arm itself. The forward dynamic model, dubbed DynaFormer, uses a transformer-based architecture. A reinforcement learning agent is then trained with the twin-delayed deep deterministic policy gradient (TD3) algorithm to enable the soft robotic arm to perform a precise point-reaching task.
AB - The nonlinear and intricate dynamics of soft robotic arms make developing analytical models and learning their control a significant challenge, compounded by the often unpredictable variability of the mechanical properties inherent in these systems. Recent efforts have explored neural network-based, data-driven methods as a promising approach to controlling these manipulators. This paper introduces a learning framework that acquires a control policy for a soft robotic arm through reinforcement learning. The framework learns the forward dynamics of the arm directly from data collected on the soft arm itself. The forward dynamic model, dubbed DynaFormer, uses a transformer-based architecture. A reinforcement learning agent is then trained with the twin-delayed deep deterministic policy gradient (TD3) algorithm to enable the soft robotic arm to perform a precise point-reaching task.
KW - Machine learning
KW - Reinforcement learning
KW - Soft robotics
UR - http://www.scopus.com/inward/record.url?scp=85202298307&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-64495-5_2
DO - 10.1007/978-3-031-64495-5_2
M3 - Conference contribution
AN - SCOPUS:85202298307
SN - 9783031644948
T3 - EAI/Springer Innovations in Communication and Computing
SP - 17
EP - 30
BT - 7th EAI International Conference on Robotic Sensor Networks - EAI ROSENET 2023
A2 - Gül, Ömer Melih
A2 - Fiorini, Paolo
A2 - Kadry, Seifedine Nimer
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 15 December 2023 through 16 December 2023
ER -