OpenAlex · Updated hourly · Last updated: Mar 28, 2026, 21:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Continuous control with deep reinforcement learning

2016 · 6,769 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

6,769 citations · 8 authors · Published 2016

Abstract

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
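The core idea the abstract describes — an actor-critic scheme where a critic learns Q(s, a) and a deterministic actor is improved by ascending the critic's gradient with respect to the action — can be sketched on a toy one-step problem. Everything below (the reward function, the linear features, the parameter names) is an illustrative assumption, not the paper's actual networks or benchmarks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step control problem (hypothetical, for illustration only):
# reward r(s, a) = -(a - 2s)^2, so the optimal deterministic policy is a = 2s.
def reward(s, a):
    return -(a - 2.0 * s) ** 2

def features(s, a):
    # Critic features chosen so Q(s, a) can represent the reward exactly.
    return np.array([a * a, a * s, s * s])

# --- Critic: fit Q(s, a) = theta . features(s, a) from random actions
# gathered by an exploratory behaviour policy, via SGD regression.
theta = np.zeros(3)
for _ in range(3000):
    s = rng.uniform(-1.0, 1.0)
    a = rng.uniform(-2.0, 2.0)          # exploration
    phi = features(s, a)
    error = reward(s, a) - theta @ phi
    theta += 0.02 * error * phi

# --- Actor: deterministic policy a = w * s, improved by ascending
# dQ/da and propagating through the policy via the chain rule --
# the deterministic policy gradient update.
w = 0.0
for _ in range(500):
    s = rng.uniform(-1.0, 1.0)
    a = w * s
    dq_da = 2.0 * theta[0] * a + theta[1] * s   # gradient of Q w.r.t. a
    w += 0.05 * dq_da * s                       # chain rule: da/dw = s

print(round(w, 2))   # w converges near the optimal coefficient 2
```

The full DDPG algorithm adds the machinery needed to make this stable with deep networks — replay buffers, slowly updated target networks, and noise added to the actor's output for exploration — but the two interleaved updates above are the skeleton.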

Topics

Reinforcement Learning in Robotics · Adversarial Robustness in Machine Learning · Model Reduction and Neural Networks