DDPG

Example
```python
from rlzoo.common.env_wrappers import build_env
from rlzoo.common.utils import call_default_params
from rlzoo.algorithms import DDPG

AlgName = 'DDPG'
EnvName = 'Pendulum-v0'  # DDPG requires a continuous action space
EnvType = 'classic_control'

# EnvName = 'BipedalWalker-v2'
# EnvType = 'box2d'

# EnvName = 'Ant-v2'
# EnvType = 'mujoco'

# EnvName = 'FetchPush-v1'
# EnvType = 'robotics'

# EnvName = 'FishSwim-v0'
# EnvType = 'dm_control'

# EnvName = 'ReachTarget'
# EnvType = 'rlbench'

env = build_env(EnvName, EnvType)
alg_params, learn_params = call_default_params(env, EnvType, AlgName)
alg = eval(AlgName + '(**alg_params)')
alg.learn(env=env, mode='train', render=False, **learn_params)
alg.learn(env=env, mode='test', render=True, **learn_params)
```
Deep Deterministic Policy Gradient
class rlzoo.algorithms.ddpg.ddpg.DDPG(net_list, optimizers_list, replay_buffer_size, action_range=1.0, tau=0.01)

    The DDPG algorithm class.
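The `tau` argument controls the soft (Polyak) update that slowly tracks the target networks toward the learned networks. A minimal sketch of that rule, using plain NumPy arrays rather than rlzoo's actual network weight objects (the helper name `soft_update` is an illustration, not rlzoo's API):

```python
import numpy as np

def soft_update(target_weights, source_weights, tau=0.01):
    """Polyak-average: move each target weight a fraction tau toward the source."""
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(source_weights, target_weights)]

# Toy weights: the target starts at 0, the learned network at 1.
target = soft_update([np.zeros(3)], [np.ones(3)], tau=0.01)
print(target[0])  # each entry moves a fraction tau of the way toward the source
```

A small `tau` (here the default 0.01) keeps the target networks changing slowly, which stabilizes the bootstrapped critic targets.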
get_action(s, noise_scale)

    Choose an action with exploration noise.

    Parameters:
        - s – state
        - noise_scale – scale of the exploration noise added to the action

    Returns: action
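The usual DDPG exploration scheme behind a method like this is to perturb the deterministic policy output with scaled noise and clip to the valid action range. A hedged sketch, assuming Gaussian noise and a symmetric `action_range` (the function name and noise distribution are illustrative, not rlzoo's verbatim implementation):

```python
import numpy as np

def noisy_action(policy_action, noise_scale, action_range=1.0):
    """Add scaled Gaussian exploration noise, then clip to the action bounds."""
    noise = noise_scale * np.random.normal(size=np.shape(policy_action))
    return np.clip(policy_action + noise, -action_range, action_range)

a = noisy_action(np.array([0.5]), noise_scale=1.0)
# Whatever noise is drawn, the returned action stays inside [-1, 1].
```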
learn(env, train_episodes=200, test_episodes=100, max_steps=200, save_interval=10, explore_steps=500, mode='train', render=False, batch_size=32, gamma=0.9, noise_scale=1.0, noise_scale_decay=0.995, plot_func=None)

    Learn function.

    Parameters:
        - env – learning environment
        - train_episodes – total number of episodes for training
        - test_episodes – total number of episodes for testing
        - max_steps – maximum number of steps in one episode
        - save_interval – interval for saving the model
        - explore_steps – number of initial steps during which actions are sampled randomly
        - mode – 'train' or 'test'
        - render – whether to render each step
        - batch_size – batch size for each update
        - gamma – reward discount factor
        - noise_scale – scale of the action noise used for exploration
        - noise_scale_decay – decay factor applied to the noise scale
        - plot_func – additional function for the interactive module

    Returns: None
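During training, each update samples a batch from the replay buffer and regresses the critic toward a bootstrapped TD target discounted by `gamma`. A minimal sketch of that target computation, assuming the target-network Q-values for the next states are already available as an array (names here are illustrative):

```python
import numpy as np

def td_target(rewards, next_q_values, dones, gamma=0.9):
    """TD target y = r + gamma * Q'(s', pi'(s')), zeroing the bootstrap at terminal steps."""
    return rewards + gamma * (1.0 - dones) * next_q_values

y = td_target(rewards=np.array([1.0, 1.0]),
              next_q_values=np.array([2.0, 2.0]),
              dones=np.array([0.0, 1.0]))
# non-terminal step: 1 + 0.9 * 2 = 2.8; terminal step: just the reward, 1.0
```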