DDPG

Example

from rlzoo.common.env_wrappers import build_env
from rlzoo.common.utils import call_default_params
from rlzoo.algorithms import DDPG

AlgName = 'DDPG'
EnvName = 'Pendulum-v0'  # DDPG only supports continuous action spaces
EnvType = 'classic_control'

# EnvName = 'BipedalWalker-v2'
# EnvType = 'box2d'

# EnvName = 'Ant-v2'
# EnvType = 'mujoco'

# EnvName = 'FetchPush-v1'
# EnvType = 'robotics'

# EnvName = 'FishSwim-v0'
# EnvType = 'dm_control'

# EnvName = 'ReachTarget'
# EnvType = 'rlbench'

env = build_env(EnvName, EnvType)                                      # build the wrapped environment
alg_params, learn_params = call_default_params(env, EnvType, AlgName)  # fetch default hyper-parameters
alg = eval(AlgName + '(**alg_params)')                                 # instantiate DDPG from its name string
alg.learn(env=env, mode='train', render=False, **learn_params)         # train
alg.learn(env=env, mode='test', render=True, **learn_params)           # test with rendering

Deep Deterministic Policy Gradient

class rlzoo.algorithms.ddpg.ddpg.DDPG(net_list, optimizers_list, replay_buffer_size, action_range=1.0, tau=0.01)[source]

DDPG agent class: a deterministic actor-critic with target networks (soft-updated via ema_update) and a replay buffer.
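
The quick-start example above builds this class indirectly via eval on the algorithm name; written out explicitly, using the same default-parameter helpers, the construction looks like:

from rlzoo.common.env_wrappers import build_env
from rlzoo.common.utils import call_default_params
from rlzoo.algorithms import DDPG

env = build_env('Pendulum-v0', 'classic_control')
alg_params, learn_params = call_default_params(env, 'classic_control', 'DDPG')
agent = DDPG(**alg_params)  # alg_params supplies net_list, optimizers_list, replay_buffer_size, ...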

ema_update()[source]

Soft update of the target networks by exponential moving average (Polyak averaging) with coefficient tau.

Returns: None
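
As a rough sketch (not rlzoo's exact code), the smoothing rule applied to each online/target parameter pair is:

def soft_update(target_params, online_params, tau=0.01):
    # theta_target <- tau * theta_online + (1 - tau) * theta_target
    return [tau * w + (1 - tau) * t for w, t in zip(online_params, target_params)]
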
get_action(s, noise_scale)[source]

Choose an action with exploration noise.

Parameters:
  • s – state
  • noise_scale – scale of the exploration noise added to the action
Returns: action
get_action_greedy(s)[source]

Choose an action greedily, without exploration noise.

Parameters: s – state
Returns: action
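
Conceptually the two selection modes differ only in the added noise; a hypothetical illustration with a stand-in policy callable:

import numpy as np

def get_action(policy, s, noise_scale, action_range=1.0):
    a = policy(s)                                              # deterministic actor output
    a = a + noise_scale * np.random.normal(size=np.shape(a))   # Gaussian exploration noise
    return np.clip(a, -action_range, action_range)             # keep the action in range

def get_action_greedy(policy, s):
    return policy(s)                                           # no exploration noise
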
learn(env, train_episodes=200, test_episodes=100, max_steps=200, save_interval=10, explore_steps=500, mode='train', render=False, batch_size=32, gamma=0.9, noise_scale=1.0, noise_scale_decay=0.995, plot_func=None)[source]

Train or test the agent in the given environment.

Parameters:
  • env – learning environment
  • train_episodes – total number of episodes for training
  • test_episodes – total number of episodes for testing
  • max_steps – maximum number of steps per episode
  • save_interval – interval at which the model checkpoint is saved
  • explore_steps – number of initial steps during which random actions are sampled, before the learned policy takes over
  • mode – 'train' or 'test'
  • render – whether to render each step
  • batch_size – update batch size
  • gamma – reward discount factor
  • noise_scale – scale of the action noise used for exploration
  • noise_scale_decay – multiplicative decay factor applied to the noise scale
  • plot_func – additional function for the interactive module
Returns: None
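
For example, the quick-start training call can spell these arguments out explicitly (the values shown are the documented defaults):

alg.learn(env=env, mode='train', render=False,
          train_episodes=200, max_steps=200, explore_steps=500,
          batch_size=32, gamma=0.9,
          noise_scale=1.0, noise_scale_decay=0.995)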

load_ckpt(env_name)[source]

Load trained weights.

Parameters: env_name – environment name, used to locate the saved checkpoint
Returns: None
sample_action()[source]

Generate a random action for exploration (used during the initial explore_steps).

Returns: action

save_ckpt(env_name)[source]

Save trained weights.

Parameters: env_name – environment name, used to name the saved checkpoint
Returns: None
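
A usage sketch for checkpointing, assuming the agent was built for Pendulum-v0 as in the example above:

alg.save_ckpt(env_name='Pendulum-v0')  # write the current weights
alg.load_ckpt(env_name='Pendulum-v0')  # restore them later
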
store_transition(s, a, r, s_, d)[source]

Store a transition in the replay buffer.

Parameters:
  • s – state
  • a – action
  • r – reward
  • s_ – next state
  • d – done flag
Returns: None
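
This is the hook learn() uses to fill the replay buffer; a hypothetical hand-rolled interaction loop with a gym-style environment would look like:

s = env.reset()
for step in range(200):                      # max_steps per episode
    a = alg.get_action(s, noise_scale=1.0)   # act with exploration
    s_, r, d, _ = env.step(a)                # gym-style step
    alg.store_transition(s, a, r, s_, d)     # push the transition into the buffer
    s = s_
    if d:
        break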

update(batch_size, gamma)[source]

Update the actor and critic networks on a batch sampled from the replay buffer.

Parameters:
  • batch_size – update batch size
  • gamma – reward discount factor
Returns: None

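The update follows the standard DDPG rule: the critic is regressed toward the bootstrap target y = r + gamma * (1 - d) * Q'(s_, mu'(s_)), the actor ascends Q(s, mu(s)), and the target networks are then soft-updated via ema_update(). A hypothetical sketch of the target computation, assuming numpy-array batches and stand-in target networks q_target and mu_target:

def critic_target(r, s_, d, q_target, mu_target, gamma=0.9):
    a_ = mu_target(s_)                               # target policy action at the next state
    return r + gamma * (1.0 - d) * q_target(s_, a_)  # Bellman backup, cut off at episode end
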
Default Hyper-parameters

rlzoo.algorithms.ddpg.default.box2d(env, default_seed=True)[source]
rlzoo.algorithms.ddpg.default.classic_control(env, default_seed=True)[source]
rlzoo.algorithms.ddpg.default.dm_control(env, default_seed=True)[source]
rlzoo.algorithms.ddpg.default.mujoco(env, default_seed=True)[source]
rlzoo.algorithms.ddpg.default.rlbench(env, default_seed=True)[source]
rlzoo.algorithms.ddpg.default.robotics(env, default_seed=True)[source]
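
Each of these returns the default hyper-parameters for its environment type; call_default_params in the quick-start example dispatches to the matching function based on EnvType. A direct call might look like this sketch (assuming the same pair return convention as call_default_params):

from rlzoo.algorithms.ddpg.default import classic_control

alg_params, learn_params = classic_control(env)  # default_seed=True applies the library's default seed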