PPO

Example

from rlzoo.common.env_wrappers import build_env
from rlzoo.common.utils import call_default_params
from rlzoo.algorithms import PPO

EnvName = 'PongNoFrameskip-v4'
EnvType = 'atari'

# Uncomment one of the following pairs to run PPO on a different environment type:
# EnvName = 'Pendulum-v0'
# EnvType = 'classic_control'

# EnvName = 'BipedalWalker-v2'
# EnvType = 'box2d'

# EnvName = 'Ant-v2'
# EnvType = 'mujoco'

# EnvName = 'FetchPush-v1'
# EnvType = 'robotics'

# EnvName = 'FishSwim-v0'
# EnvType = 'dm_control'

# EnvName = 'ReachTarget'
# EnvType = 'rlbench'

env = build_env(EnvName, EnvType)                                     # build the wrapped environment
alg_params, learn_params = call_default_params(env, EnvType, 'PPO')  # default hyper-parameters
alg = PPO(method='clip', **alg_params)  # specify 'clip' or 'penalty' method for PPO
alg.learn(env=env, mode='train', render=False, **learn_params)
alg.learn(env=env, mode='test', render=False, **learn_params)
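
The two dictionaries returned by call_default_params can be edited before they are unpacked, which is the simplest way to override individual defaults. A minimal sketch, assuming the keys of learn_params mirror the keyword names of learn() (e.g. train_episodes, save_interval); check the dictionaries returned for your environment type before relying on specific keys:

from rlzoo.common.env_wrappers import build_env
from rlzoo.common.utils import call_default_params
from rlzoo.algorithms import PPO

env = build_env('Pendulum-v0', 'classic_control')
alg_params, learn_params = call_default_params(env, 'classic_control', 'PPO')

# Assumed keys, matching the learn() signature documented below:
learn_params['train_episodes'] = 500   # train longer than the default
learn_params['save_interval'] = 20     # save checkpoints less often

alg = PPO(method='penalty', **alg_params)  # KL-penalty variant instead of clip
alg.learn(env=env, mode='train', render=False, **learn_params)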

Proximal Policy Optimization (Penalty)

class rlzoo.algorithms.ppo_penalty.ppo_penalty.PPO_PENALTY(net_list, optimizers_list, kl_target=0.01, lam=0.5)[source]

PPO class (penalty method)
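
For orientation, the penalty variant maximizes the importance-sampled surrogate minus a KL term weighted by lam, and kl_target is used to adapt that weight between updates. A minimal NumPy sketch of the per-batch actor objective, not the class's actual TensorFlow implementation; pi_prob, oldpi_prob and adv are placeholder arrays:

import numpy as np

def ppo_penalty_actor_loss(pi_prob, oldpi_prob, adv, lam=0.5):
    """Negative penalty surrogate: -(E[ratio * adv] - lam * KL(old || new))."""
    ratio = pi_prob / (oldpi_prob + 1e-8)                                 # importance sampling ratio
    kl = np.mean(np.log(oldpi_prob + 1e-8) - np.log(pi_prob + 1e-8))      # sample-based KL estimate
    return -(np.mean(ratio * adv) - lam * kl)

# toy usage with made-up probabilities and advantages
pi_prob = np.array([0.30, 0.55, 0.20])
oldpi_prob = np.array([0.25, 0.60, 0.25])
adv = np.array([1.2, -0.3, 0.8])
print(ppo_penalty_actor_loss(pi_prob, oldpi_prob, adv, lam=0.5))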

a_train(tfs, tfa, tfadv, oldpi_prob)[source]

Update policy network

Parameters:
  • tfs – state
  • tfa – act
  • tfadv – advantage
  • oldpi_prob – old policy distribution
Returns:

None

c_train(tfdc_r, s)[source]

Update critic network

Parameters:
  • tfdc_r – cumulative reward
  • s – state
Returns:

None

cal_adv(tfs, tfdc_r)[source]

Calculate advantage

Parameters:
  • tfs – state
  • tfdc_r – cumulative reward
Returns:

advantage
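
As a rough picture of what cal_adv produces: the advantage is the discounted cumulative reward minus the critic's value estimate for each state. A NumPy illustration under that assumption, with placeholder rewards and value predictions standing in for tfdc_r and get_v(s):

import numpy as np

def discounted_returns(rewards, gamma=0.9, bootstrap_value=0.0):
    """Discounted cumulative rewards, optionally bootstrapped from the last state's value."""
    returns = np.zeros(len(rewards))
    running = bootstrap_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

rewards = np.array([1.0, 0.0, -1.0, 2.0])
values = np.array([0.8, 0.5, 0.2, 0.9])                       # stand-in for get_v(s) on each visited state
advantage = discounted_returns(rewards, gamma=0.9) - values   # advantage = return - value
print(advantage)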

get_action(s)[source]

Choose action

Parameters: s – state
Returns: clipped action
get_action_greedy(s)[source]

Choose action greedily

Parameters: s – state
Returns: clipped action
get_v(s)[source]

Compute value

Parameters: s – state
Returns: value
learn(env, train_episodes=1000, test_episodes=10, max_steps=200, save_interval=10, gamma=0.9, mode='train', render=False, batch_size=32, a_update_steps=10, c_update_steps=10, plot_func=None)[source]

Main learning function: train or test the agent in the given environment

Parameters:
  • env – learning environment
  • train_episodes – total number of episodes for training
  • test_episodes – total number of episodes for testing
  • max_steps – maximum number of steps for one episode
  • save_interval – interval for saving the trained weights
  • gamma – reward discount factor
  • mode – train or test
  • render – render each step
  • batch_size – update batch size
  • a_update_steps – actor update iteration steps
  • c_update_steps – critic update iteration steps
  • plot_func – additional function for interactive module
Returns:

None
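
Putting the parameters above together, training roughly follows a collect-then-update cycle: gather up to batch_size transitions with get_action, turn the rewards into discounted cumulative returns with gamma (bootstrapped by get_v), and call update, which runs a_update_steps actor updates and c_update_steps critic updates. The sketch below is schematic only, not the actual learn implementation, and assumes env follows the classic Gym reset/step interface:

import numpy as np

def ppo_train_loop(alg, env, train_episodes=1000, max_steps=200, batch_size=32,
                   gamma=0.9, a_update_steps=10, c_update_steps=10):
    """Schematic collect-then-update cycle behind learn(mode='train')."""
    for ep in range(train_episodes):
        s = env.reset()
        buffer_s, buffer_a, buffer_r = [], [], []
        for t in range(max_steps):
            a = alg.get_action(s)                     # stochastic action from the current policy
            s_next, r, done, _ = env.step(a)
            buffer_s.append(s)
            buffer_a.append(a)
            buffer_r.append(r)
            s = s_next
            if (t + 1) % batch_size == 0 or t == max_steps - 1 or done:
                running = alg.get_v(s_next)           # bootstrap from the value of the last state
                returns = []
                for r_t in reversed(buffer_r):        # discounted cumulative reward
                    running = r_t + gamma * running
                    returns.append(running)
                returns.reverse()
                alg.update(np.vstack(buffer_s), np.vstack(buffer_a),
                           np.array(returns)[:, np.newaxis],
                           a_update_steps, c_update_steps)
                buffer_s, buffer_a, buffer_r = [], [], []
            if done:
                break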

load_ckpt(env_name)[source]

Load trained weights

Parameters: env_name – environment name
Returns: None
save_ckpt(env_name)[source]

Save trained weights

Parameters: env_name – environment name
Returns: None
update(s, a, r, a_update_steps, c_update_steps)[source]

Update parameters with the KL-divergence penalty constraint

Parameters:
  • s – state
  • a – act
  • r – reward
  • a_update_steps – actor update iteration steps
  • c_update_steps – critic update iteration steps
Returns:

None
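
Between updates the penalty method typically adapts the KL weight: if the measured KL divergence overshoots kl_target, the coefficient lam is increased; if it undershoots, lam is decreased. A minimal sketch of that adaptation rule, using the factor-of-1.5/2 heuristic from the PPO paper rather than the exact constants of this implementation:

def adapt_kl_penalty(lam, kl, kl_target=0.01):
    """Adaptive KL-penalty coefficient: grow lam when KL is too large, shrink it when too small."""
    if kl > 1.5 * kl_target:
        lam *= 2.0      # policy moved too far; penalize the KL term more strongly
    elif kl < kl_target / 1.5:
        lam /= 2.0      # policy barely moved; relax the penalty
    return lam

# toy usage
lam = 0.5
for kl in (0.005, 0.02, 0.001):
    lam = adapt_kl_penalty(lam, kl)
    print('kl=%.3f -> lam=%s' % (kl, lam))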

Proximal Policy Optimization (Clip)

class rlzoo.algorithms.ppo_clip.ppo_clip.PPO_CLIP(net_list, optimizers_list, epsilon=0.2)[source]

PPO class (clip method)
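
The clip variant drops the KL penalty and instead clips the probability ratio to [1 - epsilon, 1 + epsilon], taking the minimum of the clipped and unclipped surrogate. A minimal NumPy sketch of that objective, independent of the class's TensorFlow implementation:

import numpy as np

def ppo_clip_actor_loss(pi_prob, oldpi_prob, adv, epsilon=0.2):
    """Negative clipped surrogate: -E[min(ratio * adv, clip(ratio, 1-eps, 1+eps) * adv)]."""
    ratio = pi_prob / (oldpi_prob + 1e-8)                   # importance sampling ratio
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    return -np.mean(np.minimum(ratio * adv, clipped * adv))

# toy usage with made-up probabilities and advantages
pi_prob = np.array([0.30, 0.55, 0.20])
oldpi_prob = np.array([0.25, 0.60, 0.25])
adv = np.array([1.2, -0.3, 0.8])
print(ppo_clip_actor_loss(pi_prob, oldpi_prob, adv, epsilon=0.2))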

a_train(tfs, tfa, tfadv, oldpi_prob)[source]

Update policy network

Parameters:
  • tfs – state
  • tfa – act
  • tfadv – advantage
  • oldpi_prob – old policy distribution
Returns:

None

c_train(tfdc_r, s)[source]

Update critic network

Parameters:
  • tfdc_r – cumulative reward
  • s – state
Returns:

None

cal_adv(tfs, tfdc_r)[source]

Calculate advantage

Parameters:
  • tfs – state
  • tfdc_r – cumulative reward
Returns:

advantage

get_action(s)[source]

Choose action

Parameters: s – state
Returns: clipped action
get_action_greedy(s)[source]

Choose action greedily

Parameters: s – state
Returns: clipped action
get_v(s)[source]

Compute value

Parameters: s – state
Returns: value
learn(env, train_episodes=200, test_episodes=100, max_steps=200, save_interval=10, gamma=0.9, mode='train', render=False, batch_size=32, a_update_steps=10, c_update_steps=10, plot_func=None)[source]

Main learning function: train or test the agent in the given environment

Parameters:
  • env – learning environment
  • train_episodes – total number of episodes for training
  • test_episodes – total number of episodes for testing
  • max_steps – maximum number of steps for one episode
  • save_interval – interval for saving the trained weights
  • gamma – reward discount factor
  • mode – train or test
  • render – render each step
  • batch_size – update batch size
  • a_update_steps – actor update iteration steps
  • c_update_steps – critic update iteration steps
  • plot_func – additional function for interactive module
Returns:

None

load_ckpt(env_name)[source]

Load trained weights

Parameters: env_name – environment name
Returns: None
save_ckpt(env_name)[source]

Save trained weights

Parameters: env_name – environment name
Returns: None
update(s, a, r, a_update_steps, c_update_steps)[source]

Update parameters with the clipped surrogate objective

Parameters:
  • s – state
  • a – act
  • r – reward
  • a_update_steps – actor update iteration steps
  • c_update_steps – critic update iteration steps
Returns:

None

Default Hyper-parameters

rlzoo.algorithms.ppo.default.atari(env, default_seed=True)[source]
rlzoo.algorithms.ppo.default.box2d(env, default_seed=True)[source]
rlzoo.algorithms.ppo.default.classic_control(env, default_seed=True)[source]
rlzoo.algorithms.ppo.default.dm_control(env, default_seed=True)[source]
rlzoo.algorithms.ppo.default.mujoco(env, default_seed=True)[source]
rlzoo.algorithms.ppo.default.rlbench(env, default_seed=True)[source]
rlzoo.algorithms.ppo.default.robotics(env, default_seed=True)[source]
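
These per-environment-type functions supply the defaults that call_default_params(env, EnvType, 'PPO') dispatches to in the example at the top of this page. A hedged sketch of calling one directly; the assumption that it returns an (alg_params, learn_params) pair of dictionaries mirrors how the example unpacks call_default_params, but the exact contents depend on the environment:

from rlzoo.common.env_wrappers import build_env
from rlzoo.algorithms.ppo.default import classic_control

env = build_env('Pendulum-v0', 'classic_control')
# Assumed to mirror call_default_params(env, 'classic_control', 'PPO'):
alg_params, learn_params = classic_control(env, default_seed=True)
print(sorted(alg_params), sorted(learn_params))   # inspect which defaults are provided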