AC

Example
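The snippet below builds an Atari Pong environment, fetches the default hyper-parameters for that environment type, constructs the AC agent by name, trains it, and then runs a rendered test. The commented-out EnvName/EnvType pairs show the other supported environment families.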

 from rlzoo.common.env_wrappers import build_env
 from rlzoo.common.utils import call_default_params
 from rlzoo.algorithms import AC

 AlgName = 'AC'
 EnvName = 'PongNoFrameskip-v4'
 EnvType = 'atari'

 # EnvName = 'Pendulum-v0'
 # EnvType = 'classic_control'

 # EnvName = 'BipedalWalker-v2'
 # EnvType = 'box2d'

 # EnvName = 'Ant-v2'
 # EnvType = 'mujoco'

 # EnvName = 'FetchPush-v1'
 # EnvType = 'robotics'

 # EnvName = 'FishSwim-v0'
 # EnvType = 'dm_control'

 # EnvName = 'ReachTarget'
 # EnvType = 'rlbench'

 env = build_env(EnvName, EnvType)
 alg_params, learn_params = call_default_params(env, EnvType, AlgName)
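 # eval builds the agent from its name string, i.e. this is equivalent to AC(**alg_params)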
 alg = eval(AlgName+'(**alg_params)')
 alg.learn(env=env, mode='train', render=False, **learn_params)
 alg.learn(env=env, mode='test', render=True, **learn_params)

Actor-Critic

class rlzoo.algorithms.ac.ac.AC(net_list, optimizers_list, gamma=0.9)
get_action(s)
  Sample an action from the current stochastic policy for state s.
get_action_greedy(s)
  Choose the greedy (deterministic) action for state s.
learn(env, train_episodes=1000, test_episodes=500, max_steps=200, save_interval=100, mode='train', render=False, plot_func=None)
Parameters:
  • env – learning environment
  • train_episodes – total number of episodes for training
  • test_episodes – total number of episodes for testing
  • max_steps – maximum number of steps per episode
  • save_interval – time-step interval for saving the weights and plotting the results
  • mode – ‘train’ or ‘test’
  • render – if True, visualize the environment
  • plot_func – additional function for the interactive module
load_ckpt(env_name)
  Load saved model weights for the environment env_name.
save_ckpt(env_name)
  Save the current model weights under the environment name env_name.
update(s, a, r, s_)
  Update the actor and critic networks with a single transition (s, a, r, s_).
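
The learn() call drives the whole train/test loop, but the per-step methods above can also be used directly. Below is a minimal sketch of one hand-rolled episode; it reuses the default-parameter pattern from the Example section and assumes a Gym-style step() that returns (next_state, reward, done, info).

 from rlzoo.common.env_wrappers import build_env
 from rlzoo.common.utils import call_default_params
 from rlzoo.algorithms import AC

 env = build_env('Pendulum-v0', 'classic_control')
 alg_params, _ = call_default_params(env, 'classic_control', 'AC')
 alg = AC(**alg_params)

 # One episode with the per-step API (a sketch, not the library's own training loop)
 s = env.reset()
 for _ in range(200):
     a = alg.get_action(s)          # sample an action from the stochastic policy
     s_, r, done, _ = env.step(a)   # assumes Gym-style (next_state, reward, done, info)
     alg.update(s, a, r, s_)        # single actor-critic update on this transition
     s = s_
     if done:
         break

At evaluation time, get_action_greedy(s) can replace get_action(s), and save_ckpt/load_ckpt persist the weights under the environment name.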

Default Hyper-parameters

rlzoo.algorithms.ac.default.atari(env, default_seed=True)
rlzoo.algorithms.ac.default.box2d(env, default_seed=True)
rlzoo.algorithms.ac.default.classic_control(env, default_seed=True)
rlzoo.algorithms.ac.default.dm_control(env, default_seed=True)
rlzoo.algorithms.ac.default.mujoco(env, default_seed=True)
rlzoo.algorithms.ac.default.rlbench(env, default_seed=True)
rlzoo.algorithms.ac.default.robotics(env, default_seed=True)
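
These per-family functions are what call_default_params looks up when AlgName is 'AC'. Below is a minimal sketch of calling one directly, under the assumption that each returns the same (alg_params, learn_params) pair that call_default_params produces.

 from rlzoo.common.env_wrappers import build_env
 from rlzoo.algorithms import AC
 from rlzoo.algorithms.ac.default import classic_control

 env = build_env('Pendulum-v0', 'classic_control')
 # Assumption: returns (alg_params, learn_params), matching
 # call_default_params(env, 'classic_control', 'AC')
 alg_params, learn_params = classic_control(env, default_seed=True)
 alg = AC(**alg_params)
 alg.learn(env=env, mode='train', render=False, **learn_params)

The default_seed flag presumably controls whether a fixed random seed is applied; pass False to keep your own seeding.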