DQN Agent playing LunarLander-v2

This is a trained model of a DQN agent playing LunarLander-v2 using the stable-baselines3 library.
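Running the snippets below assumes that stable-baselines3, huggingface_sb3 and gym's Box2D environments are installed (for example via pip install stable-baselines3 huggingface_sb3 "gym[box2d]"; the exact package extras may vary with your gym version).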


Usage (with Stable-Baselines3)

from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
# Download checkpoint
checkpoint = load_from_hub("araffin/dqn-LunarLander-v2", "dqn-LunarLander-v2.zip")
# Overriding target_update_interval silences a warning when loading this checkpoint
kwargs = dict(target_update_interval=30)
# Load the model
model = DQN.load(checkpoint, **kwargs)
env = make_vec_env("LunarLander-v2", n_envs=1)
# Evaluate
print("Evaluating model")
mean_reward, std_reward = evaluate_policy(
    model,
    env,
    n_eval_episodes=20,
    deterministic=True,
)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
# Start a new episode
obs = env.reset()
try:
    while True:
        action, _states = model.predict(obs, deterministic=True)
        obs, rewards, dones, info = env.step(action)
        env.render()
except KeyboardInterrupt:
    pass
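If on-screen rendering is not available (for example on a headless machine), a rollout can be saved to disk instead using SB3's VecVideoRecorder wrapper. This is a minimal sketch, not part of the original card; it assumes moviepy is installed, and the output folder and video length are arbitrary choices:

from stable_baselines3.common.vec_env import VecVideoRecorder
# Wrap a fresh evaluation env so rendered frames are written to a video file
video_env = VecVideoRecorder(
    make_vec_env("LunarLander-v2", n_envs=1),
    video_folder="videos/",  # arbitrary output folder
    record_video_trigger=lambda step: step == 0,  # start recording at step 0
    video_length=500,
)
obs = video_env.reset()
for _ in range(500):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = video_env.step(action)
video_env.close()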


Training Code (with Stable-Baselines3)

from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.callbacks import EvalCallback
# Create the environment
env_id = "LunarLander-v2"
n_envs = 8
env = make_vec_env(env_id, n_envs=n_envs)
# Create the evaluation envs
eval_envs = make_vec_env(env_id, n_envs=5)
# The callback counts eval_freq in vectorized-env steps, so divide by the
# number of envs to evaluate roughly every 1e5 total environment steps
eval_freq = int(1e5)
eval_freq = max(eval_freq // n_envs, 1)
# Create evaluation callback to save best model
# and monitor agent performance
eval_callback = EvalCallback(
    eval_envs,
    best_model_save_path="./logs/",
    eval_freq=eval_freq,
    n_eval_episodes=10,
)
# Instantiate the agent
# Hyperparameters from https://github.com/DLR-RM/rl-baselines3-zoo
model = DQN(
    "MlpPolicy",
    env,
    learning_starts=0,
    batch_size=128,
    buffer_size=100000,
    learning_rate=7e-4,
    target_update_interval=250,
    train_freq=1,
    gradient_steps=4,
    # Explore for 40_000 timesteps (8% of the 500_000 total timesteps)
    exploration_fraction=0.08,
    exploration_final_eps=0.05,
    policy_kwargs=dict(net_arch=[256, 256]),
    verbose=1,
)
# Train the agent (you can interrupt training early with Ctrl+C)
try:
    model.learn(total_timesteps=int(5e5), callback=eval_callback)
except KeyboardInterrupt:
    pass
# Load best model
model = DQN.load("logs/best_model.zip")
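After training, the best checkpoint can be shared on the Hugging Face Hub. Below is a minimal sketch using huggingface_sb3's push_to_hub helper, assuming you are already logged in via huggingface-cli login; the repo id and commit message are placeholders:

from huggingface_sb3 import push_to_hub
# Save the loaded best model as a zip archive, then upload it
model.save("dqn-LunarLander-v2")
push_to_hub(
    repo_id="your-username/dqn-LunarLander-v2",  # placeholder: your Hub repo
    filename="dqn-LunarLander-v2.zip",
    commit_message="Upload trained DQN agent",
)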
