How do I create a new environment in OpenAI Gym?


Answers:


121

See my banana-gym for an extremely small example of an environment.

Creating new environments

See the main page of the repository:

https://github.com/openai/gym/blob/master/docs/creating-environments.md

The steps are:

  1. Create a new repository with a PIP-package structure.

It should look like this:

gym-foo/
  README.md
  setup.py
  gym_foo/
    __init__.py
    envs/
      __init__.py
      foo_env.py
      foo_extrahard_env.py
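
The packaging glue in those files is small. Here is a minimal sketch, assuming the layout above; the id 'foo-v0', the version number, and the entry points are illustrative placeholders:

# setup.py -- minimal PIP packaging for the environment (sketch)
from setuptools import setup, find_packages

setup(
    name='gym_foo',
    version='0.0.1',
    packages=find_packages(),
    install_requires=['gym'],  # the environment depends on gym itself
)

# gym_foo/__init__.py -- register the environment ids with gym so that
# gym.make() can later look them up
from gym.envs.registration import register

register(id='foo-v0', entry_point='gym_foo.envs:FooEnv')
register(id='foo-extrahard-v0', entry_point='gym_foo.envs:FooExtraHardEnv')

# gym_foo/envs/__init__.py -- expose the environment classes
from gym_foo.envs.foo_env import FooEnv
from gym_foo.envs.foo_extrahard_env import FooExtraHardEnv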

For the full contents of those files, follow the link above. Details which are not mentioned there are especially how some of the functions in foo_env.py should look. Looking at examples and at gym.openai.com/docs/ helps. Here is an example:

import gym


class FooEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self):
        pass

    def _step(self, action):
        """

        Parameters
        ----------
        action : object
            the action provided by the agent.

        Returns
        -------
        ob, reward, episode_over, info : tuple
            ob (object) :
                an environment-specific object representing your observation of
                the environment.
            reward (float) :
                amount of reward achieved by the previous action. The scale
                varies between environments, but the goal is always to increase
                your total reward.
            episode_over (bool) :
                whether it's time to reset the environment again. Most (but not
                all) tasks are divided up into well-defined episodes, and done
                being True indicates the episode has terminated. (For example,
                perhaps the pole tipped too far, or you lost your last life.)
            info (dict) :
                 diagnostic information useful for debugging. It can sometimes
                 be useful for learning (for example, it might contain the raw
                 probabilities behind the environment's last state change).
                 However, official evaluations of your agent are not allowed to
                 use this for learning.
        """
        self._take_action(action)
        self.status = self.env.step()  # self.env is a placeholder for the wrapped simulation
        reward = self._get_reward()
        ob = self.env.getState()
        # hfo_py.IN_GAME is a placeholder status constant; substitute your own.
        episode_over = self.status != hfo_py.IN_GAME
        return ob, reward, episode_over, {}

    def _reset(self):
        pass

    def _render(self, mode='human', close=False):
        pass

    def _take_action(self, action):
        pass

    def _get_reward(self):
        """ Reward is given for XY. """
        # FOOBAR, ABC and self.somestate are placeholders for your own
        # status constants and state variables.
        if self.status == FOOBAR:
            return 1
        elif self.status == ABC:
            return self.somestate ** 2
        else:
            return 0

Use your environment

import gym
import gym_foo
env = gym.make('MyEnv-v0')  # use the id you registered, e.g. 'foo-v0'
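
Once registered and installed, the environment is driven like any other gym environment. Here is a minimal interaction loop as a sketch; it assumes the id 'foo-v0' from the registration above and that FooEnv defines an action_space:

import gym
import gym_foo  # noqa -- registers the environments as a side effect

env = gym.make('foo-v0')
ob = env.reset()                        # start the first episode
for _ in range(100):
    action = env.action_space.sample()  # random policy as a stand-in
    ob, reward, episode_over, info = env.step(action)
    if episode_over:
        ob = env.reset()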

Examples

  1. https://github.com/openai/gym-soccer
  2. https://github.com/openai/gym-wikinav
  3. https://github.com/alibaba/gym-starcraft
  4. https://github.com/endgameinc/gym-malware
  5. https://github.com/hackthemarket/gym-trading
  6. https://github.com/tambetm/gym-minecraft
  7. https://github.com/ppaquette/gym-doom
  8. https://github.com/ppaquette/gym-super-mario
  9. https://github.com/tuzzer/gym-maze

1
I get an ugly "gym_foo imported but unused" warning. How can I get rid of it?
hipoglucido

@hipoglucido To get rid of the "gym_foo imported but unused" warning, you need to tell your editor/linter to ignore this import. This is commonly done with import gym_foo  # noqa
Martin Thoma

5
I think it should be said loud and clear that you don't need any of this; just deriving the class is enough, right? There is really no reason to create a package if you are not distributing it through the gym ecosystem?
mathtick
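
Indeed, the packaging is optional for purely local use. A minimal sketch of the direct route, reusing the FooEnv class from above:

from gym_foo.envs.foo_env import FooEnv

env = FooEnv()  # instantiate directly; no register()/gym.make() needed
ob = env.reset()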

After following the steps above I got a "gym_foo" import error; running pip install -e . fixed it. @hipoglucido
praneeth

17

It is definitely possible. They say so on the Documentation page, near the end:

https://gym.openai.com/docs

As for how to do it, you should look at the source code of the existing environments for inspiration. It is available on GitHub:

https://github.com/openai/gym#installation

Most of their environments were not implemented from scratch. Instead, they created a wrapper around an existing environment and gave it an interface that is convenient for reinforcement learning.

If you want to make your own, you should probably go in this direction and try to adapt something that already exists to the gym interface, although there is a good chance this will be very time consuming.
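
As a rough illustration of that wrapper idea, here is a sketch; some_simulator and all of its methods are hypothetical stand-ins for whatever existing simulation you are adapting:

import gym
from gym import spaces

class WrappedSimEnv(gym.Env):
    """Sketch: adapt a pre-existing simulator to the gym interface."""
    metadata = {'render.modes': ['human']}

    def __init__(self, simulator):
        self.sim = simulator                          # the existing simulation
        self.action_space = spaces.Discrete(4)        # actions the simulator accepts
        self.observation_space = spaces.Discrete(16)  # states it exposes

    def _step(self, action):
        self.sim.apply(action)           # hypothetical: advance the simulation
        ob = self.sim.observe()          # hypothetical: read the new state
        reward = self.sim.score_delta()  # hypothetical: reward since last step
        episode_over = self.sim.finished()
        return ob, reward, episode_over, {}

    def _reset(self):
        self.sim.restart()
        return self.sim.observe()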

There is another option that could be interesting for your purpose: OpenAI's Universe

https://universe.openai.com/

It can, for example, integrate with websites so that you train your models on Kongregate games. But Universe is not as easy to use as Gym.

If you are a beginner, my recommendation is to start with a vanilla implementation on a standard environment. Once you get past the problems with the basics, go on to adding more...


What if you want to create an environment for a non-digital activity like tic-tac-toe or a Rubik's cube, where the possible states are finite and well defined? Would I just list all the possible states? How would the simulation figure out which destination states are valid from a given state?
Hendrik