Snowball Fight KoTH!


35

Results (as of 21:40:37 UTC, 22 May 2017)

Master won 18 rounds, lost 2 rounds, and tied 0 rounds
Save One won 15 rounds, lost 3 rounds, and tied 2 rounds
Machine Gun won 14 rounds, lost 3 rounds, and tied 3 rounds
Monte Bot won 14 rounds, lost 3 rounds, and tied 3 rounds
Amb Bot won 12 rounds, lost 8 rounds, and tied 0 rounds
Coward won 11 rounds, lost 3 rounds, and tied 6 rounds
Pain in the Nash won 11 rounds, lost 9 rounds, and tied 0 rounds
Nece Bot won 10 rounds, lost 7 rounds, and tied 3 rounds
Naming Things is Hard won 10 rounds, lost 7 rounds, and tied 3 rounds
The Procrastinator won 10 rounds, lost 8 rounds, and tied 2 rounds
Yggdrasil won 10 rounds, lost 10 rounds, and tied 0 rounds
Simple Bot won 9 rounds, lost 4 rounds, and tied 7 rounds
Table Bot won 9 rounds, lost 6 rounds, and tied 5 rounds
Prioritized Random Bot won 8 rounds, lost 7 rounds, and tied 5 rounds
Upper Hand Bot won 7 rounds, lost 13 rounds, and tied 0 rounds
Aggressor won 6 rounds, lost 10 rounds, and tied 4 rounds
Insane won 5 rounds, lost 15 rounds, and tied 0 rounds
The Ugly Duckling won 4 rounds, lost 16 rounds, and tied 0 rounds
Know Bot won 3 rounds, lost 14 rounds, and tied 3 rounds
Paranoid Bot won 0 rounds, lost 19 rounds, and tied 1 round
Panic Bot won 0 rounds, lost 19 rounds, and tied 1 round

Unfortunately, I was unable to test The Crazy X-Code Randomness because I could not get it to run from bash on Linux. I will include it if I can get it working.

Full controller output


The Game

This is a pretty simple KoTH game: a one-on-one snowball fight. You have an initially empty container that can hold up to k snowballs, and you can duck up to j times. Every turn, both players are asked to choose their move simultaneously. The three moves are the following (a sketch of how a turn might resolve follows the list):

  • reload: gives you another snowball (up to the maximum of k)
  • throw: throws a snowball, which kills the other player if they decided to reload. If both players throw a snowball, nobody dies (their aim is so good that they hit each other's snowballs)
  • duck: does nothing, and avoids being hit if the other player throws a snowball. If you have no ducks left, nothing happens, and you die if the other player throws a snowball.
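
For reference, here is a minimal sketch (an assumption of mine, not the actual controller code) of how one simultaneous turn could be resolved under these rules, given two moves that are already valid:

RELOAD, THROW, DUCK = 0, 1, 2

def resolve_turn(move_a, move_b, a, b, max_snowballs):
    """a and b are dicts like {'balls': 3, 'ducks': 2, 'alive': True}."""
    ducked_a = move_a == DUCK and a['ducks'] > 0  # ducking with 0 ducks left does nothing
    ducked_b = move_b == DUCK and b['ducks'] > 0

    # Update resources first.
    for move, p, ducked in ((move_a, a, ducked_a), (move_b, b, ducked_b)):
        if move == RELOAD:
            p['balls'] = min(p['balls'] + 1, max_snowballs)
        elif move == THROW:
            p['balls'] -= 1
        elif ducked:
            p['ducks'] -= 1

    # A snowball kills unless its target also threw or successfully ducked.
    if move_a == THROW and move_b != THROW and not ducked_b:
        b['alive'] = False
    if move_b == THROW and move_a != THROW and not ducked_a:
        a['alive'] = False
    return a, b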

Objective

Don't die.

Challenge Specifications

Your program can be written in any language. On each execution it will be given each of these variables as arguments:

[turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs]

turn - the number of turns that have already passed (0 on the first iteration)
snowballs - the number of snowballs you have
opponent_snowballs - the number of snowballs your opponent has
ducks - the number of times you can still duck
opponent_ducks - the number of times your opponent can still duck
max_snowballs - the maximum number of snowballs you can store (k)

The output of your program should be 0 for reload, 1 for throw, and 2 for duck. You must output your move, terminated by a newline. Please do not output invalid moves; the controller is fairly resilient and will not break even if you output something that is not an integer, but it must still end with a newline. If the move is not one of [0, 1, 2], it defaults to 0. The winner is the player with the most wins over the full round-robin.
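
For illustration, a bot satisfying this I/O contract can be as small as the following Python sketch (the strategy shown is an arbitrary placeholder, not a recommendation):

import sys

# The six arguments described above; argv[0] is the script's own name.
turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs = map(int, sys.argv[1:7])

# Placeholder strategy: throw if armed, otherwise reload.
move = 1 if snowballs > 0 else 0

print(move)  # print() appends the required trailing newline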

Rules

You may read/write to a single file for memory storage between iterations. Your bot will be placed in its own directory, so file-name conflicts will not occur. You may not change built-in functions (such as the random generator). That was pretty funny the first time it was done, but not any more. Your program is not allowed to do anything that is blatant execution stalling. Standard loopholes apply.
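
For example, a bot could persist a little state between turns roughly like this (a sketch; the file name memory.json is arbitrary, since each bot runs in its own directory, and per a comment further down the data only persists between turns within a single round):

import json
import os

STATE_FILE = 'memory.json'  # arbitrary name; only this bot's directory is used

def load_state():
    # Starts fresh whenever the file does not exist yet (e.g. on turn 0).
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {'opponent_moves': []}

def save_state(state):
    with open(STATE_FILE, 'w') as f:
        json.dump(state, f)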

Testing

The source code for the controller can be found here. Example of running it: java Controller "python program1/test1.py" "python program2/test2.py" 10 5 for 10 snowballs and 5 ducks.

Judging

The winner is decided by selecting the person with the most wins after a full round-robin. If that is a tie, remove everyone who does not have the most wins, then repeat until a single winner remains. The judging standard is 50 snowballs and 25 ducks.
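
A sketch of that tie-breaking loop (rerun_round_robin here is a hypothetical stand-in for replaying the tournament among the tied bots):

def pick_winner(wins, rerun_round_robin):
    # wins: dict mapping bot name -> wins in the round-robin.
    while len(wins) > 1:
        best = max(wins.values())
        wins = {bot: w for bot, w in wins.items() if w == best}  # drop non-leaders
        if len(wins) > 1:
            wins = rerun_round_robin(list(wins))  # repeat among the tied bots
    return next(iter(wins))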

Good luck to everyone!

Edit: The game will be declared a draw if 1000 rounds pass. Your bot may assume that turn < 1000.


Comments are not for extended discussion; this conversation has been moved to chat.
Dennis

@HyperNeutrino more questions: I thought the "judging criteria" would be 50 snowballs and 25 ducks? Also, why is there sometimes a tie after roughly 18 rounds?
CommonGuy

@Manu Ehh crap, I forgot to change the settings in the VM arguments. And that happens because, if two bots get stuck in an endless cycle of snowball collisions, the game ends after a period-1 or period-2 cycle has repeated for 10 rounds.
HyperNeutrino

1
So, will there be another round? Because I would like to upload my bot, and I'm curious how it would do.
erbsenhirn

@erbsenhirn If you upload a bot and ping me in chat (on The Nineteenth Byte), I will run the tournament again.
HyperNeutrino

Answers:


13

Master, C#

I trained a small neural network (using SharpNEAT). It seems to like picking up snowballs and ducking...

In a previous version of the controller it even found a bug: it went from 0% wins to 100% when it figured out how to win by cheating.

Edit: I had forgotten to reset the network's internal state and had trained the network wrong. The newly trained network is much smaller.

using System;
using System.Collections.Generic;

public class Master
{
    public CyclicNetwork _network;

    public static void Main(string[] args)
    {
        int s = int.Parse(args[1]);
        int os = int.Parse(args[2]);
        int d = int.Parse(args[3]);
        int od = int.Parse(args[4]);
        int ms = int.Parse(args[5]);

        var move = new Master().GetMove(s, os, d, od, ms);
        Console.WriteLine(move);
    }

    public Master()
    {
        var nodes = new List<Neuron>
        {
            new Neuron(0, NodeType.Bias),
            new Neuron(1, NodeType.Input),
            new Neuron(2, NodeType.Input),
            new Neuron(3, NodeType.Input),
            new Neuron(4, NodeType.Input),
            new Neuron(5, NodeType.Input),
            new Neuron(6, NodeType.Output),
            new Neuron(7, NodeType.Output),
            new Neuron(8, NodeType.Output),
            new Neuron(9, NodeType.Hidden)
        };
        var connections = new List<Connection>
        {
            new Connection(nodes[1], nodes[6], -1.3921811701131295),
            new Connection(nodes[6], nodes[6], 0.04683387519679514),
            new Connection(nodes[3], nodes[7], -4.746164930591382),
            new Connection(nodes[8], nodes[8], -0.025484025422054933),
            new Connection(nodes[4], nodes[9], -0.02084856381644095),
            new Connection(nodes[9], nodes[6], 4.9614062853759124),
            new Connection(nodes[9], nodes[9], -0.008672587457112968)
        };
        _network = new CyclicNetwork(nodes, connections, 5, 3, 2);
    }

    public int GetMove(int snowballs, int opponentBalls, int ducks, int opponentDucks, int maxSnowballs)
    {
        _network.InputSignalArray[0] = snowballs;
        _network.InputSignalArray[1] = opponentBalls;
        _network.InputSignalArray[2] = ducks;
        _network.InputSignalArray[3] = opponentDucks;
        _network.InputSignalArray[4] = maxSnowballs;

        _network.Activate();

        double max = double.MinValue;
        int best = 0;
        for (var i = 0; i < _network.OutputCount; i++)
        {
            var current = _network.OutputSignalArray[i];

            if (current > max)
            {
                max = current;
                best = i;
            }
        }

        _network.ResetState();

        return best;
    }
}

public class CyclicNetwork
{
    protected readonly List<Neuron> _neuronList;
    protected readonly List<Connection> _connectionList;
    protected readonly int _inputNeuronCount;
    protected readonly int _outputNeuronCount;
    protected readonly int _inputAndBiasNeuronCount;
    protected readonly int _timestepsPerActivation;
    protected readonly double[] _inputSignalArray;
    protected readonly double[] _outputSignalArray;
    readonly SignalArray _inputSignalArrayWrapper;
    readonly SignalArray _outputSignalArrayWrapper;

    public CyclicNetwork(List<Neuron> neuronList, List<Connection> connectionList, int inputNeuronCount, int outputNeuronCount, int timestepsPerActivation)
    {
        _neuronList = neuronList;
        _connectionList = connectionList;
        _inputNeuronCount = inputNeuronCount;
        _outputNeuronCount = outputNeuronCount;
        _inputAndBiasNeuronCount = inputNeuronCount + 1;
        _timestepsPerActivation = timestepsPerActivation;

        _inputSignalArray = new double[_inputNeuronCount];
        _outputSignalArray = new double[_outputNeuronCount];

        _inputSignalArrayWrapper = new SignalArray(_inputSignalArray, 0, _inputNeuronCount);
        _outputSignalArrayWrapper = new SignalArray(_outputSignalArray, 0, outputNeuronCount);
    }
    public int OutputCount
    {
        get { return _outputNeuronCount; }
    }
    public SignalArray InputSignalArray
    {
        get { return _inputSignalArrayWrapper; }
    }
    public SignalArray OutputSignalArray
    {
        get { return _outputSignalArrayWrapper; }
    }
    public virtual void Activate()
    {
        for (int i = 0; i < _inputNeuronCount; i++)
        {
            _neuronList[i + 1].OutputValue = _inputSignalArray[i];
        }

        int connectionCount = _connectionList.Count;
        int neuronCount = _neuronList.Count;
        for (int i = 0; i < _timestepsPerActivation; i++)
        {
            for (int j = 0; j < connectionCount; j++)
            {
                Connection connection = _connectionList[j];
                connection.OutputValue = connection.SourceNeuron.OutputValue * connection.Weight;
                connection.TargetNeuron.InputValue += connection.OutputValue;
            }
            for (int j = _inputAndBiasNeuronCount; j < neuronCount; j++)
            {
                Neuron neuron = _neuronList[j];
                neuron.OutputValue = neuron.Calculate(neuron.InputValue);
                neuron.InputValue = 0.0;
            }
        }
        for (int i = _inputAndBiasNeuronCount, outputIdx = 0; outputIdx < _outputNeuronCount; i++, outputIdx++)
        {
            _outputSignalArray[outputIdx] = _neuronList[i].OutputValue;
        }
    }
    public virtual void ResetState()
    {
        for (int i = 1; i < _inputAndBiasNeuronCount; i++)
        {
            _neuronList[i].OutputValue = 0.0;
        }
        int count = _neuronList.Count;
        for (int i = _inputAndBiasNeuronCount; i < count; i++)
        {
            _neuronList[i].InputValue = 0.0;
            _neuronList[i].OutputValue = 0.0;
        }
        count = _connectionList.Count;
        for (int i = 0; i < count; i++)
        {
            _connectionList[i].OutputValue = 0.0;
        }
    }
}
public class Connection
{
    readonly Neuron _srcNeuron;
    readonly Neuron _tgtNeuron;
    readonly double _weight;
    double _outputValue;

    public Connection(Neuron srcNeuron, Neuron tgtNeuron, double weight)
    {
        _tgtNeuron = tgtNeuron;
        _srcNeuron = srcNeuron;
        _weight = weight;
    }
    public Neuron SourceNeuron
    {
        get { return _srcNeuron; }
    }
    public Neuron TargetNeuron
    {
        get { return _tgtNeuron; }
    }
    public double Weight
    {
        get { return _weight; }
    }
    public double OutputValue
    {
        get { return _outputValue; }
        set { _outputValue = value; }
    }
}

public class Neuron
{
    readonly uint _id;
    readonly NodeType _neuronType;
    double _inputValue;
    double _outputValue;

    public Neuron(uint id, NodeType neuronType)
    {
        _id = id;
        _neuronType = neuronType;

        // Bias neurons have a fixed output value of 1.0
        _outputValue = (NodeType.Bias == _neuronType) ? 1.0 : 0.0;
    }
    public double InputValue
    {
        get { return _inputValue; }
        set
        {
            if (NodeType.Bias == _neuronType || NodeType.Input == _neuronType)
            {
                throw new Exception("Attempt to set the InputValue of bias or input neuron. Bias neurons have no input, and Input neuron signals should be passed in via their OutputValue property setter.");
            }
            _inputValue = value;
        }
    }
    public double Calculate(double x)
    {
        return 1.0 / (1.0 + Math.Exp(-4.9 * x));
    }
    public double OutputValue
    {
        get { return _outputValue; }
        set
        {
            if (NodeType.Bias == _neuronType)
            {
                throw new Exception("Attempt to set the OutputValue of a bias neuron.");
            }
            _outputValue = value;
        }
    }
}

public class SignalArray
{
    readonly double[] _wrappedArray;
    readonly int _offset;
    readonly int _length;

    public SignalArray(double[] wrappedArray, int offset, int length)
    {
        if (offset + length > wrappedArray.Length)
        {
            throw new Exception("wrappedArray is not long enough to represent the requested SignalArray.");
        }

        _wrappedArray = wrappedArray;
        _offset = offset;
        _length = length;
    }

    public double this[int index]
    {
        get
        {
            return _wrappedArray[_offset + index];
        }
        set
        {
            _wrappedArray[_offset + index] = value;
        }
    }
}

public enum NodeType
{
    /// <summary>
    /// Bias node. Output is fixed to 1.0
    /// </summary>
    Bias,
    /// <summary>
    /// Input node.
    /// </summary>
    Input,
    /// <summary>
    /// Output node.
    /// </summary>
    Output,
    /// <summary>
    /// Hidden node.
    /// </summary>
    Hidden
}

Apparently resetting the network state improved the performance a lot :)
HyperNeutrino

What did you train the neural network against? The other bots posted here?
JAD

@JarkoDubbeldam Yes, I ported a few of them to C# and trained the network against them. That is probably why it may struggle against the newer bots.
CommonGuy

Or just train another network against those bots and this one :p
JAD

Wat. A neural network with 8 votes!
Christopher

6

Save One, Python

Throws most of its snowballs right away, but always saves one in case the opponent is watching for a lack of ammo. After that it ducks for as long as possible (again saving 1) before reloading, unless a safe reload or a kill is guaranteed.

import sys
turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs = map(int, sys.argv[1:])

reload_snowball=0
throw=1
duck=2

if snowballs<=1:
    if opponent_snowballs==0:
        if opponent_ducks==0:
            print throw
        else:
            print reload_snowball
    elif ducks > 1:
        print duck
    else:
        print reload_snowball
else:
    print throw

2
If it has 0 snowballs, it will still try to throw one
Carl Bosch

@CarlBosch that is a state that should be impossible to reach (other than starting at 0), but I will edit to cover that case anyway
SnoringFrog

2
@SnoringFrog to clarify the rules, you do start with 0 snowballs
PhiNotPi

@PhiNotPi I must have completely overlooked that. Thanks for clarifying
SnoringFrog

6

PrioritizedRandomBot, Java

import java.util.Random;

public class PrioritizedRandomBot implements SnowballFighter {
    static int RELOAD = 0;
    static int THROW = 1;
    static int DUCK = 2;
    static Random rand = new Random();

    public static void main(String[] args) {
        int t = Integer.parseInt(args[0]);
        int s = Integer.parseInt(args[1]);
        int os = Integer.parseInt(args[2]);
        int d = Integer.parseInt(args[3]);
        int od = Integer.parseInt(args[4]);
        int ms = Integer.parseInt(args[5]);
        if (s > os + od) {
            System.out.println(THROW);
            return;
        }
        if (os == 0) {
            if (s == ms || s > 0 && s == od && rand.nextInt(1001 - t) == 0) {
                System.out.println(THROW);
            } else {
                System.out.println(RELOAD);
            }
            return;
        }
        if (os == ms && d > 0) {
            System.out.println(DUCK);
            return;
        }
        int r = rand.nextInt(os + od);
        if (r < s) {
            System.out.println(THROW);
        } else if (r < s + d) {
            System.out.println(DUCK);
        } else {
            System.out.println(RELOAD);
        }
    }
}

This bot selects a random integer in the range 0 to os + od, and then chooses to throw, duck, or reload, with the thresholds determined by its current numbers of snowballs and ducks.

The important thing to realize is that once one bot has more snowballs than the other bot has snowballs + ducks, then it can force a win. From this we can come up with the notion of "points":

my points = s - os - od
op points = os - s - d

 effects of moves on my points
        OPPONENT
       R    T    D
   R        L   ++
 M T   W          
 E D   -    +    +

If either of these numbers becomes positive, then that player can force a win.

points dif = p - op = 2*(s - os) + d - od

 effects of moves on the difference in points (me - my opponent)
        OPPONENT
       R    T    D
   R        L   +++
 M T   W         -
 E D  ---   +   


points sum = p + op = - (d + od)

 effects of moves on the sum of points (me + my opponent)
        OPPONENT
       R    T    D
   R        L    +
 M T   W         +
 E D   +    +   ++

The "points difference" table forms the game-theoretic basis of this contest. It doesn't quite capture all of the information, but it does show how snowballs are fundamentally more valuable than ducks (since a snowball is both offense and defense). If the opponent throws a snowball and you duck successfully, then you are one step closer to a forced victory, because the opponent used up the more valuable resource. This table also describes what you should do in many special cases, such as when certain move options are not available.

The "points sum" table shows how, over time, the sum of points approaches zero (as both players run out of ducks), at which point the first player to make a mistake (reloading when they didn't need to) loses.
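
To make that bookkeeping concrete, here is a small Python sketch (my paraphrase, not code taken from the bot above):

def my_points(s, os_, od):
    return s - os_ - od   # my points = s - os - od

def op_points(s, os_, d):
    return os_ - s - d    # op points = os - s - d

def can_force_win(s, os_, od):
    # True once my snowballs exceed the opponent's snowballs + ducks:
    # I can then throw every turn and they cannot dodge or trade them all.
    return s > os_ + od

# Example: with 4 snowballs against 1 snowball and 2 ducks,
# my_points(4, 1, 2) == 1 > 0, so the win can be forced.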

Now let's try to extend this forcing strategy to situations that aren't actually forcible (as in, we are winning by a good margin, but mind-reading on the opponent's part would still beat us). Basically, we have s snowballs but need to snowball the opponent s+1 (or s+2, etc.) times in a row in order to win. In this case, we want to sneak in a few ducks or reloads to buy ourselves some time.

Right now this bot always tries to sneak in some ducks, simply because it then doesn't risk an immediate loss: we assume the opponent follows a similar strategy of hurling as many snowballs as possible, so trying to reload is really dangerous. Also, to prevent predictability, we want to sneak these in following a uniformly random distribution: the probability of ducking is related to how many ducks we need to perform relative to the number of snowballs we need to throw.

If we are losing badly, that is s + d < os + od, then we also need to sneak in some reloads in addition to using all of our ducks, in which case we want to reload at random, but only as many times as we need.

This is why our bot prioritizes in the order throw, duck, reload, and uses os + od to generate the random number, since that is the threshold number of moves we need to make.

There is one edge case, and two other special cases, that the bot currently handles. The edge case is when the opponent has neither snowballs nor ducks, so randomization doesn't help; then we throw if possible, otherwise we reload. Another special case is when the opponent cannot reload, so there is no benefit to throwing (since the opponent will either duck or throw), and we always duck instead (because saving snowballs is more valuable than saving ducks). The final special case is when the opponent has no snowballs, in which case we play it safe and reload if possible.


This might end up printing multiple numbers, which may not work correctly.
HyperNeutrino

@HyperNeutrino I forgot to add an "else" block when I rewrote the bot to use print statements instead of return.
PhiNotPi

1
@HyperNeutrino it did that for me, and I think it…

Ah. Yes, sorry for messing with your code :P But nice, first program to use randomness!
HyperNeutrino

6

NeceBot - Python

Here is the game theory table for the game:

        OPPONENT
       R    T     D
   R   ~    L   +D+S
 M T   W    ~   +D-S 
 E D -D-S  -D+S   ~

Where ~ means no advantage, W is a win, L is a loss, +-S means gaining/losing a snowball relative to the opponent, and +-D means gaining/losing a duck relative to the opponent. It is a completely symmetric game.

Note that my solution does not take that table into account. Because I'm bad at maths.

import sys

RELOAD = 0
THROW = 1
DUCK = 2

def main(turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs):
    if 2 + ducks <3:
        if 2 + snowballs <3:
            return RELOAD
        if 2 + opponent_ducks <3 or 2 + opponent_snowballs <3:
            return THROW
        return RELOAD
    if 2 + snowballs <3:
        if -opponent_snowballs <3 - 5 or 2 + abs(opponent_snowballs - 1) <3:
            return DUCK
        return RELOAD
    if 2 + opponent_ducks <3 or 2 + abs(snowballs - max_snowballs) <3:
        return THROW
    if -snowballs <3 - 6 or turn % 5 <3:
        return THROW
    return DUCK

print(main(*map(int, sys.argv[1:])))

It's called the NeceBot because it tries to reduce what is necessary first. It has a few arbitrary strategies after that which I hope will work.


4
So much <3 lol. +1 for having a game table and then not using it :P But nice solution :)
HyperNeutrino

Might 3 + opponent_snowballs <3 be a bug?
PhiNotPi

@PhiNotPi Yes. It was meant to be a 2. Fixed now, thanks!
Artyer

Unfortunately, the large number of <3s makes the code hard to understand :(
CalculatorFeline

5

Coward - Scala

Throws if the opponent has no ammo, otherwise (in order of priority) ducks, throws or reloads.

object Test extends App {
  val s = args(1).toInt
  val os = args(2).toInt
  val d = args(3).toInt

  val move = 
    if(os == 0)
      if(s > 0)
        1
      else
        0
    else if(d > 0)
        2
    else if(s > 0)
      1
    else
      0

  println(move)
}

Seems like this troubled…

5

TheUglyDuckling - Python

Will keep ducking until it can't any more, then will try to throw if the opponent is empty, or reload if both are empty. Uses reload as a last resort.

import sys

arguments = sys.argv;

turn = int(arguments[1])
snowballs = int(arguments[2])
opponent_snowballs = int(arguments[3])
ducks = int(arguments[4])
opponent_ducks = int(arguments[5])
max_snowballs = int(arguments[6])

if ducks > 0:
    print 2
elif opponent_snowballs == 0 and snowballs > 0:
    print 1
elif opponent_snowballs == 0 and snowballs <= 0:
    print 0
elif snowballs > 0:
    print 1
elif snowballs <= 0:
    print 0

5

SimpleBot - Python 2

import sys
turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs = map(int, sys.argv[1:])

if opponent_snowballs > 0 and ducks > 0: print 2
elif snowballs: print 1
else: print 0

Simple stuff.

  • If the opponent has snowballs and you have ducks, you duck.
  • If the opponent has no snowballs and you have some, you throw.
  • In any other case, you reload.

5

Naming Things is Hard - VB.NET

Naming things is hard, and I'm not sure I have a cohesive strategy to name it after.

Tries to gamble on the first few rounds to get an early win. After that, plays it safer for the rest of the game and tries to win by attrition.

Module SnowballFight

    Private Enum Action
        Reload = 0
        ThrowSnowball = 1
        Duck = 2
    End Enum

    Sub Main(args As String())
        Dim turn As Integer = args(0)
        Dim mySnowballs As Integer = args(1)
        Dim opponentSnowballs As Integer = args(2)
        Dim myDucks As Integer = args(3)
        Dim opponentDucks As Integer = args(4)
        Dim maxSnowballs As Integer = args(5)

        If mySnowballs = 0 AndAlso opponentSnowballs = 0 Then
            ' can't throw, no need to duck
            Console.WriteLine(Action.Reload)
            Exit Sub
        End If

        If turn = 2 AndAlso opponentSnowballs > 0 Then
            ' everyone will probably reload and then throw, so try and duck, and throw turn 3
            Console.WriteLine(Action.Duck)
            Exit Sub
        End If

        If turn = 3 AndAlso opponentSnowballs = 0 Then
            ' they threw on turn 2, get them!
            Console.WriteLine(Action.ThrowSnowball)
            Exit Sub
        End If

        If mySnowballs > 0 AndAlso opponentSnowballs = 0 Then
            ' hope they don't duck
            Console.WriteLine(Action.ThrowSnowball)
            Exit Sub
        End If

        If mySnowballs = 0 AndAlso opponentSnowballs > 0 Then
            If myDucks > 0 Then
                ' watch out!
                Console.WriteLine(Action.Duck)
                Exit Sub
            Else
                ' well, maybe we'll get lucky
                Console.WriteLine(Action.Reload)
                Exit Sub
            End If
        End If

        If opponentSnowballs > 0 AndAlso myDucks > 5 Then
            ' play it safe
            Console.WriteLine(Action.Duck)
            Exit Sub
        End If

        If mySnowballs > 5 OrElse opponentDucks < 5 Then
            ' have a bunch saved up, start throwing them
            Console.WriteLine(Action.ThrowSnowball)
            Exit Sub
        End If

        ' start saving up
        Console.WriteLine(Action.Reload)
    End Sub

End Module

5

MachineGun, Python 3

Tries to save up snowballs until it can guarantee a kill on the opponent, or until it runs out of ducks (in which case it starts blindly firing all of its snowballs, like a machine gun).

It also ducks whenever the opponent has a snowball, because it doesn't want to die.

from os import sys
args = sys.argv[1:]
turn = int(args[0])
snowballs = int(args[1])
opponent_snowballs = int(args[2])
ducks = int(args[3])
opponent_ducks = int(args[4])
max_snowballs = int(args[5])
if ducks > 0 and opponent_snowballs > 0:
    print("2")
elif snowballs > 0 and opponent_snowballs == 0 and opponent_ducks == 0:
    print("1")
elif ducks == 0 and snowballs > 0:
    print("1")
elif snowballs < max_snowballs:
    print("0")
elif snowballs == max_snowballs:
    print("1")
else:
    print("0")

5

Knowbot, Python 3

Keeps track of the frequencies of previous moves, assumes the opponent will make its most frequent move again, and defends against that.

**Updated so it no longer expects moves the opponent cannot make**

import sys,pickle
TURN,BALLS,OTHROWS,DUCKS,ODUCKS,MAXB,OLOADS = [i for i in range(7)]

def save_state(data,prob):
    with open('snowball.pickle', 'wb') as f:
        pickle.dump((data,prob), f)

def load_state():
    with open('snowball.pickle', 'rb') as f:
        return pickle.load(f)

def reload(data = None):
    if not data or data[BALLS]<data[MAXB]:
        print(0)
        return True
    return False

def throw(data):
    if data[BALLS]>0:
        print(1)
        return True
    return False
def duck(data):
    if data[DUCKS]>0:
        print(2)
        return True
    return False


data = [int(v) for v in sys.argv[1:]]
data.append(0)

if data[TURN] > 0:
    last_data,prob = load_state()
    delta = [l-n for l,n in zip(last_data, data)]
    if delta[OTHROWS]<0:
        delta[OTHROWS]=0
        delta[OLOADS]=1
    prob = [p+d for p,d in zip(prob,delta)]
else:
    prob = [0]*7

expected = sorted(((prob[action],action) for action in [OTHROWS, ODUCKS, OLOADS]),
                      reverse=True)
expect = next( (a for p,a in expected if data[a]>0), OLOADS)

if expect == OTHROWS:
    duck(data) or throw(data) or reload()
elif expect == ODUCKS:
    reload(data) or duck(data) or throw(data) or reload()
else:
    throw(data) or reload(data) or duck(data) or reload()

save_state(data,prob);

I'm not sure how this works, but if it's storing data between rounds (as opposed to turns), all of the data unfortunately gets deleted between rounds. It doesn't invalidate your solution, just keep that in mind :)
HyperNeutrino

It doesn't expect to keep data between rounds, it just expects the current opponent to be consistent.
AShelly '17

Okay. Good. I just wanted to make sure there was no misunderstanding. :)
HyperNeutrino

4

Braingolf, The Aggressor

<<?1:0

The Aggressor is no coward! If he has a snowball, he will throw it! If he has none, he will make more!

Braingolf, The Insane

This isn't actually a bot, it's just a programmer I kidnapped and forced to port every project he's ever made to Braingolf. He no longer has a shred of sanity.

<3r!?:1+|%

Generates a random number less than 3 and outputs t % r, where t is the current turn number and r is the random number.

To run these, you will need to download braingolf.py from github, then save the Braingolf code to a file and run

python3 braingolf.py -f filename <space separated inputs>

or insert the code directly, like this:

python3 braingolf.py -c '<<?1:0' <space separated inputs>

The inputs don't really matter, as long as the 2nd argument after the code/filename is the number of snowballs the Aggressor has.

Note: the Aggressor actually behaves identically to TestBot; I just wanted to have an entry in Braingolf

Braingolf, The Brainy [Currently broken]

VR<<<!?v1:v0|R>!?v1:v0|>R<<!?v1:v0|>R<!?v1:v0|<VR<<.m<.m~v<-?~v0:~v1|>vc
VRv.<.>+1-?2_;|>.M<v?:0_;|1

Of course someone had to do it :D Nice, and golfed too! :D
HyperNeutrino

Oh wait, this is the same as mine except golfier. lol
HyperNeutrino

@HyperNeutrino yeah, I'm working on a real one in an actual language now. I would make a real one in Braingolf, but it can't do nested conditionals, which makes things difficult
Skidsdev

2
I think you should post "The Brainy" as a separate answer. Also, I think it is broken.
Erik the Outgolfer

"The Insane" is not a stable bot, so I'm not sure how @HyperNeutrino is going to test it.
Erik the Outgolfer

3

TestBot - Python

This is a test submission to show you what a valid submission looks like. Strategy: alternate reloading and throwing. A pretty bad strategy, but it gives you an idea of how your program should work.

from os import sys
arguments = sys.argv;
turn = int(arguments[1])
print(turn % 2)

Would the arguments be _, turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs = sys.argv?
Artyer

@Artyer Yeah. It turns out the first argument is the file name.
HyperNeutrino

You can just use sys.argv[1:] if you don't want to bother with the _

2

UpperHandBot, Python 3

import sys
turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs = map(int, sys.argv[1:])

if snowballs <= opponent_snowballs:
  if opponent_snowballs > 0 and ducks > 0:
    print(2)
  else:
    if snowballs < max_snowballs:
      print(0)
    else:
      print(1)
else:
  print(1)

This bot tries to collect more snowballs than its opponent, and then starts throwing. At any point where UHB does not have more snowballs than its opponent, it will:

  • duck, if the opponent has snowballs and it still has ducks
  • otherwise, reload (unless UHB is at the maximum, in which case it throws, although I don't think this situation will ever come up)

2

Yggdrasil, Java

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class Yggdrasil implements SnowballFighter {
    public static boolean debug = false;
    static int RELOAD = 0;
    static int THROW = 1;
    static int DUCK = 2;
    static int INVALID = -3;
    static Random rand = new Random();

    public static void main(String[] args) {
        int t = Integer.parseInt(args[0]);
        int s = Integer.parseInt(args[1]);
        int os = Integer.parseInt(args[2]);
        int d = Integer.parseInt(args[3]);
        int od = Integer.parseInt(args[4]);
        int ms = Integer.parseInt(args[5]);
        System.out.println((new Yggdrasil()).move(t, s, os, d, od, ms));
    }

    public final int move(int t, int s, int os, int d, int od, int ms) {
        State state = State.get(s, os, d, od);
        double val = state.val(4);
        double[] strat = state.strat;
        int move = INVALID;
        if (debug) {
            System.out.println(val + " : " + strat[0] + " " + strat[1] + " " + strat[2]);
        }
        while (move == INVALID) {
            double r = rand.nextDouble();
            if (r < strat[RELOAD] && strat[RELOAD] > 0.0001) {
                move = RELOAD;
            } else if (r < strat[RELOAD] + strat[THROW] && strat[THROW] > 0.0001) {
                move = THROW;
            } else if (r < strat[RELOAD] + strat[THROW] + strat[DUCK] && strat[DUCK] > 0.0001) {
                move = DUCK;
            }
        }
        return move;
    }

    public static class State {

        public static boolean debug = false;
        public static int ms = 50;
        public int s;
        public int os;
        public static int md = 25;
        public int d;
        public int od;

        public State(int s, int os, int d, int od) {
            super();
            this.s = s;
            this.os = os;
            this.d = d;
            this.od = od;
        }

        Double val;
        int valdepth;
        double[] strat = new double[3];

        public Double val(int maxdepth) {
            if (s < 0 || s > ms || d < 0 || d > md || os < 0 || os > ms || od < 0 || od > md) {
                return null;
            } else if (val != null && valdepth >= maxdepth) {
                return val;
            }
            if (s > os + od) {
                val = 1.0; // force win
                strat = new double[] { 0, 1, 0 };
            } else if (os > s + d) {
                val = -1.0; // force loss
                strat = new double[] { 1.0 / (1.0 + s + d), s / (1.0 + s + d), d / (1.0 + s + d) };
            } else if (d == 0 && od == 0) {
                val = 0.0; // perfect tie
                if (s > 0) {
                    strat = new double[] { 0, 1, 0 };
                } else {
                    strat = new double[] { 1, 0, 0 };
                }
            } else if (maxdepth <= 0) {
                double togo = 1 - s + os + od;
                double otogo = 1 - os + s + d;
                double per = otogo * otogo / (togo * togo + otogo * otogo);
                double oper = togo * togo / (togo * togo + otogo * otogo);
                val = per - oper;
            } else {
                Double[][] fullmatrix = new Double[3][3];
                boolean[] vm = new boolean[3];
                boolean[] ovm = new boolean[3];
                for (int i = 0; i < 3; i++) {
                    int dest_s = s;
                    int dest_d = d;
                    if (i == 0) {
                        dest_s++;
                    } else if (i == 1) {
                        dest_s--;
                    } else {
                        dest_d--;
                    }
                    for (int j = 0; j < 3; j++) {
                        int dest_os = os;
                        int dest_od = od;
                        if (j == 0) {
                            dest_os++;
                        } else if (j == 1) {
                            dest_os--;
                        } else {
                            dest_od--;
                        }
                        if (i == 0 && j == 1 && dest_os >= 0 && dest_s <= ms) {
                            fullmatrix[i][j] = -1.0; // kill
                        } else if (i == 1 && j == 0 && dest_s >= 0 && dest_os <= ms) {
                            fullmatrix[i][j] = 1.0; // kill
                        } else {
                            fullmatrix[i][j] = get(dest_s, dest_os, dest_d, dest_od).val(maxdepth - 1);
                        }
                        if (fullmatrix[i][j] != null) {
                            vm[i] = true;
                            ovm[j] = true;
                        }
                    }
                }

                if (debug) {
                    System.out.println();
                    System.out.println(maxdepth);
                    System.out.println(s + " " + os + " " + d + " " + od);
                    for (int i = 0; i < 3; i++) {
                        System.out.print(vm[i]);
                    }
                    System.out.println();
                    for (int i = 0; i < 3; i++) {
                        System.out.print(ovm[i]);
                    }
                    System.out.println();
                    for (int i = 0; i < 3; i++) {
                        for (int j = 0; j < 3; j++) {
                            System.out.printf(" %7.4f", fullmatrix[i][j]);
                        }
                        System.out.println();
                    }
                }
                // really stupid way to find an approximate best strategy
                val = -1.0;
                double[] p = new double[3];
                for (p[0] = 0; p[0] < 0.0001 || vm[0] && p[0] <= 1.0001; p[0] += 0.01) {
                    for (p[1] = 0; p[1] < 0.0001 || vm[1] && p[1] <= 1.0001 - p[0]; p[1] += 0.01) {
                        p[2] = 1.0 - p[0] - p[1];
                        if (p[2] < 0.0001 || vm[2]) {
                            double min = 1;
                            for (int j = 0; j < 3; j++) {
                                if (ovm[j]) {
                                    double sum = 0;
                                    for (int i = 0; i < 3; i++) {
                                        if (vm[i]) {
                                            sum += fullmatrix[i][j] * p[i];
                                        }
                                    }
                                    min = Math.min(min, sum);
                                }
                            }
                            if (min > val) {
                                val = min;
                                strat = p.clone();
                            }
                        }
                    }
                }
                if (debug) {
                    System.out.println("v:" + val);
                    System.out.println("s:" + strat[0] + " " + strat[1] + " " + strat[2]);
                }
            }
            valdepth = maxdepth;
            return val;
        }

        static Map<Integer, State> cache = new HashMap<Integer, State>();

        static State get(int s, int os, int d, int od) {
            int key = (((s) * 100 + os) * 100 + d) * 100 + od;
            if (cache.containsKey(key)) {
                return cache.get(key);
            }
            State res = new State(s, os, d, od);
            cache.put(key, res);
            return res;
        }
    }
}

I named this bot "Yggdrasil" because it actually looks ahead in the game tree and performs state valuation, from which it can compute an approximately ideal mixed strategy. Because it relies on mixed strategies, it is very non-deterministic. I don't know how well this thing will do in real competition.

A few things about this bot:

  • The core is a recursive function that computes the value and a near-ideal mixed strategy for any particular game state. Right now I have it set to look 4 steps ahead.
  • It plays extremely draw-ish, since in many cases this bot is the equivalent of "picking a random move in rock-paper-scissors". It holds its ground and hopes that its opponent gives it a statistical advantage. If this bot were perfect (which it isn't), the best you could do against it would be 50% wins and 50% losses. As a result, there is no opponent that it consistently beats, but also none that consistently beats it.

I still don't understand the name... :P
HyperNeutrino

@HyperNeutrino Yggdrasil is a mythological tree, and in this case I'm referring to the game tree.
PhiNotPi

Ohh right, I feel like I should have remembered that. :P Nice!
HyperNeutrino

2

Pain in the Nash (C++)

So called because the fact that I had to write my own Nash equilibrium solver was a real pain. I'm amazed there aren't any readily available Nash-solving libraries!

#include <fstream>
#include <iostream>
#include <vector>
#include <array>
#include <random>
#include <utility>

typedef double NumT;
static const NumT EPSILON = 1e-5;

struct Index {
    int me;
    int them;

    Index(int me, int them) : me(me), them(them) {}
};

struct Value {
    NumT me;
    NumT them;

    Value(void) : me(0), them(0) {}

    Value(NumT me, NumT them) : me(me), them(them) {}
};

template <int subDimMe, int subDimThem>
struct Game {
    const std::array<NumT, 9> *valuesMe;
    const std::array<NumT, 9> *valuesThemT;

    std::array<int, subDimMe> coordsMe;
    std::array<int, subDimThem> coordsThem;

    Game(
        const std::array<NumT, 9> *valuesMe,
        const std::array<NumT, 9> *valuesThemT
    )
        : valuesMe(valuesMe)
        , valuesThemT(valuesThemT)
        , coordsMe{}
        , coordsThem{}
    {}

    Index baseIndex(Index i) const {
        return Index(coordsMe[i.me], coordsThem[i.them]);
    }

    Value at(Index i) const {
        Index i2 = baseIndex(i);
        return Value(
            (*valuesMe)[i2.me * 3 + i2.them],
            (*valuesThemT)[i2.me + i2.them * 3]
        );
    }

    Game<2, 2> subgame22(int me0, int me1, int them0, int them1) const {
        Game<2, 2> b(valuesMe, valuesThemT);
        b.coordsMe[0] = coordsMe[me0];
        b.coordsMe[1] = coordsMe[me1];
        b.coordsThem[0] = coordsThem[them0];
        b.coordsThem[1] = coordsThem[them1];
        return b;
    }
};

struct Strategy {
    std::array<NumT, 3> probMe;
    std::array<NumT, 3> probThem;
    Value expectedValue;
    bool valid;

    Strategy(void)
        : probMe{}
        , probThem{}
        , expectedValue()
        , valid(false)
    {}

    void findBestMe(const Strategy &b) {
        if(b.valid && (!valid || b.expectedValue.me > expectedValue.me)) {
            *this = b;
        }
    }
};

template <int dimMe, int dimThem>
Strategy nash_pure(const Game<dimMe, dimThem> &g) {
    Strategy s;
    int choiceMe = -1;
    int choiceThem = 0;
    for(int me = 0; me < dimMe; ++ me) {
        for(int them = 0; them < dimThem; ++ them) {
            const Value &v = g.at(Index(me, them));
            bool valid = true;
            for(int me2 = 0; me2 < dimMe; ++ me2) {
                if(g.at(Index(me2, them)).me > v.me) {
                    valid = false;
                }
            }
            for(int them2 = 0; them2 < dimThem; ++ them2) {
                if(g.at(Index(me, them2)).them > v.them) {
                    valid = false;
                }
            }
            if(valid) {
                if(choiceMe == -1 || v.me > s.expectedValue.me) {
                    s.expectedValue = v;
                    choiceMe = me;
                    choiceThem = them;
                }
            }
        }
    }
    if(choiceMe != -1) {
        Index iBase = g.baseIndex(Index(choiceMe, choiceThem));
        s.probMe[iBase.me] = 1;
        s.probThem[iBase.them] = 1;
        s.valid = true;
    }
    return s;
}

Strategy nash_mixed(const Game<2, 2> &g) {
    //    P    Q
    // p a A  b B
    // q c C  d D

    Value A = g.at(Index(0, 0));
    Value B = g.at(Index(0, 1));
    Value C = g.at(Index(1, 0));
    Value D = g.at(Index(1, 1));

    // q = 1-p, Q = 1-P
    // Pick p such that choice of P,Q is arbitrary

    // p*A+(1-p)*C = p*B+(1-p)*D
    // p*A+C-p*C = p*B+D-p*D
    // p*(A+D-B-C) = D-C
    // p = (D-C) / (A+D-B-C)

    NumT p = (D.them - C.them) / (A.them + D.them - B.them - C.them);

    // P*a+(1-P)*b = P*c+(1-P)*d
    // P*a+b-P*b = P*c+d-P*d
    // P*(a+d-b-c) = d-b
    // P = (d-b) / (a+d-b-c)

    NumT P = (D.me - B.me) / (A.me + D.me - B.me - C.me);

    Strategy s;
    if(p >= -EPSILON && p <= 1 + EPSILON && P >= -EPSILON && P <= 1 + EPSILON) {
        if(p <= 0) {
            p = 0;
        } else if(p >= 1) {
            p = 1;
        }
        if(P <= 0) {
            P = 0;
        } else if(P >= 1) {
            P = 1;
        }
        Index iBase0 = g.baseIndex(Index(0, 0));
        Index iBase1 = g.baseIndex(Index(1, 1));
        s.probMe[iBase0.me] = p;
        s.probMe[iBase1.me] = 1 - p;
        s.probThem[iBase0.them] = P;
        s.probThem[iBase1.them] = 1 - P;
        s.expectedValue = Value(
            P * A.me + (1 - P) * B.me,
            p * A.them + (1 - p) * C.them
        );
        s.valid = true;
    }
    return s;
}

Strategy nash_mixed(const Game<3, 3> &g) {
    //    P    Q    R
    // p a A  b B  c C
    // q d D  e E  f F
    // r g G  h H  i I

    Value A = g.at(Index(0, 0));
    Value B = g.at(Index(0, 1));
    Value C = g.at(Index(0, 2));
    Value D = g.at(Index(1, 0));
    Value E = g.at(Index(1, 1));
    Value F = g.at(Index(1, 2));
    Value G = g.at(Index(2, 0));
    Value H = g.at(Index(2, 1));
    Value I = g.at(Index(2, 2));

    // r = 1-p-q, R = 1-P-Q
    // Pick p,q such that choice of P,Q,R is arbitrary

    NumT q = ((
        + A.them * (I.them-H.them)
        + G.them * (B.them-C.them)
        - B.them*I.them
        + H.them*C.them
    ) / (
        (G.them+E.them-D.them-H.them) * (B.them+I.them-H.them-C.them) -
        (H.them+F.them-E.them-I.them) * (A.them+H.them-G.them-B.them)
    ));

    NumT p = (
        ((G.them+E.them-D.them-H.them) * q + (H.them-G.them)) /
        (A.them+H.them-G.them-B.them)
    );

    NumT Q = ((
        + A.me * (I.me-F.me)
        + C.me * (D.me-G.me)
        - D.me*I.me
        + F.me*G.me
    ) / (
        (C.me+E.me-B.me-F.me) * (D.me+I.me-F.me-G.me) -
        (F.me+H.me-E.me-I.me) * (A.me+F.me-C.me-D.me)
    ));

    NumT P = (
        ((C.me+E.me-B.me-F.me) * Q + (F.me-C.me)) /
        (A.me+F.me-C.me-D.me)
    );

    Strategy s;
    if(
        p >= -EPSILON && q >= -EPSILON && p + q <= 1 + EPSILON &&
        P >= -EPSILON && Q >= -EPSILON && P + Q <= 1 + EPSILON
    ) {
        if(p <= 0) { p = 0; }
        if(q <= 0) { q = 0; }
        if(P <= 0) { P = 0; }
        if(Q <= 0) { Q = 0; }
        if(p + q >= 1) {
            if(p > q) {
                p = 1 - q;
            } else {
                q = 1 - p;
            }
        }
        if(P + Q >= 1) {
            if(P > Q) {
                P = 1 - Q;
            } else {
                Q = 1 - P;
            }
        }
        Index iBase0 = g.baseIndex(Index(0, 0));
        s.probMe[iBase0.me] = p;
        s.probThem[iBase0.them] = P;
        Index iBase1 = g.baseIndex(Index(1, 1));
        s.probMe[iBase1.me] = q;
        s.probThem[iBase1.them] = Q;
        Index iBase2 = g.baseIndex(Index(2, 2));
        s.probMe[iBase2.me] = 1 - p - q;
        s.probThem[iBase2.them] = 1 - P - Q;
        s.expectedValue = Value(
            A.me * P + B.me * Q + C.me * (1 - P - Q),
            A.them * p + D.them * q + G.them * (1 - p - q)
        );
        s.valid = true;
    }
    return s;
}

template <int dimMe, int dimThem>
Strategy nash_validate(Strategy &&s, const Game<dimMe, dimThem> &g, Index unused) {
    if(!s.valid) {
        return s;
    }

    NumT exp;

    exp = 0;
    for(int them = 0; them < dimThem; ++ them) {
        exp += s.probThem[them] * g.at(Index(unused.me, them)).me;
    }
    if(exp > s.expectedValue.me) {
        s.valid = false;
        return s;
    }

    exp = 0;
    for(int me = 0; me < dimMe; ++ me) {
        exp += s.probMe[me] * g.at(Index(me, unused.them)).them;
    }
    if(exp > s.expectedValue.them) {
        s.valid = false;
        return s;
    }

    return s;
}

Strategy nash(const Game<2, 2> &g, bool verbose) {
    Strategy s = nash_mixed(g);
    s.findBestMe(nash_pure(g));
    if(!s.valid && verbose) {
        std::cerr << "No nash equilibrium found!" << std::endl;
    }
    return s;
}

Strategy nash(const Game<3, 3> &g, bool verbose) {
    Strategy s = nash_mixed(g);
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(1, 2,  1, 2)), g, Index(0, 0)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(1, 2,  0, 2)), g, Index(0, 1)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(1, 2,  0, 1)), g, Index(0, 2)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(0, 2,  1, 2)), g, Index(1, 0)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(0, 2,  0, 2)), g, Index(1, 1)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(0, 2,  0, 1)), g, Index(1, 2)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(0, 1,  1, 2)), g, Index(2, 0)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(0, 1,  0, 2)), g, Index(2, 1)));
    s.findBestMe(nash_validate(nash_mixed(g.subgame22(0, 1,  0, 1)), g, Index(2, 2)));
    s.findBestMe(nash_pure(g));
    if(!s.valid && verbose) {
        // theory says this should never happen, but fp precision makes it possible
        std::cerr << "No nash equilibrium found!" << std::endl;
    }
    return s;
}

struct PlayerState {
    int balls;
    int ducks;

    PlayerState(int balls, int ducks) : balls(balls), ducks(ducks) {}

    PlayerState doReload(int maxBalls) const {
        return PlayerState(std::min(balls + 1, maxBalls), ducks);
    }

    PlayerState doThrow(void) const {
        return PlayerState(std::max(balls - 1, 0), ducks);
    }

    PlayerState doDuck(void) const {
        return PlayerState(balls, std::max(ducks - 1, 0));
    }

    std::array<double,3> flail(int maxBalls) const {
        // opponent has obvious win;
        // try stuff at random and hope the opponent is bad

        (void) ducks;

        int options = 0;
        if(balls > 0) {
            ++ options;
        }
        if(balls < maxBalls) {
            ++ options;
        }
        if(ducks > 0) {
            ++ options;
        }

        std::array<double,3> p{};
        if(balls < maxBalls) { // reload is an option while below the maximum
            p[0] = 1.0f / options;
        }
        if(balls > 0) {
            p[1] = 1.0f / options;
        }
        return p;
    }
};

class GameStore {
protected:
    const int balls;
    const int ducks;
    const std::size_t playerStates;
    const std::size_t gameStates;

public:
    static std::string filename(int turn) {
        return "nashdata_" + std::to_string(turn) + ".dat";
    }

    GameStore(int maxBalls, int maxDucks)
        : balls(maxBalls)
        , ducks(maxDucks)
        , playerStates((balls + 1) * (ducks + 1))
        , gameStates(playerStates * playerStates)
    {}

    std::size_t playerIndex(const PlayerState &p) const {
        return p.balls * (ducks + 1) + p.ducks;
    }

    std::size_t gameIndex(const PlayerState &me, const PlayerState &them) const {
        return playerIndex(me) * playerStates + playerIndex(them);
    }

    std::size_t fileIndex(const PlayerState &me, const PlayerState &them) const {
        return 2 + gameIndex(me, them) * 2;
    }

    PlayerState stateFromPlayerIndex(std::size_t i) const {
        return PlayerState(i / (ducks + 1), i % (ducks + 1));
    }

    std::pair<PlayerState, PlayerState> stateFromGameIndex(std::size_t i) const {
        return std::make_pair(
            stateFromPlayerIndex(i / playerStates),
            stateFromPlayerIndex(i % playerStates)
        );
    }

    std::pair<PlayerState, PlayerState> stateFromFileIndex(std::size_t i) const {
        return stateFromGameIndex((i - 2) / 2);
    }
};

class Generator : public GameStore {
    static char toDat(NumT v) {
        int iv = int(v * 256.0);
        return char(std::max(std::min(iv, 255), 0));
    }

    std::vector<Value> next;

public:
    Generator(int maxBalls, int maxDucks)
        : GameStore(maxBalls, maxDucks)
        , next()
    {}

    const Value &nextGame(const PlayerState &me, const PlayerState &them) const {
        return next[gameIndex(me, them)];
    }

    void make_probabilities(
        std::array<NumT, 9> &g,
        const PlayerState &me,
        const PlayerState &them
    ) const {
        const int RELOAD = 0;
        const int THROW = 1;
        const int DUCK = 2;

        g[RELOAD * 3 + RELOAD] =
            nextGame(me.doReload(balls), them.doReload(balls)).me;

        g[RELOAD * 3 + THROW] =
            (them.balls > 0) ? -1
            : nextGame(me.doReload(balls), them.doThrow()).me;

        g[RELOAD * 3 + DUCK] =
            nextGame(me.doReload(balls), them.doDuck()).me;

        g[THROW * 3 + RELOAD] =
            (me.balls > 0) ? 1
            : nextGame(me.doThrow(), them.doReload(balls)).me;

        g[THROW * 3 + THROW] =
            ((me.balls > 0) == (them.balls > 0))
            ? nextGame(me.doThrow(), them.doThrow()).me
            : (me.balls > 0) ? 1 : -1;

        g[THROW * 3 + DUCK] =
            (me.balls > 0 && them.ducks == 0) ? 1
            : nextGame(me.doThrow(), them.doDuck()).me;

        g[DUCK * 3 + RELOAD] =
            nextGame(me.doDuck(), them.doReload(balls)).me;

        g[DUCK * 3 + THROW] =
            (them.balls > 0 && me.ducks == 0) ? -1
            : nextGame(me.doDuck(), them.doThrow()).me;

        g[DUCK * 3 + DUCK] =
            nextGame(me.doDuck(), them.doDuck()).me;
    }

    Game<3, 3> make_game(const PlayerState &me, const PlayerState &them) const {
        static std::array<NumT, 9> globalValuesMe;
        static std::array<NumT, 9> globalValuesThemT;
        #pragma omp threadprivate(globalValuesMe)
        #pragma omp threadprivate(globalValuesThemT)

        make_probabilities(globalValuesMe, me, them);
        make_probabilities(globalValuesThemT, them, me);
        Game<3, 3> g(&globalValuesMe, &globalValuesThemT);
        for(int i = 0; i < 3; ++ i) {
            g.coordsMe[i] = i;
            g.coordsThem[i] = i;
        }
        return g;
    }

    Strategy solve(const PlayerState &me, const PlayerState &them, bool verbose) const {
        if(me.balls > them.balls + them.ducks) { // obvious answer
            Strategy s;
            s.probMe[1] = 1;
            s.probThem = them.flail(balls);
            s.expectedValue = Value(1, -1);
            return s;
        } else if(them.balls > me.balls + me.ducks) { // uh-oh
            Strategy s;
            s.probThem[1] = 1;
            s.probMe = me.flail(balls);
            s.expectedValue = Value(-1, 1);
            return s;
        } else if(me.balls == 0 && them.balls == 0) { // obvious answer
            Strategy s;
            s.probMe[0] = 1;
            s.probThem[0] = 1;
            s.expectedValue = nextGame(me.doReload(balls), them.doReload(balls));
            return s;
        } else {
            return nash(make_game(me, them), verbose);
        }
    }

    void generate(int turns, bool saveAll, bool verbose) {
        next.clear();
        next.resize(gameStates);
        std::vector<Value> current(gameStates);
        std::vector<char> data(2 + gameStates * 2);

        for(std::size_t turn = turns; (turn --) > 0;) {
            if(verbose) {
                std::cerr << "Generating for turn " << turn << "..." << std::endl;
            }
            NumT maxDiff = 0;
            NumT msd = 0;
            data[0] = balls;
            data[1] = ducks;
            #pragma omp parallel for reduction(+:msd), reduction(max:maxDiff)
            for(std::size_t meBalls = 0; meBalls < balls + 1; ++ meBalls) {
                for(std::size_t meDucks = 0; meDucks < ducks + 1; ++ meDucks) {
                    const PlayerState me(meBalls, meDucks);
                    for(std::size_t themBalls = 0; themBalls < balls + 1; ++ themBalls) {
                        for(std::size_t themDucks = 0; themDucks < ducks + 1; ++ themDucks) {
                            const PlayerState them(themBalls, themDucks);
                            const std::size_t p1 = gameIndex(me, them);

                            Strategy s = solve(me, them, verbose);

                            NumT diff;

                            data[2+p1*2  ] = toDat(s.probMe[0]);
                            data[2+p1*2+1] = toDat(s.probMe[0] + s.probMe[1]);
                            current[p1] = s.expectedValue;
                            diff = current[p1].me - next[p1].me;
                            msd += diff * diff;
                            maxDiff = std::max(maxDiff, std::abs(diff));
                        }
                    }
                }
            }

            if(saveAll) {
                std::ofstream fs(filename(turn).c_str(), std::ios_base::binary);
                fs.write(&data[0], data.size());
                fs.close();
            }

            if(verbose) {
                std::cerr
                    << "Expectations changed by at most " << maxDiff
                    << " (RMSD: " << std::sqrt(msd / gameStates) << ")" << std::endl;
            }
            if(maxDiff < 0.0001f) {
                if(verbose) {
                    std::cerr << "Expectations have converged. Stopping." << std::endl;
                }
                break;
            }
            std::swap(next, current);
        }

        // Always save turn 0 with the final converged expectations
        std::ofstream fs(filename(0).c_str(), std::ios_base::binary);
        fs.write(&data[0], data.size());
        fs.close();
    }
};

void open_file(std::ifstream &target, int turn, int maxDucks, int maxBalls) {
    target.open(GameStore::filename(turn).c_str(), std::ios::binary);
    if(target.is_open()) {
        return;
    }

    target.open(GameStore::filename(0).c_str(), std::ios::binary);
    if(target.is_open()) {
        return;
    }

    Generator(maxBalls, maxDucks).generate(200, false, false);
    target.open(GameStore::filename(0).c_str(), std::ios::binary);
}

int choose(int turn, const PlayerState &me, const PlayerState &them, int maxBalls) {
    std::ifstream fs;
    open_file(fs, turn, std::max(me.ducks, them.ducks), maxBalls);

    unsigned char balls = fs.get();
    unsigned char ducks = fs.get();
    fs.seekg(GameStore(balls, ducks).fileIndex(me, them));
    unsigned char p0 = fs.get();
    unsigned char p1 = fs.get();
    fs.close();

    // only 1 random number per execution; no need to seed a PRNG
    std::random_device rand;
    int v = std::uniform_int_distribution<int>(0, 254)(rand);
    if(v < p0) {
        return 0;
    } else if(v < p1) {
        return 1;
    } else {
        return 2;
    }
}

int main(int argc, const char *const *argv) {
    if(argc == 4) { // maxTurns, maxBalls, maxDucks
        Generator(atoi(argv[2]), atoi(argv[3])).generate(atoi(argv[1]), true, true);
        return 0;
    }

    if(argc == 7) { // turn, meBalls, themBalls, meDucks, themDucks, maxBalls
        std::cout << choose(
            atoi(argv[1]),
            PlayerState(atoi(argv[2]), atoi(argv[4])),
            PlayerState(atoi(argv[3]), atoi(argv[5])),
            atoi(argv[6])
        ) << std::endl;
        return 0;
    }

    return 1;
}

Compile as C++11 or later. For performance, it's best to compile with OpenMP support (but that is only for speed; it is not required):

g++ -std=c++11 -fopenmp pain_in_the_nash.cpp -o pain_in_the_nash

This uses a Nash equilibrium to decide what to do on each turn, which means that in theory it will always win or tie in the long run (over many games), no matter what strategy the opponent uses. Whether that is the case in practice depends on whether I made any mistakes in the implementation. However, since this KoTH competition only has a single round against each opponent, it probably won't do very well on the leaderboard.

My original idea was to have a simple valuation function for each game state (e.g. each ball is worth +b, each duck is worth +d), but that leads to obvious problems with figuring out what those valuations should be. So instead this analyses the entire game tree, working backwards from turn 1000, and fills in the actual valuations based on how each game could pan out.
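
As a rough illustration of that backward pass, here is a much-simplified Python sketch (solve_state is a hypothetical stand-in for the per-state solver, which in the real bot computes a Nash equilibrium from the next turn's values):

def backward_pass(turns, all_states, solve_state):
    next_values = {state: 0.0 for state in all_states}  # values at the horizon
    for _turn in range(turns - 1, -1, -1):
        # This turn's value for each state is derived from next turn's values.
        next_values = {state: solve_state(state, next_values) for state in all_states}
    return next_values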

The result is that I have absolutely no idea what strategy it uses, except for a couple of hard-coded "obvious" behaviours (throw a snowball if you have more balls than your opponent has balls + ducks, and reload if you are both out of snowballs). If anybody wants to analyse the dataset it produces, I imagine there is some interesting behaviour to discover!

Testing against "Save One" shows that it does indeed win in the long run, but only by a small margin (514 wins, 486 losses, 0 ties in the first batch of 1000 games, and 509 wins, 491 losses, 0 ties in the second).


Important!

This will work out of the box, but that is not a great idea. It takes about 9 minutes on my moderately-specced laptop to generate the full game tree. However, once generated it saves the final probabilities to a file, and after that each turn just generates one random number and compares it against 2 bytes, so it is super fast.

To shortcut all of that, just download this file (3.5MB) and put it in the directory with the executable.

Or you can generate it yourself by running:

./pain_in_the_nash 1000 50 25

This will save one file per turn until convergence. Each file is 3.5MB, it converges at turn 720 (i.e. 280 files, ~1GB), and since most games don't get anywhere near turn 720, the pre-convergence files are of very low importance.


Would it be possible to make the program only output the final result? Thanks!
HyperNeutrino

@HyperNeutrino all the other output goes to stderr, so it should not have any effect, but I have updated it to only show progress when running in preprocessing mode. It will now only write to stdout when running normally. I would suggest following the "important" suggestion though, since otherwise it will just hang on the first turn for several minutes (at least with the preprocessing you can see its progress).
Dave

Oh OK. I will follow that suggestion, thanks!
HyperNeutrino

If you could upload the data files, that would be appreciated, since it takes forever to generate them. It would be great if you could do that :)
HyperNeutrino

@HyperNeutrino OK, it also took forever to upload on my terrible internet, but the 3.5MB aggregated file is available here: github.com/davidje13/snowball_koth_pitn/blob/master/… (just put it in the same directory).
Dave

1

Swift - TheCrazy_XcodeRandomness

Sadly, this can only be run locally, in Xcode, because it uses the Foundation module and its function arc4random_uniform(). However, you can pretty much figure out what the algorithm is:

import Foundation

func game(turn: Int, snowballs: Int, opponent_snowballs: Int, ducks: Int, opponent_ducks: Int, max_snowballs: Int) -> Int{
    let RELOAD = 0
    let THROW = 1
    let DUCK = 2
    if turn == 0{
        return arc4random_uniform(2)==0 ? THROW : DUCK
    }
    else if ducks == 0{
        if snowballs != 0{return THROW}
        else {return RELOAD}
    }
    else if snowballs < max_snowballs && snowballs != 0{
        if opponent_ducks == 0 && opponent_snowballs == 0{return THROW}
        else if opponent_snowballs == 0{
            return arc4random_uniform(2)==0 ? THROW : RELOAD
        }
        else if opponent_ducks == 0{return THROW}
        else { return arc4random_uniform(2)==0 ? THROW : RELOAD }
    }
    else if opponent_snowballs == max_snowballs{
        return DUCK
    }
    else if snowballs == max_snowballs || opponent_ducks < 1 || turn < max_snowballs{return THROW}
    return arc4random_uniform(2)==0 ? THROW : RELOAD
}

Can this be run from bash on Linux?
HyperNeutrino

@HyperNeutrino I know it works on macOS, but I don't know whether it works on Linux. It would be great if you could check. Try the swift command and see whether it works
Mr. Xcoder

It doesn't seem to exist; there is a package, but it is not the Swift language. So sorry, I won't be able to test this until I can get something working.
HyperNeutrino

The only possible compilers are Xcode and IntelliJ, but it cannot be run online because of Foundation, sorry :/
Mr. Xcoder

Rip. I need to be able to run it from the command line to use it with the controller, but if I have time I might manually run all of the other bots against this one.
HyperNeutrino

1

TableBot, Python 2

Called TableBot because it was created by implementing this table:

snow   duck   osnow   oduck   move
0      0      0       0       0
0      0      0       1       0
0      0      1       0       0
0      0      1       1       0
0      1      0       0       0
0      1      0       1       0
0      1      1       0       2
0      1      1       1       2
1      0      0       0       1
1      0      0       1       1
1      0      1       0       1
1      0      1       1       1
1      1      0       0       1
1      1      0       1       1
1      1      1       0       1
1      1      1       1       1

A 1 represents having 1 or more, a 0 represents having none.

The bot:

import sys

reload=0
throw=1
duck=2

t,snowballs,o_snowballs,ducks,o_ducks,m=map(int,sys.argv[1:])

if snowballs > 0:
	print throw
elif ducks==0:
	print reload
elif o_snowballs==0:
	print reload
else:
	print duck

Try it online!


1

AmbBot - Racket Scheme

I mostly wanted to try using amb, because it's cool. This bot randomly orders the options (reload, throw, and duck), filters out the ones that don't make sense, and picks the first one. But with amb, we get to use continuations and backtracking!

#lang racket
(require racket/cmdline)

; Defining amb.
(define failures null)

(define (fail)
  (if (pair? failures) ((first failures)) (error "no more choices!")))

(define (amb/thunks choices)
  (let/cc k (set! failures (cons k failures)))
  (if (pair? choices)
    (let ([choice (first choices)]) (set! choices (rest choices)) (choice))
    (begin (set! failures (rest failures)) (fail))))

(define-syntax-rule (amb E ...) (amb/thunks (list (lambda () E) ...)))

(define (assert condition) (unless condition (fail)))

(define (!= a b)
  (not (= a b)))

(define (amb-list list)
  (if (null? list)
      (amb)
      (amb (car list)
           (amb-list (cdr list)))))

; The meaningful code!
; Start by defining our options.
(define reload 0)
(define throw 1)
(define duck 2)

; The heart of the program.
(define (make-choice turn snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
  (let ((can-reload? (reload-valid? snowballs opponent_snowballs ducks opponent_ducks max_snowballs))
        (can-throw? (throw-valid? snowballs opponent_snowballs ducks opponent_ducks max_snowballs))
        (can-duck? (duck-valid? snowballs opponent_snowballs ducks opponent_ducks max_snowballs)))
    (if (not (or can-reload? can-throw? can-duck?))
        (random 3) ; something went wrong, panic
        (let* ((ls (shuffle (list reload throw duck)))
               (action (amb-list ls)))
          (assert (or (!= action reload) can-reload?))
          (assert (or (!= action throw) can-throw?))
          (assert (or (!= action duck) can-duck?))
          action))))

; Define what makes a move possible.
(define (reload-valid? snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
  (not (or
        (= snowballs max_snowballs) ; Don't reload if we're full.
        (and (= opponent_ducks 0) (= opponent_snowballs max_snowballs)) ; Don't reload if opponent will throw.
        )))

(define (throw-valid? snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
  (not (or
        (= snowballs 0) ; Don't throw if we don't have any snowballs.
        (= opponent_snowballs max_snowballs) ; Don't throw if our opponent won't be reloading.
        )))

(define (duck-valid? snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
  (not (or
        (= ducks 0) ; Don't duck if we can't.
        (= opponent_snowballs 0) ; Don't duck if our opponent can't throw.
        )))

; Parse the command line, make a choice, print it out.
(command-line
 #:args (turn snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
 (writeln (make-choice
           (string->number turn)
           (string->number snowballs)
           (string->number opponent_snowballs)
           (string->number ducks)
           (string->number opponent_ducks)
           (string->number max_snowballs))))

I also wrote a little testing program to run two of these bots against each other. It feels like the second bot wins more often, so I may have made a mistake somewhere.

(define (run)
  (run-helper 0 0 0 5 5 5))                         

(define (run-helper turn snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
  (printf "~a ~a ~a ~a ~a ~a ~n" turn snowballs opponent_snowballs ducks opponent_ducks max_snowballs)
  (let ((my-action (make-choice turn snowballs opponent_snowballs ducks opponent_ducks max_snowballs))
        (opponent-action (make-choice turn opponent_snowballs snowballs opponent_ducks ducks max_snowballs)))
    (cond ((= my-action reload)
           (cond ((= opponent-action reload)
                  (run-helper (+ turn 1) (+ snowballs 1) (+ opponent_snowballs 1) ducks opponent_ducks max_snowballs))
                 ((= opponent-action throw)
                  (writeln "Opponent wins!"))
                 ((= opponent-action duck)
                  (run-helper (+ turn 1) (+ snowballs 1) opponent_snowballs ducks (- opponent_ducks 1) max_snowballs))))
          ((= my-action throw)
           (cond ((= opponent-action reload)
                  (writeln "I win!"))
                 ((= opponent-action throw)
                  (run-helper (+ turn 1) (- snowballs 1) (- opponent_snowballs 1) ducks opponent_ducks max_snowballs))
                 ((= opponent-action duck)
                  (run-helper (+ turn 1) (- snowballs 1) opponent_snowballs ducks (- opponent_ducks 1) max_snowballs))))
          ((= my-action duck)
           (cond ((= opponent-action reload)
                  (run-helper (+ turn 1) snowballs (+ opponent_snowballs 1) (- ducks 1) opponent_ducks max_snowballs))
                 ((= opponent-action throw)
                  (run-helper (+ turn 1) snowballs (- opponent_snowballs 1) (- ducks 1) opponent_ducks max_snowballs))
                 ((= opponent-action duck)
                  (run-helper (+ turn 1) snowballs opponent_snowballs (- ducks 1) (- opponent_ducks 1) max_snowballs)))))))

1

MonteBot, C++

I basically took the code from a previous KoTH and modified it for this challenge. It uses the Decoupled UCT Monte Carlo Tree Search algorithm. It should be very close to the Nash equilibrium.
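The key ingredient is the UCB1 score used during selection: each action's average win rate plus an exploration bonus, computed independently for each player (that is the "decoupled" part). Roughly, in Python terms (the names here are mine, not taken from the C++ below):

import math

def ucb1(wins, attempts, total_plays):
    # Unvisited actions get infinite priority so each one is tried at least once.
    if attempts == 0:
        return float("inf")
    return wins / attempts + math.sqrt(2 * math.log(total_plays) / attempts)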

#include <cstdlib>
#include <cmath>
#include <random>
#include <cassert>
#include <iostream>


static const int TOTAL_ACTIONS = 3;
static const int RELOAD = 0;
static const int THROW = 1;
static const int DUCK = 2;

//The number of simulated games we run every time our program is called.
static const int MONTE_ROUNDS = 10000;

struct Game
{
    int turn;
    int snowballs;
    int opponentSnowballs;
    int ducks;
    int opponentDucks;
    int maxSnowballs;
    bool alive;
    bool opponentAlive;

    Game(int turn, int snowballs, int opponentSnowballs, int ducks, int opponentDucks, int maxSnowballs)
        : turn(turn),
          snowballs(snowballs),
          opponentSnowballs(opponentSnowballs),
          ducks(ducks),
          opponentDucks(opponentDucks),
          maxSnowballs(maxSnowballs),
          alive(true),
          opponentAlive(true)
    {
    }

    Game(int turn, int snowballs, int opponentSnowballs, int ducks, int opponentDucks, int maxSnowballs, bool alive, bool opponentAlive)
        : turn(turn),
        snowballs(snowballs),
        opponentSnowballs(opponentSnowballs),
        ducks(ducks),
        opponentDucks(opponentDucks),
        maxSnowballs(maxSnowballs),
        alive(alive),
        opponentAlive(opponentAlive)
    {
    }

    bool atEnd() const
    {
        return !(alive && opponentAlive) || turn >= 1000;
    }

    bool isValidMove(int i, bool me)
    {
        if (atEnd())
        {
            return false;
        }

        switch (i)
        {
        case RELOAD:
            return (me ? snowballs : opponentSnowballs) < maxSnowballs;
        case THROW:
            return (me ? snowballs : opponentSnowballs) > 0;
        case DUCK:
            return (me ? ducks : opponentDucks) > 0 && (me ? opponentSnowballs : snowballs) > 0;
        default:
            throw "This should never be executed.";
        }

    }

    Game doTurn(int my_action, int enemy_action)
    {
        assert(isValidMove(my_action, true));
        assert(isValidMove(enemy_action, false));

        Game result(*this);

        result.turn++;

        switch (my_action)
        {
        case RELOAD:
            result.snowballs++;
            break;
        case THROW:
            result.snowballs--;
            if (enemy_action == RELOAD)
            {
                result.opponentAlive = false;
            }
            break;
        case DUCK:
            result.ducks--;
            break;
        default:
            throw "This should never be executed.";
        }

        switch (enemy_action)
        {
        case RELOAD:
            result.opponentSnowballs++;
            break;
        case THROW:
            result.opponentSnowballs--;
            if (my_action == RELOAD)
            {
                result.alive = false;
            }
            break;
        case DUCK:
            result.opponentDucks--;
            break;
        default:
            throw "This should never be executed.";
        }

        return result;
    }
};

struct Stat
{
    int wins;
    int attempts;

    Stat() : wins(0), attempts(0) {}
};

/**
* A Monte tree data structure.
*/
struct MonteTree
{
    //The state of the game.
    Game game;

    //myStats[i] returns the statistic for doing the i action in this state.
    Stat myStats[TOTAL_ACTIONS];
    //opponentStats[i] returns the statistic for the opponent doing the i action in this state.
    Stat opponentStats[TOTAL_ACTIONS];
    //Total number of times we've created statistics from this tree.
    int totalPlays = 0;

    //The action that led to this tree.
    int myAction;
    //The opponent action that led to this tree.
    int opponentAction;

    //The tree preceding this one.
    MonteTree *parent = nullptr;

    //subtrees[i][j] is the tree that would follow if I did action i and the
    //opponent did action j.
    MonteTree *subtrees[TOTAL_ACTIONS][TOTAL_ACTIONS] = { { nullptr } };

    MonteTree(const Game &game) :
        game(game), myAction(-1), opponentAction(-1) {}


    MonteTree(Game game, MonteTree *parent, int myAction, int opponentAction) :
        game(game), myAction(myAction), opponentAction(opponentAction), parent(parent)
    {
        //Make sure the parent tree keeps track of this tree.
        parent->subtrees[myAction][opponentAction] = this;
    }

    //The destructor so we can avoid slow ptr types and memory leaks.
    ~MonteTree()
    {
        //Delete all subtrees.
        for (int i = 0; i < TOTAL_ACTIONS; i++)
        {
            for (int j = 0; j < TOTAL_ACTIONS; j++)
            {
                auto branch = subtrees[i][j];

                if (branch)
                {
                    branch->parent = nullptr;
                    delete branch;
                }
            }
        }
    }

    double scoreMove(int move, bool me)
    {

        const Stat &stat = me ? myStats[move] : opponentStats[move];
        return stat.attempts == 0 ?
            HUGE_VAL :
            double(stat.wins) / stat.attempts + sqrt(2 * log(totalPlays) / stat.attempts);
    }


    MonteTree * expand(int myAction, int enemyAction)
    {
        return new MonteTree(
            game.doTurn(myAction, enemyAction),
            this,
            myAction,
            enemyAction);
    }

    int bestMove() const
    {
        //Select the move with the highest win rate.
        int best = 0;
        double bestScore = -1;
        for (int i = 0; i < TOTAL_ACTIONS; i++)
        {
            if (myStats[i].attempts == 0)
            {
                continue;
            }

            double score = double(myStats[i].wins) / myStats[i].attempts;
            if (score > bestScore)
            {
                bestScore = score;
                best = i;
            }
        }

        return best;
    }
};

int random(int min, int max)
{
    static std::random_device rd;
    static std::mt19937 rng(rd());

    std::uniform_int_distribution<int> uni(min, max - 1);

    return uni(rng);
}

/**
* Trickle down root until we have to create a new leaf MonteTree or we hit the end of a game.
*/
MonteTree * selection(MonteTree *root)
{
    while (!root->game.atEnd())
    {
        //First pick the move that my bot will do.

        //The action my bot will do.
        int myAction;
        //The number of actions with the same bestScore.
        int same = 0;
        //The bestScore
        double bestScore = -1;

        for (int i = 0; i < TOTAL_ACTIONS; i++)
        {
            //Ignore invalid or idiot moves.
            if (!root->game.isValidMove(i, true))
            {
                continue;
            }

            //Get the score for doing move i. Uses
            double score = root->scoreMove(i, true);

            //Randomly select one score if multiple actions have the same score.
            //Why this works is boring to explain.
            if (score == bestScore)
            {
                same++;
                if (random(0, same) == 0)
                {
                    myAction = i;
                }
            }
            //Yay! We found a better action.
            else if (score > bestScore)
            {
                same = 1;
                myAction = i;
                bestScore = score;
            }
        }

        //The action the enemy will do.
        int enemyAction;

        //Use the same algorithm to pick the enemies move we use for ourselves.
        same = 0;
        bestScore = -1;
        for (int i = 0; i < TOTAL_ACTIONS; i++)
        {
            if (!root->game.isValidMove(i, false))
            {
                continue;
            }

            double score = root->scoreMove(i, false);
            if (score == bestScore)
            {
                same++;
                if (random(0, same) == 0)
                {
                    enemyAction = i;
                }
            }
            else if (score > bestScore)
            {
                same = 1;
                enemyAction = i;
                bestScore = score;
            }
        }

        //If this combination of actions hasn't been explored yet, create a new subtree to explore.
        if (!(*root).subtrees[myAction][enemyAction])
        {
            return root->expand(myAction, enemyAction);
        }

        //Do these actions and explore the next subtree.
        root = (*root).subtrees[myAction][enemyAction];
    }
    return root;
}

/**
* Chooses a random move for me and my opponent and does it.
*/
Game doRandomTurn(Game &game)
{
    //Select my random move.
    int myAction;
    int validMoves = 0;

    for (int i = 0; i < TOTAL_ACTIONS; i++)
    {
        //Don't do idiotic moves.
        //Select one at random.
        if (game.isValidMove(i, true))
        {
            validMoves++;
            if (random(0, validMoves) == 0)
            {
                myAction = i;
            }
        }
    }

    //Choose random opponent action.
    int opponentAction;

    validMoves = 0;

    //Pick a uniformly random valid (non-idiotic) move for the opponent,
    //using the same reservoir-sampling scheme as above.
    for (int i = 0; i < TOTAL_ACTIONS; i++)
    {
        if (game.isValidMove(i, false))
        {
            validMoves++;
            if (random(0, validMoves) == 0)
            {
                opponentAction = i;
            }
        }
    }

    return game.doTurn(myAction, opponentAction);
}


/**
* Randomly simulates the given game.
* Has me do random moves that are not stupid.
* Has opponent do random moves.
*
* Returns 1 for win. 0 for loss. -1 for draw.
*/
int simulate(Game game)
{
    while (!game.atEnd())
    {
        game = doRandomTurn(game);
    }

    if (game.alive > game.opponentAlive)
    {
        return 1;
    }
    else if (game.opponentAlive > game.alive)
    {
        return 0;
    }
    else //Draw
    {
        return -1;
    }
}


/**
* Propagates the score up the MonteTree from the leaf.
*/
void update(MonteTree *leaf, int score)
{
    while (true)
    {
        MonteTree *parent = leaf->parent;
        if (parent)
        {
            //-1 = draw, 1 = win for me, 0 = win for opponent
            if (score != -1)
            {
                parent->myStats[leaf->myAction].wins += score;
                parent->opponentStats[leaf->opponentAction].wins += 1 - score;
            }
            parent->myStats[leaf->myAction].attempts++;
            parent->opponentStats[leaf->opponentAction].attempts++;
            parent->totalPlays++;
            leaf = parent;
        }
        else
        {
            break;
        }
    }
}

int main(int argc, char* argv[])
{
    Game game(atoi(argv[1]), atoi(argv[2]), atoi(argv[3]), atoi(argv[4]), atoi(argv[5]), atoi(argv[6]));

    MonteTree current(game);

    for (int i = 0; i < MONTE_ROUNDS; i++)
    {
        //Go down the tree until we find a leaf we haven't visited yet.
        MonteTree *leaf = selection(&current);

        //Randomly simulate the game at the leaf and get the result.
        int score = simulate(leaf->game);

        //Propagate the scores back up the root.
        update(leaf, score);
    }

    int move = current.bestMove();

    std::cout << move << std::endl;

    return 0;
}

Compilation instructions for Linux:

Save as MonteBot.cpp
Run g++ -std=c++11 -o MonteBot MonteBot.cpp

Run with: ./MonteBot <args>


1

The Procrastinator - Python 3

The Procrastinator procrastinates for the first few turns. Then, suddenly, the panic monster takes over and tries to avoid losing the resource war by countering the opponent's most-used move.

import sys

turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs = map(int, sys.argv[1:])

max_ducks = 25
times_opponent_ducked = max_ducks - opponent_ducks
times_opponent_thrown = (turn - times_opponent_ducked - opponent_snowballs) / 2
times_opponent_reloaded = times_opponent_thrown + opponent_snowballs
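# How the counts above are derived: every opponent turn so far was a reload,
# a throw, or a duck, and reloads - throws = opponent_snowballs (their current
# stock). So turn = reloads + throws + ducks_used
#              = 2 * throws + opponent_snowballs + ducks_used,
# which gives throws = (turn - ducks_used - opponent_snowballs) / 2.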


## return a different action, if the desired one is not possible
def throw():
    if snowballs:
        return 1
    else:
        return duck()

def duck():
    if ducks:
        return 2
    else:
        return reload()

def reload():
    return 0





def instant_gratification_monkey():
    ## throw, if you still have a ball left afterwards
    if snowballs >= 2 or opponent_ducks == 0:
        return throw()
    ## duck, if opponent can throw
    elif opponent_snowballs > 0:
        return duck()
    ## reload, if opponent has no balls and you have only one
    else:
        return reload()

def panic_monster():
    ## throw while possible, else reload
    if times_opponent_reloaded > times_opponent_ducked: 
        if snowballs > 0:
            return throw() 
        else:
            return reload()
    ## alternating reload and duck
    else: 
        if turn % 2 == 1:
            return reload() 
        else:
            return duck()

def procrastinator():     
    if turn < 13 or (snowballs + ducks > opponent_snowballs + opponent_ducks):
        return instant_gratification_monkey()
    else:
        return panic_monster()


print(procrastinator())
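For reference, the controller invokes each bot with the six state arguments on the command line, so a quick local smoke test under the judging settings (50 snowballs, 25 ducks) might look like this, assuming the file is saved as procrastinator.py (the filename is mine):

python3 procrastinator.py 0 0 0 25 25 50

On that opening turn neither side has a snowball, so the bot prints 0 (reload).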

“The Procrastinator”. So, everyone on PPCG who was actually supposed to be doing homework? (Don't deny it, you reading this.)

1
“Instant Gratification Monkey” Have you seen that TED Talk too? :)
HyperNeutrino


0

ParanoidBot and PanicBot - ActionScript 3 with RedTamarin

From an ill-suited niche language (with an extension to provide command-line arguments) come the flighty ParanoidBot and its dull companion PanicBot.

ParanoidBot

ParanoidBot is out of its mind and relies on a needlessly specific strategy. First, it throws snowballs until a threshold is reached, keeping some in reserve. Then, after three cautionary ducks, the paranoia sets in and the bot tries to stockpile more snowballs between random ducks. After replenishing its supply, ParanoidBot goes back to throwing blindly. Thanks to the voices in its head, ParanoidBot can tell whether it is guaranteed to win or lose, and will "strategize" accordingly.

import shell.Program;
import shell;

var TURN:int = Program.argv[0];
var SB:int = Program.argv[1];
var OPSB:int = Program.argv[2];
var DC:int = Program.argv[3];
var OPDC:int = Program.argv[4];
var MAXSB:int = Program.argv[5];
var usedDucks:int = 0;

if (!FileSystem.exists("data"))
    FileSystem.write("data", 0);
else
    usedDucks = FileSystem.read("data");

if (SB > OPSB + OPDC)
{ trace(1); Program.abort(); }
if (SB + DC < OPSB) {
if (DC > 0)
    trace(2);
else if (SB > 0)
    trace(1);
else
    trace(0);
Program.abort(); }

if (usedDucks >= 3) {
    if (SB > MAXSB / 3) {
        usedDucks = 0;
        FileSystem.write("data", usedDucks);
        trace(1);
        Program.abort();
    }
    else {
        if (Number.random() > 0.5 && DC > 0)
            trace(2);
        else
            trace(0);
    }
}
else {
    if (SB > (MAXSB / 6) && SB >= 3)
    { trace(1); Program.abort(); }
    else {
        usedDucks++;
        FileSystem.write("data", usedDucks);
        if (DC > 0)
            trace(2);
        else if (SB > 0)
            trace(1);
        else
            trace(0);
        Program.abort();
    }
}

The braces are a bit wonky, to help keep the size down.

PanicBot

Having already gone insane, PanicBot reacts out of instinctive fear. After cowering in fear until it runs out of ducks, it blindly throws all of its snowballs, then desperately makes and throws more until it is (probably) defeated.

import shell.Program;

var SB:int = Program.argv[1];
var DC:int = Program.argv[3];

if (DC > 0)
{ trace(2); Program.abort(); }
if (SB > 0)
{ trace(1); Program.abort(); }
else
{ trace(0); Program.abort(); }



This is one of fewer than 15 entries using AS3 here on PPCG. One day, perhaps, this arguably exotic language will find a puzzle to dominate.


Can this be run from bash on Linux?
HyperNeutrino

I haven't tested it, but it should work. The RedTamarin executable (redshell) is built for Windows, Mac, and Linux: http://redtamarin.com/tools/redshell . If one of the bots above is saved to a file named snow.as, the following command should run in bash: $ ./redshell snow.as -- 0 50 50 25 25

When I try to run this command, it gives me a permission denied error.
HyperNeutrino

@HyperNeutrino chmod +x redshell is your friend here...
Egg the Outgolfer '17

Maybe chmod 777 everything? There may also be some troubleshooting info on the RedTamarin website.

0

Defender, Python

Reloads when neither player has snowballs. If it has snowballs, it throws. If it has no snowballs but the opponent does, it ducks; otherwise it reloads.

def get_move(turn, snowballs, opponent_snowballs, ducks, opponent_ducks, max_snowballs):
    if snowballs == opponent_snowballs == 0:
        return 0 #Reload
    elif snowballs > 0:
        return 1 # Throw
    elif ducks > 0:
        return 2 # Duck
    else:
        return 0 # Reload

if __name__ == "__main__": # if this is the main program
    import sys
    print(get_move(*[int(arg) for arg in sys.argv[1:]]))

Note: not yet tested
