Artificial Intelligence In X-Wing

By Astech, in X-Wing

Computers have long surpassed humans at chess; at this point, a home computer will simply never lose. Now, chess is a pretty decent game with numerous variables, but it's not as complicated as board games get. The opening move for each player is a choice among 20 options (one or two squares forward for each of the eight pawns, plus two jumps for each knight) and, while the tree does explode from there, it's a relatively small explosion that levels out at a bounded number of potential choices due to the finite number of squares and pieces to move.

Go, on the other hand, is very complex by comparison, mostly due to the sheer number of options at the start of the game and the intense significance of each choice. Only recently have developments in deep learning algorithms and processing power allowed computers to compete with master Go players, and not necessarily win.

Then there's X-Wing. X-Wing is a fantastically complicated game. I think that, aside from free-movement wargames like 40k, it's the most complicated board game there is. Even if you removed dice entirely and replaced them with a deterministic mechanic, there are billions of options each turn.

Turn -1:

Squad building isn't too hard, considering the excellent community database in the form of List Juggler. Current AI is quite good at finding the best combos in a fixed system, and it could use List Juggler's recent data as feedback to bring itself up to speed on current "meta" builds, and the tiers below, more or less instantly.

I think the main boon of an AI system is to practice against things you expect to be facing in upcoming larger events.

Turn 0:

Even if you overlook the near-infinite number of positions a single ship (and the list as a whole) can start in, you're still faced with a staggering number of deployments, many of which result in instant losses (facing the board edge), while others lead to a long, slow death (running with arc-based ships). The possibilities are enormous, and predicting your opponent's placement is even worse.

Planning Phase:

After you get past asteroid setup and the Place Forces step, plus the added annoyance of pre-game condition cards, you're faced with selecting maneuvers. Thankfully, each ship has a small, finite number of maneuvers to choose from, and a computer can naturally tell the exact position a ship is going to end up in, so moving a single ship isn't so bad. However, once you've got multiple ships activating at the same time, while anticipating previous, simultaneous and future enemy ship movements in the Activation phase, the complications produce an immense web even between two 2-ship lists.

Add in red and green maneuvers, and an overall vision of where you want to be in, say, three turns' time, and you're looking at a massive challenge.

Activation Phase:

Aside from changing your activation order based on your opponent's earlier activations, you've got to look at the actions for each ship. For a ship like the X-wing that's fairly simple, and from there you can go up the complexity scale to the likes of an Advanced Sensors, PTL, Engine Upgrade Echo.

Action choice is further complicated by the current board state: do you focus or target lock when you're on one hull with a higher PS ship shooting you, but a lower PS one in your sights?

Combat Phase:

Relatively simple here. Mathwing is done best by a computer, and I'd imagine various AI "personalities": aggressive ones that spend tokens on the easiest targets for maximum damage, conservative ones that token-stack and go for a war of attrition, or rookie ones that split their fire between two aces.
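To illustrate, "Mathwing" is really just an expectation over roll outcomes. Here is a minimal sketch, assuming first-edition dice faces (red: 3 hits, 1 crit, 2 focus, 2 blanks; green: 3 evades, 2 focus, 3 blanks) and ignoring rerolls, locks, and token interplay:

```python
from itertools import product

# Assumed first-edition dice faces; a real engine would also model
# rerolls, target locks, and defensive focus spending.
RED = ["hit"] * 3 + ["crit"] + ["focus"] * 2 + ["blank"] * 2
GREEN = ["evade"] * 3 + ["focus"] * 2 + ["blank"] * 3

def expected_damage(attack_dice, defense_dice, attacker_focused=False):
    """Average damage over every possible roll (no rerolls or locks)."""
    total, rolls = 0.0, 0
    for reds in product(RED, repeat=attack_dice):
        hits = sum(r in ("hit", "crit") for r in reds)
        if attacker_focused:
            hits += sum(r == "focus" for r in reds)  # focus turns eyes into hits
        for greens in product(GREEN, repeat=defense_dice):
            evades = sum(g == "evade" for g in greens)
            total += max(0, hits - evades)
            rolls += 1
    return total / rolls

# e.g. a 3-die attack against 2 agility
print(round(expected_damage(3, 2), 3))
```

Different AI "personalities" would then just weight these expectations differently when picking targets.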

Even so, abilities like HotCop and Baze present a difficulty, since they involve decisions that don't directly affect shooting, as do abilities like Feedback Array and the like.

End Phase:

Nothing too tricky here. A few special cases like Corran and Pulsed Ray Shield might complicate things, but that bridge can be burned when the AI comes to it.

So, do you think AI could be a beneficial addition to the game? How do you think FFG (via an app) or a third party (probably via VASSAL) could implement it? Is the singularity coming?

Edited by Astech

I think the HotAC System is the easiest to implement, as Sandrem has done in his app. I suspect if you let it squad build it’ll get confuzzled.

2 minutes ago, Estarriol said:

I think the HotAC System is the easiest to implement, as Sandrem has done in his app. I suspect if you let it squad build it’ll get confuzzled.

I love the HotAC AI, but it only works because you're vastly outnumbered, so it simply doesn't have to fly well. It also doesn't integrate multiple ships/options well, and can't really be made to, despite the neat squadron mechanic.

The great thing about deep learning AIs is that they can recognise mistakes over time, especially over long periods or numerous repetitions. Squad building is tricky, but so long as it's given some basic parameters I think it'll be good.

While I think your analysis is pretty good, I want to point out that the activation phase is actually barely more complicated than Chess. The mind bending number of possibilities of Echo, and the combinations leading to bumps with squads would be very easily taken care of by AI (Echo has what, less than 100 possible options, that's virtually nothing!).

13 minutes ago, NilsTillander said:

While I think your analysis is pretty good, I want to point out that the activation phase is actually barely more complicated than Chess. The mind bending number of possibilities of Echo, and the combinations leading to bumps with squads would be very easily taken care of by AI (Echo has what, less than 100 possible options, that's virtually nothing!).

Except, in chess you only move 1 piece per turn (and alternate), while in X-wing you simultaneously move 1-8 ships per turn, and so does your opponent. In addition, in chess each piece has a well-defined, discrete number of start-end positions, while in X-wing you can collide. Finally, "combat" in chess is very simple (hit -> dead), while in X-wing each ship has hull and shields, agility and attack, and various conditions, including crits, that may apply. In summary, it is all those factors that have to be considered in each turn the AI "thinks ahead", and while the concept of building an AI for X-wing is similar to that for chess, the complexity and difficulty of building a traditional min-max alpha-beta algorithm capable of being a worthy opponent while running on a standard desktop PC in 2018 is in an entirely different league.


Well, chess engines were already really good in the 90s, and even a very modest laptop today is 50-100 times more powerful than Deep Blue. X-wing is more complex than chess, but not that much more. In any case, the kind of AI we work with these days (which doesn't compute all possibilities many turns in advance, but can figure stuff out more like humans do, kinda, it's complicated) is way more efficient!

16 hours ago, NilsTillander said:

While I think your analysis is pretty good, I want to point out that the activation phase is actually barely more complicated than Chess. The mind bending number of possibilities of Echo, and the combinations leading to bumps with squads would be very easily taken care of by AI (Echo has what, less than 100 possible options, that's virtually nothing!).

I see what you mean, but keep in mind that just Echo's opening moves can equal the number of possibilities for both players' opening moves in chess combined. As an example, imagine a mirror match with three Lambdas on both sides (hahaha).

Each Lambda has 13 possible maneuvers, and the order of activation does matter, so you're looking at 3 * 13 ^ 3 possible moves, or nearly 7 thousand scenarios. That's with a relatively small number of ships, absolutely no repositioning, and narrow dials. Let's go bigger, in one direction, to a swarm mirror:

Player A brings 8 Z-95 Headhunters, and player B also happens to bring 8 Z-95s, each with the same pilot skill. They're very strong in this meta. The total number of maneuver combinations per side is 14^8, or about 1.5 billion; counting each side's options separately, the whole board gives you double that, around 3 billion, and the joint combinations of both sides multiply out far higher still. That's already more than a home computer can process in the time of a 75-minute round, but we can still go bigger, because ships with repositioning actions complicate things further.
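The per-side arithmetic above is easy to reproduce directly (using the post's own counting of 3 activation orders times 13 maneuvers per Lambda, and 14 maneuvers per Z-95):

```python
# Reproducing the branching-factor arithmetic from the post,
# using its own formulas.
lambda_mirror = 3 * 13 ** 3   # three Lambdas per side, order mattering
z95_side = 14 ** 8            # eight Z-95s, 14 maneuvers each, one side

print(lambda_mirror)   # 6591, the "nearly 7 thousand" above
print(z95_side)        # 1475789056, roughly 1.5 billion
```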

If you assume the boost's three finite end positions, and the barrel roll's forward, mid and backward positions, a PTL TIE Interceptor starting unstressed has 632 final, unique positions to choose from. Advanced Sensors makes this explode, to the point at which I won't even attempt to calculate the possible unique end points. Even so, a field of 6 Interceptors has a total of over 500 million possible end scenarios, plus bump situations.
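Under some toy assumptions (a 16-maneuver dial, 3 boost templates, 6 barrel-roll placements, and PTL allowing one boost and one roll in either order), that endpoint count can be enumerated; the exact figure depends on template geometry and overlaps, but it lands in the same ballpark as the 632 quoted above:

```python
# Toy enumeration of unique end positions for an unstressed PTL interceptor.
# All counts here are assumptions, not measured template geometry.
DIAL = 16     # assumed maneuvers on the dial
BOOSTS = 3    # left bank, straight, right bank
ROLLS = 6     # 2 sides x forward/mid/backward placement

action_sequences = (
    1                      # no action
    + BOOSTS + ROLLS       # a single repositioning action
    + BOOSTS * ROLLS       # PTL: boost then roll
    + ROLLS * BOOSTS       # PTL: roll then boost (different end points)
)
endpoints = DIAL * action_sequences
print(endpoints)
```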

Now, an individual chess turn isn't so bad, but you could argue that a series of turns requires more computing power than X-wing does. Well, even if we were generous and said that each chess turn has a total of 500,000 possible moves between the two players, you're still an order of magnitude behind a fairly simple pair of lists. As soon as you bump the point limit up (akin to 4-player chess, perhaps) the numbers explode exponentially, all the way to 300-point Epic games, where they get truly ridiculous.

16 hours ago, Sciencius said:

Except, in chess you only move 1 piece per turn (and alternate), while in X-wing you simultaneously move 1-8 ships per turn, and so does your opponent. In addition, in chess each piece has a well-defined discrete number of start-end positions, while in X-wing you can collide. Finally, the "combat" in chess is very simple (hit->dead), while in x-wing each ship has both hull and shields, agility and attack and various conditions incl. crits that may apply. In summary, it is all those factors that has to be considered in each turn the AI "thinks ahead", and while the concepts of building an AI for X-wing is similar to that of Chess, the complexity and difficulty in building a traditional min-max-alpha-beta algorithm capable of being a worthy opponent running on a standard desktop PC in 2018 is an entirely different league.

16 hours ago, NilsTillander said:

Well, Chess was really good in the 90s already, and even a very modest laptop is 50-100 more powerful than Deep Blue. X-wing is more complex than Chess, but not that much. In any case, the kind of AI we work with these days (that do not compute all possibilities turns in advance, but can figure out stuff more like humans do, kinda, it's complicated), is way more efficient!

I feel like a brute force approach isn't going to achieve anything meaningful without the use of a supercomputer. However, I think that the 'human approach' to rapidly removing scenarios from available choices is viable.

For instance, an AI can immediately discount any maneuver that causes a ship to fly off the board. Add to that any useless action (focusing when you're the last ship to move with nothing in range), boosting towards a board edge to guarantee death next turn and so on.
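That kind of rule-out filter is cheap to implement. Here is a minimal sketch, where the board size, ship base, and the `toy_resolve` movement function are all hypothetical stand-ins for real template tracing:

```python
import math

# Sketch of the pruning idea: discard candidate maneuvers whose final
# position leaves the play area. Dimensions and types are placeholders.
BOARD = 914.0  # mm, roughly a 3-foot edge

def on_board(x, y, half_base):
    return half_base <= x <= BOARD - half_base and half_base <= y <= BOARD - half_base

def prune_maneuvers(x, y, heading, candidates, resolve, half_base=20.0):
    """Keep only maneuvers whose final position stays on the board.

    `resolve` maps (x, y, heading, maneuver) -> (x, y, heading); a real
    implementation would trace the actual movement template.
    """
    keep = []
    for m in candidates:
        nx, ny, nh = resolve(x, y, heading, m)
        if on_board(nx, ny, half_base):
            keep.append(m)
    return keep

# Toy resolver: "straight n" moves n * 40mm along the heading (0 = +y).
def toy_resolve(x, y, heading, m):
    speed = int(m.split()[1])
    return (x + 40 * speed * math.sin(heading),
            y + 40 * speed * math.cos(heading), heading)

# A ship near the far edge keeps slow maneuvers and loses fast ones.
print(prune_maneuvers(457, 850, 0.0, ["straight 1", "straight 2", "straight 3"],
                      resolve=toy_resolve))
```

The same filter generalizes to "useless action" pruning: generate the candidates, score the consequences, and drop anything strictly dominated.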

I think that letting the AI observe our best players on Vassal is probably the best training ground: have the AI make each choice, then compare it to what the player actually chose.

1 hour ago, Astech said:

If you assume the boost's three finite end positions, and the barrel roll's forward, mid and backward positions, a PTL TIE Interceptor starting unstressed has 632 final, unique positions to choose from. Advanced Sensors makes this explode, to the point at which I won't even attempt to calculate the possible unique end points. Even so, a field of 6 Interceptors has a total of over 500 million possible end scenarios, plus bump situations.

Now, an individual chess turn isn't so bad, but you could argue that a series of turns requires more computing power than X-wing does. Well, even if we were generous and said that each chess turn has a total of 500,000 possible moves between the two players, you're still an order of magnitude behind a fairly simple pair of lists. As soon as you bump the point limit up (akin to 4-player chess, perhaps) the numbers explode exponentially, all the way to 300-point Epic games, where they get truly ridiculous.

I feel like a brute force approach isn't going to achieve anything meaningful without the use of a supercomputer. However, I think that the 'human approach' to rapidly removing scenarios from available choices is viable.

For instance, an AI can immediately discount any maneuver that causes a ship to fly off the board. Add to that any useless action (focusing when you're the last ship to move with nothing in range), boosting towards a board edge to guarantee death next turn and so on.

I think that letting the AI observe our best players on Vassal is probably the best training ground: have the AI make each choice, then compare it to what the player actually chose.

I really think you are very much underestimating how the numbers would explode in X-wing.

Let's assume that the added complexity of X-wing's board state is irrelevant compared to chess.
Let's also assume the squadron is ready-made for the AI to play, and that the initial deployment and asteroid placement are also ready-made, so that we can focus on the playing part.

In chess, the game flows from one board state to another by one of the players making a single choice from the several he has available. So, for example, it's the white player's turn: he selects one piece and makes a valid move with it. Then you apply the consequences of the move, and the turn goes to the black player. That is it.
Once you figure out how to encode the board state, and how to determine the several "moves" each turn allows, you can use a heuristic that gives a score to each move depending on how promising the resulting board state is for the AI player.
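The flow just described is classic minimax. A minimal, game-agnostic sketch, where the `moves`, `apply` and `score` callables are hypothetical stand-ins for a real rules engine:

```python
# Minimal minimax: enumerate legal moves, apply each, and score the
# resulting state with a heuristic, alternating max and min players.
def minimax(state, depth, maximizing, moves, apply, score):
    options = moves(state, maximizing)
    if depth == 0 or not options:
        return score(state)
    results = [minimax(apply(state, m), depth - 1, not maximizing,
                       moves, apply, score) for m in options]
    return max(results) if maximizing else min(results)

# Toy game: state is an int, each player may add 1 or 2, score is the value.
best = minimax(0, 3, True,
               moves=lambda s, _max: [1, 2],
               apply=lambda s, m: s + m,
               score=lambda s: s)
print(best)
```

This is exactly the part that does not carry over cleanly to X-wing, as the rest of the post argues: simultaneous dials, hidden information, and dice break the clean alternation this sketch assumes.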

However, in X-wing things aren't this simple.
Players don't alternate making choices. Instead, ships activate in a particular order in different phases depending on PS. But this order might change because of a PS-modifying crit received in a previous phase. These crits are random, so you cannot foresee the PS of a ship changing when planning a few rounds ahead. Chess is deterministic. X-wing isn't.

Moreover, at any moment there is a huge number of, let's call them, interactions that a player can perform to alter the board state.
In chess you have a fixed set of pieces, and each gives you a handful of different interactions. In X-wing, though, you have ships with abilities that might be activated or triggered, either by a choice from the player or by something else happening that is unrelated to the ship or the player (like Advanced Sensors, as you mention, but also stuff like Snap Shot, Quickdraw's ability, or Fenn Rau's (Rebel) ability, which depend on other ships, abilities, game effects, etc.).
Also, many of these interactions have optional components, or choices that alter their outcome, and these choices need to be made by the AI (for example, Advanced Targeting Computer's choice, Vader's (crew) choice, Sabine's (crew) choice of ship at range 1, etc.).
And many of these interactions might trigger other interactions, from the same ship or from a different one (like the decision to spend a focus token during an attack, considering that the ship will need to defend later in the same round, but which will trigger something like R4 Agromech granting a target lock, which might in turn trigger Weapons Engineer, and so on and on).
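That kind of cascading chain is usually modeled with an event queue: abilities register on named triggers, and resolving one effect may raise further triggers. A minimal sketch, with illustrative stand-ins for the card effects named above (not exact rules text):

```python
from collections import defaultdict, deque

class TriggerBus:
    """Abilities subscribe to triggers; resolving one may queue another."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def on(self, trigger, handler):
        self.handlers[trigger].append(handler)

    def raise_trigger(self, trigger, ship):
        queue = deque([(trigger, ship)])
        fired = []
        while queue:                      # resolving one effect may queue more
            t, s = queue.popleft()
            for h in self.handlers[t]:
                fired.append(h.__name__)
                follow_up = h(s)
                if follow_up:
                    queue.append((follow_up, s))
        return fired

bus = TriggerBus()

def r4_agromech(ship):                    # spend focus -> acquire a lock
    ship["locks"] += 1
    return "acquired_lock"

def weapons_engineer(ship):               # a lock, in turn, triggers this
    return None

bus.on("spent_focus", r4_agromech)
bus.on("acquired_lock", weapons_engineer)

ship = {"locks": 0}
print(bus.raise_trigger("spent_focus", ship))
```

The mechanics of chaining are easy; deciding *whether* to take each optional link in the chain is the hard part the post describes.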

But numbers exploding isn't the worst part. That would only limit how far ahead the AI could plan.

The worst part is that these different interactions need to be programmed in and given semantics the AI can understand, so it can evaluate the outcome of using them.
The AI should know not only that a particular ability can be used at this moment, but also what the outcome of applying that ability to the board state would be. And if that ability has different choices or targets (like Hux's, Ruthlessness, Swarm Leader, etc.), how each individual choice within the execution of the ability would alter the board state.

Then, you would need some way to give it a heuristic, a way to value how promising a board state would be for the AI player.
But that isn't really trivial. In chess, you know which interactions your opponent may perform after you perform a particular one. All information is known and public.
In X-wing, instead, there are the face-down dials and the damage deck, whose states are unknown and can only be guessed.
So you don't really have enough information to tell very promising board states from very unpromising ones, since a lot of information is hidden from you.

You could obviously say that a board state where your opponent's total remaining hull, shields and ships is lower than before is better, or more promising, than one that just stays the same.
But that could be very misleading too. The opponent could exploit this logic by forcing an early, impulsive encounter with a tanky ship that quickly lowers his total health but puts the AI player in a very inconvenient situation (facing asteroids, looking away from the rest of his squadron, sacrificing a "pawn ship" so that the AI spends its ordnance on it and is left toothless for the rest of the match, etc.).
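That naive health-swing heuristic, and its blind spot, can be made concrete. A minimal sketch with hypothetical ship records:

```python
# The naive heuristic above: value a board state purely by the health
# swing since the previous state. Fields are hypothetical placeholders.
def health(squad):
    return sum(s["hull"] + s["shields"] for s in squad)

def naive_value(my_squad, enemy_squad, prev_enemy_health):
    """Positive when the enemy has lost health. Exactly the logic a
    sacrificial 'pawn ship' play can exploit, since position, arcs and
    remaining ordnance never enter the score."""
    return prev_enemy_health - health(enemy_squad)

enemy = [{"hull": 4, "shields": 0}, {"hull": 6, "shields": 4}]
before = health(enemy)
enemy[0]["hull"] = 1          # the bait ship soaks three damage...
print(naive_value([], enemy, before))   # looks great: +3
```

A real evaluation would need to fold in position, arcs, spent ordnance and tokens, which is where the encoding problem above bites.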

Both the encoding of the different game interactions in a way that is comprehensive and useful for evaluation, and the creation of a meaningful heuristic that is at the same time efficient, will be the biggest challenges in developing a real AI for X-wing.

What most attempts at creating an AI have done so far is either script it completely and randomize the dial (HotAC), with some extra "cheats" to even it out, or make the AI always use its abilities as soon as it is able to, regardless of their pertinence (Fly Casual, Squadron Benchmark).

Having written several AIs for a few games, including chess, I think you're overestimating the complication here. The static evaluation function for the board state is not really that much more complicated than in chess, where you have to account not only for pieces remaining but also squares threatened, squares protected, the value of those squares, variable piece values depending on game stage, what each piece is personally threatening, and so on. Here we have far fewer pieces that can move in much more complicated ways, resulting in more branches. But the evaluation doesn't have to be overly complicated, and to be honest the vast majority of those branches are flat-out stupid, so they can simply be pruned (unless, of course, all the choices are below the stupid threshold, at which point you have to go back and run down a block of the least-stupid paths). That's true both for movement and for most game-effect choices. In most instances there is a statistically correct choice, which is fairly easy to determine from the various board states it is most likely to leave you in.

Having said that, the standard methodology for a static-evaluation chess AI of exhaustively running down each branch wouldn't work particularly well here. You'd be pruning so heavily that if the opponent did something outside of the expected branches, or there was extreme variance in the dice, or a significant crit landed, you'd need to recalc everything. So looking ahead 8+ moves would be pointless; instead you'd look ahead 2-3 and re-evaluate as events happened.

The far larger issue would be getting all of the card logic in there. I feel like that would take an order of magnitude longer to code than a fairly simple AI that just looks at a tree of beginning-of-combat-phase board states and values each one based on min/expected/max damage, resulting in a weighted health range for each ship. It'd almost certainly have some exploitable and, most importantly, predictable behaviours, but it would make sensible moves that let it avoid landing on rocks, maximize modified shots, and minimize the shots it takes. So while certainly not AlphaGo, probably better than a lot of people...

It really, really isn't as easy as it seems.

Even list building isn't. Given a huge mass of data, a computer can tell you which upgrades and pilots are the best, sure. But it can't tell you why, or how to put them together in any ways other than those already on display in the data. And it can't do the kind of heuristic assessment that allows it to know that Fenn/Ghost will be good for the same reasons Herahsoka was, or that Trajectory Simulator/Genius will be amazing, etc. It can tell you that TLT correlates strongly with any turret slot, but it can't tell you WHY. The state space even of list building in X Wing dwarfs the state space of chess. Understanding cards and their interactions seems pretty trivial to us (actually, to be honest, it's frequently NOT trivial to us lol), but it's really, really not trivial for a computer.

And when you get into gameplay, X-wing is a MASSIVELY fuzzy game. It's going to be played on a different board every time (a combination of 2 lots of 3-from-18 obstacles, set in a massively varied array of places, which has to be done correctly for many lists to work), followed by setup which will vary hugely depending on the opposing list, and in which even a couple of millimetres of difference in placement can be the difference between victory and defeat.

Then each turn, each ship can have hundreds or thousands of different (albeit often very similar) options for movement, and the AI can need to process 4 or 5 turns deep in advance to do well (just doing a straight instead of a bank in a given situation can lose you games if you end up missing arc, or having to go around a rock for two or three turns). The state space is VAST, hugely more than chess, which has 64 spaces, 32 pieces, and a maximum number of available endpoints for a single piece of, I believe, 27 (a queen in the centre with all of its moves free). Pieces are either alive or dead, can only make one attack, and have no probability to deal with when doing so. The ruleset is (relatively) tiny, has no variation between games, and has no exception-based logic to include.

Or alternatively, see this analysis for a game I used to do art development for, for which 'why don't you just write a universal AI' was probably the most common FAQ we had: https://wiki.wesnoth.org/WhyWritingAWesnothAIIsHard

In short, humans are insanely good heuristic assessment engines, and computers... aren't.

Yet...

I'd bet that you could train a neural net to reliably win against a single player with each of them playing a single list. But that wouldn't extrapolate into any other combination of lists, or even necessarily into another player using the same list.

(I suspect that you could get a good number of PhD projects out of developing neural nets and AIs for games like X Wing though, I'd love to see people do that.)

On 2/26/2018 at 5:22 AM, Astech said:

Go, on the other hand, is very complex comparatively speaking. Mostly due to the sheer number of options at the start of the game, and the intense significance of each choice. Only recently have developments in Deep Learning algorithms and processing power allowed computers to compete with master Go players, and not necessarily win.

I think you're a little behind on the state of DeepMind's progress; the AlphaGo AI is at the point where it pretty soundly trounces the best human Go players on a regular basis. It's also worth noting that AlphaGo is actually playing Go. Chess is a 'solved' game, in that it's possible to know the absolute most optimal move in any given situation, as long as you can hold the entire possibility tree for the game in memory. That's the approach stuff like IBM's Deep Blue took: it didn't so much learn how to play chess as brute-force the problem, checking thousands of possibilities down the tree from the current game state to pick the most promising.

Despite the mechanics of Go being simpler, the variables involved in playing are way more complex, so AlphaGo had to actually learn how to play, similar-ish to how a human would. It observed, and has played, probably hundreds of thousands of games against itself and against people online, starting from just the ability to randomly place stones; the only thing programmed in was the learning algorithm. It's pretty crazy; most researchers didn't expect AI to get to this point for at least another couple of decades.

As for X-wing, I think it'll get there eventually. Aside from starting placement on the board, as complicated as the possibility space seems to us, there are only so many maneuvers each ship on the table can make once everything is deployed and in motion, a far more limited set than the possibilities of Go. The hard part is learning card interactions, but they're working on that kind of thing too. I haven't checked in a while, but the DeepMind people had a couple of sub-projects going where they were teaching it how to parse Magic: The Gathering and Hearthstone cards. I'm not sure if they've moved on to building decks and playing yet; I'd have to go look. It's not too far from having that capability to being able to crush most people in X-wing, I'd think.

Go is a lot less complex than X Wing, though AlphaGo is an amazing piece of AI research.

Even TCGs are a lot less complex than X Wing, especially if they're written and templated well as Magic and Hearthstone are for the most part.

Artificial intelligence will inherently have issues with any game that involves positioning its pieces in a play area that isn't divided into squares or hexes, even one that operates on a turn-based system. That's why RTS games still have hilariously bad AI while chess AI is amazing: chess can be completely quantified, while any game which operates in a continuous space instead of a grid can't be fully quantified.

That's why RTS AI difficulty settings really only give the AI resource handicaps. The AI doesn't become smarter, it just gets more stuff.

Edited by BadMotivator
2 hours ago, BadMotivator said:

Artificial intelligence will inherently have issues with any game that involves positioning its pieces in a play area that isn't divided into squares or hexes, even one that operates on a turn-based system. That's why RTS games still have hilariously bad AI while chess AI is amazing: chess can be completely quantified, while any game which operates in a continuous space instead of a grid can't be fully quantified.

That's why RTS AI difficulty settings really only give the AI resource handicaps. The AI doesn't become smarter, it just gets more stuff.

That's more to do with tackling major AI research issues being outside the scope of the people developing your StarCrafts and your Command & Conquers. Your typical video game bot or AI opponent is just a collection of scripted responses to stimuli; it can't actually think or strategize, it's purely reactionary and operates purely within whatever parameters the developers put down. Stuff like Deep Blue actually worked on a similarly reactive principle: you look at the current game state, then use the computational power available to construct a tree of every possible move, then every possible move after each of those moves, on and on for as much as you can hold in memory, until you figure out which move available to you from the current board state puts you on the branch of the tree with the most chances to win the game at the bottom.

DeepMind's tech, and modern machine learning in general, are quite a bit different. Really, the game experiments like AlphaGo are just a means to the end of creating a more generalized problem solver. Go, like chess, does have a limited number of possible board states; the difference is that Go's limited number is still greater than the number of atoms in the universe, so it's impossible to use the same old brute-force method. The end goal of DeepMind's research, and other outfits', is a generic system that, given a set of inputs and knowledge of what the end goal is, can solve any problem. Games are just a good test ground because they have defined rules and victory conditions, but the more serious applications are things like medical diagnosis (see also IBM's Watson, which crushed human players at Jeopardy a few years ago).

I hear Sophia got a ticket to worlds. Y’all better be scerd.

I think the big challenge with X-Wing is codifying the rules system into something an AI can use. Chess, Go, etc. all have simple rule systems that can be hard-coded into a neural network. While X-Wing's core rules are fairly straightforward, the pilot cards, upgrade cards, and the complicated resulting rules interactions cannot reasonably be hard-coded - they have to be expressed in some way that can be parameterized and machine-read.

And that ain’t easy.
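
One common approach to the parameterization problem is to store card abilities as data records rather than code, so new cards don't require new logic. A minimal sketch, where every field name is invented for illustration (only the Howlrunner ability text itself comes from the actual game):

```python
# Sketch: card abilities as parameterized trigger records instead of
# hard-coded logic. All field names here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ability:
    trigger: str        # timing window, e.g. "when_friendly_attacks"
    condition: str      # machine-checkable predicate on the game state
    effect: str         # effect keyword the engine knows how to apply
    amount: int = 0     # magnitude parameter, if the effect needs one

# Howlrunner's real ability ("when a friendly ship at Range 1 attacks,
# it may reroll 1 attack die") encoded as data:
howlrunner = Ability(
    trigger="when_friendly_attacks",
    condition="friendly_at_range_1",
    effect="reroll_one_attack_die",
)

def abilities_for(window, abilities):
    """Return the abilities whose trigger matches the current timing window."""
    return [a for a in abilities if a.trigger == window]
```

The engine then only needs to implement each effect keyword once; the hard part, as the post says, is that hundreds of cards each need their text faithfully translated into such records.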

13 hours ago, Otacon said:

Chess is a 'solved' game, in that it's possible to know the absolute most optimal move in any given situation, as long as you can hold the entire possibility tree for the game in memory. That's the barrier stuff like IBM's Deep Blue broke through: it didn't so much learn how to play chess as brute-force the problem, checking thousands of possibilities down the tree from the current game state to pick the best one.

Small correction here - chess isn’t solved in the same way that checkers is solved (which is the way you describe above).

Chess is solved when there are 7 or fewer pieces on the board - this is a project started by Ken Thompson back in the 70s. It is slowly beavering forward but is expected to take a long time. See http://tb7.chessok.com/

38 minutes ago, sozin said:

I think the big challenge with X-Wing is codifying the rules system into something an AI can use. Chess, Go, etc. all have simple rule systems that can be hard-coded into a neural network. While X-Wing's core rules are fairly straightforward, the pilot cards, upgrade cards, and the complicated resulting rules interactions cannot reasonably be hard-coded - they have to be expressed in some way that can be parameterized and machine-read.

And that ain’t easy.

Nah. Outside of some cards being inconsistently worded, everything can be reduced to basic triggers. The real trick would be getting the AI to "think" about optional abilities and when it is best to use them. Things like "Should I trigger Snap Shot/BoShek/Maul/etc.?" are where you might have issues teaching the computer what to do.

Stuff like timing and triggers and such would be straightforward to build into the code.
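
The harder half of the problem - deciding *whether* to use an optional ability - often reduces to comparing expected outcomes with and without it. A toy expected-value sketch, using the real X-Wing attack die (8 sides: 3 hits, 1 crit, 2 focus, 2 blanks); the `cost_in_expected_hits` parameter is an invented stand-in for whatever the ability gives up:

```python
# Sketch: deciding whether to trigger an optional reroll by expected value.
# X-Wing attack dice have 8 faces: 3 hits + 1 crit are damage (4/8),
# and 2 faces are blanks (2/8). Focus results are ignored here.
P_HIT, P_BLANK = 4 / 8, 2 / 8

def expected_hits(n_dice, reroll_blanks=False):
    """Expected damage results from n attack dice."""
    p = P_HIT + (P_BLANK * P_HIT if reroll_blanks else 0)
    return n_dice * p

def worth_using(cost_in_expected_hits, n_dice):
    """Trigger the optional reroll only if the expected gain beats its cost."""
    gain = expected_hits(n_dice, True) - expected_hits(n_dice, False)
    return gain > cost_in_expected_hits
```

For a 3-dice attack the blank reroll is worth 0.375 expected hits, so an ability costing less than that in equivalent value is worth triggering. The genuinely hard cases are the ones where the cost isn't measurable in dice at all.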

I see you guys talking about AlphaGo (Zero), and I think you should also think about OpenAI's Dota 2 bot. It didn't have the built-in advantages the game's original AI gets, but played fairly. And crushed everyone.

Dota 2 is a game where you move troops in a non-gridded area, have to deal with health status, have a range of abilities to trigger, and don't even have a turn-based system. It is way more complicated than X-Wing, where the AI could be given time to think.

You also have to remember that, as with self-driving cars, the objective is not to have an AI that is "perfect" all the time, just one that would smash everyone.

X-Wing isn't that complicated a game: I can play it, and my PhD isn't really in game theory ;-)

25 minutes ago, NilsTillander said:

X-Wing isn't that complicated a game: I can play it, and my PhD isn't really in game theory ;-)

You can also read a comic book and write a two-line summary of its main plot at the end, and no AI can currently do that. And it won't be able to for a long time, while for us it's almost trivial.
How difficult a game is for a human to play is almost irrelevant to how difficult it is to create an AI for that game.

1 hour ago, NilsTillander said:

I see you guys talking about AlphaGo (Zero), and I think you should also think about OpenAI's Dota 2 bot. It didn't have the built-in advantages the game's original AI gets, but played fairly. And crushed everyone.

Dota 2 is a game where you move troops in a non-gridded area, have to deal with health status, have a range of abilities to trigger, and don't even have a turn-based system. It is way more complicated than X-Wing, where the AI could be given time to think.

You also have to remember that, as with self-driving cars, the objective is not to have an AI that is "perfect" all the time, just one that would smash everyone.

X-Wing isn't that complicated a game: I can play it, and my PhD isn't really in game theory ;-)

As noted, humans are amazing heuristic computers. We can do things no AI can. That's why AI is often deceptively difficult: it cannot do things we find trivial, and vice versa.

3 hours ago, Azrapse said:

You can also read a comic book and write a two-line summary of its main plot at the end, and no AI can currently do that. And it won't be able to for a long time, while for us it's almost trivial.
How difficult a game is for a human to play is almost irrelevant to how difficult it is to create an AI for that game.

AI is actually really good at summarization. There are bots on Reddit shortening articles by 70% very well!

I have built a model of 'fleet jousting'.

So that's an upgrade over just comparing 'jousting values', yet it's not really an actual moving-game simulator.

It ignores the movement part entirely: it just estimates the probability that each ship is able to shoot at each target in the opposing list every turn, and picks the most likely profitable shot each turn (usually the wounded ship, or the one with the highest current offense/defense ratio, or the shortest distance). A turreted or highly mobile ship simply gets higher 'ratings' than less flexible pilots for its probability of being able to shoot each opposing ship each turn.

You obtain a global likelihood of winning between two lists (and the life expectancy of each ship following the first shot of the game).

Yet I am surprised at how many games actually follow the pattern of this model: the 'weak link' dies first, almost exactly at the expected time (i.e. number of turns after the first firing turn), and the last two or three predicted ships actually make it to the endgame.

I find the expected odds and kill order both informative and entertaining to read :)

This code is already no piece of cake, as I try to model most of the unique pilot powers. But building an actual artificial intelligence that tries to determine the best move on the table is a much larger scope of work, which I would not imagine doing in my little spare time, even for the statistical-modeler-of-board-games freak I think I am.
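
A stripped-down version of the model described above might look like the sketch below: no movement, just an expected-damage exchange each turn, with turrets earning a higher shot-probability rating than arc-bound ships. The ship stats and ratings are illustrative placeholders, not the poster's actual code or real pilot data:

```python
# Sketch of a movement-free "jousting" exchange: each turn every surviving
# ship shoots the most damaged enemy, weighted by an arc-flexibility rating.
# All stats and ratings here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Ship:
    name: str
    attack: float       # expected hits per shot
    hull: float         # remaining hit points (shields folded in)
    shot_odds: float    # probability of having a target in arc (turret ~1.0)

def joust(fleet_a, fleet_b, max_turns=12):
    """Run expected-damage exchanges; return ship names in kill order."""
    kill_order = []
    for _ in range(max_turns):
        for shooters, targets in ((fleet_a, fleet_b), (fleet_b, fleet_a)):
            live = [t for t in targets if t.hull > 0]
            if not live or not any(s.hull > 0 for s in shooters):
                continue
            focus = min(live, key=lambda t: t.hull)   # finish the weak link
            focus.hull -= sum(s.attack * s.shot_odds
                              for s in shooters if s.hull > 0)
            if focus.hull <= 0:
                kill_order.append(focus.name)
    return kill_order
```

Even this crude version reproduces the pattern the post notes: the weak link dies on a predictable turn, and whatever damage the rest of the list has taken by then determines the endgame.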

22 hours ago, Otacon said:

That's more to do with it being outside the scope of the people developing your StarCrafts and your Command & Conquers to tackle major AI research issues. Your typical video game bot or AI opponent is just a collection of scripted responses to stimuli; it can't actually think or strategize, it's purely reactionary and operates purely within whatever parameters the developers put down. Stuff like Deep Blue actually worked on a similarly reactive principle: you look at the current game state, use the computational power available to construct a tree of every possible move, then every possible move after each of those moves, on and on for as much as you can hold in memory, until you figure out which move available from the current board state puts you on the branch of the tree with the best chances of winning at the bottom.

DeepMind's tech, and modern machine learning in general, are quite a bit different. Really, game experiments like AlphaGo are just a means to the end of creating a more generalized problem solver. Go, like chess, does have a limited number of possible board states; the difference is that Go's limited number is still greater than the number of atoms in the universe, so it's impossible to use the same old brute-force method. The end goal of DeepMind's research, and other outfits', is a generic system that, given a set of inputs and knowledge of what the end goal is, can solve any problem. Games are just a good test ground because they have defined rules and victory conditions; the more serious applications are things like medical diagnosis (see also IBM's Watson bot that crushed human players at Jeopardy a few years ago).

Indeed. But I doubt the big AI developers could build a good RTS AI even if they did try. You'd never make a Deep Blue equivalent for Total War or StarCraft. Not in the near future, anyway.