MOV, Large Base ship half points and SOS

By Osoroshii, in X-Wing

I've taken some time to let the decision on Half Points sink in before deciding to get things off my chest. I'm disappointed in the direction organized play is heading for X-Wing. I was never really a fan of the MOV (Margin of Victory) system to begin with: I could easily see the value of bulking up on ships to bleed fewer points in organized play. Up until MOV, small ship squads performed well in tournaments. The large base models had to fight their way into the top seats, yes, even the turrets.

Strength of Schedule (SOS), even with its flaws, is a much better system than MOV. In SOS you are trying to win every match, while in MOV you're trying not to lose. I understand the desire to move away from an SOS system: when a player misses a cut because his first- or second-round opponent drops from the tournament, it feels unfair. I don't disagree that missing cuts because of someone else's actions feels wrong.

FFG has moved in a good direction by setting a definitive round time of 75 minutes. I personally feel this is more than enough time to reasonably finish a game of X-Wing. If you are routinely getting called on time, you're not really playing a squad that can kill off another squad, and that is what MOV brought to tournament play.

There is a game system that has been running events and tournaments for over 25 years. In hobby gaming, it is the 2,000 lb gorilla in the room: yup, Magic: The Gathering. They use Swiss rounds with SOS. They also don't have modified wins; you either win, lose, or draw. I don't think it would be that bad to shift toward that kind of system.

With 75-minute rounds, nearly all games finish; my guess would be 90% or better. Normally the matches that don't finish don't have enough offense in their lists, because those players are playing the MOV game instead of X-Wing. When time is called and both players have ships left on the table, they both failed to win the match, regardless of points per ship. When both players fail to win, it's a draw. Wins are 5 match points, draws are 1, and losses are 0.
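The proposed scheme is simple enough to sketch as a tiny function. This is a hypothetical helper illustrating the proposal above, not anything from the official tournament rules:

```python
def match_points(my_ships_left, their_ships_left):
    """Score one timed match under the proposed win/draw/loss scheme.

    You only win by destroying the entire opposing squad before time is
    called; if both players still have ships at time, it's a draw.
    (A sketch of the proposal in this post, not official FFG rules.)
    """
    if their_ships_left == 0 and my_ships_left > 0:
        return 5  # win: opponent's squad fully destroyed
    if my_ships_left == 0 and their_ships_left > 0:
        return 0  # loss: your squad fully destroyed
    return 1      # draw: both players failed to win
```

Note that no margin-of-victory term appears anywhere; the only inputs are whether each squad survived.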

When MOV was introduced over a year ago, most applauded the change. It wasn't long before the large defensive ships started to rule the scene. Now, after a year, we make an adjustment to fix MOV by making large base ships score differently than every other ship. Now there are situations (although rare) where you can lose a match without ever losing a ship. So, I guess what I'm asking is to abandon MOV and half points and move to a system proven to work: SOS.

You make some good points, but you really didn't go back far enough to identify the actual problem with MOV, or more specifically the fact that we shouldn't even need such rules.

The problem with MOV is not that it exists but that large ships exist. Adding large ships to X-Wing was a huge mistake; put them in Armada where they belong. These 10-15+ hit point, immobile, 360-degree-shooting bore-fests have completely thrown the entire game out of context: what was once a game about dogfighting is now a game of statistical dice math-wing, and they've had to constantly "fix it" by upping the ante.

They should have stuck to the game being a dogfighting game where mobility, prediction, and tactics rule supreme. X-Wing in tournament competition in particular is so incredibly boring to watch that I almost can't stand it. Now, with these new Y-Wing TLT builds, the games are almost as boring to play as they are to watch.

Don't get me wrong, I love X-Wing, and thankfully, if you have an agreeable group that is actually interested in playing X-Wing as a dogfighting game, it's a lot of fun. But competitive play has turned into a Magic: The Gathering tard-fest, and I don't blame the players; it's the game.

Edited by BigKahuna

You make some good points, but you really didn't go back far enough to identify the actual problem with MOV, or more specifically the fact that we shouldn't even need such rules.

This is a bad start, since it (a) isn't true and (b) has already been discussed multiple times in the thread. We need some kind of rules for scoring games that end on the timer instead of by blowing up all of the opponent's ships, and we've had those rules since the game came out. They're an indispensable part of competitive play; MOV simply extends them so that they can also be used as a tiebreaker.

The problem with MOV is not that it exists but that large ships exist. Adding large ships to X-Wing was a huge mistake; put them in Armada where they belong. These 10-15+ hit point, immobile, 360-degree-shooting bore-fests have completely thrown the entire game out of context: what was once a game about dogfighting is now a game of statistical dice math-wing, and they've had to constantly "fix it" by upping the ante.

The halcyon days when dogfighting was real dogfighting lasted less than six months, and that was only because FFG had supply problems with Wave 2. I also can't really conceive of a Star Wars dogfighting game that doesn't include the Falcon, since the Falcon was part of every single fighter engagement depicted in the original trilogy.

X-Wing in tournament competition in particular is so incredibly boring to watch that I almost can't stand it. Now, with these new Y-Wing TLT builds, the games are almost as boring to play as they are to watch.

Don't get me wrong, I love X-Wing...

"I hate a lot of core elements of this game! But don't get me wrong, I love the game." This is, at best, incoherent.

...and thankfully, if you have an agreeable group that is actually interested in playing X-Wing as a dogfighting game, it's a lot of fun, but competitive play has turned into a Magic: The Gathering tard-fest, and I don't blame the players; it's the game.

I've discussed before how much I dislike it when people throw around the words "retard" and "tard". They're words that have been used frequently to hurt people I love, and after decades of dealing with it, I simply have no respect left for people who use those words as casual weapons.

But you didn't stop there. You chose to aim those weapons at other people who play a game you claim to love, as well as a lot of people who play a related game. All told, you're insulting hundreds of thousands of people not because of anything they actually did, but simply because their play preferences differ from yours.

And then you have the gall to say you don't blame those players.

I'd be happy to talk about it by PM, if you're interested.

Please don't; this is getting good.

Okay, you asked for it. ;)

That was an excellent read. Thank you.

Notably, though, even in a round-robin tournament there's some error associated with that set of z-scores because there's error associated with the outcome of an X-wing game. If I play you and you win, it could be because you're better than I am. But it could also be that your list had a particularly good matchup against mine, or that you had excellent results on the dice in the first few rounds, or it could be that twenty minutes into the game I was elbowed in the kidneys by a celebrating player who was standing too close to me. In this very simple model of player performance, everything that affects the outcome of an X-wing game that isn't skill is represented as error, and it turns out that X-wing is a noisy (that is, error-prone) process.

How much does a second or third round reduce this error? I ask this question for its potential value to our local league. We have several regional divisions wherein each competitor plays his division mates twice. Then, division winners play a single elimination bracket (I would have preferred double elimination, but we need to finish before the holidays). Is there a way to quantify (or at least estimate) the relative error between small-group double round-robin and a larger single round?

SOS had two big problems. We've talked about the first: players dropping from the tournament and tanking your SOS. But the second problem was just as bad. With SOS, it mattered WHEN you lost: if you lost in the first or second round, you were functionally eliminated from making the top cut.

In a 5-round Swiss tournament, a loss in the first round meant that you weren't going to play against someone with a winning record again until Round 4. Let's say you lost in Round 1 and then won out; your opponents' records at the time of pairing are:

Player 1's Opponents

Round 1: 0-0
Round 2: 0-1
Round 3: 1-1
Round 4: 2-1
Round 5: 3-1

Finished at 4-1

Now, let's say that instead of losing in Round 1, you lose in Round 4; your opponents' records will be:

Player 2's Opponents:

Round 1: 0-0
Round 2: 1-0
Round 3: 2-0
Round 4: 3-0
Round 5: 3-1

Finished 4-1

Simply by changing when you lost, you added 3 more victories to your SOS over the course of the tournament. We have no idea how the early-round opponents actually finished, but starting 3 victories (or 15 points) behind against other 4-1 players is a serious deficit. Dropping tended to compound the problem: with an early loss, you were more likely to face players who had also lost, and who were therefore more likely to drop early.
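To make the arithmetic concrete, here is a quick sketch. The `(wins, losses)` pairs are the opponents' records from the example above, taken at the time of pairing, and `schedule_wins` is a rough stand-in for how SOS counted them:

```python
# Opponents' records (wins, losses) at the time of pairing.
p1_opponents = [(0, 0), (0, 1), (1, 1), (2, 1), (3, 1)]  # lost Round 1
p2_opponents = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1)]  # lost Round 4

def schedule_wins(opponents):
    """Total opponent wins: a crude stand-in for strength of schedule."""
    return sum(wins for wins, losses in opponents)

print(schedule_wins(p1_opponents))  # 6
print(schedule_wins(p2_opponents))  # 9 -- three more, purely from losing later
```

Both players finish 4-1, but the late loser carries three extra opponent victories into the tiebreaker.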

So, in the above example, Player 1 is paired against Bob, who is an excellent player. Bob wins the Round 1 game by a slim margin; maybe Bob gets a clutch die roll and squeaks out the win against Player 1. Bob then plays Player 2 in Round 4 and also squeaks out a win. But because Player 1 played Bob in Round 1 and Player 2 didn't play Bob until Round 4, Player 2 may make the cut, and Player 1 almost certainly will not. This was a huge flaw in the SOS system, probably bigger than player drops.

I do think something should probably be done to differentiate a win at time from a victory where all your opponent's ships are destroyed. But doing that could disqualify a number of lists: a list of all A-Wings can sometimes have trouble killing all of an opponent's ships, and the ideal system doesn't disqualify list types. We probably do need a solution where the tournament points scored have more variance, so that the tiebreaker comes into play less often. Right now the tiebreaker is front and center in most tournament list decisions, because it's always going to come up.

Edited by Rinehart

The problem with MOV is not that it exist but that Large Ships exist. Adding Large ships to X-Wing was a huge mistake, put that in Armada where it belongs. These 10-15+ hit point immobile, 360 shooting bore-fests have not only completely thrown the entire game out of context, from what once was a game about dog fighting but is now a game of statistical dice math-wing but they have had to constantly "fix it" by upping the ante.

With apologies to JBR7


Notably, though, even in a round-robin tournament there's some error associated with that set of z-scores because there's error associated with the outcome of an X-wing game. If I play you and you win, it could be because you're better than I am. But it could also be that your list had a particularly good matchup against mine, or that you had excellent results on the dice in the first few rounds, or it could be that twenty minutes into the game I was elbowed in the kidneys by a celebrating player who was standing too close to me. In this very simple model of player performance, everything that affects the outcome of an X-wing game that isn't skill is represented as error, and it turns out that X-wing is a noisy (that is, error-prone) process.

How much does a second or third round reduce this error? I ask this question for its potential value to our local league. We have several regional divisions wherein each competitor plays his division mates twice. Then, division winners play a single elimination bracket (I would have preferred double elimination, but we need to finish before the holidays). Is there a way to quantify (or at least estimate) the relative error between small-group double round-robin and a larger single round?

I don't have a quantitative answer for you, since that's something that would require a lot of empirical data collection*. Qualitatively, it reduces some kinds of error but not others: playing two matches won't change a bad matchup, and a player who's fatigued in the first game won't suddenly be refreshed for the second. It will help reduce error due to dice, though.

My preferred tournament structure for X-wing would be a round-robin qualifier followed by double elimination final, though, for a number of reasons (for one thing, there's virtually no need for tiebreakers). So that part makes me happy!

*ETA: Or extensive simulation. Which I'll get around to eventually, meaning sometime between now and the heat death of the universe.
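For what it's worth, a bare-bones version of that simulation is short. This is my own toy model (nothing validated against real tournament data): a game's outcome is a skill difference plus Gaussian dice noise, and we check how often the stronger player comes out ahead on the aggregate margin of one game versus two:

```python
import random

def game_margin(skill_a, skill_b, noise=1.0):
    """One noisy game: a positive margin means player A came out ahead."""
    return (skill_a - skill_b) + random.gauss(0, noise)

def stronger_player_ahead(skill_a, skill_b, games):
    """True if the total margin across `games` games favors player A."""
    return sum(game_margin(skill_a, skill_b) for _ in range(games)) > 0

random.seed(1)
TRIALS = 20000
for games in (1, 2):
    hits = sum(stronger_player_ahead(0.5, 0.0, games) for _ in range(TRIALS))
    print(f"{games} game(s): stronger player ahead {hits / TRIALS:.1%}")
```

With these made-up numbers (skill gap 0.5, noise 1.0), the second game raises the hit rate from roughly 69% to roughly 76%: the dice error shrinks, but nowhere near to zero, and matchup or fatigue effects aren't modeled at all.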

Edited by Vorpal Sword

I don't have a quantitative answer for you, since that's something that would require a lot of empirical data collection*. Qualitatively, it reduces some kinds of error but not others: playing two matches won't change a bad matchup, and a player who's fatigued in the first game won't suddenly be refreshed for the second. It will help reduce error due to dice, though.

In this case, though, we are talking about a six week league. Lists will change between games, so "bad matchup" noise should be reduced (consecutive bad matchups are just as likely to be evidence of a strong list builder playing a weak one). Fatigue should (usually) not be a factor.

We are pretty far off topic, but thanks for the discussion.

Half points for large ships isn't the only MOV fix they're trying. They've also changed the pairing of rounds to help break up the large ships from facing off against each other.

SoS is random. Your primary tiebreaker should not be a random element if you have any other reasonable metric available. Strength of Schedule is never a good option; it just happens to be the only option in most cases.

I saw this thread a few hours too late, but want to get my opinion out there as well. There are a lot of good arguments here - too many to really quote.

SOS was awful in that it allowed the play of your opponents to have too much impact on your results and your chance to make the cut. I have been on both sides, making the cut and being the first player to miss it because of SOS, more times than I care to admit. I was all for any rule change that made how you played the better tiebreaker. MOV is definitely better in that regard, but MOV has its own problems; the rise of the 2-ship "FAT" meta proves that. With the recent rule change giving half points for doing half or more damage to large ships, I think MOV does a bit better job of reflecting the state of the game at the end. But as people have pointed out, you can run into the weird situation of losing a game in which you have not lost a ship. Some people think it will happen a lot, others think not too often. Either way, this is definitely a problem!

There are other threads about how the half points for half damage should have been applied to all ships, not just large ones. The opinions in those threads are just as polarized as they are here: some are all for the half points being applied to all ships, while others think things should never have been changed in the first place.

I completely disagree with eliminating the modified win and awarding 1 point for a draw if your squad is not completely destroyed. People have complained about the stalling with FAT ships trying to hold on at the end of the game. Imagine how many games that would happen in, with both players trying to stall to keep their last ship on the table. NO THANK YOU!

FWIW, in my opinion the half-points scoring change is a move in the right direction and should have been applied to all ships NOW! I think it's just more balanced: everyone follows the same rules, instead of penalizing only the people who want to run large ships, especially those who still want to run a 2-ship tanky build. Yes, you may still have the oddball game where you lose despite destroying some of your opponent's ships and not losing any of your own, but with this rule in place I think the scoring would more accurately reflect the damage taken versus the total hit points of both squads, and give a better picture of the true state of the game when time is called. I think that is where FFG will eventually go with the scoring of MOV: they will see that scoring the half-points rule on all ships is more balanced and adjust the rules again. My prediction is either right after store championship season or just before Worlds next year.

Regardless of whether you love the current rules or hate them, they are the current rules for competitive/tournament play, and everyone needs to follow them!

People being vocal about their opinions just shows how popular this game is. I hope everyone keeps expressing their opinions, whether they are the same as or different from mine. I just hope everyone can do it in a respectful manner! And even though we love playing it, it is still only a game!

I understand the point of what they are trying to do with half points on large-based ships. I just feel it's a bad ruling to help a bad tournament format.

Imagine if, in baseball, a tie at the end of the 9th inning were broken by counting the number of times each team got to second base.

Edited by Osoroshii

Imagine if, in baseball, a tie at the end of the 9th inning were broken by counting the number of times each team got to second base.

Doesn't sound too bad to me ;-)

I understand the point of what they are trying to do with half points on large-based ships. I just feel it's a bad ruling to help a bad tournament format.

Imagine if, in baseball, a tie at the end of the 9th inning were broken by counting the number of times each team got to second base.

I don't think SoS is any better, though, based on my experience with Warmahordes.

Okay, you asked for it. ;)

One way to look at tournaments is as a mechanism for picking some number (often 1) of "players" from a set of an arbitrarily large size. The players could be actual individual players like in X-wing, or teams like in baseball, or more abstractly they could be people competing for a job. We'll just call them all players, and assume there's an operation that compares any pair and determines which is better.

A psychometrician named Thurstone developed something called the law of comparative judgment in the 1920s. Thurstone was interested in determining things like the smallest difference people could perceive between two stimuli--say, the brightness of a light--and the law of comparative judgment was how he approached the problem.

It's not really a law, but actually a mathematical model that ranks a set of objects in exactly the way we need to perform our tournament: it compares every possible pairwise combination of objects, and uses those comparisons to not only rank the objects but actually determine their positions with respect to some hypothetical underlying variable that causes their performance.

That's a direct analogy to a round-robin tournament that matches every player against every other player. If I put 128 X-wing players in a round-robin tournament, I could use the results to determine (say) a z-score for each of them. (With today's mathematical tools at hand, I'd probably use a Rasch model, which is a pretty linear descendant of the law of comparative judgment.) For obvious reasons, though, that's not feasible unless you have three weeks for your tournament. Still, round-robin is the best way to do it, if your goal is actually to figure out how good every player in the field is with respect to every other player.
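As a toy illustration of the round-robin idea (my own sketch, not Thurstone's or Rasch's actual machinery): simulate a full round-robin with latent skills plus noise, then standardize each player's win count into a z-score as the skill estimate.

```python
import random
import statistics

def round_robin_wins(skills, noise=1.0):
    """Every player plays every other player once; return win counts.

    A game's margin is the skill difference plus Gaussian noise, so
    upsets happen -- this is the 'error' in the outcome of each game.
    """
    wins = [0] * len(skills)
    for i in range(len(skills)):
        for j in range(i + 1, len(skills)):
            margin = (skills[i] - skills[j]) + random.gauss(0, noise)
            wins[i if margin > 0 else j] += 1
    return wins

random.seed(0)
true_skill = [random.gauss(0, 1) for _ in range(16)]
wins = round_robin_wins(true_skill)

# Standardize win counts into z-scores: the crude skill estimates.
mean, sd = statistics.mean(wins), statistics.stdev(wins)
z_scores = [(w - mean) / sd for w in wins]
```

With a full round-robin the z-scores track `true_skill` closely, and shrinking `noise` tightens the agreement; cut the number of games (as single-elimination or Swiss must) and the agreement degrades.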

Notably, though, even in a round-robin tournament there's some error associated with that set of z-scores because there's error associated with the outcome of an X-wing game. If I play you and you win, it could be because you're better than I am. But it could also be that your list had a particularly good matchup against mine, or that you had excellent results on the dice in the first few rounds, or it could be that twenty minutes into the game I was elbowed in the kidneys by a celebrating player who was standing too close to me. In this very simple model of player performance, everything that affects the outcome of an X-wing game that isn't skill is represented as error, and it turns out that X-wing is a noisy (that is, error-prone) process.

So now picture those players laid out on a number line (a graph of the underlying variable of "skill"). Each person who participated in this enormous round-robin tournament has both an estimated location on the line, represented by a dot, and an interval (represented by a shaded area) that indicates potential error in the estimate. (A poll of registered voters might put a candidate's support at 27%, plus or minus 3.5%. That 3.5 percent is the potential error.)

Some of those error regions might overlap, and if they do, that's the mathematical model we're using admitting that the ranking might be wrong--player #38 and player #39 might really belong in the opposite order, if the model was a little bit wrong about both of their skill levels.

But critically, this method (round-robin analyzed using a particular set of mathematical tools, if you're losing track) is the most accurate possible way to determine players' positions on that line. Since it's clearly not feasible to do it, we invented alternatives. Single-elimination is an old solution, and it has a huge payoff in terms of time--it's literally an exponential decrease in the number of rounds I have to run. But you pay for it in terms of accuracy, because now you're testing each player against a limited sample of other players. And that blows up those error regions by a lot. Now there are a lot of players with overlapping error bars (again, meaning you can't really resolve which of them is better), and the error is uniformly distributed across all players. You lost resolution, and you lost it everywhere at once.

Swiss tournaments are a newer solution, and they work--in mathematical terms--not by reducing that error, but by moving it. The Swiss system says "hey, we care most about the players at the top, right? Let's focus on getting good estimates for them!" And that's just what it does. When you apply mathematical tools to the Swiss system (usually using large simulations), what happens is that because you're always comparing the best players to the best players, the uncertainty around those players decreases rapidly. Unfortunately, for the middle ranks anyway, you get that advantage by blowing up the error bars for players in the middle of the distribution even more than you did under single-elimination.

The best qualitative explanation I can come up with for how this happens is that in order for someone who's "really" a 3-2 player to end up at 5-0, all of the error in each of her games has to point in the same direction--she would have to be not just consistently lucky but lucky in a set of increasingly discriminating games. Because that's unlikely, the error bars are pretty small. But in order for someone who's "really" a 3-2 player to end up at 2-3, bad luck only has to strike once. So the error bars are typically going to be so large that we can't tell players in adjacent score brackets apart.
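A back-of-the-envelope check makes the spread visible. This ignores Swiss pairings entirely and treats a perfectly average player's games as independent coin flips (a big simplification), asking how often that player lands on each final record:

```python
from math import comb

# Final-record distribution for a player who wins each game with
# probability 0.5, over 5 rounds (pairing effects ignored).
for wins in range(6):
    prob = comb(5, wins) * 0.5 ** 5
    print(f"{wins}-{5 - wins}: {prob:.1%}")
```

The dead-average player finishes 3-2 and 2-3 with identical probability (about 31% each), which is exactly the "can't tell adjacent brackets apart" problem: one unlucky game moves the same player a whole bracket.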

So how does strength of schedule tie into all of this? It's conceptually just the sum of your opponents' skill estimates, which means the error in your strength of schedule is the sum of the errors in each of those estimates. Players in the middle of a Swiss ranking tend to have played other players in the middle; players on the outside tend to have played mostly players on the outside. In each round, we're "pushing" error away from the players on the outside, at the cost of our ability to tell the difference between players on the inside. And that means with each round, strength of schedule gets less and less meaningful for those players on the inside.

And as we figured out just now, we can't even reliably tell the difference between a 3-2 player and a 2-3 player by the end of 5 Swiss rounds. So when we add up the error in the estimates for everyone those two 3-2 players played against, we get those players' strength of schedule--but there's so much error wrapped up in strength of schedule that it's almost literally meaningless.

(I should say that all of this has been presented more-or-less qualitatively, but it can be demonstrated quantitatively. I haven't done it, and to my knowledge no one else has. It would be a straightforward but very time-consuming task, for someone with the requisite knowledge base. You could also take a shortcut and do it by simulation, which would be less time consuming but requires a set of software tools I don't have and would still be at least moderately time-consuming...)

Swiss also gives a very good picture of the lowest ranked players, since it's very symmetrical (dropouts notwithstanding).

My criticism of Swiss is how its mechanics interact with the number of players. While the end result is always that the first player won all matches (assuming no draws) and the last player lost all matches, the middle will vary widely depending on a lot of random things (pairings, dice) and the number of players. A 9-player tournament will "feel" very different from a 16-player tournament, although they both have 4 rounds. I don't have a concrete example, but I feel like a 9-player tournament generates more "upsets", i.e. the error in the middle is much higher.

Okay, you asked for it. ;)

One way to look at tournaments is as a mechanism for picking some number (often 1) of "players" from a set of an arbitrarily large size. The players could be actual individual players like in X-wing, or teams like in baseball, or more abstractly they could be people competing for a job. We'll just call them all players, and assume there's an operation that compares any pair and determines which is better.

A psychometrician named Thurstone developed something called the law of comparative judgment in the 1920s. Thurstone was interested in determining things like the smallest difference people could perceive between two stimuli--say, the brightness of a light--and the law of comparative judgment was how he approached the problem.

It's not really a law, but actually a mathematical model that ranks a set of objects in exactly the way we need to perform our tournament: it compares every possible pairwise combination of objects, and uses those comparisons to not only rank the objects but actually determine their positions with respect to some hypothetical underlying variable that causes their performance.

That's a direct analogy to a round-robin tournament that matches every player against every other player. If I put 128 X-wing players in a round-robin tournament, I could use the results to determine (say) a z-score for each of them. (With today's mathematical tools at hand, I'd probably use a Rasch model, which is a pretty linear descendant of the law of comparative judgment.) For obvious reasons, though, that's not feasible unless you have three weeks for your tournament. Still, round-robin is the best way to do it, if your goal is actually to figure out how good every player in the field is with respect to every other player.

Notably, though, even in a round-robin tournament there's some error associated with that set of z-scores because there's error associated with the outcome of an X-wing game. If I play you and you win, it could be because you're better than I am. But it could also be that your list had a particularly good matchup against mine, or that you had excellent results on the dice in the first few rounds, or it could be that twenty minutes into the game I was elbowed in the kidneys by a celebrating player who was standing too close to me. In this very simple model of player performance, everything that affects the outcome of an X-wing game that isn't skill is represented as error, and it turns out that X-wing is a noisy (that is, error-prone) process.

So now picture those players laid out on a number line (a graph of the underlying variable of "skill"). Each person who participated in this enormous round-robin tournament has both an estimated location on the line, represented by a dot, and an interval (represented by a shaded area) that indicates potential error in the estimate. (A poll of registered voters might put a candidate's support at 27%, plus or minus 3.5%. That 3.5 percent is the potential error.)

Some of those error regions might overlap, and if they do, that's the mathematical model we're using admitting that the ranking might be wrong--player #38 and player #39 might really belong in the opposite order, if the model was a little bit wrong about both of their skill levels.

But critically, this method (round-robin analyzed using a particular set of mathematical tools, if you're losing track) is the most accurate possible way to determine players' positions on that line. Since it's clearly not feasible to do it, we invented alternatives. Single-elimination is an old solution, and it has a huge payoff in terms of time--it's literally an exponential decrease in the number of rounds I have to run. But you pay for it in terms of accuracy, because now you're testing each player against a limited sample of other players. And that blows up those error regions by a lot. Now there are a lot of players with overlapping error bars (again, meaning you can't really resolve which of them is better), and the error is uniformly distributed across all players. You lost resolution, and you lost it everywhere at once.

Swiss tournaments are a newer solution, and they work--in mathematical terms--not by reducing that error, but by moving it. The Swiss system says "hey, we care most about the players at the top, right? Let's focus on getting good estimates for them!" And that's just what it does. When you apply mathematical tools to the Swiss system (usually using large simulations), what happens is that because you're always comparing the best players to the best players, the uncertainty around those players decreases rapidly. Unfortunately, for the middle ranks anyway, you get that advantage by blowing up the error bars for players in the middle of the distribution even more than you did under single-elimination.

The best qualitative explanation I can come up with for how this happens is that in order for someone who's "really" a 3-2 player to end up at 5-0, all of the error in each of her games has to point in the same direction--she would have to be not just consistently lucky but lucky in a set of increasingly discriminating games. Because that's unlikely, the error bars are pretty small. But in order for someone who's "really" a 3-2 player to end up at 2-3, bad luck only has to strike once. So the error bars are typically going to be so large that we can't tell players in adjacent score brackets apart.

So how does strength of schedule tie into all of this? It's conceptually just the sum of your opponents' skill estimates, which means the error in your strength of schedule is the sum of the errors in each of those estimates. Players in the middle of a Swiss ranking tend to have played other players in the middle; players on the outside tend to have played mostly players on the outside. In each round, we're "pushing" error away from the players on the outside, at the cost of our ability to tell the difference between players on the inside. And that means with each round, strength of schedule gets less and less meaningful for those players on the inside.

And as we figured out just now, we can't even reliably tell the difference between a 3-2 player and a 2-3 player by the end of 5 Swiss rounds. So when we add up the error in the estimates for everyone those two 3-2 players played against, we get those players' strength of schedule--but there's so much error wrapped up in strength of schedule that it's almost literally meaningless.

(I should say that all of this has been presented more-or-less qualitatively, but it can be demonstrated quantitatively. I haven't done it, and to my knowledge no one else has. It would be a straightforward but very time-consuming task, for someone with the requisite knowledge base. You could also take a shortcut and do it by simulation, which would be less time consuming but requires a set of software tools I don't have and would still be at least moderately time-consuming...)

Swiss also gives a very good picture of the lowest ranked players, since it's very symmetrical (dropouts notwithstanding).

My criticism of Swiss is how its mechanics interact with the number of players. While the end result is always that the first player won all matches (assuming no draws) and the last player lost all matches, the middle will vary widely depending on a lot of random things (pairings, dice) and the number of players. A 9-player tournament will "feel" very different from a 16-player tournament, although they both have 4 rounds. I don't have a concrete example, but I feel like a 9-player tournament generates more "upsets", i.e. the error in the middle is much higher.

That would be correct, because you've lowered the resolution it's capable of: you've put more players into the same number of rounds. The more players in a given number of rounds, the bigger those 'error bars' get, and thus the more likely the middle of the pack is showing inaccuracies.
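The 9-vs-16 comparison follows from how the round count is usually set: Swiss needs roughly log2(n) rounds for at most one undefeated player to remain, so both field sizes land on 4 rounds even though one has nearly twice the players.

```python
from math import ceil, log2

def swiss_rounds(players):
    """Minimum Swiss rounds so at most one undefeated player remains."""
    return max(1, ceil(log2(players)))

print(swiss_rounds(9), swiss_rounds(16))  # both need 4 rounds
```

Every player count from 9 through 16 gets the same 4 rounds, so the per-player resolution quietly degrades as the field grows toward the next power of two.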

Alas, rounds are few enough that their atomicity leads to some pretty big shifts in precision based purely on player counts. And as noted, keeping the number of rounds to a minimum is kind of key. (That said, I find tournaments that go below 4 rounds always feel a bit of a let-down. Better to just run extra rounds of swiss for everyone and skip the cutoff, eh? That way everyone keeps playing. :) )

@Vorpal Sword: I wish I could like that post thrice, and then thrice again. You explained a fairly technical concept fantastically well, congratulations. I've tried to do it before and failed horribly; now I've got the perfect reference. :)

Edited by Reiver

I understand the point of what they are trying to do with half points on large-based ships. I just feel it's a bad ruling to help a bad tournament format.

Imagine if, in baseball, a tie at the end of the 9th inning were broken by counting the number of times each team got to second base.

It would be ridiculous, but I doubt ties would be an issue if we could play 3-hour games or only needed to get one in per day.

This is why sports analogies and people's expectations from sports can't be applied to games like X-Wing. It requires a format that can be completed in a reasonable time over the course of a day while still letting everyone play as many games as possible.

Edited by AlexW