Hi folks, quick query for you.
Given that once a card is printed it can't be adjusted, I was hoping to ask why FFG couldn't adopt a beta model for crowdsourcing the raw playtesting data. Not as a replacement for their current system, but in addition to it, as a final check before a card gets approved. Just wait two months and see what kind of madness floats to the top from people proxying prototype cards, then release the card in an expansion half a year later.
Let's say, for example, that every time the FAQ was released, it came with 10 "Beta Playtest cards" specifically and only for that cycle. You could give them all very generic names, e.g. Prototype Astromech A1, Prototype Cannon A1, or Prototype Missile A3.
I don't need to know the name of the thing. Would this impact sales? Do people feel this hurts the surprise aspect? Man, I wish we'd gotten a chance to widely playtest Nym ahead of time, for example.
I understand there may be logistical difficulties I'm not aware of; I'm just curious why it wouldn't work.
Wider playtesting of cards
5 minutes ago, citruscannon said: Hi folks, quick query for you. …
That would be awesome. The problem is that the Star Wars license makes this very difficult, if not impossible, to do on their end.
Just now, Timathius said: That would be awesome. …
Can they not get around it with a literal blank card in the FAQ that has no identifying features, just an "optional rule card for casual play" with no connection to anything in the movies?
Just totally divorce the rules on the card from the name of the thing, and give us a rule to test?
Just now, citruscannon said: can they not get around it by a literal blank card in the FAQ? …
I doubt Disney lets them risk it. Certain people could look at cards 8 months to a year out from the next movie and start filling in the blanks as to what they could mean.
4 minutes ago, Timathius said: Disney I doubt lets them risk it. …
I guess. But wouldn't that apply only to new ships? I can't pick out any identifying features on any upgrade card without the name; it's just mechanics. I get that an Imperial ship with X hull and X shields would be a problem. But a cost-3 crew that gives a focus token when an adjacent ship at range 1 spends an evade, for example? I wouldn't have a clue.
When thinking about upgrades, though, it seems to be more often the case that the base ship or a pilot is undercosted, rather than an upgrade, because upgrades can be taken on multiple ships. So maybe this is the biggest issue? The worst upgrade offender I can think of is the K4, for example, whereas I can name a handful of ships that feel slightly overpowered.
But if two of every three ships were actual planned statlines and dials, and the third was a red herring every time, would that make it work instead?
1 hour ago, citruscannon said: Hi folks, quick query for you. …
The biggest reason would be the lack of objective results and comments. Some would love a card, some would claim it's OP, and others would call it useless. With no controls, this beta test attempt would probably turn into a Charlie Foxtrot. More data is not necessarily a good thing, especially if it's of unknown quality.
Too much data is almost as bad as too little. Combine that with the added hassle of getting LFL approval for every revision they post for playtesting, and it isn't feasible.
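The "more data of unknown quality isn't better" point is easy to demonstrate with a toy simulation. Everything below is invented for illustration (the camp sizes, bias values, and the notion of a single 1-10 "power" score are all assumptions, not anything FFG measures): when the love-it and hate-it camps are unequal in size, their biases don't cancel, and piling on more reports just makes you more confident in a skewed number.

```python
import random

random.seed(0)

TRUE_POWER = 6.0  # hypothetical "true" power level of a test card, on a 1-10 scale

def crowd_report(bias, noise=2.0):
    """One volunteer's rating: true value, plus personal bias, plus random noise."""
    return TRUE_POWER + bias + random.gauss(0, noise)

# Three camps, as described above: those who love it (rate it higher than it is),
# those who call it useless (rate it lower), and the roughly objective middle.
fans    = [crowd_report(bias=+2.5) for _ in range(500)]
haters  = [crowd_report(bias=-2.5) for _ in range(300)]
neutral = [crowd_report(bias=0.0)  for _ in range(200)]

reports = fans + haters + neutral
mean = sum(reports) / len(reports)
print(f"mean of {len(reports)} reports: {mean:.2f} (true value {TRUE_POWER})")
```

With 500 fans versus 300 haters, the average lands around 6.5 rather than 6.0, and no amount of extra volume fixes it; that systematic skew is exactly what a small, controlled playtest group avoids.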
The problem to date has been that volunteer-based playtesting hasn't worked. That would imply that either playtesters don't find or don't report the issues, or FFG ignores the playtesters. I can buy that it might be option one for things like Dengaroo or other combos that rely on somewhat weird and complicated card interactions (like that three-shot YV-666), but I don't buy for a second that playtesters missed VI Advanced Sensors bomblet Nym with Genius. Or the Lowhhrick/Biggs/Rex interaction. Or that the Jumpmaster was broken-good and way too cheap.
The solution to the game's current balance problem isn't "more playtesting". It's "designers that listen to and act on the feedback playtesting generates".
I don't know enough about the internal structure, the people involved, or the corporate culture of FFG, or any of the factors that might get them from here to there. But it strikes me that a few basic things could help.
Paid playtesters would be a start. Get a few people onto the team whose full-time job is purely to test the ships and cards being designed and try to break them early on, and make sure the designers listen to their feedback. Those people can't be designers themselves: anyone who has ever made video game missions or set up any kind of challenge for other people knows that you are a terrible judge of your own product, because you know how it's supposed to be used/played/completed. A fresh set of eyes without your preconceptions will find holes you never considered.
There's other stuff too. More interaction with the community would no doubt help the designers recognise the issues players have. They don't have to listen to everything said by everyone; let's be honest, there's a lot not worth listening to. But there's also wisdom in crowds, and basic issues come up here faster and louder than anywhere else. If nothing else, the occasional visible sign that they are listening (do FFG even have a social media manager?) and have fixes planned for the most egregious issues would go a long way to diminishing the *******.
When you say things like that, it's clear that you haven't playtested. They may not have spotted certain things because those THINGS DIDN'T EXIST when playtested. The EPT slot on the Punishing One may have been a revision made after the last round of playtests and sent straight to the printer. Or a last-minute cost reduction. Deadlines exist. Sometimes you can't do another playtesting pass after making a change based on the feedback you got. Also, given the lead times on these things, playtesting is happening two waves before release. Now, I'm pretty certain playtesters have access to the unreleased, finalized stuff, but it's a world of difference when that stuff isn't constantly in the spotlight to think about. And I don't care how good a playtesting base you have, predicting meta impact two waves ahead of time can be difficult.
I also don't understand how actually interacting with the toxic waste dump this forum has become would be helpful. They read it. That is good enough. Far better that they remain quiet than get baited by a troll and cause a PR incident.
The biggest reason they won't is that they don't feel they need to. They are happy with the playtest process as it is now, or at least with the changes they have made to it over time. Regardless of what this forum states, they know their customer base a bit better, and if playtesting is insufficient they will adjust it as they think is needed. Besides, it isn't as if "just add more playtesting" is some magical panacea that fixes everything.
Similar to engineering, playtesting can suffer from the age-old problem: Fast, good, or cheap. You can only pick two. And for gaming companies, "cheap" is generally a requirement (contrary to what seems to be common perception, your average gaming company is generally only barely making a profit... if they are even making one at all). The real problems come in when it turns out that customers demand a release cycle that effectively makes "fast" an additional requirement...
To be fair, it's not really that cut and dried. It's more like three rubber-band axes, where the more you tug on one requirement, the more it pulls away from the other two. Pull on two, and there's no give left for the third. But it's easier to convey the general concept with my initial pithy statement.
2 hours ago, Freeptop said: Similar to engineering, playtesting can suffer from the age-old problem: Fast, good, or cheap. You can only pick two. …
Similar to science, playtesting can suffer from the age-old problem: changing only one variable at a time to see its effects.
At this point, there are far too many moving parts to comprehensively playtest everything, unless you want to slow development to a crawl.
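A back-of-the-envelope count makes the "too many moving parts" problem concrete. The component totals below are invented (plausible for a mature card pool, but not official numbers); the point is how quickly the combinations outrun any feasible testing schedule.

```python
from math import comb

# Rough, invented component counts; the exact figures don't matter,
# only how fast the combination counts grow.
pilots = 250
upgrades = 350

pilot_upgrade_pairs = pilots * upgrades            # every upgrade on every pilot
upgrade_pairs = comb(upgrades, 2)                  # every upgrade paired with every other
three_upgrade_builds = pilots * comb(upgrades, 3)  # one pilot carrying any 3 upgrades

print(f"pilot-upgrade pairings:   {pilot_upgrade_pairs:,}")
print(f"upgrade-upgrade pairings: {upgrade_pairs:,}")
print(f"pilot + 3 upgrades:       {three_upgrade_builds:,}")
```

Even under these modest assumptions you get tens of thousands of two-card pairings and well over a billion pilot-plus-three-upgrades builds, before dials, obstacles, or the opposing list enter the picture. "Test everything" stopped being an option waves ago.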
And, of course, everything Freeptop said is true, too.
What this game suffers from the most is players who think that somehow they could do it better, and name alternate price points to existing components as if they were fact.
Wider testing is not the answer. Employing professional testers is. No matter how experienced a player a playtester is, they are just a player who has been given early access. That's nowhere near the same thing as understanding and being able to adhere to the fundamental principles of test practice. Not only that, but the testers should be fully engaged in the design process from start to finish. That's how you engineer quality, not with semi-random spot checking at the end of the process.
Uh, can someone translate this into normal human language, please?
18 minutes ago, Major Tom said: Wider testing is not the answer. Employing professional testers is. …
And how do you define a professional tester?
Testers should not be a part of the design process. (Familiarity breeds contempt and all that) They should go in cold just like any player trying to decipher the language on the cards.
15 minutes ago, Celestial Lizards said:Uh, can someone translate this into normal human language, please?
Game broken.
Me can fix.
Crowdsourced playtesting is a firestorm waiting to happen. And paid playtesters seem pretty unlikely, given that FFG/Asmodee is a bit of a miser when it comes to paying even its designers.
That said, leaks from playtesters suggest that FFG's playtesting system is not ideal, with playtesters doing a good bit more than providing raw data or play experiences.
32 minutes ago, Darth Meanie said: Similar to science, playtesting can suffer from the age-old problem: changing only one variable at a time to see its effects. …
Your comments, along with Freeptop's, brought a smile to my face. Players who know that time is not on the developers' side, contrary to what the Rolling Stones say.
2 hours ago, Stoneface said: And how do you define a professional tester? …
I define a professional tester as someone who tests for a living. As a developer in an Agile environment I work with professional testers on a day-to-day basis, and am expected to be conversant in test practices and able to execute the techniques myself. I could throw a bunch of acronyms about, but they wouldn't mean much unless you are involved in testing yourself. As for your contention that testers should not be involved in the design process, decades of experience building software tells me that's flat-out wrong. Once we started involving testers in design, from the very beginning of requirements definition through to release, quality increased exponentially. The tests designed now cover areas like meeting design intent rather than just function. It makes a **** of a difference.
I've also playtested a couple of games in my time, and it never ceases to amaze me that people expect decent results from random games and people filling out questionnaires. Detailed, methodical test approaches find systemic faults. Does anyone sitting down to play unreleased content for X-Wing or similar even sketch out an outline of what they are trying to test, or do they just build lists and see if they think something is OP or overcomplicated? Testing is a complex and highly skilled discipline, but games companies invariably seem to think it can be done by taking people whose main qualification is being good at the game rather than being good at testing, and letting them play to see if anything breaks.
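"Sketching out what you are trying to test" can be as lightweight as a one-page charter per session. The structure below is a hypothetical illustration of the idea (the field names, the prototype card, and the pass criteria are all invented, not any real FFG or test-industry template): state the component, the specific risk, the lists built to stress it, and what result counts as "fine" before anyone rolls a die.

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    """A minimal playtest charter: what to probe, how, and what 'broken' means."""
    target: str                       # component under test
    hypothesis: str                   # the specific risk being probed
    setup: list[str]                  # lists/scenarios designed to stress it
    pass_criteria: str                # observable result that counts as acceptable
    sessions: int = 5                 # repetitions before drawing conclusions
    findings: list[str] = field(default_factory=list)

# Hypothetical example, not a real document:
charter = TestCharter(
    target="Prototype Missile A3",
    hypothesis="Under-costed when paired with a free reroll source",
    setup=["4x cheap carrier + A3 swarm", "2x ace + A3 vs. current top meta list"],
    pass_criteria="Win rate stays within 45-55% against meta lists across sessions",
)
charter.findings.append("Session 1: alpha strike removed a ship before it activated")
print(charter.target, "-", len(charter.findings), "finding(s) logged")
```

The contrast with "build a list and see if it feels OP" is the point: with a charter, a session that finds nothing is still evidence about a named hypothesis, not just an uneventful game.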
4 hours ago, Major Tom said: I define a professional tester as someone who tests for a living. …
While I agree that having testers involved from the beginning works very well for software and engineering, I don't think it would be beneficial for this game in particular, for the following reasons.
1) I don't think this game alone would support a full-time playtesting staff. That means either they are on call and you're paying for unproductive time, or the team is shared among other departments and may not be available when needed. I've said before that we don't know how much time is allotted to design and debugging before a product is sent to production.
2) Rules and wording on cards. If the playtesters are intimately familiar with the game, they may share the devs' bias when it comes to the wording on cards: you read a card already knowing the intent and decide the directions on it adequately relay the required information. I'd much rather have someone do a cold read and come back with "WTH does this mean?" than have to read four pages of RAI vs. RAW discussion.
3) People who know the game vs. good players who know the game. This one's more of a gut feel than a logical call. Personally, I'd rather have good players with the evil-genius trait than knowledgeable non-players. Street smarts vs. book learning; all things being equal, I'll take the player every time, provided they have good ethics and morals. Playtesters with a vested interest in skewing the data are worthless to the development team.
I was in engineering, in one form or another, for 34 years, so I can understand and appreciate your arguments. If we had more information on FFG's design process and its time frame, from concept to customer, I might agree with you on everything but a dedicated, professional playtesting staff. There would have to be one heck of a payback to justify the cost.