"Extensive Playtesting"

By Kartigan13, in Descent: Journeys in the Dark

So the new video preview of FFG's Civilization game came out today. It's a fun preview, and the game looks pretty neat so I think I may check it out once it releases.

There was though one point on the video which made me laugh out loud. It was when a FFG employee speaking (I think it said he was the VP of marketing or some such) and I believe his exact words were "So we're known for doing extensive playtesting on all of our games, it's a point of pride for us.", at which point he started talking about the playtesting and reworking of the Civ game.

I did have to chuckle when I heard that, since Sea of Blood was what immediately popped into my mind. It seems quite obvious to me that that expansion was not playtested in the slightest. Then again, playtesting a 60-hour campaign extensively is probably impossible to do. But Descent in general seems to have not been playtested in a lot of areas, not only from a balance perspective (such as the lack of player scaling), but also in rules writing and interpretation. You'd think that a lot of the rules questions that come up would have surfaced at some point during an extensive playtesting period.

Though on the other hand, there are so many different combinations of skills, cards, and items in Descent that even over several playtests you may not run into every combo. Still, you'd think someone would have thought about the way Gauntlets of Power could be abused with Rapid Fire before that card was released, if they were doing "extensive playtesting". Lol, I love FFG, and their newer board games do seem to be playtested well (I'm thinking of Runewars), but that statement just made me laugh in regards to Descent.

The problem is when the people who wrote the rules test their own games. Of course it's clear to them what they meant, most of the time (hopefully). Either that, or they dismissed small/quick questions from their testers as too mundane or something. I can agree that they most likely played a few games and thought it was working well enough. In their case, the insanity that is the rulebooks, with no structured approach and no useful index to speak of, wasn't a problem either. They just pushed all the information in there "somewhere", so that was probably good enough. They probably never had to explain the rules to people alien to the dungeon crawler genre either. Lots of guesses of course, but my friends and I are quite frequent players and have a lot of experience with complex games (Magic Realm, anyone?). Still, Descent (especially RtL) and StarCraft must be the worst sets of rulebooks I have ever had to endure.

Kartigan said:

I did have to chuckle when I heard that, since Sea of Blood was what immediately popped into my mind. It seems quite obvious to me that that expansion was not playtested in the slightest. Then again, playtesting a 60-hour campaign extensively is probably impossible to do. But Descent in general seems to have not been playtested in a lot of areas, not only from a balance perspective (such as the lack of player scaling), but also in rules writing and interpretation. You'd think that a lot of the rules questions that come up would have surfaced at some point during an extensive playtesting period.

I don't know if FFG has a dedicated QA department, but if they're really serious about performing extensive playtesting, they should. If they did have such a department, then getting together 5 guys whose only job is to play SoB for 8 hours a day should mean they can bang out a full campaign in about 7 or 8 (working) days. And that's assuming they play all the way through. If the OL scores an early razing victory, or if the heroes wind up in a position where they have little if any chance of making it (two events which I gather occur with not uncommon frequency in SoB), then they could probably fit more games into the same amount of time. Given a month to do QA, they may have played somewhere between 5-10 campaigns, let's say. Extensive? Maybe not, but it should at least have been enough to turn up some of the more glaring problems that somehow made it through.

That means either the current state of SoB is considered acceptable by their QA standards, or the people in charge decided to push it through regardless in order to meet deadlines. I work in QA myself (not in board games, but I'm sure the process is similar), so I know that when push comes to shove, that second option has a habit of showing up sooner or later. In fairness, I also know from my own experience that QA will never find all the bugs, so drawing the line somewhere is not necessarily a bad thing.

Descent's Advanced Campaigns are probably a bad example for exactly the reason you mention - lengthy playing time reduces the number of test cycles that can be done. Even so, with a dedicated team of employees whose only job is to play the crap out of these games 8 hours a day, 40 hours a week, I would agree that they could've done better than they did with SoB. Runewars was pretty solid, though (and that's the most recent game I've purchased), so here's hoping that what he said is true of current and future design procedures, even if it hasn't been upheld so well in the past.

I don't know how effective it would be to have the same five people do all of the playtesting. I'm currently playing with two different vanilla Descent groups and one RTL campaign. Each group is completely different in approach. Situations that give one group a hard time are either completely avoided or destroyed by another group, and I'm not just talking about one group having the right characters or skill set to handle it. (Does anyone else ALWAYS end up with a Varikas character wielding the Bone Blade in every game?) I'd think if you had only one group do all of the playtesting the results would be pretty lopsided.

Actually, given the number of complaints about SOB maybe that's exactly what they did.

Just my two cents, but I think truly complete playtesting would have to include multiple groups, and mix up those groups as much as possible.

From the RtL rulebook, page 34:

" Lead Playtester: Mike Zebrowski
Playtesters: Jesse Acostas, Jonathan Ahern, Sean Ahern, Matthew B. Cary, Dan Clark, Tony Doepner, Deron Dorna, Brent Doughty, Michael Evans, Craig Goldberg, Judd Jensen, Evan Kinne, Anthony La Terra, Frank La Terra, James Lilly, Thyme Ludwig, Bruce Packard, Daniel Scheppard, John Skogerboe, Jason Allan Lee Smith, Chris Stafford, Thor Wright, and Team XYZZY"

I count 23 names, plus "Team XYZZY".

From the SoB rulebook, page 46:

" Playtesters: Matthew B. Cary, Dan Clark, J.R. Godwin, James Hata, Sally Karkula, Rob Kouba, Eric M. Lang, John Skogerboe, and Team XYZZY"

I count 8 names, plus "Team XYZZY".

Hm...maybe it's not such a surprise that SoB has more complaints than RtL...

Antistone said:

From the RtL rulebook, page 34:

" Lead Playtester: Mike Zebrowski
Playtesters: Jesse Acostas, Jonathan Ahern, Sean Ahern, Matthew B. Cary, Dan Clark, Tony Doepner, Deron Dorna, Brent Doughty, Michael Evans, Craig Goldberg, Judd Jensen, Evan Kinne, Anthony La Terra, Frank La Terra, James Lilly, Thyme Ludwig, Bruce Packard, Daniel Scheppard, John Skogerboe, Jason Allan Lee Smith, Chris Stafford, Thor Wright, and Team XYZZY"

I count 23 names, plus "Team XYZZY".

From the SoB rulebook, page 46:

" Playtesters: Matthew B. Cary, Dan Clark, J.R. Godwin, James Hata, Sally Karkula, Rob Kouba, Eric M. Lang, John Skogerboe, and Team XYZZY"

I count 8 names, plus "Team XYZZY".

Hm...maybe it's not such a surprise that SoB has more complaints than RtL...

The problem isn't the numbers.
The problem is the personnel. Many of those names are in-house FFG personnel; possibly all of them (I recognise at least 5 names there from FFG). They include Sally Karkula, who did the dungeon designs (and frankly, the odd issue aside, designed much better dungeons than those in RtL). Dan Clark answers a lot of the questions for the FAQ, etc. These two people, at the very least, should be who the playtesters report to. They should not be playtesting themselves!
The playtesters should also be playing in a vacuum, so to speak, without immediate access to the people involved in design - when they hit problems and issues, they should have to stew in them for a while, not get them easily fixed and answered. That may have happened (especially with Team XYZZY), but it doesn't seem likely with such a small group containing so many FFG employees.

I agree with all the points made by everyone. It was especially eye-opening for me to see the number of playtesters for both games that Antistone mentioned; and then, as Corbon pointed out, many of them are in-house, and two of them shouldn't have been playtesting at all!

I'd also add my two cents that it really shouldn't take 5 guys to playtest a campaign. Two people familiar with the game could easily playtest a campaign completely, allowing more playtests across more groups with fewer personnel. Though maybe it'd be a good idea to have more opinions in a playtest group, so a somewhat larger group, like 3 or more, might be wanted.

They do playtesting all the time, with every game, every day, with hundreds of people. That's every person that picks up the game, sits down with other people, and plays it, having unexpected things pop up and being unable to find any kind of ruling for them, or whatever else one can think of happening in a game. Then they jump into the forums and start asking questions of other people who went through the same ordeal, get into deep discussions, and then send e-mails to FFG about rules and whatnot. Then after some time FFG puts out a FAQ that doesn't help much, because the people in the forums have already figured out what to do and how to fix it. The problem gets solved, and FFG didn't have to pay a single person, other than the people answering the e-mails/phones and the people making and printing the FAQ. So extensive playtesting looks to be a yes.

That's not playtesting. That's releasing a crappy product on purpose because you know people will still buy it, then making whatever updates you have to in order to ensure people keep buying it. They're two completely unrelated business practices.

James McMurray said:

That's not playtesting. That's releasing a crappy product on purpose because you know people will still buy it, then making whatever updates you have to in order to ensure people keep buying it. They're two completely unrelated business practices.

Actually, I'm pretty sure that was sarcasm.

Probably, but internet text has no tone and no emotes or tags were given, so I took it at face value.

I'd like to watch the game devs play a round together... I just can't believe they didn't fight over their own rules all the frickin' time.

Looking at their rulebooks, I'm pretty sure they don't have a dedicated QA team...

Looking at the Quest Compendium, I confirm they don't do QA at all!