Deceit by Statistics
Why the actors’ union’s survey results of L.A. members don’t add up
By Kevin Delin
The reason the bulk of the Los Angeles theater community galvanized in true solidarity over the past two months is the fear that the new small-theater plan proposed by Actors’ Equity Association (AEA), the stage actors’ union, will up-end, and possibly decimate, the entire stage scene in Los Angeles. The current 99-Seat Plan has spawned a theater art form unique in the country where professionals can voluntarily get together to create world-class and cutting-edge art. The shows in LA’s intimate theaters needn’t first be audience-tested as television or film productions. The professionals in the 99-seat theaters can take real artistic risks.
And that, at its core, is what the professional actors and stage managers in Los Angeles are fighting to maintain: the freedom to take artistic risks while keeping the basic workplace protections provided by the Union.
The Union’s plan, in contrast, is focused on changing the financial structure of LA intimate theater by enforcing universal minimum wages for rehearsals and performances, regardless of a theater’s size or its production budget.
And Union Executive Director Mary McColl bases that argument on a membership survey and two small focus groups from last October. According to McColl, “AEA members have spoken loud and clear and told their leadership that they deserve compensation for their training, their contributions to productions, and their talent.” And for several months now, McColl and her Council have justified the implementation of their new plan by endlessly repeating the mantra that, according to their 2014 survey, the top concern of their membership is “more pay.”
Except their survey does not show that. In fact: The AEA’s interpretations of the published survey results are not supported by mathematics. While the actors’ top concern may be a desire for more money, you cannot use this survey to prove that’s the case.
That’s a serious charge, because the entire AEA case and new plan are based on their survey as evidence. The charge, however, is completely consistent with the actors’ overwhelming rejection (66% voted NO to 34% YES) of the AEA plan in a recent non-binding referendum. So let’s use a better mathematical analysis of the survey’s results to examine the disconnect between Mary McColl’s claims and the actors’ reaction. I promise it won’t hurt.
Before we sharpen our #2 pencils and don our green eyeshades, however, let’s get a few basic observations out of the way.
First, AEA hired Hart Research Associates to conduct the polling. Hart Research is well known and an established outfit from inside the Washington DC Beltway. They do significant polling for the type of clients (politicians) where getting things wrong means you don’t get a second chance.
Second, the raw survey is often released as part of a complete results package. AEA has not done so. Moreover, there are no paper copies of the survey because the polling was done online. For now, we’ll assume that (a) all questions in the poll are represented in the summary and (b) the order of the questions in the poll is the same as in the summary. There is a minor problem with the “pick from a list” questions in that we don’t know the order in which the list was presented to those surveyed, but we will assume for now that it won’t affect any major conclusions.
The most basic issue in polling is sample size. Accurate national polls can be done with as few as 1000 or so respondents. As long as you choose your survey recipients carefully, without introducing sample bias, you don’t have to poll a large number of people.
The sample size is stated as 608 individuals. This is a legitimate sample size for a population of 6,990 AEA actors living in LA County (based on the number of ballots sent out for a recent referendum).
But is there any sampling bias? An example of sampling bias in the national media would be polling only people you call on landlines – this biases towards older individuals (younger people are more likely to have cell phones only). Here, an email account needed to be available to take the AEA survey. It is uncertain whether this introduced any intrinsic bias into the sample. Other possible biases include oversampling people who have lived in LA less than one year (and may not yet understand the LA theater community) and the way the email addresses for the sample were chosen. While the survey claims that the sample is representative, it would be nice for the sampling method to be made public.
Of more significant concern is the unreported margin of error. It’s highly unusual to not report this number and it should be released.
For analysis purposes, assume a typical 95% confidence level; with a sample size of 608, the estimated margin of error is +/-4% (calculated as 0.98 divided by the square root of 608). What does this mean in English? It means that if AEA gave out the survey 100 times, then 95 of those times (the confidence level) the reported percentages would fall within +/-4.0% of the true population values. For example, if a response is given 60% of the time, then 95 times out of 100 we’d expect the true value for that question to lie within the range 56% to 64%. Using this calculated margin of 4%, most of the key survey results will not be affected (in other words, the differences between answer responses are usually much greater than 4.0%). A slight correction to our estimated margin of error accounts for the 608-person sample being a relatively large fraction of the 6,990-member population. If we apply a finite population correction factor, our adjusted margin is a slightly tighter +/-3.8%.
Are you going to worry about this difference? Didn’t think so. We’ll use 4%.
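For readers who want to check the arithmetic themselves, here is a short Python sketch of the margin-of-error estimate above (the 0.98/√n rule of thumb for a 95% confidence level, plus the finite population correction). The figures 608 and 6,990 are the sample and population sizes cited in this article.

```python
import math

n = 608    # survey sample size
N = 6990   # approximate AEA population in LA County (ballots sent out)

# Rule-of-thumb margin of error at a 95% confidence level: 0.98 / sqrt(n)
moe = 0.98 / math.sqrt(n)
print(f"Unadjusted margin of error: {moe:.1%}")        # ~4.0%

# Finite population correction: the sample is a sizable fraction of the
# population, so the true margin is slightly tighter.
fpc = math.sqrt((N - n) / (N - 1))
print(f"Adjusted margin of error:   {moe * fpc:.1%}")  # ~3.8%
```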
The survey states that 63% of the membership participated in the 99-Seat Plan in the last 5 years. Using the 6,990 population size, this means that roughly 4,400 AEA actors participated in intimate theater in the last 5 years. We can get an idea of this scale by comparing it to the sizes of union membership communities in other cities. As reported in the AEA 2013-2014 annual report, the New York City area, unsurprisingly, has the largest community with 18,795 members, followed by the Los Angeles area with 8,481. The next four largest AEA communities in the United States are Chicago (1,829), San Francisco (1,143), Philadelphia (1,057), and Washington DC/Baltimore (1,055). That means the AEA 99-seat acting community in Los Angeles alone is multiple times larger than the total membership in any of these other cities.
So while the acting may take place in a small theater, the acting community supporting it is anything but small. When comparing Philadelphia with the LA intimate theater community, for example, the LA intimate theater community is the giant in the room, not the City of Brotherly Love. Little wonder that LA intimate theater is having an impact on the national theater culture. Given its sheer size, that impact is to be expected.
Of the 63% of actors who participated, 60% are satisfied with their 99-seat theater experience. To put that 60% in context: if we look at the last 112 years of presidential elections, historians identify four landslides:
- 1932: Franklin Roosevelt (57.4%) over Herbert Hoover (39.7%)
- 1964: Lyndon Johnson (61.1%) over Barry Goldwater (38.5%)
- 1972: Richard Nixon (60.7%) over George McGovern (37.5%)
- 1984: Ronald Reagan (58.8%) over Walter Mondale (40.6%)
By looking at the popular vote in these landslides, it’s clear that 60% to 40% is a hugely significant margin. There’s a mathematical reason for this: 68% represents everything within one standard deviation on a Bell Curve (or Gaussian Curve if you are wandering around MIT). The things outside that region are considered “on the fringes.”
The same exact statistics tell us that a 60%/40% response means that actors are satisfied with their 99-seat theater experience on a landslide scale. In this context, the AEA survey predicts quite accurately the recent complete rejection by LA union members, by 66% to 34%, of the new AEA plan. Given the efforts made by the Union to campaign for the proposal, the thorough rejection is further confirmation that Los Angeles actors are passionate about tweaking, rather than up-ending, the present intimate theater culture.
There are two basic ways that a survey can go bad. The first, sample bias, we’ve already discussed. The second is response bias, or using bad questions. Response bias comes in many forms and the AEA survey is replete with them. First, of course, is the “pick two from a list” style question which is clearly limited to a pre-chosen set of responses.
Of the following areas, what are the two most important considerations for you personally when participating in theatre under an Equity contract, agreement, or code?
- Wage rates, pay levels, per diems: 61%
- Ability to hone my skills, craft: 40%
- Opportunities for exposure that may lead to Equity contract: 33%
- Opportunities for exposure that may lead to TV/film work: 28%
- Health payments: 20%
- Pension, 401(k) payments: 11%
- Other (volunteered considerations include fellow cast/production members, love of the art, artistic value, quality of production, role, story): 7%
Next is the issue of question ambiguity. Consider: “Of the following areas, what are the two most important considerations for you personally when participating in theatre[sic] under an Equity contract, agreement, or code?” This question lumps contract (paid) work with code (stipend) work. Presumably, an actor has different reasons for deciding to go in either direction, and yet this question blends them all together. For example, should we interpret the question as “why choose to participate under Equity contract” and answer “wages”? Or should we interpret it as “why choose to participate under the 99-seat plan” and answer “honing skills”? The question conflates the two issues, preventing any specific, logically sound interpretation by AEA.
There are two survey questions which involve choosing from a list of items. The first, cited above, involves the top concern of the actor.
The second involves the top concern for the Union. Instead of simply choosing a single item, however, respondents are asked to “pick two.”
Increasing work opportunities under Equity contracts and ensuring safe working conditions are LA area members’ top priorities.
Selected Two Most Important Activities for Actor’s Equity:
- Increasing contracts available to members: 57%
- Setting/maintaining standards for working conditions: 37%
- Negotiating contracts and agreements: 31%
- Increasing flexible opportunities, including codes: 29%
- Enforcing contracts and agreements: 27%
- Helping members with problems, service needs, grievances: 7%
- Communicating about activities/issues facing theatre prof.: 7%
This form of selection is somewhat similar to alternative voting, which is known to produce a more accurate representation of a group’s desires than “pick one.” However, unlike alternative voting, “pick two” never tabulates the order of the choices. Once you throw out the information about the preference between the two items selected, you cannot later claim that one of the most-selected items is more popular than the other.
Hypothetically, we can demonstrate by example that the most-chosen item from a “Pick-Two” scenario is not necessarily the most-popular item. Suppose we give 10 respondents an alternative voting choice out of the following list: A, B, C, D, E, F. Here are the responses, including the order of the picks:
- B, C
- A, C
- A, E
- D, B
- E, A
- C, B
- C, F
- A, C
- A, C
- B, C
What’s the most popular choice? Alternative voting tells us to tally everyone’s first choices, eliminate the lowest vote-getters, and redistribute those ballots to their second choices:
Round 1 Votes:
A = 4
B = 2
C = 2
D = 1
E = 1
F = 0
D and E have the lowest totals at 1 apiece, so we eliminate them and transfer their ballots to their second choices:
A = 4 + 1 = 5 = 50%
B = 2 + 1 = 3 = 30%
C = 2 + 0 = 2 = 20%
Obviously A is the most popular (50%), B is second most popular (30%), and C is in third place (20%).
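The two-round tally above can be reproduced in a few lines of Python. This is a minimal instant-runoff sketch for this specific example (it assumes, as here, that eliminated ballots always transfer to a still-standing candidate), not a general implementation:

```python
from collections import Counter

# Each response is (first choice, second choice), as listed above
ballots = [("B","C"), ("A","C"), ("A","E"), ("D","B"), ("E","A"),
           ("C","B"), ("C","F"), ("A","C"), ("A","C"), ("B","C")]

# Round 1: count first choices only
round1 = Counter(first for first, _ in ballots)
print(dict(round1))   # A=4, B=2, C=2, D=1, E=1

# Eliminate the lowest vote-getters (D and E) and transfer
# their ballots to the second choice listed on them
final = Counter(round1)
for loser in ("D", "E"):
    del final[loser]
    for first, second in ballots:
        if first == loser:
            final[second] += 1
print(dict(final))    # A=5, B=3, C=2
```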
However, suppose we instead count the same responses under a “Pick-Two” method, where the order of choice isn’t accounted for. In this case we have:
A = 5 = 50%
B = 4 = 40%
C = 7 = 70%
D = 1 = 10%
E = 2 = 20%
F = 1 = 10%
(Note that the percentages indicate the percentage of respondents picking that item, not the percentage of times the item was chosen. Since each person picks two items, the sum total of percentages is “200%” – which is identical to the sum of percentages in the AEA survey results.)
Under the “Pick-Two” scenario, it may look like C has the top ranking (70% of the respondents choosing it), with A (the true most-popular choice) coming in 2nd (50% of the people choosing it). However, since we threw out information (the order of the choices), we can’t now include that lost information in a ranking. We can only say that A and C were the two most chosen items. We know, of course, that in a true popular ranking (alternative voting) C would be in third place.
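The unordered “Pick-Two” tally of the same ten responses takes even less code, and shows the reversal directly:

```python
from collections import Counter

# Same ten responses as in the alternative-voting example above
ballots = [("B","C"), ("A","C"), ("A","E"), ("D","B"), ("E","A"),
           ("C","B"), ("C","F"), ("A","C"), ("A","C"), ("B","C")]

# Count every mention, ignoring whether it was a first or second pick
mentions = Counter(choice for pair in ballots for choice in pair)
print(dict(mentions))            # C=7, A=5, B=4, E=2, D=1, F=1

# Each respondent picks two items, so the counts total 2 x 10 = 20
# (i.e., the percentages sum to 200%, as in the AEA summary)
assert sum(mentions.values()) == 2 * len(ballots)

# C looks like the "winner" here (70% of respondents chose it), yet the
# instant-runoff tally of the very same ballots puts C in third place.
print(mentions.most_common(1))   # [('C', 7)]
```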
From this example, it’s much easier to interpret the AEA survey results. In the case of what concerns the actors most, the top two selected items are “wages” and the “ability to hone my skills.” Since we don’t know which item was chosen first or second, one cannot say that “wages” is the top choice just because 61% of the people chose it. In fact, subsequent discussions, including an AEA Town Hall meeting in January, have focused on the actors’ desire to maintain 99-seat theater to “practice their craft.” And when given the choice between getting in front of an audience and getting paid, most actors state that getting paid is a lesser concern. (Again, this speaks to the idea of tweaking the current 99-seat plan rather than overturning it.)
Just like in our example, then, perhaps “hone my skills” was the true most-popular choice and “wages” was actually in third place (behind the oft-mentioned “exposure opportunities”). At least this idea is consistent with comments made since the survey. Of course, no one can prove “hone my skills” is the most-popular choice, because AEA didn’t collect the selection-ordering data and can’t re-run the experiment. Nevertheless, we have just demonstrated that AEA cannot claim that this survey shows the top actor concern is wages.
Similarly, the most-popular choice for the second question, the top concern for the Union, is also unclear. Again, subsequent discussions have revealed that the actors in 99-seat theaters are more interested in Union workplace protections than in Union pay requirements. And again, the loss of selection ordering prevents a definitive interpretation.
To summarize: it is an error to claim that the most-chosen item represents the top choice of a group. One cannot use “Pick Two” as a substitute for alternative voting. If one is looking for the top choice to represent a group’s desires, alternative voting must be used. A sleight of hand occurs when a question about the “two most important considerations” is converted into a claim about a single “top consideration.”
It’s a shame that Mary McColl and her Council spent good time and money to create such a poorly vetted survey. While there is no obvious sampling bias associated with the survey, there is clearly an embedded response bias. And compounding those issues are the mathematically unjustifiable assertions AEA makes based on a faulty conception of ranked data.
In short, this survey does nothing to prove the AEA claim that a majority of actors are primarily concerned with increasing wages to the exclusion of all else. In fact, despite the response bias, a careful analysis of the survey is surprisingly consistent with the subsequent reactions of the actors. By landslide margins, actors and stage managers in 99-seat theaters are satisfied with their experience, and so, not surprisingly, by landslide margins LA actors and stage managers rejected an AEA plan that threatened those same theaters. In addition, both the value of practicing craft and Union workplace protections finished in the most-chosen group in the “Pick-Two” questions, and most actors have said these issues matter more to them than monetary compensation.
So, it turns out there is some connection between reality and this survey after all.
Just not in the way that AEA claimed.