How far ahead is any given candidate, really, in voter opinion polls? The answer depends as much on how a poll is conducted as on the number it produces.
If we look at Wikipedia’s compiled entry on the most recent statewide opinion polls for the 2020 presidential election, we see many confusing numbers next to the highlighted victory margins.
What is the difference between an aggregate poll and an individual poll? Why do some polls cover shorter date ranges than others? What does LV mean? And most importantly, who is actually chosen to participate?
The answers are as convoluted as the polls themselves.
What is an Aggregate Poll?
An aggregate poll compiles results from other polls and averages them out. The top websites that provide aggregate poll data include 270 to Win, Real Clear Politics, and FiveThirtyEight.
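In its simplest form, this is just an unweighted mean of each candidate’s share across the polls being compiled. Here is a minimal sketch using hypothetical numbers (the figures and pollster names are invented for illustration):

```python
# A minimal sketch of an unweighted aggregate average.
# The poll numbers below are hypothetical, not real data.
polls = [
    {"pollster": "Poll A", "x": 49.0, "y": 46.0},
    {"pollster": "Poll B", "x": 47.5, "y": 47.0},
    {"pollster": "Poll C", "x": 50.0, "y": 45.5},
]

avg_x = sum(p["x"] for p in polls) / len(polls)
avg_y = sum(p["y"] for p in polls) / len(polls)

print(f"Candidate X: {avg_x:.1f}%  Candidate Y: {avg_y:.1f}%")
print(f"Aggregate margin: X +{avg_x - avg_y:.1f}")
```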
The logical presumption would be that aggregate poll sites select which polls to average in order to favor a certain result. In practice, however, the pollsters used in these final calculations tend to be a good mix of conservative- and liberal-leaning firms, reputable news networks, and smaller, independently run polls.
As always, there is the issue of source reliability. FiveThirtyEight is considered especially reliable because it publishes graded ratings of pollsters.
For example, FiveThirtyEight assigns Trafalgar Group a C- rating, whereas Siena College holds an A+ rating. The only description FiveThirtyEight gives of how its ratings system works is that it is “based on the historical accuracy and methodology of each firm’s polls.”
One could argue that the graded rating system itself is biased. But the average site visitor is still given access to a wide array of pollsters.
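Grades matter because an aggregator can use them to discount low-rated polls. The sketch below is purely illustrative: the grade-to-weight mapping and the poll margins are invented, and this is not FiveThirtyEight’s actual formula.

```python
# Hypothetical sketch of weighting polls by pollster grade.
# The weights and margins are invented, not FiveThirtyEight's formula.
grade_weight = {"A+": 3.0, "A": 2.5, "B": 1.5, "C-": 0.5}

polls = [
    {"pollster": "High-graded firm", "grade": "A+", "margin": 1.0},
    {"pollster": "Low-graded firm", "grade": "C-", "margin": 6.0},
]

unweighted = sum(p["margin"] for p in polls) / len(polls)
weighted = (
    sum(grade_weight[p["grade"]] * p["margin"] for p in polls)
    / sum(grade_weight[p["grade"]] for p in polls)
)

print(f"Unweighted margin: {unweighted:.1f}")  # 3.5
print(f"Weighted margin:   {weighted:.1f}")    # 1.7
```

A low-graded outlier moves a weighted average far less than a raw one, which is one practical effect of publishing grades at all.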
Does it Matter How Long the Poll was Conducted?
This is a tricky question to answer, but most pollsters would lean towards saying “yes”. Logically, the longer a voter is given to answer a survey, the more informed their decision will be. However, this is not an inflexible rule.
Voters should be wary of polls conducted over a number of weeks, especially those with a small sample size, because such polls may be targeting a specific demographic in anticipation of a major political event, which could skew the results.
In other words, if a pollster issues a questionnaire to be filled out by Floridians during a two-week period in which an Independent candidate is campaigning in Florida, that poll would not be as reliable as one conducted away from the campaign trail. According to the National Council on Public Polls (NCPP), voters filling out such a poll may be more inclined to check the “other/Independent” category as opposed to committing to one of the two major party candidates. Such a result may lead the public to believe there is larger support for third parties than election day ultimately delivers.
Typically, most polls are conducted over a three to five day period, with longer polls running about a week.
Why are Sample Sizes so Inconsistent? What Does LV Mean?
As with any poll, it is impossible to guarantee that the same number of people will participate every time. As a general rule, the larger the sample size, the better.
Unfortunately, sample sizes are rarely larger than 1,000, and more often than not, the number of people polled is closer to 500. This is one reason the data is not always accurate.
Take Arizona, for example. The poll listed from Trafalgar Group uses a sample size of 1,087 with a margin of error of 2.89%. Next to that number we see the letters LV in parentheses. LV stands for “likely voters,” one of three populations pollsters commonly survey; the other two are “registered voters” and “all adults.”
As Nate Silver of FiveThirtyEight has shown, “likely voters” is typically the most reliable measurement, since many people who are registered to vote don’t.
Those 1,087 likely voters Trafalgar Group polled are supposed to represent the entire state of Arizona, nearly 7.3 million people.
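How can roughly a thousand respondents stand in for millions of voters? For a simple random sample at 95% confidence, the textbook margin of error shrinks with the square root of the sample size. A quick sketch of that formula (real pollsters adjust for weighting and survey design, so treat this as an approximation):

```python
import math

# Textbook 95%-confidence margin of error for a simple random sample,
# using the worst case p = 0.5. Real polls involve weighting and
# design effects, so this is only an approximation.
def margin_of_error(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1087, 2000):
    print(f"n = {n:>4}: MOE ≈ {margin_of_error(n) * 100:.2f}%")
```

For n = 1,087 this gives about 2.97%, close to the 2.89% Trafalgar reports, and it shows why the 500-person samples mentioned above carry margins of error above 4%.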
According to the Arizona Secretary of State’s website, the total number of registered voters stood at 3.99 million as of August 2020. However, only 2.41 million of those registered voters cast a ballot in the 2018 midterm election (turnout was down by 10% from the 2016 election).
This is why it is important that pollsters indicate their sample size and which of the three populations they are measuring.
While a 2.89% margin of error may seem small, if we prorate that percentage across the 2.41 million voters who turned out in 2018, it comes out to roughly 70,000 votes. President Trump defeated Secretary Clinton by only 91,000 votes in Arizona in 2016. A 2.89% margin of error is thus quite substantial, despite appearing otherwise.
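The arithmetic behind that comparison, as a quick check:

```python
# Quick check of the prorating in the paragraph above.
moe = 0.0289              # Trafalgar's reported margin of error
ballots_2018 = 2_410_000  # ballots cast in Arizona's 2018 midterm
margin_2016 = 91_000      # Trump's 2016 victory margin in Arizona

print(f"Error in votes: {moe * ballots_2018:,.0f}")  # 69,649
print(f"Victory margin: {margin_2016:,}")
```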
Who is Filling out These Polls?
Our final question is the hardest to answer. Currently, most pollsters don’t release data on who they are polling, largely out of privacy concerns. Nonetheless, the issue does present a legitimate question: how are pollsters selecting their participants?
Unfortunately, we simply do not know. We have no idea how factors such as race, age, ethnicity, gender, or sex figure into participant selection. This opacity may be a significant factor in determining political polling’s accuracy and its future relevance as a reliable medium.
For now, we must simply continue to study and scrutinize the numbers.
By Thomas O’Connor