Despite these efforts to achieve a random sample, though, response rates remain shockingly low, especially among younger people, Spanish speakers, Evangelicals, and African-Americans.
If the people who do respond differ systematically from those who do not, then the sample is likely to be biased, and the results of the poll wrong. To correct for this sort of bias in the samples, pollsters make use of weighting.
As such, a pollster might use weighting to, in effect, count each African-American response twice toward the overall results.
Young African-American males are usually the hardest demographic to reach in political polling, and any who happen to be in the sample are likely to be upweighted because of their race, their gender, and their age. But since the weights are cumulative, that one person could represent as much as half a percent of the overall results, potentially letting a few people throw off the entire sample.
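To see how cumulative weights can balloon, consider a rough sketch in Python. The adjustment factors and sample size here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical adjustment factors for a respondent who is underrepresented
# on three dimensions; the factors multiply because weights are cumulative.
w_race, w_gender, w_age = 2.5, 1.3, 1.6
weight = w_race * w_gender * w_age
print(round(weight, 2))  # 5.2

# In a sample of 1,000, an unweighted respondent is 0.1% of the result;
# with a cumulative weight above 5, this one person counts for about 0.5%.
n = 1000
print(f"{weight / n:.1%}")  # 0.5%
```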
Decisions like this give some pollsters the opportunity to push their results one way or the other, for partisan purposes, or to avoid being too far from what other polls are saying. As polling averages have become more prevalent, some pollsters have become nervous about putting out results that are too far from that average, leading them to weight strategically to get their data back towards the mean, or, in some cases, to choose not to release results that look weird.
As a result, the polls, in the aggregate, can miss shifts in public opinion.

The best telephone interviewers are highly experienced and college educated, and paying them is the main cost of political surveys. To cut that cost, some pollsters have turned to automated alternatives. The most common of these is Interactive Voice Response (IVR) polling, in which the live interviewers are replaced with recorded prompts and respondents give their answers by speaking to a computer.
IVR polls also have even lower response rates than traditional phone sampling, seem to encourage more false responses, and cannot legally reach cell phones. Online polls offer another cheap, fast alternative to live-caller polls, but they still face enormous challenges. They have also lowered the barriers to entry, and, as with journalism, there are pluses and minuses to this democratization.
There has been a wave of experimentation with new approaches, but there has also been a proliferation of polls from firms with little to no survey credentials or track record. In 2016, this contributed to a state polling landscape overrun with fast and cheap polls, most of which made a preventable mistake: failing to correct for an overrepresentation of college-educated voters, who leaned heavily toward Hillary Clinton.
Some newcomer polls might provide good data, but poll watchers should not take that on faith. Any survey can be sampled and adjusted to appear representative of the country on certain dimensions, so anyone can make that claim about any poll, regardless of its quality.
The real margin of error is often about double the one reported. The notion that a typical margin of error is plus or minus 3 percentage points leads people to think that polls are more precise than they really are. Why is that? For starters, the margin of error addresses only one source of potential error: the fact that random samples are likely to differ a little from the population just by chance.
But there are three other, equally important sources of error in polling: nonresponse; coverage error, in which not everyone in the target population has a chance of being sampled; and mismeasurement.
Not only does the margin of error fail to account for those other sources of potential error, it implies to the public that they do not exist, which is not true. Several recent studies show that the average error in a poll estimate may be closer to 6 percentage points, not the 3 points implied by a typical margin of error. While polls remain useful in showing whether the public tends to favor or oppose key policies, this hidden error underscores the fact that polls are not precise enough to call the winner in a close election.
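The sampling-only margin of error described above comes from a standard formula. A minimal sketch, assuming a 95% confidence level and a simple random sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Sampling margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 1,000 yields the familiar "plus or minus 3 points" ...
print(f"{margin_of_error(1000):.1%}")  # 3.1%

# ... but this accounts only for random sampling error, not nonresponse,
# coverage error or mismeasurement, which is why total error can be
# closer to 6 points.
```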
Students learning about surveys are generally taught that a very large sample size is a sign of quality because it means that the results are more precise. While that principle remains true in theory, the reality of modern polling is different. Adding more and more interviews from a biased source does not improve estimates. For example, online opt-in polls are based on convenience samples that tend to overrepresent adults who self-identify as Democrats, live alone, do not have children and have lower incomes.
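A quick simulation shows why piling on interviews from a biased source does not help. The 6-point bias and true support level below are made up for illustration, not real survey data:

```python
import random

random.seed(0)

TRUE_SUPPORT = 0.50  # true population share favoring a candidate
BIAS = 0.06          # hypothetical lean of an opt-in panel

def biased_poll(n):
    """Simulate n interviews from a source that runs 6 points high."""
    hits = sum(random.random() < TRUE_SUPPORT + BIAS for _ in range(n))
    return hits / n

for n in (500, 2000, 8000, 32000):
    est = biased_poll(n)
    print(f"n={n:>5}: estimate {est:.1%}, error {abs(est - TRUE_SUPPORT):.1%}")

# Larger samples shrink the random noise, but the built-in bias never
# goes away: every estimate stays roughly 6 points off.
```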
While an online opt-in survey with 8,000 interviews may sound more impressive than one with 2,000 interviews, a study by the Center found virtually no difference in accuracy.

There has long been anecdotal evidence that when the public is told a candidate is extremely likely to win, some people may be less likely to vote. Now there is scientific research to back up that logic: a team of researchers found experimental evidence that when people have high confidence that one candidate will win, they are less likely to vote.
This helps explain why some analysts of polls say elections should be covered using traditional polling estimates and margins of error rather than speculative win probabilities (also known as probabilistic forecasts). Taking 2016 as an example, both Donald Trump and Clinton had historically poor favorability ratings.
That turned out to be a signal that many Americans were struggling to decide whom to support and whether to vote at all. By contrast, a raft of state polls in the Upper Midwest showing Clinton with a lead in the horse race proved to be a mirage. This year, there will be added uncertainty in horse race estimates stemming from possible pandemic-related barriers to voting.
Far more people will vote by mail — or try to do so — than in the past, and if fewer polling places than usual are available, lines may be very long. All of this is to remind us that the real value in election polling is to help us understand why people are voting — or not voting — as they are.

Historically, public opinion researchers have relied on the ability to adjust their datasets using a core set of demographics to correct imbalances between the survey sample and the population.
Three examples from a summer survey illustrate the point. Shifting the focus to party affiliation among nonvoters, we see even less fidelity of partisans to issue positions typically associated with those parties. Adding more Trump voters and Republicans does add more skeptics about immigration, but nearly a third of the additional Trump voters say immigrants strengthen American society, a view shared by about half of Republican nonvoters. This means that our survey question on immigration does not change in lockstep with changes in how many Trump supporters or Republicans are included in the poll.
Similarly, the Biden voter group includes plenty of skeptics about a larger government. Pump up his support and you get more supporters of bigger government, but, on balance, not as many as you might expect.
Not all applications of polling serve the same purpose. We expect and need more precision from election polls because the circumstances demand it. In a closely divided electorate, a few percentage points matter a great deal.
In a poll that gauges opinions on an issue, an error of a few percentage points typically will not matter for the conclusions we draw from the survey. Those who follow election polls are rightly concerned about whether those polls are still able to produce estimates precise enough to describe the balance of support for the candidates. Election polls in highly competitive elections must provide a level of accuracy that is difficult to achieve in a world of very low response rates.
Only a small share of the survey sample must change to produce what we perceive as a dramatic shift in the vote margin and potentially an incorrect forecast. In the context of the 2020 presidential election, a change of that small size could have turned a spot-on estimate of Biden's 4-point lead into a clear miss.
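The arithmetic is unforgiving: every respondent who switches sides moves the reported margin by twice that respondent's share of the sample. A sketch with hypothetical numbers:

```python
biden, trump = 0.52, 0.48               # hypothetical two-way poll shares
print(f"margin: {biden - trump:+.0%}")  # +4%

# If just 2% of the sample had answered the other way, the 4-point
# lead becomes a tie: a 2% shift moves the margin by 4 points.
shift = 0.02
print(f"after shift: {(biden - shift) - (trump + shift):+.0%}")  # +0%
```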
Differences of a magnitude that could make an election forecast inaccurate are less consequential when looking at issue polling. Unlike the measurement of an intended vote choice in a close election, the measurement of opinions is more subjective and likely to be affected by how questions are framed and interpreted.
Moreover, a full understanding of public opinion about a political issue rarely depends on a single question like the vote choice. Often, multiple questions probe different aspects of an issue, including its importance to the public. Astute consumers of polls on issues usually understand this greater complexity and subjectivity and factor it into their expectations for what an issue poll can tell them.
The goal in issue polling is often not to get a precise percentage of the public that chooses a position but rather to obtain a sense of where public opinion stands. For example, differences of 3 or 4 percentage points in the share of the public saying they would prefer a larger government providing more services matter less than whether that is a viewpoint endorsed by a large majority of the public or by a small minority, whether it is something that is increasing or decreasing over time, or whether it divides older and younger Americans.
But good pollsters take many steps to improve the accuracy of their polls. Good survey samples are usually weighted to accurately reflect the demographic composition of the U.S. population. The samples are adjusted to match parameters measured in high-quality, high-response-rate government surveys that can be used as benchmarks.
Many opinions on issues are associated with demographic variables such as race, education, gender and age, just as they are with partisanship.
At Pew Research Center, we also adjust our surveys to match the population on several other characteristics, including region, religious affiliation, frequency of internet usage, and participation in volunteer activities. And although the analysis presented here explicitly manipulated party affiliation among nonvoters as part of the experiment, our regular approach to weighting also includes a target for party affiliation that helps minimize the possibility that sample-to-sample fluctuations in who participates could introduce errors.
Collectively, the methods used to align survey samples with the demographic, social and political profile of the public help ensure that opinions correlated with those characteristics are more accurate.
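One common way to align a sample with population benchmarks is raking (iterative proportional fitting). A minimal sketch on made-up data (the respondents and target margins below are invented for illustration, not drawn from any actual survey):

```python
# Six made-up respondents and hypothetical population targets on two traits.
respondents = [
    {"age": "18-29", "party": "Dem"},
    {"age": "18-29", "party": "Rep"},
    {"age": "30+",   "party": "Dem"},
    {"age": "30+",   "party": "Dem"},
    {"age": "30+",   "party": "Rep"},
    {"age": "30+",   "party": "Rep"},
]
targets = {
    "age":   {"18-29": 0.30, "30+": 0.70},
    "party": {"Dem": 0.48, "Rep": 0.52},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # alternate adjustments until the margins converge
    for trait, margins in targets.items():
        total = sum(weights)
        for category, share in margins.items():
            members = [i for i, r in enumerate(respondents) if r[trait] == category]
            current = sum(weights[i] for i in members)
            for i in members:
                weights[i] *= share * total / current

total = sum(weights)
for trait, margins in targets.items():
    for category, share in margins.items():
        got = sum(w for w, r in zip(weights, respondents) if r[trait] == category)
        print(f"{trait}={category}: weighted {got / total:.2f} vs target {share:.2f}")
```

Real weighting systems use many more dimensions and trim extreme weights, but the alternating-adjustment idea is the same.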
As a result of these efforts, several studies have shown that properly conducted public opinion polls produce estimates very similar to benchmarks obtained from federal surveys or administrative records. While these studies do not provide direct evidence of the accuracy of opinion measures on issues, they suggest that polls can accurately capture a range of phenomena, including lifestyle and health behaviors, that may be related to public opinion.
A lack of trust in other people or in institutions such as governments, universities, churches or science, might be an example of a phenomenon that leads both to nonparticipation in surveys and to errors in measures of questions related to trust.
Surveys may have a smaller share of distrusting people than is likely true in the population, and so measures of these attitudes and anything correlated with them would be at least somewhat inaccurate. Polling professionals should be mindful of this type of potential error.
And we know that measures of political and civic engagement in polls are biased upward. Polls tend to overrepresent people interested and engaged in politics as well as those who take part in volunteering and other helping behaviors.
Pew Research Center weights its samples to address both of these biases, but there is no guarantee that weighting completely solves the problem. Errors in the partisan composition of polls can go in both directions, but that does not mean pollsters should quit striving to have their surveys accurately represent Republican, Democratic and other viewpoints. Despite cautions from those inside and outside the profession, polling will continue to be judged, fairly or not, on the performance of preelection polls. Pew Research Center is exploring ways to ensure we reach the correct share of Republicans and that they are comfortable taking our surveys.
We are also trying to continuously evaluate whether Republicans and Trump voters — or indeed, Democrats and Biden voters — in our samples are fully representative of those in the population. However, this study is not without its limitations. The underlying mechanism that weakens the association between levels of candidate support or party affiliation and opinions on issues should apply to polls conducted by any organization at any level of geography, but we examined it using only our surveys.
Another important assumption is that the Trump voters and Biden voters who agreed to be interviewed are representative of Trump voters and Biden voters nationwide with respect to their opinions on issues.
We cannot know that for sure.