The will of the people is an interesting thing to ponder, sometimes.
http://www.nytimes.com/2011/03/01/us/01poll.html?hp (yeah, yeah I know the source but they do their polls using sound methodology)
Originally Posted by discreetgent
Polling and statistics are sensitive to the people/regions polled. I am not a big fan of polls even when they lean right, although I, just like others, will use them. I'm sure YOU know this stuff, but I thought I'd include it for those that DO shape their personal opinions based on polls.
Nonresponse bias
Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative of the population; this is non-response bias (the answers of respondents differ from the potential answers of those who did not answer). Because of the resulting selection bias (an error in choosing which individuals or groups take part in the study), the characteristics of those who agree to be interviewed may be markedly different from those who decline. That is, the actual sample is a biased version of the universe the pollster wants to analyze. In these cases, bias introduces new errors, in one direction or the other, on top of the errors caused by sample size. Error due to bias does not become smaller with larger sample sizes, because taking a larger sample simply repeats the same mistake on a larger scale. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, the final results should be unbiased. If the people who do not answer hold different opinions, then the results are biased. In terms of election polls, studies suggest that these effects are small, but each polling firm has its own techniques for adjusting weights to minimize selection bias.
Response bias
Survey results may also be affected by response bias, where the answers given by respondents do not reflect their true beliefs (for example, because they answer the way they think the questioner wants them to). This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or please their clients, but more often it results from the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll, e.g. by advocating a more extreme position than they actually hold in order to boost their side of the argument, or by giving rapid, ill-considered answers in order to hasten the end of their questioning. Respondents may also feel social pressure not to give an unpopular answer. For example, respondents might be unwilling to admit to unpopular attitudes like racism or sexism, and thus polls might not reflect the true incidence of these attitudes in the population. In American political parlance, this phenomenon is often referred to as the Bradley effect (less commonly, the Wilder effect): a theory proposed to explain discrepancies between voter opinion polls and election outcomes in some US elections where a white candidate and a non-white candidate run against each other. If the results of surveys are widely publicized, this effect may be magnified, a phenomenon commonly referred to as the spiral of silence (a theory propounded by the German political scientist Elisabeth Noelle-Neumann).
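The understatement effect can be sketched the same way. Purely illustrative numbers again: suppose 30% of the population actually holds an unpopular view, but anyone holding it only admits to it 70% of the time when asked.

```python
import random

random.seed(1)

# Hypothetical: 30% of the population holds an unpopular view, but
# holders admit it only 70% of the time (social-pressure misreporting;
# both rates are made up for illustration).
def survey(n, true_rate=0.30, admit_rate=0.70):
    admitted = 0
    for _ in range(n):
        holds_view = random.random() < true_rate
        if holds_view and random.random() < admit_rate:
            admitted += 1
    return admitted / n

# Reported incidence converges to 0.30 * 0.70 = 0.21, understating the
# true 30% -- the survey measures willingness to admit, not belief.
print(round(survey(200_000), 3))
```

This is the mechanism behind the Bradley effect: the poll faithfully records what people say, which is not the same as what they believe.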
Wording of questions
It is well established that the wording of the questions, the order in which they are asked, and the number and form of the alternative answers offered can all influence the results of polls. For instance, the public is more likely to indicate support for a candidate whom the interviewer describes as one of the "leading candidates". That framing itself introduces a subtle bias toward that candidate, as does lumping some candidates into an "other" category or vice versa. Comparisons between polls therefore often come down to the wording of the question. On some issues, question wording can produce quite pronounced differences between surveys. This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey.
A common technique to control for this bias is to rotate the order in which questions are asked. Many pollsters also use split sampling: two different versions of a question are prepared, and each version is presented to half the respondents.
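A split-sample design can be sketched like so. The 8-point wording effect below is an invented assumption, just to show how comparing the two halves isolates the effect of the wording itself.

```python
import random

random.seed(2)

# Split-sample sketch: each respondent sees one of two wordings of the
# same question. Assume (purely for illustration) that wording B nudges
# agreement up by 8 percentage points over a 50% baseline.
def respond(wording):
    base = 0.50
    bump = 0.08 if wording == "B" else 0.0
    return random.random() < (base + bump)

results = {"A": [], "B": []}
for i in range(100_000):
    wording = "A" if i % 2 == 0 else "B"   # alternate the versions
    results[wording].append(respond(wording))

# The gap between the halves estimates the wording effect directly,
# since everything else about the two samples is identical.
for w, answers in results.items():
    print(w, round(sum(answers) / len(answers), 3))
```

Because respondents are assigned to a wording at random, any systematic difference between the two halves can be attributed to the question wording rather than to who was sampled.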