Certain limitations hold true across all time periods and methodologies. Most pollsters label surveys of the general public “national adult,” but the term is generally understood to mean the non-institutionalized, civilian population aged 18 and older, a distinction not always clear to poll readers. Most of the 2.6 percent of the U.S. population whom the Census classifies as living in group quarters are excluded from most surveys, including those living in residential treatment centers, skilled nursing facilities, group homes, military barracks, prisons, and workers’ dormitories, as well as those living in non-sheltered outdoor locations or in temporary shelters like tent encampments. While cell phone and online polling may reach some of these populations, these coverage gaps are important to keep in mind, particularly when considering attitudes on subjects specifically relevant to such populations, like long-term care or incarceration.
Language limitations can also exclude respondents. Some U.S. surveys are fielded exclusively in English, while others offer the choice of English or Spanish to participants. Some special surveys aimed at reaching particular subpopulations have questionnaires translated into additional languages.
The earliest health polls in iPoll offer unique insights into a wide array of issues affecting the health of the nation in the 1930s and 1940s. However, the face-to-face quota-based methods used at the time largely left decisions about respondent selection to the interviewers, who may not always have followed the instructions to find a balanced sample, especially when doing so meant visiting areas that seemed dangerous or required substantially more walking. These choices could skew the respondent sample toward more affluent Americans.
The quotas themselves could also introduce biases. Early Gallup polls used voting as an important control in developing their quotas, giving greater weight to those regions and populations that voted at higher rates. This resulted in systematic underrepresentation of women and Black Americans, particularly in the South where Jim Crow laws prevented Black Americans from voting.
Nonetheless, these polls’ accuracy in several elections indicates that they were reasonably reliable, and they remain invaluable to researchers interested in health attitudes and behaviors at the time. Still, readers should expect less precision than in surveys with more refined methods and should keep the potential biases noted above in mind.
The shift in the U.S. to probability-based polling in the 1950s addressed many of these biases, but some remained. The advent of telephone polling in the 1970s introduced a new source of non-coverage: those who did not have a phone. This population was small at the emergence of phone polling and has become smaller since. In 2019, roughly 1% of the adult population lived in households without a landline or cell phone. This population tends to be less educated, more likely to live in poverty, and less likely to be a homeowner than those with phones.
Other technologies can complicate telephone polling. Cell phones presented a challenge to pollsters when they first began to replace landlines, but polling firms rapidly adopted methods to include cell phones in telephone samples. People who depend on assistive devices for communicating by telephone, such as those with hearing loss, are generally not included in surveys.
Online polls fall into two categories: nonprobability and probability. Nonprobability polls are limited in coverage to those who have internet access, unless they are supplemented by a poll conducted another way, such as by telephone. These polls depend on weighting to make their samples representative of the total population.
Online probability panel polls recruit a pool of respondents using random-digit dialing (RDD) or address-based sampling. Survey samples are then drawn from this panel.
Both nonprobability and probability polls conducted online can be biased toward those who spend more time on the internet. But online polls can also address a limitation of other polling methods: online panels can be very large, and information about participants is retained from survey to survey. This allows researchers to target small subpopulations, like mothers of infants or people with particular health conditions, who would be difficult or costly to reach otherwise.
Once a sample has been selected and respondents contacted, pollsters implement methods intended to improve sample balance. Callbacks in telephone and in-person polls, or reminders or incentives to participate in online polls, can increase participation among those who are less likely to answer the phone or respond to emails. After the fieldwork ends, pollsters use weighting to bring the final sample in line with the national population in terms of sex, race, education, and other characteristics, using Census demographics or other benchmarks. These efforts improve polling accuracy, but nothing can ensure perfect representation.
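To make the weighting step concrete, here is a minimal sketch of one common approach, post-stratification: each respondent receives a weight equal to their demographic cell’s share of the population (from Census benchmarks or other sources) divided by that cell’s share of the completed sample. The cell labels and population shares below are hypothetical, and real pollsters typically weight on several characteristics at once, often with iterative methods such as raking.

```python
from collections import Counter

def poststratify(sample_cells, population_shares):
    """Return one weight per respondent: the respondent's cell share in
    the population divided by that cell's share of the sample."""
    counts = Counter(sample_cells)
    n = len(sample_cells)
    return [population_shares[c] / (counts[c] / n) for c in sample_cells]

# Hypothetical sample in which college graduates are overrepresented:
# 60% of respondents vs. an assumed 40% population benchmark.
cells = ["college"] * 6 + ["no_college"] * 4
benchmarks = {"college": 0.40, "no_college": 0.60}

weights = poststratify(cells, benchmarks)
# Overrepresented college respondents are weighted down (0.40/0.60),
# underrepresented non-college respondents weighted up (0.60/0.40).
```

Note that the weights preserve the overall sample size (they sum to the number of respondents), while shifting influence between groups so the weighted sample matches the benchmark shares.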
The most perfectly representative sample in the world can still misrepresent public opinion if the question wording is leading, unbalanced, or simply too confusing. The presence of other questions in the survey instrument and the order in which they are presented can also affect responses. For example, asking political questions before policy questions may prime the respondent to give answers more aligned with their party’s position.
The method used to contact respondents – telephone calls, mail, web, text messages, in-person interviews, etc. – is called the survey mode. Different modes can lead to different results because of the sampling issues described above, but also because different types of interactions affect respondents differently. Questions about highly sensitive topics, like sexual behavior or substance use, might be answered more truthfully when the respondent feels more anonymous. But a well-trained interviewer may be effective at encouraging a reluctant respondent to participate, thereby reducing overall bias.
In some cases, different field organizations can get different results for the same survey questions using similar polling methods. This can be attributed to several factors, such as the choice of weighting demographics, the number of callbacks or reminders used, the visual design of an online polling instrument, or interviewer training. One example of interviewer-training differences is the handling of “don’t know” as a volunteered response. Some organizations ask their interviewers to probe for an answer before accepting “don’t know,” while others allow interviewers to accept it immediately. The same question asked of the same population might therefore yield differing levels of “don’t know” responses.