How well do web surveys represent the public?
Arguably the most significant development in survey research over the past decade has been the rapid emergence of web-based research methods. This trend has been driven primarily by the spread of online access and activity across the population; today roughly nine in ten citizens are connected to the Internet, and a majority go online every day.
Online research is not without its limitations (click here for a cogent review), chief among them the difficulty of reaching representative samples of the target population. Unlike telephone surveys (in which every individual with a telephone can in theory be reached), online surveys of the general population rely on panels of individuals recruited through various means (in most cases through popular websites). The composition of such panels is therefore not representative of the full population, and it is difficult to determine exactly who is left out. This is a major limitation for research that aims to extrapolate findings to the broader population, and the reason online methods are not considered acceptable for many research applications that require accurate population estimates.
The Pew Research Center has just published a helpful new analysis of how online and offline populations differ, both in their composition and in how they answer survey questions. It accomplished this through its unique American Trends Panel, which was created to accurately mirror the US adult population.
What did they find? The Pew analysis compared the results of online and offline samples on 406 survey items covering a broad range of topics, and in most cases the differences were “quite small.” But there were notable exceptions, particularly on questions related to the Internet and technology, and to a lesser extent on political knowledge and financial circumstances. Perhaps more important are the differences that emerged within certain subgroups in which the online and offline profiles differed. The one in five Americans who were surveyed offline were more likely to be 65 years and older, Black, rural, Protestant, and to have lower levels of education and income. And within these groups, the opinions of online and offline samples are noticeably different (that is, older Americans surveyed online are distinct from their counterparts who participated offline). This means that online surveys will not effectively represent the perspectives of these segments of the population, even if the correct proportion of them is included in the survey sample.
The Pew analysis concludes that the overall coverage error (or bias) is modest in scope, but clearly present. While such coverage error doesn’t challenge the legitimacy of online research, it poses a substantive weakness for studies that seek to produce valid population estimates. This will be most problematic for research topics pertinent to broad public interest and especially ones that have implications for vulnerable segments of the population (e.g., social programs, access to health care).
This valuable work focuses on the US population, but similar findings would likely apply in Canada. What Canada lacks is an organization like the Pew Research Center with the mandate and resources to undertake comparable work in this country.
Katrina 10 years later – How is New Orleans doing?
Ten years ago this month Hurricane Katrina came ashore at New Orleans and devastated parts of the city, with aftereffects taking an even greater toll on the community in terms of dislocation, government mismanagement, racism and post-traumatic stress. Reconstruction of the infrastructure has been underway and can be easily measured, but what is less clear is how well residents are coping with the aftermath. This is where social research is needed, and the Kaiser Family Foundation has taken important leadership in launching a sustained study of residents post-Katrina, with city-wide surveys in 2006, 2008, 2010 and most recently in 2015 (10 years after the tragedy).
The latest survey was conducted June 2 – July 5, 2015, among 1,517 randomly selected adults ages 18 and older residing in Orleans Parish, Louisiana (the city of New Orleans). Computer-assisted interviews conducted via landline telephone (705) and cell phone (812) were carried out in English and Spanish.
The survey results show the remarkable progress New Orleans has made in the 10 years since Katrina, as well as the stark challenges that remain. But not surprisingly, the biggest challenges were not brought about by the storm itself but by how it exacerbated the vast difference in the living circumstances of the city’s African American and white residents. This gap is reflected not only in the rates at which African Americans and whites report ongoing financial problems and a lack of neighborhood services, but also in their feelings about how far New Orleans has come in its recovery, and their views of the city as a good place for young people.
While a majority of both blacks and whites remain optimistic about New Orleans’ future, a third of African Americans and nearly half of young adults are considering moving away. This is a telling and troubling indicator for the future of this unique American city.
Survey research establishes a beachhead in Cuba
Survey research has spread to virtually every corner of the populated world in recent years, extending even into conflict-ridden countries like Iraq and Afghanistan. The list of countries where such research is not possible is very small, and it recently shrank by one with the removal of Cuba. Earlier this year, the Washington Post reported on what is likely the first independently commissioned public opinion survey of Cubans living in Cuba. The survey was conducted by Bendixen & Amandi International (a Miami-based research and communications firm) on behalf of the media companies Univision Noticias and Fusion. The research was conducted without the authorization of the Cuban government, and it is interesting to speculate whether the government was aware it was taking place.
The survey was conducted in-person (the standard method for most non-OECD countries) with a representative sample of 1,200 Cuban adults over a 10-day period in March 2015. Interviews were conducted by Cuban residents trained by the research firm, and the interviews were recorded on handheld electronic devices so that responses could be sent electronically to a server outside the country. As reported, the demographic profile of the survey participants closely reflects the known distribution of the population by race, gender, and religion, while somewhat underrepresenting older generations.
What makes this research so important is that it provides a unique, empirically based glimpse into the minds of Cubans, one that is likely to offer a more accurate picture than the anecdotal and politically tinged assumptions that have long prevailed.
The survey tells an interesting story. On the one hand, there is a positive story in that most Cubans express optimism about their future and that of their family, and say they are satisfied with their education and health care systems. And nearly all Cubans agree that normalization with the USA (now finally about to happen) is good for Cuba, and most have a positive view of US President Barack Obama. On the other hand, there are areas of clear discontent: especially with the state-run economy, but also with the political system. Only one in five feel comfortable expressing themselves freely in public, and more than half say they would like to leave and go live in another country (the USA being the top choice of most, followed by European countries and Canada).
Also of interest are the media habits of Cubans. Radio is the principal source of news and entertainment, with only one in six (16%) reporting access to the Internet (and mostly outside of the home). Among those with such access, four in ten use social media (with Facebook being the most popular) and it is used primarily to communicate with others outside the country (no one in the sample said they use social media only to connect with other Cubans in the country).
Given the politics and culture of the country, did Cubans feel it was safe to provide honest answers to the survey questions? There is no way to determine this, but it is likely that some respondents declined to answer certain questions even if they had an opinion. About one in five declined to answer questions about whether the country should have more political parties, whether the US is a friend of Cuba, and whether they hold a positive or negative opinion of the Catholic Church in Cuba. This limitation notwithstanding, the survey represents a landmark first picture of public opinion in Cuba, and could well serve as a precedent for further research.
NYT on election polling – does it have a future?
Political polls have been part of the fabric of elections in democracies since the 1970s, and for most of this period the biggest ongoing controversy has been whether polling results influence voters, and if so, whether this is a good thing.
Today, political polls face a much more fundamental challenge, resulting from their uneven performance in predicting election outcomes in the past few years. There is a growing chorus of skepticism, if not dismissiveness, about whether polling is still valid. Polling has become a target of criticism from many quarters, often from people who do not adequately understand how polling is done or how it should be used.
The most valuable perspective comes from those who are actively involved in the practice of survey research. The most recent published piece – and among the best so far – appeared in the June 21, 2015 edition of The New York Times, by Cliff Zukin, a professor of public policy and political science at Rutgers University and one of the leading experts in this field.
Professor Zukin does not pull punches in describing what his profession is facing: “Election polling is in near crisis, and we pollsters know.” And he proceeds to outline the specific trends that are making election polling increasingly unreliable (the growth of cell phones leading to less representative samples, and a decline in people’s willingness to answer surveys leading to lower response rates).
His conclusion is stark:
“So what is the solution for election polling? There isn’t one. Our old paradigm has broken down, and we haven’t figured out how to replace it. . . Polls and pollsters are going to be less reliable. We may not even know when we’re off base. What this means for 2016 is anybody’s guess.”
This is hardly welcome news for those who look to political polls as a gauge of voter behaviour, and this state of affairs is as much an issue in Canada (and for our own upcoming federal election in October). But the profession is also actively working on new ways of conducting surveys that will work in today’s globalized online world. This will be a turbulent but exciting process to watch, so stay tuned.
Polling methods are under the gun, but remain essential to democracy
It is by now painfully clear that public opinion surveys and polls of general populations are getting more difficult to do effectively, even as new technologies have arrived to make them cheaper and easier to conduct (e.g., SurveyMonkey). Recent election polling in the UK, Poland and Israel proved well off the mark, just the most recent examples of how challenging it has become to take an accurate reading of voter intentions.
The latest voice to weigh in on this issue is Nate Silver, creator of the well-known data aggregation website FiveThirtyEight.com, which pioneered new methods for aggregating poll results to forecast election outcomes (with much success in recent US election cycles). Some might assume that this type of big data methodology is a rival, and possibly a replacement, for traditional survey methods. But in fact the aggregation methods rely on polling data (and lots of it) to work. Nate Silver is not a pollster, but he is well positioned to comment on the current landscape and he has done so in a recent post on his website entitled “Polling is getting harder, but it’s a vital check on power.”
Silver’s piece is well worth reading, and the following excerpts in particular:
“So if the polls fared poorly, does that mean you should have listened to the pundits after all? Not really: In these elections, the speculation among media insiders was usually no better than the polls and was often worse. Almost no one, save perhaps Mick Jagger, assigned much of a chance to the Conservatives’ big win in the U.K. last month, with some betting shops offering odds of 25-to-1 against a Conservative majority.”
“Polls are also essential to understanding public opinion on a host of issues that people never get a chance to vote upon. How do Americans feel about higher taxes on the rich? The Keystone XL pipeline? Abortion? Capital punishment? Obamacare?
“Left to their own devices, politicians are not particularly good at estimating prevailing public opinion. Neither, for the most part, are journalists. One reason that news organizations like The New York Times and (FiveThirtyEight partner) ABC News continue to conduct polls — at great expense and at a time when their newsrooms are under budgetary pressure — is as a corrective to inaccurate or anecdotal representations of public opinion made by reporters based mostly in New York and Washington. Polling isn’t a contrast to “traditional” reporting. When done properly, it’s among the most rigorous types of reporting, consisting of hundreds or thousands of interviews with statistically representative members of a particular community.”
Crimeans weigh in on annexation, one year later
It has now been a year since the Russian annexation of Crimea, which contravened international law and raised international tensions between East and West rarely seen since the end of the Cold War. The Russian take-over of this region from Ukraine was widely viewed in western democracies as a belligerent political move carried out by military might against the wishes of many in the local population. But recent public opinion research reveals a different picture, in which a strong majority of Crimean residents approve of the annexation and believe it has been a positive change for their region.
The research comes from a recent public opinion study commissioned by openDemocracy, and conducted in December 2014 by the Moscow-based Levada Center. Unlike many surveys reported in western media, this survey was in-depth (conducted by telephone in Russian), encompassing about 150 questions covering a range of topics about identity, politics, media consumption and general issues facing the region.
Results from the survey show that a strong majority of Crimeans approve of the Russian annexation, with 84 percent of Russian and Ukrainian ethnic groups saying it was “absolutely the right decision”, with Ukrainian sentiment only modestly lower than that of Russians. The small minority of ethnic Tatars (who make up an estimated 12% of the population) is split between approval and disapproval, with 20 percent saying the annexation was “absolutely” right.
Consistent with these results is the fact that few Crimeans consider themselves to be “European”, in contrast to sentiments in other parts of Ukraine. And a clear majority (85%) expressed the view that Crimea is now moving in the right direction, in contrast to previous polling (e.g., 6% indicated this view in a 2009 survey by the International Republican Institute). Ethnic Tatars are largely divided on these questions.
The research shows that the vast majority of Crimeans of Russian and Ukrainian identity approve of the recent annexation of their region, with the Tatar minority divided. Does popular will trump international law? openDemocracy describes this as an example of an act that is “illegal but legitimate”, the same words used almost 20 years ago to describe NATO’s intervention that led to Kosovo’s separation from Serbia.
Is this survey legitimate, in terms of accurately portraying the opinions of the Crimean population? openDemocracy describes the Levada Center as having a reputation for “integrity, professionalism, and independence”, and there was no indication of government interference with the survey fieldwork. There is no way to validate the results other than to await further research that might be undertaken by organizations based outside of Russia.
Apart from providing important insights about the public mood in Crimea, this research provides a current and compelling example of the valuable role that survey research can play in international affairs. This type of research may not contribute directly to resolving political disputes, but it does provide necessary empirical evidence to settle key questions about public opinion that would otherwise be a battle of anecdotes and political spin.
Research Industry wrestles Margin of Error monkey
Almost any time you read about a public opinion poll you will see a sentence, usually at the end, stating a “margin of error” percentage, “plus or minus.” Those who know something about research will understand this to be a statistical measure of the representativeness of the sample of survey respondents recruited from the broader population under study. Those who know how survey research is done are likely aware that there is growing controversy about the use of this statistic as an indicator of survey quality.
The issue boils down to the following: the margin of error statistic applies to probability samples, such as those historically used for telephone surveys, where every household is theoretically available to be sampled for a given survey. Such samples are increasingly difficult (and costly) to generate, and most surveys today are conducted through online methods that rely on non-probability samples. And yet the research industry and its clients continue to rely heavily on margin of error as the sine qua non indicator of survey accuracy.
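For context, the statistic itself is a simple textbook calculation that only holds for a probability sample. A minimal sketch in Python (the function name is mine; the p = 0.5 default reflects the conservative convention used in poll reports):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion from a simple random sample.

    Uses the normal approximation; p=0.5 gives the conservative maximum,
    which is what poll reports typically quote. z=1.96 corresponds to
    the usual 95% confidence level ("19 times out of 20").
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
moe = margin_of_error(1000)
print(f"+/- {moe * 100:.1f} percentage points, 19 times out of 20")
# → +/- 3.1 percentage points, 19 times out of 20
```

The formula assumes every member of the population had a known, non-zero chance of selection, which is precisely the assumption an opt-in online panel cannot satisfy.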
The issue is well known within the research community but rarely discussed openly, until now. Annie Pettit of Peanut Labs recently hosted a well-attended webinar on this topic. The webinar featured four senior-level researchers from the market research industry who discussed the relevance and applicability of margin of error in today’s world. The discussion is a bit dry and, since the webinar is aimed at industry practitioners, may be difficult to follow for those who lack a basic understanding of margin of error and how it works. But it is well worth listening to for practitioners, clients, journalists, and anyone who wants a better understanding of how surveys are done.
Not surprisingly, the four panellists all agreed that margin of error statistics are no longer as relevant in survey research today, and that they may do more harm than good by providing irrelevant and possibly misleading information about survey data quality. So why does the practice persist? In part because no other metric of survey quality offers the conciseness and face validity of margin of error. There are numerous sources of error that can affect the accuracy of survey results, and these are complicated if not impossible to measure.
The most revealing insight to come out of the webinar is how research companies are stuck with an irrelevant metric of survey quality because their clients demand it. Several panellists noted how they write survey reports that include a margin of error and then state that it does not actually apply to the results (as a way to appease clients who insist the statistic be included). One panellist commented that to stop quoting margin of error even when it does not apply could well risk the loss of valued clients. What this reveals is an underlying conflict between the science and business aspects of market research in today’s world. Commercial and media clients need data to drive or justify decisions, and they need to show their data is sound. Margin of error has been cast in the role of providing that seal of approval, and the inconvenient truth behind the science is easily ignored.
Not all survey research is conducted for business, and it would be illuminating to also hear the perspective on margin of error from practitioners in government, university and non-profit settings who are focused more on sound data than business confidence. Perhaps this will be the topic of a future webinar.
You can listen to the Peanut Labs webinar in its entirety here.
When survey research goes to war
Public opinion surveys are used for many purposes, and some have much less profile than others. A good example is how survey research is now being used by governments and their militaries as a counter-insurgency tool in conflict areas. This research flies largely under the media radar, but is nicely discussed in a recent Monkey Cage blog post in the Washington Post by Andrew Shaver and Yang-Yang Zhou (both Ph.D. candidates in political science at Princeton University).
Shaver and Zhou discuss major research projects undertaken by US-led coalition forces in Afghanistan and Iraq to measure local population opinions and sentiment in support of military operations. These efforts are substantial in scope – in the case of Iraq entailing in-person interviews every month over a five-year period, cumulatively totalling around 200,000 interviews. Topics included level of support for insurgent attacks against the coalition and Iraqi government forces, satisfaction with a range of public goods and services, and expectations about the capabilities of Iraqi security forces. This is relevant because counterinsurgency initiatives are unlikely to succeed without local public support.
The scope of such ongoing investment would suggest that the research is proving valuable in helping to anticipate challenges facing military operations as well as measure progress in achieving public support. Shaver and Zhou report the data collected in Iraq revealed a clear positive relationship between public support for insurgent attacks against coalition forces and the actual number of such attacks. But they point out that such data do not tell us whether one leads to the other (does growing public sentiment lead to more attacks, or do such attacks result in more popular support?).
As well, the authors raise what they aptly describe as “the more fundamental and less exciting question of whether the survey responses accurately reflect the attitudes of the citizens they are designed to capture.” Do Iraqis and Afghanis tell the truth when being interviewed on surveys conducted on behalf of an occupying army? There is no way to measure this precisely, but it would have to be a concern to those sponsoring such research. Shaver and Zhou briefly outline some of the methodological approaches that have been developed to obtain accurate answers to sensitive survey questions. But these approaches were developed using western populations accustomed to survey participation, and their effectiveness in other cultures and contexts remains to be established.
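One well-established example of such an approach is the randomized response technique, in which a private chance device decides whether the respondent answers the sensitive question or an innocuous one, so the interviewer never knows which answer they are seeing. A minimal simulation sketch (all parameter values are hypothetical, chosen purely for illustration):

```python
import random

def simulate_randomized_response(true_prevalence, n, p_sensitive=0.7,
                                 innocuous_yes=0.5, seed=42):
    """Simulate the 'unrelated question' randomized response design.

    With probability p_sensitive each respondent answers the sensitive
    question truthfully; otherwise they answer an innocuous question
    (e.g. "Is your birthday in the first half of the year?") whose
    yes-rate (innocuous_yes) is known. The interviewer records only
    yes/no, never which question was actually answered.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        if rng.random() < p_sensitive:
            yes += rng.random() < true_prevalence  # truthful sensitive answer
        else:
            yes += rng.random() < innocuous_yes    # innocuous answer
    observed = yes / n
    # The observed yes-rate is a mixture:
    #   observed = p_sensitive * pi + (1 - p_sensitive) * innocuous_yes
    # so the sensitive prevalence pi can be backed out:
    return (observed - (1 - p_sensitive) * innocuous_yes) / p_sensitive

est = simulate_randomized_response(true_prevalence=0.30, n=100_000)
print(f"estimated prevalence: {est:.2f}")  # close to the true 30%
```

The privacy comes from the mixture: no individual yes/no reveals anything definite, yet the aggregate prevalence is recoverable, at the cost of a larger variance than direct questioning.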
Apropos this issue, the New England Chapter of the American Association of Public Opinion Research (AAPOR) is hosting a half day mini-conference entitled “New Frontiers in Preventing, Detecting, and Remediating Fabrication in Survey Research.” This free event will be held in Cambridge, MA on February 13, 2015 (and also broadcast over the web via WebEx). The event is likely to cover the issues facing surveys conducted by governments overseas, as the agenda will include speakers from the US State Department and the Arab Barometer.
What’s wrong with online survey research methods?
Perhaps the most significant trend in market and public opinion research in the past decade has been the emergence of online research methods as the dominant form of survey data collection. This trend has taken hold for three reasons: a) to leverage the expanding array of digital technologies and their rapid adoption across the population; b) to realize greater efficiencies and lower costs in collecting survey data; and c) to avoid having to deal with the challenges associated with telephone interviewing.
But what about the quality of online research? Is something critical being lost by this expanding reliance on online research methods? This question is addressed in the latest issue of MRA’s Alert Magazine, with a critique of online research methods by author Neil Chakraborty. He addresses a number of issues, but zeroes in on the reliance on non-probability samples, which is widely considered to be the greatest limitation of online survey research.
Survey research blossomed in the latter half of the 20th century largely on the strength of the science of probability sampling that provided a statistically-credible basis for extrapolating results from small representative samples to the populations they are drawn from. This approach requires that every member of the population has a chance of being selected to be surveyed. This could be more or less accomplished when surveys were conducted by telephone, but cannot be done with Internet-based surveys because there is no online equivalent to telephone numbers (unlike telephone numbers, e-mail addresses cannot be randomly generated). This means that most online surveys rely on drawing samples from established panels of individuals who are recruited to participate through website promotions. However balanced such online samples might be, in terms of their demographic and regional characteristics, they do not possess the qualities of probability samples, and cannot be treated as such.
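The standard partial remedy is post-stratification weighting: respondents in under-represented demographic cells are weighted up to match known population benchmarks. A minimal sketch with hypothetical numbers (the function name and figures are mine) shows both what weighting does and why it cannot turn a panel into a probability sample:

```python
def poststratification_weights(sample_counts, population_shares):
    """Post-stratification weights for a non-probability panel sample.

    sample_counts: respondents per demographic cell in the sample
    population_shares: known population share of each cell (e.g. from census)
    Each respondent in a cell gets weight = population share / sample share.
    Weighting balances the demographics the researcher can observe, but it
    cannot repair coverage error: people the panel never reaches get no
    weight at all, and no adjustment can recover their opinions.
    """
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

# Hypothetical panel that under-recruits adults 65 and older:
sample = {"18-64": 900, "65+": 100}        # panel skews young
population = {"18-64": 0.80, "65+": 0.20}  # census benchmark
weights = poststratification_weights(sample, population)
# Older respondents are weighted up (2.0), younger ones down (~0.89)
```

This is why a demographically "balanced" panel still cannot be treated as a probability sample: the weights correct who is over- or under-counted among participants, not who had no chance of participating in the first place.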
Chakraborty’s critique is not new, and his points have been covered by others (see for instance AAPOR’s 2010 report on online panels and 2013 report on non-probability samples). But it offers a useful overview of key issues, and includes an important admonishment to research practitioners to both focus on reducing survey errors, and be transparent about the limitations of their methods.
Practical advice about survey research goes video
When organizations, students and newcomers to the survey research business look for “how to” guidance, the choices are largely limited to textbooks, consultants and on-the-job experience. Now there is a new resource in the form of short practical-advice videos created by Elon University in Elon, North Carolina (home of the Elon University Poll).
- Surveys in society
- What is sampling error?
- Methods of collecting survey data
- How to read a crosstab
- 7 tips for good survey questions
The material is basic, and the videos are aimed primarily at beginners. But even seasoned professionals can benefit from refreshing their knowledge.