Research Digest


Survey research establishes a beachhead in Cuba

Survey research has spread to virtually every corner of the populated world in recent years, extending even into conflict-ridden countries like Iraq and Afghanistan. The list of countries where such research is not possible is very small, and it recently shrank by one with the removal of Cuba. Earlier this year, the Washington Post reported on what is likely the first independently commissioned public opinion survey of Cubans living in Cuba. The survey was conducted by Bendixen & Amandi International, a Miami-based research and communications firm, on behalf of the media companies Univision Noticias and Fusion. The research was conducted without the authorization of the Cuban government, and it is interesting to speculate whether the government was aware it was taking place.

The survey was conducted in person (the standard method in most non-OECD countries) with a representative sample of 1,200 Cuban adults over a 10-day period in March 2015. Interviews were conducted by Cuban residents trained by the research firm and recorded on handheld electronic devices so that responses could be sent electronically to a server outside the country. As reported, the demographic profile of the survey participants closely reflects the known distribution of the population by race, gender, and religion, while somewhat underrepresenting older generations.

What makes this research so important is that it provides a unique, empirically based glimpse into the minds of Cubans, one likely to offer a more accurate picture than the anecdotal and politically tinged assumptions that have prevailed until now.

The survey tells an interesting story. On the one hand, most Cubans express optimism about their future and that of their family, and say they are satisfied with their education and health care systems. Nearly all Cubans agree that normalization with the USA (now finally about to happen) is good for Cuba, and most have a positive view of US President Barack Obama. On the other hand, there are areas of clear discontent, especially with the state-run economy but also with the political system. Only one in five feel comfortable expressing themselves freely in public, and more than half say they would like to leave and live in another country (the USA being the top choice, followed by European countries and Canada).

Also of interest are the media habits of Cubans. Radio is the principal source of news and entertainment, and only one in six (16%) report access to the Internet, mostly outside the home. Among those with such access, four in ten use social media (Facebook being the most popular), primarily to communicate with others outside the country; no one in the sample said they use social media only to connect with other Cubans inside the country.

Given the politics and culture of the country, did Cubans feel it was safe to provide honest answers to the survey questions? There is no way to determine this, but it is likely that some respondents declined to answer certain questions even if they had an opinion. About one in five declined to say whether the country should have more political parties, whether the US is a friend of Cuba, and whether they hold a positive or negative opinion of the Catholic Church in Cuba. This limitation notwithstanding, the survey represents a landmark first picture of public opinion in Cuba, and could well serve as a precedent for further research in the years ahead.

NYT on election polling – does it have a future?

Political polls have been part of the fabric of elections in democracies since the 1970s, and for most of this period the biggest ongoing controversy has been whether polling results influence voters and, if so, whether that influence is a good thing.

Today, political polls face a much more fundamental challenge, resulting from their uneven performance in predicting election outcomes in the past few years. There is a growing chorus of skepticism, if not dismissiveness, about whether polling is still valid. Polling has become a target of criticism from many quarters, much of it from people who do not adequately understand how polling is done or how it should be used.

The most valuable perspective comes from those who are actively involved in the practice of survey research. The most recent published piece, and among the best so far, appeared in the June 21, 2015 edition of The New York Times, by Cliff Zukin, a professor of public policy and political science at Rutgers University and one of the leading experts in this field.

Professor Zukin does not pull punches in describing what his profession is facing: “Election polling is in near crisis, and we pollsters know it.” He then proceeds to outline the specific trends that are making election polling increasingly unreliable: the growth of cell phones, leading to less representative samples, and the decline in people’s willingness to answer surveys, leading to lower response rates.

His conclusion is stark:

“So what is the solution for election polling? There isn’t one. Our old paradigm has broken down, and we haven’t figured out how to replace it. . . Polls and pollsters are going to be less reliable. We may not even know when we’re off base. What this means for 2016 is anybody’s guess.”

This is hardly welcome news for those who look to political polls as a gauge of voter behaviour, and this state of affairs is as much an issue in Canada (not least for our own upcoming federal election in October). But the profession is also actively working on new ways of conducting surveys that will work in today’s globalized online world. This will be a turbulent but exciting process to watch, so stay tuned.

Polling methods are under the gun, but remain essential to democracy

It is by now painfully clear that public opinion surveys and polls of general populations are getting more difficult to do effectively, even as new technologies have arrived to make them cheaper and easier to do (e.g., SurveyMonkey). Recent election polling in the UK, Poland and Israel proved well off the mark, offering just the latest examples of how challenging it has become to take an accurate reading of voter intentions.

The latest voice to weigh in on this issue is Nate Silver, creator of the well-known data aggregation website FiveThirtyEight.com, which pioneered new methods for aggregating poll results to forecast election outcomes (with much success in recent US election cycles). Some might assume that this type of big data methodology is a rival, and possibly a replacement, for traditional survey methods. But in fact the aggregation methods rely on polling data (and lots of it) to work. Nate Silver is not a pollster, but he is well positioned to comment on the current landscape, and he has done so in a recent post on his website entitled “Polling is getting harder, but it’s a vital check on power.”
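For readers curious about what poll aggregation involves, the sketch below is a minimal, purely illustrative weighting of hypothetical polls by sample size and recency; FiveThirtyEight’s actual model is far more elaborate, adjusting for factors such as pollster quality and house effects.

```python
# A minimal, illustrative polling average -- NOT FiveThirtyEight's model.
# Each hypothetical poll: (candidate's share in %, sample size, days old).
polls = [
    (47.0, 1000, 3),
    (45.5, 800, 7),
    (48.2, 1200, 10),
]

def weighted_average(polls, half_life_days=7.0):
    """Weight each poll by its sample size and an exponential recency decay."""
    numerator = denominator = 0.0
    for share, n, days_old in polls:
        weight = n * 0.5 ** (days_old / half_life_days)
        numerator += weight * share
        denominator += weight
    return numerator / denominator

print(f"Weighted polling average: {weighted_average(polls):.1f}%")
```

Whatever the weighting scheme, the inputs are still polls, which is why aggregation is a complement to survey research rather than a substitute for it.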

Silver’s piece is well worth reading, and the following excerpts in particular:

“So if the polls fared poorly, does that mean you should have listened to the pundits after all? Not really: In these elections, the speculation among media insiders was usually no better than the polls and was often worse. Almost no one, save perhaps Mick Jagger, assigned much of a chance to the Conservatives’ big win in the U.K. last month, with some betting shops offering odds of 25-to-1 against a Conservative majority.”

“Polls are also essential to understanding public opinion on a host of issues that people never get a chance to vote upon. How do Americans feel about higher taxes on the rich? The Keystone XL pipeline? Abortion? Capital punishment? Obamacare?”

“Left to their own devices, politicians are not particularly good at estimating prevailing public opinion. Neither, for the most part, are journalists. One reason that news organizations like The New York Times and (FiveThirtyEight partner) ABC News continue to conduct polls — at great expense and at a time when their newsrooms are under budgetary pressure — is as a corrective to inaccurate or anecdotal representations of public opinion made by reporters based mostly in New York and Washington. Polling isn’t a contrast to “traditional” reporting. When done properly, it’s among the most rigorous types of reporting, consisting of hundreds or thousands of interviews with statistically representative members of a particular community.”

Crimeans weigh in on annexation, one year later

It has now been a year since the Russian annexation of Crimea, which contravened international law and raised tensions between East and West to levels rarely seen since the end of the Cold War. The Russian take-over of this region from Ukraine was widely viewed in western democracies as a belligerent political move carried out by military might against the wishes of many in the local population. But recent public opinion research reveals a different picture, in which a strong majority of Crimean residents approve of the annexation and believe it has been a positive change for their region.

The research comes from a recent public opinion study commissioned by openDemocracy, and conducted in December 2014 by the Moscow-based Levada Center. Unlike many surveys reported in western media, this survey was in-depth (conducted by telephone in Russian), encompassing about 150 questions covering a range of topics about identity, politics, media consumption and general issues facing the region.

Results from the survey show that a strong majority of Crimeans approve of the Russian annexation: 84 percent of ethnic Russians and Ukrainians say it was “absolutely the right decision”, with Ukrainian sentiment only modestly lower than Russian. The small minority of ethnic Tatars (an estimated 12% of the population) is split between approval and disapproval, with 20 percent saying the annexation was “absolutely” right.

Consistent with these results is the fact that few Crimeans consider themselves to be “European”, in contrast to sentiment in other parts of Ukraine. And a clear majority (85%) expressed the view that Crimea is now moving in the right direction, in contrast to previous polling (e.g., only 6% held this view in a 2009 survey by the International Republican Institute). Ethnic Tatars are divided on these questions.

The research shows, then, that the vast majority of Crimeans of Russian and Ukrainian identity approve of the annexation of their region, with the Tatar minority divided. Does popular will trump international law? openDemocracy describes this as an example of an act that is “illegal but legitimate”, the same words used more than 15 years ago to describe NATO’s intervention that led to Kosovo’s separation from Serbia.

Is this survey legitimate, in terms of accurately portraying the opinions of the Crimean population? openDemocracy describes the Levada Center as having a reputation for “integrity, professionalism, and independence”, and there was no indication of government interference with the survey fieldwork. There is no way to validate the results other than to await further research that might be undertaken by organizations based outside of Russia.

Apart from providing important insights about the public mood in Crimea, this research provides a current and compelling example of the valuable role that survey research can play in international affairs. This type of research may not contribute directly to resolving political disputes, but it does provide necessary empirical evidence to settle key questions about public opinion that would otherwise be a battle of anecdotes and political spin.

Research Industry wrestles Margin of Error monkey

Almost any time you read about a public opinion poll you will see a sentence, usually at the end, stating a “margin of error” percentage, “plus or minus.” Those who know something about research will understand this to be a statistical measure of the representativeness of the sample of survey respondents recruited from the broader population under study. Those who know how survey research is done are likely aware that there is growing controversy about the use of this statistic as an indicator of survey quality.

The issue boils down to the following: the margin of error statistic applies to probability samples, such as those historically used for telephone surveys, where every household is theoretically available to be sampled for a given survey. Such samples are increasingly difficult (and costly) to generate, and most surveys today are conducted through online methods that rely on non-probability samples. And yet the research industry and its clients continue to rely heavily on margin of error as the sine qua non indicator of survey accuracy.
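To see what the statistic actually measures, here is the textbook calculation for a simple random sample (a generic sketch for illustration, not something taken from the webinar):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion estimated from a simple random sample.

    n: sample size; p: observed proportion (0.5 is the most conservative case);
    z: critical value (1.96 corresponds to a 95% confidence level).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A probability sample of 1,000 respondents gives roughly +/- 3.1 points.
print(f"{margin_of_error(1000) * 100:.1f}")
```

The arithmetic presumes random selection from the population; applied to an opt-in online panel, the resulting number has no such statistical foundation, which is precisely the concern raised in the webinar.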

The issue is well known within the research community but has rarely been discussed openly, until now. Annie Petit of Peanut Labs recently hosted a well-attended webinar on this topic, featuring four senior-level researchers from the market research industry who discussed the relevance and applicability of margin of error in today’s world. The discussion is a bit dry, and may be difficult to follow for those who lack a basic understanding of margin of error and how it works, since the webinar is aimed at industry practitioners. But it is well worth listening to for anyone involved in survey research as a practitioner, client or journalist, or anyone who wants to better understand how surveys are done.

It is no surprise that the four panellists all agreed that margin of error statistics are no longer as relevant to survey research as they once were, and that they may do more harm than good by providing irrelevant and possibly misleading information about survey data quality. So why does the practice persist? In part because no other metric of survey quality offers the conciseness and face validity of margin of error. There are numerous sources of error that can affect the accuracy of survey results, and these are complicated, if not impossible, to measure.

The most revealing insight to come out of the webinar is how research companies are stuck with an irrelevant metric of survey quality because their clients demand it. Several panellists noted that they write survey reports that include a margin of error and then state that it does not actually apply to the results (as a way to appease clients who insist the statistic be included). One panellist commented that to stop quoting margin of error, even when it does not apply, could well risk the loss of valued clients. What this reveals is an underlying conflict between the science and business aspects of market research in today’s world. Commercial and media clients need data to drive or justify decisions, and they need to show their data is sound. Margin of error has been cast in the role of providing that seal of approval, and the inconvenient truth behind the science is easily ignored.

Not all survey research is conducted for business, and it would be illuminating to also hear the perspective on margin of error from practitioners in government, university and non-profit settings who are focused more on sound data than business confidence. Perhaps this will be the topic of a future webinar.

You can listen to the Peanut Labs webinar in its entirety here.

When survey research goes to war

Public opinion surveys are used for many purposes, some of which have a much lower profile than others. A good example is how survey research is now being used by governments and their militaries as a counterinsurgency tool in conflict areas. This research flies largely under the media radar, but is nicely discussed in a recent Monkey Cage blog post in the Washington Post by Andrew Shaver and Yang-Yang Zhou (both Ph.D. candidates in political science at Princeton University).

Shaver and Zhou discuss major research projects undertaken by US-led coalition forces in Afghanistan and Iraq to measure local population opinions and sentiment in support of military operations. These efforts are substantial in scope: in the case of Iraq they entailed in-person interviews every month over a five-year period, cumulatively totalling around 200,000 interviews. Topics included level of support for insurgent attacks against the coalition and Iraqi government forces, satisfaction with a range of public goods and services, and expectations about the capabilities of Iraqi security forces. This matters because counterinsurgency initiatives are unlikely to succeed without local public support.

The scale of such ongoing investment suggests that the research is proving valuable in helping to anticipate challenges facing military operations, as well as in measuring progress in achieving public support. Shaver and Zhou report that the data collected in Iraq revealed a clear positive relationship between public support for insurgent attacks against coalition forces and the actual number of such attacks. But they point out that such data do not tell us whether one leads to the other (does growing public support lead to more attacks, or do such attacks generate more popular support?).

As well, the authors raise what they aptly describe as “the more fundamental and less exciting question of whether the survey responses accurately reflect the attitudes of the citizens they are designed to capture.” Do Iraqis and Afghans tell the truth when interviewed for surveys conducted on behalf of an occupying army? There is no way to measure this precisely, but it has to be a concern to those sponsoring such research. Shaver and Zhou briefly outline some of the methodological approaches that have been developed to obtain accurate answers to sensitive survey questions. But these approaches were developed on western populations accustomed to survey participation, and their effectiveness in other cultures and contexts remains to be established.
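Shaver and Zhou do not spell out which techniques they have in mind, but a classic example of this family of methods is the randomized response design, sketched below with purely illustrative parameters: a private randomizing device determines whether a respondent answers truthfully or gives a forced answer, so no individual response is incriminating, yet the true prevalence can be recovered in aggregate.

```python
def estimate_sensitive_prevalence(observed_yes_rate,
                                  p_truthful=2/3,
                                  p_forced_yes=1/6):
    """Forced-response randomized response estimator (illustrative parameters).

    Imagine each respondent privately rolls a die: on 1-4 they answer the
    sensitive question truthfully, on 5 they must say "yes", on 6 they must
    say "no". The interviewer never knows which rule applied, but the true
    prevalence can be backed out from the aggregate "yes" rate.
    """
    return (observed_yes_rate - p_forced_yes) / p_truthful

# If 30% of respondents answered "yes", the estimated true prevalence is ~20%.
print(round(estimate_sensitive_prevalence(0.30), 2))
```

Whether respondents in Iraq or Afghanistan trust such devices enough to answer honestly is, as the authors suggest, exactly the open question.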

Apropos this issue, the New England Chapter of the American Association of Public Opinion Research (AAPOR) is hosting a half day mini-conference entitled “New Frontiers in Preventing, Detecting, and Remediating Fabrication in Survey Research.” This free event will be held in Cambridge, MA on February 13, 2015 (and also broadcast over the web via WebEx). The event is likely to cover the issues facing surveys conducted by governments overseas, as the agenda will include speakers from the US State Department and the Arab Barometer.

What’s wrong with online survey research methods?

Perhaps the most significant trend in market and public opinion research in the past decade has been the emergence of online research methods as the dominant form of survey data collection. This trend has taken hold for three reasons: a) to leverage the expanding array of digital technologies and their rapid adoption across the population; b) to realize greater efficiencies and lower costs in collecting survey data; and c) to avoid having to deal with the challenges associated with telephone interviewing.

But what about the quality of online research? Is something critical being lost by this expanding reliance on online research methods? This question is addressed in the latest issue of MRA’s Alert Magazine, with a critique of online research methods by author Neil Chakraborty. He addresses a number of issues, but zeroes in on the reliance on non-probability samples, which is widely considered to be the greatest limitation of online survey research.

Survey research blossomed in the latter half of the 20th century largely on the strength of the science of probability sampling, which provided a statistically credible basis for extrapolating results from small representative samples to the populations they are drawn from. This approach requires that every member of the population have a known chance of being selected to be surveyed. This could be more or less accomplished when surveys were conducted by telephone, but it cannot be done with Internet-based surveys because there is no online equivalent to telephone numbers (unlike telephone numbers, e-mail addresses cannot be randomly generated). This means that most online surveys rely on drawing samples from established panels of individuals who are recruited to participate through website promotions. However balanced such online samples might be in terms of their demographic and regional characteristics, they do not possess the qualities of probability samples and cannot be treated as such.
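To make the distinction concrete, the sketch below (with made-up numbers, not drawn from Chakraborty’s article) shows the kind of post-stratification weighting routinely applied to opt-in panel samples; it can align the demographic margins with the population, but it cannot recreate the random selection on which classical sampling theory rests.

```python
# Illustrative post-stratification: weight an opt-in panel so its age mix
# matches known population figures. (Hypothetical numbers, for illustration.)
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
panel_shares      = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}  # opt-in skew

weights = {group: population_shares[group] / panel_shares[group]
           for group in population_shares}

print(weights)  # younger respondents get down-weighted, older ones up-weighted
# Even after weighting, respondents selected themselves into the panel, so the
# sample is not a probability sample and a classical margin of error does not
# strictly apply.
```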

Chakraborty’s critique is not new, and his points have been covered by others (see for instance AAPOR’s 2010 report on online panels and 2013 report on non-probability samples). But it offers a useful overview of key issues, and includes an important admonishment to research practitioners to both focus on reducing survey errors, and be transparent about the limitations of their methods.

Practical advice about survey research goes video

When organizations, students and newcomers to the survey research business look for “how to” guidance, the choices are largely limited to textbooks, consultants and on-the-job experience. Now there is a new resource in the form of short practical-advice videos created by Elon University in Elon, North Carolina (home of the Elon University Poll).

There are 10 videos in all (posted on YouTube), each about three minutes in length. The featured speaker is Elon Professor Kenneth Fernandez, who covers such topics as:

  • Surveys in society
  • What is sampling error?
  • Methods of collecting survey data
  • How to read a crosstab
  • 7 tips for good survey questions


The material is basic and the audience is primarily beginners, but even seasoned professionals can benefit from refreshing their knowledge.

2013 Survey of American Jews -- Lessons for Canada

Last fall the widely-respected US-based Pew Research Center released the results of a comprehensive survey of American Jews, which generated considerable attention in Canada as well as in the USA.

The survey addresses many themes, but chief among them is how Jewish identity and practice are changing among Americans, and the results may also be relevant to what is happening in the Canadian Jewish community. This was the focus of the Seventh Annual Elka Klein Memorial Lecture held at Congregation Darchei Noam in Toronto on June 23, 2014.

The event featured a presentation on the American survey results by Greg Smith, one of the senior Pew researchers responsible for the study. This was followed by a panel of Canadian experts who discussed how the American findings may or may not be relevant to the Canadian Jewish community (in the absence of any comparable survey of Canadian Jews).

The panel was moderated by Janice Stein (Director, Munk School of Global Affairs) and included our member Frank Bialystok (University of Toronto), Bernie Farber (former CEO of the Canadian Jewish Congress) and Aaron Levy (Founder and Executive Director of Makom: Creative Downtown Judaism).

See full video coverage of the event.

AAPOR launches new initiative on survey methods

The American Association of Public Opinion Research (AAPOR) rightly bills itself as the leading association of public opinion and survey research professionals. In keeping with this role, AAPOR has just announced the launch of its latest special task force initiative, in this case one intended to encourage dialogue and deepen our understanding of today's survey methods.

The current research environment is characterized by an expanding array of methodologies, each with its own challenges: increased cost, under-coverage, low participation, mixing of modes, and uncertainty in the links between theory and practical application. This trend is producing a range of reactions, from those who are developing and testing innovative methods to address many of these issues, to those who have grown skeptical of contemporary survey methods, new and traditional alike.

The Task Force will be composed of leading research methodologists, and will have three principal responsibilities:

  1. Organize a "mini-conference" at the 2015 AAPOR Annual Conference (May 2015).
  2. Develop guidance to help consumers of survey data assess the quality of what they commission.
  3. Identify other ways to feature innovation and learning in this critical area of research methodology.

This type of independent, expert-driven initiative is critically needed, and it is well worth paying close attention to as it evolves over the next year or so. Watch for updates in this space.
