Research Digest


NYT on election polling – does it have a future?

Political polls have been part of the fabric of elections in democracies for decades, and for most of that period the biggest ongoing controversy has been whether polling results influence voters and, if so, whether that is a good thing.

Today, political polls face a much more fundamental challenge, stemming from their uneven performance in predicting election outcomes over the past few years. There is a growing chorus of skepticism, if not outright dismissal, about whether polling remains valid. Polling has become a target from many quarters, often from critics who do not adequately understand how polling is done or how it should be used.

The most valuable perspective comes from those actively involved in the practice of survey research. The most recent published piece, and among the best so far, appeared in the June 21, 2015 edition of The New York Times, by Cliff Zukin, a professor of public policy and political science at Rutgers University and one of the leading experts in this field.

Professor Zukin does not pull punches in describing what his profession is facing: “Election polling is in near crisis, and we pollsters know.” He then outlines the specific trends making election polling increasingly unreliable: the growth of cell phones, which makes representative samples harder to build, and the decline in people’s willingness to answer surveys, which drives response rates ever lower.

His conclusion is stark:

“So what is the solution for election polling? There isn’t one. Our old paradigm has broken down, and we haven’t figured out how to replace it. . . . Polls and pollsters are going to be less reliable. We may not even know when we’re off base. What this means for 2016 is anybody’s guess.”

This is hardly welcome news for those who look to political polls as a gauge of voter behaviour, and the problem is just as real in Canada (not least for our own federal election coming in October). But the profession is also actively working on new ways of conducting surveys that will work in today’s globalized online world. This will be a turbulent but exciting process to watch, so stay tuned.

Polling methods are under the gun, but remain essential to democracy

It is by now painfully clear that public opinion surveys and polls of general populations are getting more difficult to conduct effectively, even as new technologies have arrived to make them cheaper and easier to field (e.g., SurveyMonkey). Recent election polls in the UK, Poland and Israel all proved well off the mark, just the latest examples of how challenging it has become to take an accurate reading of voter intentions.

The latest voice to weigh in on this issue is Nate Silver, creator of the well-known data aggregation website FiveThirtyEight.com, which pioneered new methods for aggregating poll results to forecast election outcomes (with much success in recent US election cycles). Some might assume that this type of big data methodology is a rival, and possibly a replacement, for traditional survey methods. In fact, the aggregation methods depend on polling data (and lots of it) to work. Nate Silver is not a pollster, but he is well positioned to comment on the current landscape, and he has done so in a recent post on his website entitled “Polling is getting harder, but it’s a vital check on power.”
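
To make that dependence concrete, here is a minimal sketch of poll aggregation as a weighted average in which larger and fresher polls count for more. The Poll fields, the 14-day half-life and the weighting rule are illustrative assumptions, not FiveThirtyEight’s actual model (which also adjusts for house effects, trend lines and more).

```python
# A minimal sketch of poll aggregation: combine several polls into a
# single estimate, weighting each poll by its sample size and recency.
# Illustrative assumptions only -- not FiveThirtyEight's actual model.
from dataclasses import dataclass

@dataclass
class Poll:
    support: float    # candidate support as a fraction, e.g. 0.52
    sample_size: int  # number of respondents
    days_old: int     # days since fieldwork ended

def aggregate(polls: list[Poll], half_life: float = 14.0) -> float:
    """Weighted average in which larger, fresher polls count for more."""
    num = den = 0.0
    for p in polls:
        weight = p.sample_size * 0.5 ** (p.days_old / half_life)
        num += weight * p.support
        den += weight
    return num / den

polls = [Poll(0.52, 1000, 2), Poll(0.48, 600, 10), Poll(0.50, 1500, 21)]
print(f"Aggregated support: {aggregate(polls):.1%}")  # about 50.6%
```

However the weights are chosen, the aggregate can only be as good as the polls feeding it.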

Silver’s piece is well worth reading, and the following excerpts in particular:

“So if the polls fared poorly, does that mean you should have listened to the pundits after all? Not really: In these elections, the speculation among media insiders was usually no better than the polls and was often worse. Almost no one, save perhaps Mick Jagger, assigned much of a chance to the Conservatives’ big win in the U.K. last month, with some betting shops offering odds of 25-to-1 against a Conservative majority.”

“Polls are also essential to understanding public opinion on a host of issues that people never get a chance to vote upon. How do Americans feel about higher taxes on the rich? The Keystone XL pipeline? Abortion? Capital punishment? Obamacare?

Left to their own devices, politicians are not particularly good at estimating prevailing public opinion. Neither, for the most part, are journalists. One reason that news organizations like The New York Times and (FiveThirtyEight partner) ABC News continue to conduct polls — at great expense and at a time when their newsrooms are under budgetary pressure — is as a corrective to inaccurate or anecdotal representations of public opinion made by reporters based mostly in New York and Washington. Polling isn’t a contrast to “traditional” reporting. When done properly, it’s among the most rigorous types of reporting, consisting of hundreds or thousands of interviews with statistically representative members of a particular community.”

Crimeans weigh in on annexation, one year later

It has now been a year since the Russian annexation of Crimea, which contravened international law and raised tensions between East and West to levels rarely seen since the end of the Cold War. The Russian takeover of the region from Ukraine was widely viewed in western democracies as a belligerent political move, carried out by military might against the wishes of much of the local population. But recent public opinion research reveals a different picture, in which a strong majority of Crimean residents approve of the annexation and believe it has been a positive change for their region.

The research comes from a recent public opinion study commissioned by openDemocracy, and conducted in December 2014 by the Moscow-based Levada Center. Unlike many surveys reported in western media, this survey was in-depth (conducted by telephone in Russian), encompassing about 150 questions covering a range of topics about identity, politics, media consumption and general issues facing the region.

Results from the survey show that a strong majority of Crimeans approve of the Russian annexation: 84 percent of ethnic Russians and Ukrainians say it was “absolutely the right decision”, with Ukrainian sentiment only modestly lower than Russian. The small minority of ethnic Tatars (an estimated 12% of the population) is split between approval and disapproval, with 20 percent saying the annexation was “absolutely” right.

Consistent with these results is the fact that few Crimeans consider themselves to be “European”, in contrast to sentiment in other parts of Ukraine. And a clear majority (85%) expressed the view that Crimea is now moving in the right direction, in contrast to previous polling (e.g., only 6% expressed this view in a 2009 survey by the International Republican Institute). Ethnic Tatars are again divided on these questions.

The research shows that the vast majority of Crimeans of Russian and Ukrainian identity approve of the recent annexation of their region, with the Tatar minority divided. Does popular will trump international law? openDemocracy describes this as an example of an act that is “illegal but legitimate”, the same words once used to describe NATO’s 1999 intervention that led to Kosovo’s separation from Serbia.

Is the survey itself legitimate, in terms of accurately portraying the opinions of the Crimean population? openDemocracy describes the Levada Center as having a reputation for “integrity, professionalism, and independence”, and there was no indication of government interference with the survey fieldwork. Beyond that, there is no way to validate the results other than to await further research by organizations based outside Russia.

Apart from providing important insights about the public mood in Crimea, this research provides a current and compelling example of the valuable role that survey research can play in international affairs. This type of research may not contribute directly to resolving political disputes, but it does provide necessary empirical evidence to settle key questions about public opinion that would otherwise be a battle of anecdotes and political spin.

Research Industry wrestles Margin of Error monkey

Almost any time you read about a public opinion poll you will see a sentence, usually at the end, stating a “margin of error” percentage, “plus or minus.” Those who know something about research will understand this to be a statistical measure of the representativeness of the sample of survey respondents recruited from the broader population under study. Those who know how survey research is done are likely aware that there is growing controversy about the use of this statistic as an indicator of survey quality.

The issue boils down to this: the margin of error statistic applies to probability samples, such as those historically used for telephone surveys, where every household is theoretically available to be sampled. Such samples are increasingly difficult (and costly) to generate, and most surveys today are conducted online using non-probability samples. And yet the research industry and its clients continue to rely heavily on margin of error as the sine qua non indicator of survey accuracy.
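
For reference, the statistic at issue comes from the textbook normal approximation for a proportion estimated from a simple random sample; here is a minimal sketch (the standard formula, not any panellist’s example):

```python
# Margin of error for a simple random (probability) sample, using the
# usual normal approximation at 95% confidence. The formula assumes a
# probability sample -- which is exactly why quoting it for a
# non-probability online panel is questionable.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: critical value (1.96 for 95% confidence).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person probability sample gives the familiar figure:
print(f"{margin_of_error(1000):.1%}")  # 3.1% -- "plus or minus 3 points"
```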

The issue is well known within the research community but has rarely been discussed openly, until now. Annie Pettit of Peanut Labs recently hosted a well-attended webinar on the topic, featuring four senior researchers from the market research industry who discussed the relevance and applicability of margin of error in today’s world. The discussion is a bit dry, and may be difficult to follow for those who lack a basic understanding of margin of error, since the webinar is aimed at industry practitioners. But it is well worth hearing for anyone involved in survey research, whether as practitioner, client or journalist, and for anyone who wants to better understand how surveys are done.

It is no surprise that all four panellists agreed that margin of error statistics are no longer as relevant to survey research, and that they may do more harm than good by providing irrelevant and possibly misleading information about data quality. So why does the practice persist? In part because no other metric of survey quality offers the conciseness and face validity of margin of error. There are numerous sources of error that can affect the accuracy of survey results, and these are difficult if not impossible to measure.

The most revealing insight to come out of the webinar is how research companies are stuck with an irrelevant metric of survey quality because their clients demand it. Several panellists noted that they write survey reports that quote a margin of error and then state that it does not actually apply to the results, as a way to appease clients who insist the statistic be included. One panellist commented that to stop quoting margin of error, even when it does not apply, could well risk the loss of valued clients. What this reveals is an underlying conflict between the science and business sides of market research. Commercial and media clients need data to drive or justify decisions, and they need to show their data is sound. Margin of error has been cast in the role of providing that seal of approval, and the inconvenient truth behind the science is easily ignored.

Not all survey research is conducted for business, and it would be illuminating to also hear the perspective on margin of error from practitioners in government, university and non-profit settings who are focused more on sound data than business confidence. Perhaps this will be the topic of a future webinar.

You can listen to the Peanut Labs webinar in its entirety here.

When survey research goes to war

Public opinion surveys are used for many purposes, and some have a much lower profile than others. A good example is how survey research is now being used by governments and their militaries as a counterinsurgency tool in conflict areas. This research flies largely under the media radar, but is nicely discussed in a recent Monkey Cage blog post in the Washington Post by Andrew Shaver and Yang-Yang Zhou (both Ph.D. candidates in political science at Princeton University).

Shaver and Zhou discuss major research projects undertaken by US-led coalition forces in Afghanistan and Iraq to measure local opinion and sentiment in support of military operations. These efforts are substantial in scope; in the case of Iraq they entailed in-person interviews every month over a five-year period, cumulatively totalling around 200,000 interviews. Topics included the level of support for insurgent attacks against coalition and Iraqi government forces, satisfaction with a range of public goods and services, and expectations about the capabilities of Iraqi security forces. This matters because counterinsurgency initiatives are unlikely to succeed without local public support.

The scope of such ongoing investment would suggest that the research is proving valuable in helping to anticipate challenges facing military operations as well as measure progress in achieving public support. Shaver and Zhou report that the data collected in Iraq revealed a clear positive relationship between public support for insurgent attacks against coalition forces and the actual number of such attacks. But they point out that the data do not tell us which way the causality runs (does growing public support lead to more attacks, or do attacks generate more popular support?).

As well, the authors raise what they aptly describe as “the more fundamental and less exciting question of whether the survey responses accurately reflect the attitudes of the citizens they are designed to capture.” Do Iraqis and Afghans tell the truth when interviewed for surveys conducted on behalf of an occupying army? There is no way to measure this precisely, but it has to be a concern for those sponsoring such research. Shaver and Zhou briefly outline some of the methodological approaches that have been developed to obtain accurate answers to sensitive survey questions. But these approaches were developed on western populations accustomed to survey participation, and their effectiveness in other cultures and contexts remains to be established.
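
The post does not specify which techniques the coalition researchers used, but one classic approach to sensitive questions is the randomized response design, in which a private coin flip gives every respondent plausible deniability. A minimal simulation of the “forced response” variant:

```python
# Sketch of a "forced response" randomized response design -- a classic
# technique for sensitive survey questions, shown as an illustration
# only, not the specific method used in the Iraq/Afghanistan surveys.
# Each respondent privately flips a coin: tails means "just answer yes",
# heads means "answer truthfully". No single answer is incriminating,
# yet the true prevalence is still recoverable in aggregate.
import random

def estimate_prevalence(true_prevalence: float, n: int) -> float:
    """Simulate a forced-response survey and estimate the true rate."""
    yes = 0
    for _ in range(n):
        holds_attribute = random.random() < true_prevalence
        if random.random() < 0.5:       # tails: forced "yes"
            yes += 1
        elif holds_attribute:           # heads: truthful answer
            yes += 1
    p_yes = yes / n
    # P(yes) = 0.5 + 0.5 * prevalence  =>  prevalence = 2 * P(yes) - 1
    return 2 * p_yes - 1

random.seed(1)
print(f"{estimate_prevalence(0.20, 100_000):.1%}")  # close to the true 20%
```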

Apropos this issue, the New England Chapter of the American Association for Public Opinion Research (AAPOR) is hosting a half-day mini-conference entitled “New Frontiers in Preventing, Detecting, and Remediating Fabrication in Survey Research.” This free event will be held in Cambridge, MA on February 13, 2015 (and will also be broadcast over the web via WebEx). The event is likely to cover the issues facing surveys conducted by governments overseas, as the agenda includes speakers from the US State Department and the Arab Barometer.

What’s wrong with online survey research methods?

Perhaps the most significant trend in market and public opinion research in the past decade has been the emergence of online research methods as the dominant form of survey data collection. This trend has taken hold for three reasons: a) to leverage the expanding array of digital technologies and their rapid adoption across the population; b) to realize greater efficiencies and lower costs in collecting survey data; and c) to avoid the challenges associated with telephone interviewing.

But what about the quality of online research? Is something critical being lost in this expanding reliance on online methods? This question is addressed in the latest issue of MRA’s Alert Magazine, in a critique of online research methods by author Neil Chakraborty. He addresses a number of issues, but zeroes in on the reliance on non-probability samples, widely considered the greatest limitation of online survey research.

Survey research blossomed in the latter half of the 20th century largely on the strength of the science of probability sampling, which provides a statistically credible basis for extrapolating results from small representative samples to the populations they are drawn from. This approach requires that every member of the population have a chance of being selected. That condition could be more or less met when surveys were conducted by telephone, but it cannot be met with Internet-based surveys, because there is no online equivalent of the telephone number (unlike telephone numbers, e-mail addresses cannot be randomly generated). Most online surveys therefore draw samples from established panels of individuals recruited through website promotions. However balanced such samples may be in their demographic and regional characteristics, they do not possess the qualities of probability samples, and cannot be treated as such.
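
A toy illustration of the asymmetry: valid telephone numbers occupy a small, enumerable space, so candidates can be generated at random, which is the idea behind random digit dialing; no comparable move exists for e-mail. The sketch below is deliberately simplified, and real RDD designs involve much more (working-number banks, cell versus landline frames, weighting).

```python
# Why telephone surveys could approximate probability samples: phone
# numbers live in a known, enumerable space, so candidates can be drawn
# uniformly at random (random digit dialing). A deliberately simplified
# sketch -- real RDD designs also handle working-number banks, cell vs.
# landline frames, and post-survey weighting.
import random

def rdd_sample(area_code: str, n: int) -> list[str]:
    """Draw n random numbers within one area code."""
    return [
        f"({area_code}) {random.randint(200, 999)}-{random.randint(0, 9999):04d}"
        for _ in range(n)
    ]

print(rdd_sample("416", 3))
# There is no e-mail analogue of rdd_sample(): the set of valid addresses
# cannot be enumerated, so no online equivalent of a probability frame
# exists, and panel-based samples inherit unknown selection biases.
```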

Chakraborty’s critique is not new, and his points have been covered by others (see for instance AAPOR’s 2010 report on online panels and 2013 report on non-probability samples). But it offers a useful overview of key issues, and includes an important admonishment to research practitioners to both focus on reducing survey errors and be transparent about the limitations of their methods.

Practical advice about survey research goes video

When organizations, students and newcomers to the survey research business look for “how to” guidance, the choices are largely limited to textbooks, consultants and on-the-job experience. Now there is a new resource: a set of short, practical advice videos created by Elon University in Elon, North Carolina (home of the Elon University Poll).

There are 10 videos in all (posted on YouTube), each about three minutes in length. The featured speaker is Elon Professor Kenneth Fernandez, who covers such topics as:

  • Surveys in society
  • What is sampling error?
  • Methods of collecting survey data
  • How to read a crosstab
  • 7 tips for good survey questions


The material is basic, and the videos are aimed primarily at beginners. But even seasoned professionals can benefit from refreshing their knowledge.

2013 Survey of American Jews -- Lessons for Canada

Last fall the widely-respected US-based Pew Research Center released the results of a comprehensive survey of American Jews, which generated considerable attention in Canada as well as in the USA.

The survey addresses many themes, but chief among them is how Jewish identity and practice are changing among Americans, and the results may also be relevant to what is happening in the Canadian Jewish community. This was the focus of the Seventh Annual Elka Klein Memorial Lecture held at Congregation Darchei Noam in Toronto on June 23, 2014.

The event featured a presentation on the American survey results by Greg Smith, one of the senior Pew researchers responsible for the study. This was followed by a panel of Canadian experts who discussed how the American findings may or may not be relevant to the Canadian Jewish community (in the absence of any comparable survey of Canadian Jews).

The panel was moderated by Janice Stein (Director, Munk School of Global Affairs), and included our member Frank Bialystok (University of Toronto), Bernie Farber (former CEO of the Canadian Jewish Congress) and Aaron Levy (Founder and Executive Director of Makom: Creative Downtown Judaism).

See full video coverage of the event.

AAPOR launches new initiative on survey methods

The American Association for Public Opinion Research (AAPOR) rightly bills itself as the leading association of public opinion and survey research professionals. In keeping with this role, AAPOR has just announced the launch of its latest special task force initiative, in this case one meant to encourage dialogue and deepen our understanding of today's survey methods.

The current research environment is characterized by an expanding array of methodologies, each with its own challenges: increased cost, under-coverage, low participation, mixing of modes, and uncertainty in the links between theory and practical application. This is producing a range of reactions, from researchers developing and testing innovative techniques to address these issues, to those who have grown skeptical of contemporary survey methods, new and traditional alike.

The Task Force will be composed of leading research methodologists, and will have three principal responsibilities:

  1. Organize a "mini-conference" at the 2015 AAPOR Annual Conference (May 2015).
  2. Develop guidance to help consumers of survey data assess the quality of what they commission.
  3. Identify other ways to feature innovation and learning in this critical area of research methodology.

This type of independent, expert-driven initiative is critically needed at this point in time, and is well worth paying close attention to as it evolves over the next year or so. Watch for updates in this space.

Gallup Tracks Americans: Concerned Believers and Cool Skeptics on Climate Change

By Kristen Pue

INTRODUCTION

Gallup recently concluded a series of public opinion reports on climate change, culminating in a final report that sorts American public opinion on the issue into three categories. The series shows that age and political affiliation, but not level of education, are important in explaining public opinion on the subject. It also includes trend data that can be used to compare the story of climate change skepticism in Canada and the US.

WHAT GALLUP FOUND

Gallup has released ten reports since March as part of its ongoing climate change series. Each of the reports addressed one aspect of American public opinion about climate change. The reports concluded that:

  • Americans value the environment, and a majority believes climate change is man-made. Fifty-seven percent of Americans believe that human activity is responsible for climate change, a proportion that has remained steady since 2001. Likewise, Americans prioritize environmental protection over economic growth and favor energy conservation over energy production to solve the nation’s energy problems.

  • However, climate change is not a top-of-mind issue for most. Americans express a low level of concern about climate change, particularly when compared with other environmental concerns such as polluted drinking water, soil contamination, and air pollution. A majority of Americans believe the seriousness of global warming has been exaggerated.

  • Terminology doesn’t matter. The term climate change has increasingly been used instead of global warming, in part because it is believed to be more persuasive. But Gallup found no substantial difference in how the American public responds to the terms global warming and climate change (one common way such a wording test is evaluated is sketched just below).
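
As an illustration of how such a wording experiment can be evaluated (the counts below are invented, not Gallup’s data), half the sample is randomly assigned one term and half the other, and a two-proportion z-test checks whether the gap exceeds chance:

```python
# Sketch of a split-ballot wording test: one random half-sample is asked
# about "global warming", the other about "climate change", and a
# two-proportion z-test checks whether the difference in agreement could
# plausibly be chance. Counts are invented for illustration.
import math

def two_proportion_z(yes1: int, n1: int, yes2: int, n2: int) -> float:
    """z statistic for the difference between two sample proportions."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(290, 500, 280, 500)   # 58% vs. 56% agreement
print(f"z = {z:.2f}")  # |z| < 1.96 => no significant wording effect
```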

Each of the first nine reports, summarized above, highlighted individual aspects of climate change public opinion. Gallup’s final climate change report attempted to pull these different snapshots into a single, coherent narrative. In particular, it determined that American public opinion about climate change is polarized, with two prominent clusters and a “mixed middle.”

CONCERNED BELIEVERS, MIXED MIDDLE, COOL SKEPTICS

Gallup organized Americans into three categories, according to their views on climate change.

  • “Concerned Believers” (39%) believe that climate change is attributable to human activity, are concerned about it, and believe its seriousness has either been underestimated or given a correct level of attention in the news.

  • “Cool Skeptics” (25%) do not believe that climate change is man-made, worry little about it, and believe news coverage has exaggerated the seriousness of the threat.

  • The “Mixed Middle” (36%) comprises those who do not fit into the other two categories; they hold a range of views across the three questions.

Gallup then used this segmentation to offer some conclusions about demography and perceptions of climate change. It found that age and political affiliation are the key dividing factors on climate change opinion: 65% of Cool Skeptics identified as conservative, compared with only 18% of Concerned Believers, and the majority of Concerned Believers are under fifty years old, whereas the majority of Cool Skeptics are older than fifty. It is worth noting that age and political affiliation are themselves related in the US (the average Democratic Party voter is younger than the average Republican Party voter). Interestingly, Gallup also found that climate change opinions in the US are not divided by level of education.

TREND DATA: CONCERNED BELIEF REMAINS STEADY, COOL SKEPTICISM GROWS

Gallup also issued trend data for its three clusters, showing the general trajectory of American opinion about climate change. The proportion of Concerned Believers has remained largely stable over the period tracked, while the Mixed Middle has shrunk and the Cool Skeptics category has grown.


CLIMATE CHANGE OPINION TRENDS: COMPARING CANADA AND THE US

This section compares Gallup’s trend data on this issue to similar data for Canada. Although the Environics Institute’s Focus Canada 2013 Climate Change Report is based on answers to different questions, it is possible to qualitatively compare the two reports. Both attempt to measure and track climate change skepticism over time.

The Focus Canada survey asks to what extent Canadians believe science has conclusively proven that climate change is caused by human activity. Notably, a clear majority of Canadians believe that climate change is man-made. Climate skepticism rose and subsequently fell, returning to just above 2007 levels.

The two surveys asked different questions, so direct comparison isn’t possible. But it is possible to draw some conclusions from the trajectory of public opinion in each. In the US, climate change skepticism has been rising since at least 2006; in Canada, views on climate change science have reverted, and are roughly the same today as they were in 2007.

Further, it is worth highlighting American and Canadian climate change public opinion in the context of major events and public discussions on the subject.

Climate Skepticism Began to Grow in 2006/2007

Uncertainty about whether climate change is attributable mostly to human activity began to grow in 2006 in the US and in 2007 in Canada (or possibly earlier, as the Focus Canada trend data on this issue covers only 2007-2013). There is no obvious reason for this shift, but it may reflect the general intensification of public debate about climate change.

2006 would seem to have been an excellent year for climate change activists. An Inconvenient Truth was released, and the Stern Review controversially argued that the costs of adapting to climate change would be greater than the costs of prevention. That same year, the US Environmental Protection Agency (EPA) was sued for its failure to regulate greenhouse gas emissions. Massachusetts v. EPA was decided by the Supreme Court in 2007; the Court ruled in a 5-4 decision that the EPA has the power to regulate greenhouse gas emissions from automobiles and could not sidestep that authority unless there was a scientific basis for its refusal. It is surprising, given these developments, that American climate change skepticism grew during this period.

The 2007 Intergovernmental Panel on Climate Change report is seen by some as a watershed moment for climate change science: it declared with 90% certainty that human activities cause climate change. However, climate change skeptics had some notable moments in 2007: questions were raised about factual inaccuracies in An Inconvenient Truth, and The Great Global Warming Swindle, a divisive climate-denialist UK documentary, was released. Both Canadians and Americans became less certain during 2007 that climate change is caused by human activity, suggesting that the rebuttals to An Inconvenient Truth and the denialist publications of the period succeeded in calling into question the conclusive nature of climate change science.

“Climategate” Deepened Skepticism

In both data sets, climate change skepticism accelerated between 2009 and 2010, likely in response to the “climategate” scandal, in which over 1,000 emails between climate scientists were stolen. Although the emails did not actually reveal evidence that scientists had been manipulating climate data, they did show that the scientists were very reluctant to share their data with people they saw as only wanting to make trouble. It is likely that this scandal affected public opinion in both Canada and the US. In each case, the percentage of climate change believers (in the US, “Concerned Believers”; in Canada, those believing the science is conclusive that climate change is caused mostly by human activity) reached its nadir in early 2010.

The Copenhagen Accord, signed in late 2009, likely contributed to the high rates of climate skepticism in 2010. The meeting in Copenhagen failed to deliver a global agreement on climate change and was seen as a spectacular disappointment. This failure halted momentum on climate change action, and was followed by negative media coverage.

Belief in Climate Change is Slowly Recovering

In Canada and the US, the view that climate change is attributable to human activity is regaining popularity, but has not returned to 2007 levels.

In 2011, the International Energy Agency (IEA) released a report concluding that the world has just five years before climate change will be irreversible.

The percentage of Concerned Believers in the US began to grow again in 2012. This may be because the sense of urgency expressed in the IEA report was made tangible for many in North America. Summer ice cover over the Arctic Ocean reached a record low. Hurricane Sandy catalysed discussion on climate change; most Americans now link climate change and extreme weather.

2013 brought a leaked draft of the UN Intergovernmental Panel on Climate Change report, which made headlines, particularly for its findings on the implications of climate change for food security. The draft was timely, given the protracted heat wave across the southern US in 2012, which had prompted discussion about climate change and concerns about current and future wheat yields. The Panel also declared 95-100% confidence that human activity is the primary influence on planetary warming.

Since 2010, a growing share of Canadians and Americans have come to believe that climate change is man-made. This may be due to a combination of increased scientific evidence about climate change, efforts to articulate those findings in publicly salient ways, and the extreme weather of 2012.

FURTHER READING

For a comprehensive timeline on climate change, click here.

For more on climate change and American public opinion, check out the Yale School of Forestry and Environmental Studies’ Project on Climate Change Communication. This project, which has been ongoing for nearly a decade, conducts social and public opinion research on climate change; designs and tests strategies to engage the public on the issue of climate change; and empowers opinion leaders by giving them tools to more effectively engage audiences on climate change. As part of this mandate, they recently released the following report: Public Support for Climate and Energy Policies in November 2013.
