This story was last updated on August 8, 2021
In his best-selling book Breaking the Quebec Code, Jean-Marc Léger wrote “Political polls represent about 1% of all my income, but they represent 99% of my problems”.
So why do it, then? For many market research firms, political polls serve as an efficient way to promote their brand, attract clients and, when successful, show those clients that their numbers can be trusted. In a functioning supply-and-demand market, polling firms whose results are consistently and badly off the mark are doomed in the long run, a kind of Darwinian economics. (It should be noted, however, that even the best polling firms miss the mark from time to time; such is the nature of statistics and the challenge of aiming at a constantly moving target.)
“But there is more to polls than voting intentions,” I have been told many times, and with good reason. Polling on social issues can be far more important to understanding the political landscape than simply measuring the horse race. However, the reason I don’t pay much attention to polls that don’t include voting intentions is simple: their results are not verifiable.
For example, imagine a market research company, X, that publishes a survey indicating that, say, 80 percent of Canadians are in favor of stricter gun control, or that 25 percent of Canadians will not take the COVID-19 vaccine under any circumstances. How do we know the data is correctly calibrated? How do we know the results are not biased, even unintentionally?
Now, let’s say, hypothetically, that this same poll has the Conservatives leading the Liberals by 25 points nationally. Since the current 338Canada projections (based on a weighted moving average of polls) show the Liberals ahead by a four-to-eight-point margin, how much traction and/or media attention should this poll receive, especially the results of its questions on social issues? (“Very little to none” would be the correct answer.) Voting intentions are the only data points that are truly, empirically verifiable.
Also, since about a dozen companies regularly poll federal voting intentions in this country, we can compare the results between those companies and track all the numbers over time.
Elections happen from time to time, and they represent the perfect opportunity to compare poll results with election results. We can then compile a report card for each firm (see the 338Canada Canadian pollster ratings here). Which firm was right, and which was not? By how much? Toward which party did the results lean? Is the error systematic, or does it fluctuate from one poll to the next?
Today we present graphs and data to measure the “house effect,” if any, of Canadian polling firms in federal surveys. This analysis focuses on Liberal and Conservative federal voting intentions over the past three years, from January 2018 to July 2021 (we will look at the other parties’ numbers at a later date).
Below is a “bullseye chart” that compares every federal poll during that period with the 338Canada moving average (Liberal on the horizontal axis, Conservative on the vertical).
The scales on the graph represent the deviation, in percentage points, from the average at each poll’s median field date.
As you might expect, the target is surrounded by almost symmetrically distributed dots. In the long run, the best-calibrated and most accurate firms should have their polls distributed almost evenly around the target. In short: polls close to the target matched the weighted average during their field dates; those far from the center are either statistical outliers or harbingers of new trends. (All polls used in this analysis are listed on this page.)
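The deviations plotted on the bullseye chart can be sketched in a few lines of code. This is only an illustration of the arithmetic described above, not 338Canada's actual methodology: the poll records and the `moving_average` function here are hypothetical stand-ins for the real weighted moving average.

```python
from datetime import date

# Hypothetical poll records: firm, mid-field date, LPC and CPC shares (%).
polls = [
    ("Firm A", date(2021, 6, 1), 34.0, 29.0),
    ("Firm B", date(2021, 6, 3), 31.0, 32.5),
]

# Stand-in for the 338Canada weighted moving average on a given date;
# a fixed baseline is used here purely for illustration.
def moving_average(on_date):
    return {"LPC": 33.0, "CPC": 30.0}

# Each poll's bullseye coordinates: its deviation from the average
# at its mid-field date (LPC on the x-axis, CPC on the y-axis).
def bullseye_points(polls):
    pts = []
    for firm, mid, lpc, cpc in polls:
        avg = moving_average(mid)
        pts.append((firm, lpc - avg["LPC"], cpc - avg["CPC"]))
    return pts

print(bullseye_points(polls))
# e.g. [('Firm A', 1.0, -1.0), ('Firm B', -2.0, 2.5)]
```

A poll at (0, 0) matched the average exactly; a poll in the lower-right quadrant had the Liberals above average and the Conservatives below it.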
Let’s look at the same graph, but with colored dots added to identify the polls by firm. Here is the Léger graph (green dots):
As you can see, the Léger polls are distributed almost symmetrically along the diagonal from top left to bottom right. Some Léger polls were several points below the average at the time of their field dates, but most of the points are close to the bullseye. This is a well-calibrated chart.
Here is the same graph for Nanos Research:
Note: Most of the Nanos Research surveys are continuous weekly surveys, so only one survey per four-week cycle is shown in the graph. Once again, we note that the points are almost evenly distributed.
Here’s the Ipsos chart:
As with Léger and Nanos Research, the Ipsos surveys are neatly distributed around the target without apparent and systematic bias.
Here is the graph from Abacus Data:
Same story here. Although there are slightly more polls in the lower half of the chart (hence lower CPC numbers), we don’t see a major bias in the data.
Here’s the chart from Mainstreet Research:
The federal polls from Mainstreet Research appear to tilt slightly toward the upper-right quadrant of the chart, meaning LPC and CPC numbers higher than the moving average (and hence lower NDP numbers). However, when we average them (as we will show below), we do not measure a significant bias.
Here’s the chart from Innovative Research Group (IRG):
Regular readers of this column and poll nerds may have noticed that IRG polls often appear favorable to the Liberals, and that impression is supported by the data. As the graph shows, IRG polls are almost all located in the lower-right quadrant, meaning high LPC and low CPC numbers. This is a clear example of a systematic house effect. From a purely statistical point of view, it simply cannot be a coincidence that all these points lean in the same direction. That does not mean IRG’s polls are incorrect or unusable, but we must be careful to interpret them in context.
Finally, here is the graph from the Angus Reid Institute (ARI):
Here we see the opposite trend to IRG’s: virtually all Angus Reid polls are located in the upper-left quadrant, meaning high CPC and low LPC numbers compared to the polling average. This also shows a systematic house effect. Of note: in the last two federal elections, the Angus Reid Institute was within the margin of error for the CPC’s final result (but underestimated the Liberal vote). And interestingly, the measurable lean toward the federal Conservatives does not translate into a noticeable bias in Angus Reid’s provincial polls.
By calculating the average bias of each company that regularly polls federal voting intentions, we obtain the following graph:
Eight of the 10 firms represented on the graph fall within a two-point radius, which, all things considered, should be seen as a minor bias over the past three years. As for Innovative and Angus Reid, they appear to mirror each other from opposite quadrants.
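The average-bias figure behind this graph can be sketched as follows. Again, the per-poll deviations below are hypothetical placeholders, and the two-point radius is simply the threshold discussed above; this is an illustration of the averaging step, not the actual 338Canada computation.

```python
from collections import defaultdict
from math import hypot

# Hypothetical per-poll deviations (firm, LPC delta, CPC delta), in points,
# i.e. the bullseye coordinates of each poll.
deviations = [
    ("Firm A", 1.0, -1.0),
    ("Firm A", 2.0, -2.0),
    ("Firm B", -0.5, 0.5),
]

# Average each firm's deviations to estimate its house effect, and flag
# firms whose mean deviation falls outside a two-point radius.
def house_effects(deviations, radius=2.0):
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for firm, d_lpc, d_cpc in deviations:
        s = sums[firm]
        s[0] += d_lpc
        s[1] += d_cpc
        s[2] += 1
    out = {}
    for firm, (s_lpc, s_cpc, n) in sums.items():
        mean = (s_lpc / n, s_cpc / n)
        out[firm] = (mean, hypot(*mean) > radius)
    return out

print(house_effects(deviations))
# e.g. {'Firm A': ((1.5, -1.5), True), 'Firm B': ((-0.5, 0.5), False)}
```

A firm whose mean deviation sits near (0, 0) shows little house effect; one whose mean lands deep in a quadrant, like the hypothetical Firm A here, leans systematically in one direction.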
“But what if those companies are right and everyone else is wrong?” In the 2019 federal election, Angus Reid’s final poll had the Conservatives winning by four points over the Liberals. As for Innovative Research, its final figures showed a two-point popular-vote win for the Liberals. The Conservatives ended up winning the popular vote by one point. Angus Reid had the LPC too low and IRG had the CPC too low. Both sets of overall numbers were fairly accurate, but each deviated in the same general direction shown above.
Again, I want to emphasize this point: it’s not that these firms’ polls are necessarily wrong or unusable. They generally observe the same trends as their competitors, which is why their work still has tremendous value (and is regularly and rightly quoted by media across the country, including this magazine). But while they see the tide moving the same way as other polling firms, they seem to be measuring it from a slightly different water level.
Follow 338Canada on Twitter
CORRECTION, Aug. 8, 2021: An earlier version of this story misstated the result of ARI’s final poll of the 2019 federal election. It had the Conservatives beating the Liberals by four points, not five.