
Venezuelan Pollsters, Their Records and the 2012 Race—Post-election Update


Iñaki Sagarzazu (read it in Spanish at YV Polis)

Finally, on October 7, after a very long electoral season, Venezuelans went to the polls to answer the question we've all been waiting for: which pollster was right? The answer is not as simple as it might seem, and in this space I will try to explain why.

I would like to start by highlighting something extremely interesting from the first days after the election. After an election I am used to hearing politicians, winners and losers alike, claim victory. It has always been an interesting feature of elections: no matter how badly a party loses, its members always find a way to say they won, with phrases such as "we got more votes than before" or "we conquered this or that region." What I had never seen before this election is pollsters engaging in post-electoral spin. Hours after the results of the presidential election were announced, several polling houses launched public relations campaigns spinning the success of their numbers. This is completely absurd. When polling houses feel they have to do much more than simply report their findings, we have a serious problem. This goes back to the point I have been trying to make since I started my blog: when polls conducted on similar dates show differences ranging from a 20-point lead for the incumbent to a 2-point lead for the challenger, someone is not doing their job.


First, let's briefly review the previous sequence of posts. I argued that polling houses typically tend to err in one direction, which allows us to derive a measure of bias with which we could correct current predictions. Using all the polls for each election, I generated the average error per firm, per election, and used this as a measure of bias. See Table 1 for the measures estimated in the first post in July.

Table 1. Mean error by polling house for each election

The 2012 errors

Now that the election is over, we can repeat the exercise for the 2012 polls. This analysis is based on a data set of 107 polls for the 2012 election. Table 2 shows the number of polls per polling house. As can be seen, Datanálisis and GISXXI are the two with the most (22 and 17, respectively).

Table 2. Number of 2012 election polls by polling house

As in my first post back in July, I will start by looking at the last poll released by each polling firm. Since the election is settled, I am using the "polarized" figures. Polarized figures take into account only the vote-intention totals for the two options (in this case, Chávez and Capriles) and rescale them so they sum to 100. This means dividing each vote-intention (v.i.) percentage by the sum of both options. For instance, if a poll showed the race at 53-35 in favor of Chávez, then the polarized vote intentions would be:

Chávez polarized v.i. = 53/(53+35) = 53/88 = 60.23
Capriles polarized v.i. = 35/(53+35) = 35/88 = 39.77

Figure 1 shows in blue horizontal bars the vote intention each firm gave Capriles, and in red horizontal bars the vote intention for Chávez. The two vertical lines mark the actual percentage of votes obtained by Chávez (55.12%) and by Capriles (44.24%). As can be seen, the firm closest to the election outcome was GISXXI, followed closely by Datanálisis. The rest of the firms made large errors; except for Varianzas, they all erred by more than five (5) percentage points.

Figure 1. Last polls before the 2012 election

However, as I have also said before, given the huge differences in polling numbers throughout the election cycle, analyzing only the last poll does the data a disservice. Throughout the season pollsters gave extremely different accounts of the same reality, and I believe these need to be factored into the estimation of biases, especially when looking at polls during the campaign. With this in mind, I pooled all the polls from each house and generated the mean bias for their estimates of both government and opposition support. Figure 2 shows these numbers. The first thing to notice is that despite the near-perfect accuracy of GISXXI's last poll, its polls throughout the campaign were very different: GISXXI is in fact the polling firm with the largest biases in favor of President Chávez. The firm with the lowest average biases is Datanálisis, followed by Varianzas and Hinterlaces. Table 3 shows the actual numbers for each polling firm in the dataset. Negative numbers mean the reported vote intention was lower than the votes actually obtained; positive numbers mean it was higher than reality.

Figure 2. Mean bias for government and opposition in 2012 polls

Table 3. Mean error by polling house for the 2012 race

With all these results we can update the first table to include the biases of the current election. Several things are worth highlighting. First, 30.11 Consultores showed biases very similar to those of the 2007 race. Consultores 21 outperformed its worst showings (the 2004 and 2009 referendums) but fell short of its best performances, passing the 5-point bias threshold. Datanálisis had one of its best performances, with errors just barely within the margin of error. GISXXI actually managed to underperform its weak 2010 showing, passing the 10-point threshold in pro-government bias.
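The polarization and mean-bias calculations described above can be sketched in Python. This is a minimal illustration, not the author's actual code: the 53-35 poll is the article's own worked example, the official percentages (Chávez 55.12%, Capriles 44.24%) come from the text, and any further poll figures passed in would be hypothetical.

```python
# Official 2012 results the article compares polls against.
ACTUAL_CH, ACTUAL_CA = 55.12, 44.24

def polarize(chavez, capriles):
    """Rescale the two candidates' raw vote intentions so they sum to 100."""
    total = chavez + capriles
    return 100 * chavez / total, 100 * capriles / total

def mean_bias(polls):
    """Mean signed error of a firm's polarized figures vs. the actual result.

    polls: list of (chavez_vi, capriles_vi) raw poll percentages.
    Positive bias = the firm overstated that candidate's support.
    """
    errs_ch, errs_ca = [], []
    for raw_ch, raw_ca in polls:
        p_ch, p_ca = polarize(raw_ch, raw_ca)
        errs_ch.append(p_ch - ACTUAL_CH)
        errs_ca.append(p_ca - ACTUAL_CA)
    return sum(errs_ch) / len(errs_ch), sum(errs_ca) / len(errs_ca)

# The article's worked example: a 53-35 poll polarizes to 60.23-39.77.
ch, ca = polarize(53, 35)
print(round(ch, 2), round(ca, 2))  # 60.23 39.77
```

Note that because the polarized figures sum to exactly 100 while the official two-candidate totals sum to 99.36, polarization alone adds a small mechanical error (0.64 points split between the candidates) before any firm-specific bias is measured.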
Hinterlaces improved on its previous worst (2009) but underperformed its best results, in which it had come within a point of the actual outcome; this time its error passed three points, placing it outside its statistical margin of error. IVAD had biases of close to five points, which is about average for the firm. Finally, Varianzas did not perform as well as its previous record would have predicted, although its pro-opposition bias is consistent; like Hinterlaces, it is below the 5-point error threshold but past the 3 points of standard statistical error.

Table 4. Mean error by polling house for each election, with 2012 update

In this, my last post for this blog, I would like to reiterate the claim I have been hammering on since I started: such wide differences cannot be the result of simple statistical error. There is definitely something more going on among Venezuelan pollsters. Whether these firms are incompetent or playing politics does not really matter. What does matter is that such wide differences erode trust not only in polling firms, but in the political system. Polls create expectations, and political actors use them to weigh scenarios and plan their actions. When such widely varying numbers are thrown around, institutions pay the price. For example, most of the people who claimed fraud in the week after the election based their rationale on what the polls had said. If we are interested in building long-lasting, stable institutions, then we need to improve the means of knowledge creation that affect them.

Once again I call on Venezuelan polling firms to come together and set professional standards, just as other countries have. Adopting standards of disclosure such as those of the U.S.-based National Council on Public Polls (NCPP), for instance, would only strengthen the polling industry. Such principles of disclosure are used in countries like the United States, the United Kingdom, and Mexico.
In these countries polling firms, via their associations, commit to providing significant levels of disclosure about their products. While in the United States and the UK this has been an initiative of the pollsters themselves, in Mexico it is mandated by the electoral authority, the Instituto Federal Electoral (IFE). Polling firms must make available information on their polls such as the complete questionnaire with exact question wording, the sponsors of the study, and full cross-tabulations for all the questions made public. This is just a small part of the NCPP's principles of disclosure, which even encourage members to release the raw dataset. As I argued in a post a few days before the election (Un llamado a la transparencia, "A call for transparency"), the more information is available in the public domain, the better the assessments that can be made from it.

In Venezuela, the National Electoral Council (CNE) made a failed attempt at regulation similar to Mexico's. The electoral regulation for the 2012 presidential election set some requirements for polling firms: if they wanted to publish their polls, they had to register with the CNE and make available certain basic information about each poll. However, as I argued when this regulation was established, it fails to provide real control or real disclosure, since the information required to be published is just basic poll information such as the sample size, field dates, and sampling methodology. In short, the CNE regulation looked more like an empty gesture than a real policy aimed at bringing control and transparency to the field of electoral polling.

NCPP – Principles of Disclosure
BPC – Principles of Disclosure

As a final note, I would very much like to thank David Smilde for inviting me to write these four pieces for this blog. I hope they have contributed to its purposes.
Iñaki Sagarzazu is a Lecturer in Comparative Politics at the University of Glasgow and author of the blog YV Polis.
