Election Recon Predictive Model - "The Quadrivial Model"

How Our Model Works

Since our launch in 2020, the Election Recon Predictive Model has correctly predicted the results of 1,000 out of 1,041 seats/elections, giving the model a historical raw accuracy of 96%.


  • In Presidential Elections the model correctly projected 48 out of 50 states in 2020. (96% Accuracy)

  • In U.S. Governor Elections the model correctly projected 44 out of 47 states in 2020 & 2022. (93.6% Accuracy)

  • In U.S. Senate Elections the model correctly projected 66 out of 70 races in 2020 & 2022. (94% Accuracy)

  • In U.S. House Elections the model correctly projected 838 out of 870 seats in 2020 & 2022. (96% Accuracy)

  • In Special/Other Elections the model correctly projected 4 out of 4 races from 2020-2024. (100% Accuracy)

    • 2020 Special Election CA-25, 2021 Virginia Gov, 2021 California Gov Recall, 2024 Special Election NY-3.


METHODOLOGY & HISTORY:

The Election Recon Predictive Model, or "The Quadrivial Model," breaks down at the most basic level into four segments. Quadrivial is an adjective meaning "having four ways or roads meeting in a point." The first segment, like most models, is collecting polling data. Like many models, we weight and adjust the polls we collect based on the historical accuracy of the polling firm. We grade each pollster (or polling organization) on a scale of 1-to-10 reflecting our confidence in their historical accuracy over the last four election cycles. A pollster (or polling organization) rated 1-4 is accepted into the model but weighted significantly lower than one at the average rating of 5-6 (in our 2022 model, pollsters rated 1-4 were rejected outright; see the changes noted at the bottom of this page). A pollster with a 7-9 rating is weighted higher in our model. A rating of 10 would mean a pollster has a perfect history of accuracy and absolutely zero bias; because this is not possible, no pollster is ever rated a perfect 10. The model also weights consistent and proven trends in the polling data. For example, if a candidate shows two weeks of consistent gains in polling, our model gives that candidate's favorable polling additional weight.

We agree with the premise that polling can be flawed and is elastic. However, we know of no better way to test the status of (or take a snapshot of) a specific race in real time than a scientific political poll. While confidence in polling has waned in some circles over the years, we believe polls still play an integral role in projecting election results. We also recognize that some pollsters (or polling organizations) can show bias in their results, whether intentionally or unintentionally. Natural inaccuracies occur in all polls (hence the margin of error), and how pollsters (or polling organizations) believe the final electorate will look is still subjective. So we built a model whose other three segments buffer, or hedge, against the potential biases or inaccuracies of sampling.
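To make the weighting mechanics concrete, here is a minimal sketch in Python of how a 1-to-10 pollster rating and a consistent polling trend might translate into a poll's weight. The function names and every multiplier value are illustrative assumptions only, not the weights actually used in our model.

```python
# Illustrative sketch of rating-based poll weighting. The multiplier
# values are invented for this example, not the model's parameters.

def rating_weight(rating: int) -> float:
    """Map a 1-10 pollster confidence rating to a base weight."""
    if not 1 <= rating <= 9:      # a perfect 10 is never awarded
        raise ValueError("rating must be between 1 and 9")
    if rating <= 4:               # accepted, but heavily penalized
        return 0.4
    if rating <= 6:               # the average band
        return 1.0
    return 1.5                    # 7-9: weighted higher

def trend_bonus(history: list[float]) -> float:
    """Give extra weight to a candidate showing consistent gains.

    `history` holds the candidate's polling average over the trend
    window, oldest first. If every change is non-negative and the
    net movement is positive, apply a small illustrative bonus.
    """
    deltas = [b - a for a, b in zip(history, history[1:])]
    if deltas and all(d >= 0 for d in deltas) and sum(deltas) > 0:
        return 1.1
    return 1.0

# Example: a rating-8 pollster showing a candidate with steady gains.
weight = rating_weight(8) * trend_bonus([44.0, 44.2, 44.5, 44.9, 45.3])
print(round(weight, 2))  # 1.65
```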

This brings us to the second segment of our model: demographics & local data. Again, many other models use a similar methodology in this segment. We break every district or state down by the following demographics or fundamental data sets:

  • Economic Data (Median Household Income, Cost of Living, Inflation, & Housing-Rental Market)

  • Racial, Gender & Age Demographics

  • Partisan Registration Demographics

  • Cook Partisan Voting Index (PVI)

  • Poverty Index

  • Crime Rates

Each race in our model is weighted separately, based on the importance and influence each of these data points has had in past like election cycles. How each data point is ranked in priority for weighting, and the specific amount of weight it receives, depend on the historical trends for the race. We then weight the historical trend data against the raw polling data and adjust the results accordingly.
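A minimal sketch of that blending step follows. The weights are invented for illustration; the actual per-race weights, and the relative priority of each data point, are internal to the model.

```python
# Illustrative blend of a raw polling margin with a fundamentals-based
# prior. The example weights are invented, not the model's values.

def blended_margin(poll_margin: float,
                   fundamentals_margin: float,
                   fundamentals_weight: float) -> float:
    """Pull the polling margin toward the historical/demographic prior.

    poll_margin:         polling-average margin (D minus R, in points)
    fundamentals_margin: margin implied by PVI, registration, etc.
    fundamentals_weight: 0-1, how predictive fundamentals have been
                         in past like cycles for this race
    """
    w = max(0.0, min(1.0, fundamentals_weight))
    return (1 - w) * poll_margin + w * fundamentals_margin

# Example: polls show D+2, fundamentals imply R+3, and fundamentals
# carried 30% of the signal in past like cycles: D+0.5 adjusted margin.
print(round(blended_margin(2.0, -3.0, 0.3), 2))  # 0.5
```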

The third segment of our model is fundamentals. We have to say it here, clear as day: yard signs do not win elections. Voter contact wins elections, and this segment is all about tracking a campaign's ability to conduct voter contact successfully. Here we take the fundamental elements that make up a campaign's strength and weigh the results in our model. Specifically, we look at the following aspects:

  • Fundraising

  • Independent Expenditures (I.E.) or PAC Involvement

  • Media Buys (TV - Radio - Print)

  • Mail Operation

  • Grassroots Organization

For fundraising, we constantly adjust the model for reported cash on hand and money raised between reporting periods. The same goes for major I.E. or PAC involvement in a race, either for or against a particular candidate. As the old saying goes: money plays. The weight of the financial part of the fundamentals segment is scaled back the closer we get to election day, so the impact of a candidate's large fundraising lead diminishes in our model over the course of the election. The theory is that if that money was effective, it would show up in other results in the model's projections. We also track the media and mail expenditures of campaigns and apply them to our model accordingly. For example, a campaign outspending its opponent on TV by a ratio of 2:1 would receive a favorable weight in our model. We then work to verify simple yes/no answers to grassroots questions for each campaign in a race: Do they have a viable campaign headquarters? Do they have a phone-banking operation? Are they micro-targeting? Are they door knocking? Basic grassroots campaign operations. We do not attempt to compare campaigns in this regard; this is simply a quality-control check that campaigns are at least following the basics. For example, a campaign that is not knocking on doors or does not have a phone bank will be weighted lower in this segment of our model.
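A minimal sketch of the financial time-decay and the grassroots quality-control check described above. The decay curve and penalty values are illustrative assumptions, not the model's actual parameters.

```python
# Illustrative sketch: a financial advantage's weight fades toward
# election day, and missing grassroots basics incur a penalty.
# All curve and penalty values are invented for this example.

def money_weight(days_until_election: int, full_window: int = 180) -> float:
    """Scale the financial segment's weight down toward election day."""
    frac = max(0, min(days_until_election, full_window)) / full_window
    return 0.25 + 0.75 * frac  # full weight early, 25% at the end

def grassroots_factor(checks: dict[str, bool]) -> float:
    """Yes/no quality-control checks (headquarters, phone bank,
    micro-targeting, door knocking); each miss costs a small penalty."""
    misses = sum(1 for ok in checks.values() if not ok)
    return max(0.6, 1.0 - 0.1 * misses)

# Example: 30 days out, a campaign with no phone bank or door knocking.
print(round(money_weight(30), 2))                   # 0.38
print(grassroots_factor({"headquarters": True,
                         "phone_bank": False,
                         "micro_targeting": True,
                         "door_knocking": False}))  # 0.8
```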

The fourth and final segment of our model is what really separates us and makes our model unique. We call it the Environmental Data, or 'Environmentals' for short. Our team's two decades of combined campaign experience has taught us there are ways to track a race other than polling. We track the following for this segment of our model:

  • Voter Registration Data

  • Early Vote Data

  • Projected 50% + 1 Vote Goals for Each Race

  • Primary Turnout Strength Data (For General Elections Only)

  • SEIU Factors (Scandal, Enthusiasm, Incumbency, Unrest)

We track both raw numbers and trends in voter registration data over the course of an election, comparing the data against the last two like election cycles and looking for significant movement in one direction that may affect a race. Additionally, we watch the early vote returns for every race, likewise comparing the raw data and trends against the last two like elections. If there is significant statistical movement in one direction that may affect the race, our model weights and adjusts accordingly. We also develop what campaigns call 'vote goals' for each race, which takes a tremendous amount of time and work to create. Our team uses the turnout numbers from the last two like election cycles to develop the 'vote goal': a projection of the raw vote total needed for a probable victory in a specific race. As we get more real-time turnout data, this number can be adjusted. Additionally, we take the primary turnout strength of each candidate in a race and apply it to a general election model. Lastly, we have a set of weight metrics for what we call the SEIU factors: Scandal, Enthusiasm, Incumbency, and Unrest. Our model weights each factor accordingly if it exists for a specific candidate in a race. Has a candidate been accused of or involved in a scandal? Is it recent or in the past? The model adjusts more heavily the more recent the scandal. Is there a significant enthusiasm advantage in the polling for one candidate? If a candidate is an incumbent, the model weights their likeability and job approval: if the incumbent is above water it weights favorably, and if the incumbent is below water it weights against. Is there social unrest around or involving the race? The model weights this factor according to the demographics of the specific race.
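As a worked example of the 'vote goal' arithmetic described above (the turnout figures and the real-time adjustment are invented for illustration):

```python
# Worked example of a 50% + 1 'vote goal'. The turnout figures and
# the real-time adjustment are invented for illustration.

def vote_goal(turnout_last_two: tuple[int, int],
              adjustment: float = 1.0) -> int:
    """Average turnout from the last two like elections, apply any
    real-time adjustment, and return 50% of that total plus one."""
    projected = (sum(turnout_last_two) / 2) * adjustment
    return int(projected // 2) + 1

# Example: 312,000 and 298,000 votes cast in the last two like cycles,
# with early-vote data suggesting turnout about 5% higher this time.
print(vote_goal((312_000, 298_000), adjustment=1.05))  # 160126
```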

All four segments are combined in our model and run through thousands of simulations, where we take the average return and rate each race by the following:

  • Tilt (0.0-3.0% projected difference between candidates)

  • Lean (3.1-6.5% projected difference between candidates)

  • Likely (6.6-9.9% projected difference between candidates)

  • Safe (10.0% or better projected difference between candidates)
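In code form, that final rating step is a simple classification of the average simulated margin against the thresholds above; a minimal sketch:

```python
# Map a race's average simulated margin to the rating bands above.

def rate_race(projected_margin: float) -> str:
    """Classify by the absolute projected difference between candidates."""
    m = abs(projected_margin)
    if m <= 3.0:
        return "Tilt"    # 0.0-3.0%
    if m <= 6.5:
        return "Lean"    # 3.1-6.5%
    if m < 10.0:
        return "Likely"  # 6.6-9.9%
    return "Safe"        # 10.0% or better

print(rate_race(2.4))   # Tilt
print(rate_race(-7.8))  # Likely
```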


Our model attempts to project the actual certified election result and takes into account the historical trends for late, proxy, and mail-in voting in each state. In the 2020 General Election our model nearly hit the popular vote share right on target, called 48 out of 50 states correctly in the Presidential Election (missing Arizona & Georgia), and called 49 out of 50 states correctly in the U.S. Senate Elections (missing Maine). Note: our model correctly projected that the GOP would lead the combined share of the vote in Georgia on election night, but also that both races would move to a run-off. As the run-offs progressed, our model correctly projected the Democrats' victories in both Georgia seats. Our model projected 12 more Democratic seats than the final result for the U.S. House. At the state level, our model correctly projected the result of every Governor's race.

In 2021 our model correctly projected the result of the Virginia Governor's race.


On August 23rd, 2022, we adjusted our polling metrics to weight LV (likely voter) polls heavier and RV (registered voter) polls lighter in our model, which shifted our model slightly to the right. We believe RV polls are too biased (and too inconsistent in weights and samples) in their makeup, and that this was unfairly weighting our model toward the left. We have higher confidence in polls with LV screens in place and believe them to be more accurate than RV polls.
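In sketch form, the change amounts to a population-type multiplier applied on top of a poll's base weight; the multiplier values here are illustrative only, not the model's actual adjustment:

```python
# Illustrative LV/RV adjustment applied on top of a poll's base
# weight; the multiplier values are invented, not the model's.
POPULATION_MULTIPLIER = {"LV": 1.2, "RV": 0.8}

def adjusted_weight(base_weight: float, population: str) -> float:
    """Weight LV polls heavier and RV polls lighter."""
    return base_weight * POPULATION_MULTIPLIER.get(population, 1.0)

print(adjusted_weight(1.0, "LV"))  # 1.2
print(adjusted_weight(1.0, "RV"))  # 0.8
```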


In the 2022 Mid-Term Election our model once again nearly hit the overall popular vote share right on target. However, the model did slightly worse in seat-by-seat projections. Our model called 32 out of 35 states correctly for the U.S. Senate Elections (missing Arizona, Pennsylvania & Nevada) and 33 out of 36 states correctly for U.S. Governors (missing Arizona, Kansas & Wisconsin). Note: once again our model correctly projected that the Georgia Senate race would move to a run-off, and as the run-off progressed it correctly projected the Democrats' victory. Our model projected 18 more Republican seats than the final result for the U.S. House.


In our own self-evaluation after the 2022 election, we believe we gave too much deference to polling error and response bias, which skewed our model slightly toward the Republicans. In the last weeks of the election we debated whether to heed the warnings of respected politicos who were screaming at the top of their lungs about this very issue. We decided as a team to let the model stand despite sharing the same concerns. However, we have adjusted the model to weight more heavily against response bias and polling error for the 2024 General Election.


Additional changes to our model from 2020 and 2022 into the 2024 election year include a heavier weight against outlier polls that break more than 4.5 points from the base aggregate in our model. We also reinserted pollsters rated 1-4, which were rejected from our 2022 model, though their polls now carry a significant weight penalty. We have also added a small weight for historical polling error at the state level, aggregated from the last two like elections. Lastly, for 2024 we added the inflation rate down to the state level as a new data point for the model.
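A minimal sketch of the outlier rule (the 4.5-point threshold is from our methodology; the 0.5 penalty multiplier is an illustrative assumption):

```python
# Illustrative outlier rule: polls breaking more than 4.5 points
# from the base aggregate are down-weighted. The 0.5 penalty
# multiplier is invented for this example.

def outlier_penalty(poll_margin: float, aggregate_margin: float,
                    threshold: float = 4.5) -> float:
    """Return a weight multiplier; outliers vs. the aggregate get 0.5."""
    return 0.5 if abs(poll_margin - aggregate_margin) > threshold else 1.0

# Example: a D+9 poll against a D+3 aggregate breaks 6 points away.
print(outlier_penalty(9.0, 3.0))  # 0.5
print(outlier_penalty(5.0, 3.0))  # 1.0
```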
