
Election Recon Predictive Model - "The Quadrivial Model"

How Our Model Works

The Election Recon Predictive Model, or "The Quadrivial Model," breaks down at the most basic level into four segments. Quadrivial is an adjective meaning "having four ways or roads meeting in a point." The first segment, as in most models, is collecting polling data. Like many models, we weight and adjust the polls we collect based on the historical accuracy of the polling firm. We grade each pollster (or polling organization) on a 1-to-10 scale reflecting our confidence in its historical accuracy over the last four election cycles. A pollster (or polling organization) rated 1-to-4 is rejected from our model. A pollster rated 5 or higher is accepted, but carries the lowest weight; a pollster rated 10 carries the highest weight in our model. The model also weights consistent, proven trends in the polling data. For example, if a candidate shows two weeks of consistent gains in polling, our model gives that candidate's favorable polling additional weight.

We agree with the premise that polling can be flawed and is elastic. However, we know of no better way to take a real-time snapshot of a specific race than a scientific political poll. While confidence in polling has waned in some circles over the years, we believe polls still play an integral role in projecting election results. We also agree that some pollsters (or polling organizations) can show bias in their results, whether intentionally or unintentionally. Natural inaccuracies occur in all polls (hence the margin of error), and how pollsters believe the final electorate will look is still subjective. So we worked to build a model whose other three segments buffer, or hedge, against the potential biases or inaccuracies of sampling.
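The pollster-grading step described above can be sketched in code. This is a minimal illustration, not the model itself: the linear rating-to-weight mapping and the size of the trend bonus are assumptions on our part; the text specifies only that ratings 1-4 are rejected, that higher ratings weigh more, and that sustained gains earn extra weight.

```python
# Hypothetical sketch of the pollster grading and trend weighting
# described above. All constants are illustrative assumptions.

def poll_weight(rating: int) -> float:
    """Map a 1-10 pollster confidence rating to a weight.
    Pollsters rated 1-4 are rejected (weight 0); 5-10 scale linearly."""
    if rating < 5:
        return 0.0
    return rating / 10.0

def weighted_average(polls, trend_bonus=1.25):
    """polls: list of (margin, pollster_rating, gaining) tuples.
    A poll favoring a candidate with sustained gains gets extra weight."""
    num = den = 0.0
    for margin, rating, gaining in polls:
        w = poll_weight(rating)
        if gaining:
            w *= trend_bonus  # hypothetical bonus for a consistent trend
        num += margin * w
        den += w
    return num / den if den else 0.0
```

A rejected pollster simply drops out of the average rather than dragging it with a small weight.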
This brings us to the second segment of our model: demographics and local data. Again, many other models use similar methodologies in this segment. We break every district or state down by the following demographics and fundamental data sets:

  • General Economic Data (Median Household Income & Housing Market Rates)

  • Racial, Gender & Age Demographics

  • Partisan Registration Demographics

  • Cook Political Report Partisan Voter Index (PVI)

  • Poverty Index

  • Crime Rates
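One way to picture how data sets like those above might feed a per-race adjustment is a simple weighted blend. The weights below are purely illustrative; the model's actual priorities and values for any given race are not published here.

```python
# Hypothetical per-race demographic adjustment. The data points come
# from the list above; the weights and scoring convention are
# illustrative assumptions only.

def demographic_adjustment(scores: dict, weights: dict) -> float:
    """Combine normalized demographic signals (each scored on -1..+1)
    into a single adjustment to apply against the raw polling."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_w

race_weights = {  # invented priorities for one hypothetical race
    "economy": 0.30,
    "partisan_registration": 0.25,
    "cook_pvi": 0.25,
    "age_race_gender": 0.10,
    "poverty": 0.05,
    "crime": 0.05,
}
```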

Each race in our model is weighted separately based on the importance and influence each of these data points has had in past like election cycles. How each data point is ranked in priority, and the specific weight given to each, depends on the historical trends for the race. We then weigh the historical trend data against the raw polling data and adjust the results accordingly.

The third segment of our model is fundamentals. We have to say it here, clear as day: yard signs do not win elections. Voter contact wins elections, and this segment is all about tracking a campaign's ability to conduct voter contact successfully. Here we take the fundamental elements that make up a campaign's strength and weigh the results against our model. Specifically, we look at the following aspects:

  • Fundraising

  • Independent Expenditures or PAC Involvement

  • Media Buys (TV - Radio - Print)

  • Mail Operation

  • Grass Roots Organization
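A rough sketch of how factors like those above might be scored follows. Every constant here is an assumption for illustration; the text states only the direction of each effect, such as the financial weight shrinking toward election day and a 2:1 TV spending edge earning a favorable weight.

```python
# Illustrative fundamentals scoring: a fundraising edge whose weight
# decays toward election day, a media-spend ratio bonus, and pass/fail
# grassroots checks. All numeric values are invented assumptions.

def fundraising_weight(days_to_election: int, full_window: int = 180) -> float:
    """Scale the financial weight down linearly as election day nears."""
    return max(0.0, min(1.0, days_to_election / full_window))

def fundamentals_score(cash_lead: float, days_to_election: int,
                       media_ratio: float, grassroots_checks: dict) -> float:
    score = cash_lead * fundraising_weight(days_to_election)
    if media_ratio >= 2.0:  # e.g. outspending the opponent on TV 2:1
        score += 1.0
    # Each failed basic check (HQ, phone bank, doors, microtargeting)
    # lowers the score.
    score -= sum(1.0 for ok in grassroots_checks.values() if not ok)
    return score
```

The decay term captures the theory stated below: if early money were effective, it would already be showing up elsewhere in the projections.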

For fundraising, we constantly adjust the model for reported cash on hand and money raised between reporting periods. The same goes for major independent expenditure (I.E.) or PAC involvement in a race, either for or against a particular candidate. As the old saying goes: money plays. The weight of the financial part of the fundamentals segment is scaled back the closer we get to election day, so the impact of a candidate with a large fundraising lead diminishes over the course of the election. The theory is that if that money was effective, it would show up in other results in the model's projections. We also track campaigns' media and mail expenditures and apply them to our model accordingly. For example, a campaign outspending its opponent on TV by a 2:1 ratio would receive a favorable weight. We then work to verify simple yes/no answers to grassroots questions for each campaign in a race: Do they have a viable campaign headquarters? Do they have a phone banking operation? Are they microtargeting? Are they knocking on doors? Basic grassroots campaign operations. We do not attempt to compare the campaigns on these points; we simply run a quality-control check that each campaign is at least covering the basics. A campaign that is not knocking on doors or does not have a phone bank, for example, will be weighted lower in this segment.

The fourth and final segment is what really separates us and makes our model unique. We call it the Environmental Data, or 'Environmentals' for short. Our team's two decades of combined campaign experience has taught us that there are ways to track a race other than polling. We track the following for this segment of our model:

  • Voter Registration Data

  • Early Vote Data

  • Projected 50% + 1 Vote Goals for Each Race

  • Primary Turnout Strength Data (For General Elections Only)

  • (SEIU Factors) Scandal, Enthusiasm, Incumbency, Unrest  
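The 'vote goal' calculation described below, a 50% + 1 target built from the last two like election cycles, can be sketched directly. The averaging of the two prior cycles is our reading of the text; any refinement from real-time turnout data is left out of this illustration.

```python
# Sketch of a 50% + 1 'vote goal' from the last two like election
# cycles. Averaging the two cycles is an assumption about the method;
# the turnout figures in the test are invented for illustration.

def vote_goal(turnout_two_cycles_ago: int, turnout_last_cycle: int) -> int:
    """Project turnout as the average of the last two like cycles and
    return the raw vote total needed for a bare majority (50% + 1)."""
    projected_turnout = (turnout_two_cycles_ago + turnout_last_cycle) // 2
    return projected_turnout // 2 + 1
```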

We track both raw voter registration data and trends in that data over the course of an election. We compare the data against the last two like election cycles and look for significant movement in one direction that may affect a race. We likewise watch the early vote returns for every race, comparing the raw data and trends against the last two like elections; if there is significant statistical movement in one direction that may affect the race, our model weights and adjusts accordingly.

We also develop what campaigns call 'vote goals' for each race, which takes a tremendous amount of time and work to create. Our team uses the turnout numbers from the last two like election cycles to develop the vote goal: a projection of the raw vote total needed for a probable victory in a specific race. As we get more real-time turnout data, this number can be adjusted. Additionally, we take the primary turnout strength of each candidate in a race and apply it to the general election model.

Lastly, we have a set of weight metrics for what we call the SEIU factors: Scandal, Enthusiasm, Incumbency, and Unrest. Our model weights each factor accordingly if it exists for a specific candidate in a race. Has a candidate been accused of or involved in a scandal? Is it recent or in the past? The model adjusts more heavily the more recent the scandal. Is there a significant enthusiasm advantage in the polling for one candidate? If a candidate is an incumbent, the model weights their likeability and job approval: an incumbent above water is weighted favorably, and an incumbent below water is weighted against. Is there social unrest around or involving the race? The model weights this factor according to the demographics of the specific race.

All four segments are combined in our model and run through thousands of simulations, where we take the average return and rate each race as follows:

  • Tilt (0.0-3.0% projected difference between candidates)

  • Lean (3.1-6.0% projected difference between candidates)

  • Likely (6.1-9.9% projected difference between candidates)

  • Safe (10.0% or better projected difference between candidates)
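The rating bands above translate directly into a classification function. This is a straightforward encoding of the published bands; the only judgment call is treating a boundary margin (e.g. exactly 3.0%) as belonging to the lower band, matching the ranges as listed.

```python
# Direct encoding of the Tilt / Lean / Likely / Safe bands listed above.

def rate_race(projected_margin: float) -> str:
    """Classify a race by the projected difference between candidates."""
    m = abs(projected_margin)
    if m <= 3.0:
        return "Tilt"
    if m <= 6.0:
        return "Lean"
    if m < 10.0:
        return "Likely"
    return "Safe"
```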

Our model attempts to project the actual certified election result and takes into account the historical trends for late, proxy, and mail-in voting in each state. In the 2020 general election, our model nearly hit the popular vote share on target, called 48 of 50 states correctly in the presidential election (missing Arizona and Georgia), and called 49 of 50 states correctly in the U.S. Senate elections (missing Maine). Note: our model correctly projected that the GOP would lead the combined share of the vote in Georgia on election night, but we projected both races would move to a run-off. As the run-off progressed, our model correctly projected the Democrats' victories for both Georgia seats. Our model projected 12 more Democratic seats than the final result for the U.S. House. At the state level, our model correctly projected the result of every Governor's race.

In 2021, our model correctly projected the result of the Virginia Governor's race.

On August 23rd, 2022, we adjusted our polling metrics to weight LV (likely voter) polls heavier and RV (registered voter) polls lighter in our model. This shifted our model slightly to the right. We believe RV polls are too biased (and too inconsistent in weights and samples) in their makeup, and that this was unfairly tilting our model toward the left. We have higher confidence in polls with LV screens in place and believe them to be more accurate than RV polls.
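The direction of that adjustment can be illustrated with a screen-aware average. The multipliers below are assumptions; the post states only that LV polls now weigh heavier and RV polls lighter, not by how much.

```python
# Sketch of the August 2022 adjustment: LV polls weighted heavier than
# RV polls. The multiplier values are illustrative assumptions.

SCREEN_WEIGHT = {"LV": 1.0, "RV": 0.6}  # hypothetical multipliers

def screened_average(polls):
    """polls: list of (margin, screen) pairs, screen in {'LV', 'RV'}."""
    num = sum(m * SCREEN_WEIGHT[s] for m, s in polls)
    den = sum(SCREEN_WEIGHT[s] for _, s in polls)
    return num / den if den else 0.0
```

With these multipliers, an RV poll still counts, but pulls the average less than an LV poll showing the same margin.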
