On Jan. 28, 2025, 538 launched our average of polls of President Donald Trump's approval rating. For Trump's second term, we are debuting a brand-spanking-new methodology for our presidential approval tracker, building upon refinements we made throughout former President Joe Biden's term and during the 2024 presidential election. (For now, though, we are not changing the methodology of our existing averages, such as our other approval averages, our favorability averages and our election averages.) Read on for a full description of how we're calculating Trump's approval rating.
538's philosophy is to collect as many polls as possible for every topic or race we're actively tracking — so long as they are publicly available and meet the basic criteria for inclusion listed in our polls policy. After determining that a poll meets our standards, we have to answer a few more questions about it before sending it off to the various computer programs that power our models.
After all this data is in our database, we compute three weights for each survey that control how much influence it has in our average, based on the following factors:
These three weights are all multiplied together in our final model.
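To make the "multiplied together" idea concrete, here is a minimal sketch in Python. The specific weight formulas below (a square-root sample-size weight, an exponential recency decay, and a pollster-frequency discount) are illustrative assumptions, not 538's actual functions; the only part taken from the text is that three per-poll weights are combined by multiplication.

```python
import math

def combined_weight(sample_size: int, recency_days: float, polls_in_window: int) -> float:
    """Illustrative sketch: three per-poll weights multiplied into one.
    The individual formulas here are hypothetical, not 538's."""
    # Hypothetical sample-size weight: diminishing returns via square root,
    # normalized against a 1,000-person poll.
    size_w = math.sqrt(sample_size / 1000)
    # Hypothetical recency weight: exponential decay with a 30-day half-life.
    recency_w = 0.5 ** (recency_days / 30)
    # Hypothetical frequency weight: a pollster with many recent polls
    # shares influence across them rather than dominating the average.
    freq_w = 1 / math.sqrt(polls_in_window)
    return size_w * recency_w * freq_w
```

Because the weights are multiplicative, a poll that scores poorly on any one factor ends up with little influence overall.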
Once we have all our polls and weights, it is time to average them together. But which methodology for aggregation should we choose? Broadly speaking, the most commonly used polling averages for U.S. public opinion have followed one of three approaches: a slow-moving average, such as an exponentially weighted moving average; a fast-moving trendline, such as a polynomial regression; or an average of multiple methods combined.
There are a lot of benefits to this third option, and it has historically been the solution used by 538. The average-of-averages approach lets you combine the best parts of a slow-moving exponentially weighted moving average and a fast-moving polynomial trendline; it is computationally efficient; and it's easy to explain to the public. It's also relatively trivial to tweak the model if we find something is working incorrectly.
However, this model has some shortcomings too. Our poll-averaging model for favorability ratings, primary elections and the generic congressional ballot is really a set of many different models that work together iteratively: First, we use models to reduce the weight on polls that are very old or have small sample sizes; then we use models to average polls and detect outliers; then we run new averaging models to detect house effects; and so on and so on, for nearly a dozen individual steps.
If a modeler isn't careful, this can introduce some problems — some of them practical and others statistical. First, it's hard to account for uncertainty in the average, especially when using ad hoc weights for sample size and other factors. That's because we potentially generate statistical error every time we move from one model to the next, and we have to run the program thousands of times every time we want to update! It's also a little more sensitive to noise than we'd like it to be, even when designed to accurately predict support in future polls given everything that came before.
So this year, we are introducing a new type of statistical model for averaging polls of presidential approval. It is similar to the one we used for polls of the 2024 general election, and derivations of it have been used to model approval ratings of government leaders in the United Kingdom and for elections in Australia. Oversimplifying a bit, you can think of our updated presidential approval polling average as one giant model that is trying to predict the results of polls we collect based on (1) the overall state of public opinion on any given day and (2) various factors that could have influenced the result of a particular poll. These are:
Finally, our prediction for a given poll also accounts for the value of the polling average on the day it was conducted. That's because if overall approval for a president is 50 percent, we should expect polls from that day to reveal higher support than if the president were at, say, 30 percent overall approval. This also means the model implicitly puts less weight on polls that look like huge outliers, after adjusting for all the factors above.
That raises the question of how, exactly, the average itself is calculated.
We use a random walk to model averages over time. In essence, we tell our computers that support for the president in national polls should start at some point on Day 1 and move by some amount on average each subsequent day. Support for the president might move by 0.1 points on Day 2, -0.2 points on Day 3, 0.4 points on Day 4, 0 points on Day 5, and so on and so on. Every time we run our model, it determines the likeliest values of these daily changes in presidential approval while adjusting polls for all the factors mentioned above. We can extract those daily values and add them to the starting value for the president's approval at the beginning of his term: That gives us his approval rating.
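The random-walk idea can be sketched in a few lines of Python. The starting value and the size of the daily moves below are made-up numbers for illustration; in the real model, the daily steps are latent parameters inferred from the polls, not random draws.

```python
import numpy as np

rng = np.random.default_rng(538)

# Illustrative sketch of the random walk: approval starts at some
# Day 1 level, then moves by a small amount each day. The average on
# any day is the starting value plus the cumulative sum of the daily
# steps up to that day.
start = 52.0                                   # hypothetical Day 1 approval
daily_steps = rng.normal(0.0, 0.3, size=100)   # hypothetical daily moves
approval = start + np.cumsum(daily_steps)      # approval rating each day
```

Fitting the model amounts to finding the likeliest values of those daily steps given the (adjusted) polls, rather than drawing them at random as this sketch does.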
(Actually, we run three different versions of our average, to account for the chance that daily volatility in public opinion changes over time. For every day of a president's term, we calculate one estimate of his approval rating by running our model with all polls from the last 365 days; one with all polls from the last 90 days; and one with all polls from the last 30 days. Then, we take the final values for those three models and average them together. This helps our aggregate react to quick changes in opinion while removing noise from periods of stability.)
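The ensembling step in the parenthetical above is just a simple average of the three windowed estimates. The numbers below are invented for illustration; only the three window lengths come from the text.

```python
# Illustrative sketch: three versions of the model, fit to polls from
# the last 365, 90 and 30 days, each produce an estimate for today.
# The published number is their simple average (values here are made up).
estimates = {"365-day": 44.1, "90-day": 43.6, "30-day": 43.0}
published = sum(estimates.values()) / len(estimates)
```

When opinion is stable, the three windows agree and the long window damps noise; after a sudden shift, the 30-day model pulls the ensemble toward the new level faster than the 365-day model alone would move.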
Finally, we account for any additional detectable error in a poll. This is noise that goes above and beyond the patterns of bias we can account for with the adjustments listed above. The primary source of this noise is sampling error, derived from the number of interviews a pollster does: A larger sample size means less variance due to "stochasticity" (random weirdness) in a poll's sample.
But there is also non-sampling error in each poll — a blanket term encompassing any additional noise that could be a result of faulty weights, certain groups not picking up the phone, a bad questionnaire design or really anything else that we haven't added explicit adjustment factors for. Our model decides how much non-sampling error is present across the polls by adding an additional constant to the standard deviation implied by each poll's sample size via the sum of squares formula (with the model deciding how large that constant should be).
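The sum-of-squares combination of sampling and non-sampling error can be written out directly. In this sketch, the non-sampling constant is passed in as an assumed value; in 538's model, its size is estimated from the polls themselves.

```python
import math

def poll_std_dev(p: float, n: int, nonsampling_sd: float) -> float:
    """Sketch of the sum-of-squares idea: a poll's total noise combines
    sampling error (driven by sample size n) with a non-sampling
    constant. nonsampling_sd is an assumed input here, not an estimate."""
    # Standard error of a proportion p with n interviews, in points.
    sampling_sd = math.sqrt(p * (1 - p) / n) * 100
    # Sum of squares: variances add, so standard deviations combine
    # as the square root of the sum of their squares.
    return math.sqrt(sampling_sd ** 2 + nonsampling_sd ** 2)

# A 1,000-person poll at 50 percent has ~1.6 points of sampling error;
# adding a hypothetical 2-point non-sampling term gives ~2.5 points total.
```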
Those familiar with our previous presidential approval averages will remember that they include error bands — shaded areas on our graphs that are meant to represent our uncertainty about the state of public opinion. Traditionally, this shaded interval has been calculated to show the range in which 95 percent of all polls are supposed to fall. For our average of Trump's second-term approval rating, we have adjusted this 95 percent interval to show the uncertainty we have about the average itself. You will notice the new interval is much smaller; that's because uncertainty about the average is much smaller than uncertainty about individual polls.
Our new model also lets us account for uncertainty in the polling average in a very straightforward way. Imagine we are not calculating support for a president one single time, but thousands of times, where each time we see what his support would be if the data and parameters of our model had different values. For example, what if the latest poll from The New York Times/Siena College had Trump's approval rating 3 points higher, as a poll of 1,000 people would be expected to have about 5 percent of the time? Or what if the population adjustment were smaller?
Our model answers these questions by simulating thousands of different polling averages each time it runs. That, in turn, lets us show uncertainty intervals directly on the average.
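Turning thousands of simulated averages into a shaded band is a matter of taking percentiles. The draws below are fake stand-ins for the model's simulations; only the 95 percent interval construction reflects the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: pretend the model produced 5,000 simulated
# polling averages for a given day (these draws are made up). The
# uncertainty band on the chart is a percentile interval over them.
simulated_averages = rng.normal(44.0, 0.6, size=5000)
lo, hi = np.percentile(simulated_averages, [2.5, 97.5])
# lo and hi bound the central 95 percent of simulated averages.
```

Because the interval is computed over simulated averages rather than individual polls, it is much narrower than a band sized to contain 95 percent of poll results.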
And that's it! 538's average of Trump's approval rating will update regularly as new polls are released. If you spot a poll that we haven't entered after a business day, or have other questions about our methodology, feel free to drop us a line.
Version | Change | Date
2.0 | Debuted new presidential approval average methodology for the Trump administration. | Jan. 28, 2025
1.3 | Added downballot primary polling averages and clarified that presidential general election averages have a different methodology. | April 25, 2024
1.2 | Added adjustment for partisan polls, updated population and sample-size adjustments. | Nov. 3, 2023
1.1 | Improved outlier detection, more stable averages, better optimization. | Oct. 9, 2023
1.0 | New favorability, approval and horse-race averages debuted. | June 28, 2023