Year | Governor (Incumbent) | Governor (Candidate) | Representative (Incumbent) | Representative (Former) | Senator (Incumbent) | Senator (Candidate) |
---|---|---|---|---|---|---|
2010 | 50 | 841 | 35 | - | 63 | 73 |
2011 | 50 | - | 435 | - | 100 | - |
2012 | 50 | 842 | 408 | 393 | 78 | 66 |
2013 | 50 | - | 433 | - | 100 | - |
2014 | 50 | 836 | 57 | - | 73 | 71 |
2015 | 50 | - | 435 | - | 100 | - |
2016 | 50 | 845 | 77 | - | 78 | 68 |
2017 | 50 | - | 435 | - | 100 | - |
2018 | 50 | 845 | 67 | - | 68 | 70 |
2019 | 50 | - | 432 | - | 100 | - |
2020 | 49 | 857 | 43 | - | 69 | 68 |
2021 | 50 | - | 433 | - | 100 | - |
2022 | 50 | 856 | 180 | - | 73 | 68 |
2023 | 50 | - | 434 | - | 100 | - |
2024 | 50 | - | 433 | - | 100 | - |
# (C)CES Data
The data are all from the (C)CES, which since 2009 has asked respondents how liberal or conservative various candidates and political figures are, using a 7-point Likert scale question. Here I provide some (possibly) useful details about the data.
## Candidates
### Bridging
A critical need in A-M scaling is for raters to rate entities that other raters also rate; these shared entities are sometimes referred to as "bridges". Bridges help pin down the common scale, making ratings comparable across respondents. In the (C)CES data there is a set of people and entities that everyone rates, but this set changes over time. In most cases it includes the President and the two major political parties.
Year | Democratic Party | Republican Party | Tea Party Movement | Supreme Court | House of Representatives | Senate | Mitt Romney | Jeb Bush | Ted Cruz | Rand Paul | Hillary Clinton | Donald Trump | Merrick Garland | Joe Biden | Kamala Harris | Barack Obama |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2010 | 51,096 | 50,864 | 46,517 | - | - | - | - | - | - | - | - | - | - | - | - | 51,392 |
2011 | 18,052 | 17,985 | 16,708 | 16,575 | - | - | - | - | - | - | - | - | - | - | - | 18,178 |
2012 | 67,058 | 66,292 | 62,939 | 62,946 | 17,203 | 17,359 | 64,710 | - | - | - | - | - | - | - | - | 67,694 |
2013 | 14,329 | 14,256 | 13,039 | 13,296 | - | - | 13,931 | - | - | - | - | - | - | - | - | 14,432 |
2014 | 56,586 | 56,446 | 52,023 | 53,009 | 8,621 | 8,637 | - | 40,757 | 34,864 | 37,064 | 48,553 | - | - | - | - | 58,029 |
2015 | 11,994 | 11,863 | - | 10,954 | - | - | - | - | - | - | - | - | - | - | - | 12,045 |
2016 | 56,293 | 55,591 | - | 52,375 | - | - | - | - | - | - | 57,072 | 49,490 | 5,501 | - | - | 57,416 |
2017 | 15,698 | 15,555 | - | 14,080 | - | - | - | - | - | - | - | 13,944 | - | - | - | - |
2018 | 53,726 | 53,409 | - | 50,924 | - | - | - | - | - | - | - | 49,826 | - | - | - | - |
2019 | 15,957 | 15,755 | - | 14,543 | - | - | - | - | - | - | - | 14,311 | - | - | - | - |
2020 | 53,017 | 52,347 | - | - | - | - | - | - | - | - | - | 48,380 | - | 53,037 | - | - |
2021 | 22,834 | 22,395 | - | - | - | - | - | - | - | - | - | 20,770 | - | 22,972 | - | - |
2022 | 53,242 | 52,333 | - | 49,397 | - | - | - | - | - | - | - | 49,301 | - | 53,259 | - | - |
2023 | 21,371 | 21,176 | - | 20,100 | - | - | - | - | - | - | - | 20,341 | - | 21,508 | - | - |
2024 | 54,403 | 53,379 | - | 50,943 | - | - | - | - | - | - | - | 50,376 | - | 54,567 | 54,678 | - |
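As a toy illustration of why bridges matter (this is not the actual estimation procedure, and all names and positions below are hypothetical), each respondent perceives positions through their own shift and stretch. Ratings of bridge stimuli that everyone rates let us recover each respondent's transform and invert it, placing their other ratings on the common scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical positions on a common liberal-conservative scale.
# "dem_party" and "rep_party" act as bridges that everyone rates.
theta = {"dem_party": -1.0, "rep_party": 1.0, "new_candidate": 0.4}

# Each respondent i perceives positions through an individual shift and
# stretch: y_ij = alpha_i + beta_i * theta_j + noise.
respondents = [(0.5, 1.5), (-0.3, 0.8), (0.1, 1.2)]  # (alpha_i, beta_i)

ratings = {
    i: {name: a + b * t + rng.normal(0, 0.05) for name, t in theta.items()}
    for i, (a, b) in enumerate(respondents)
}

# Bridges pin down the common scale: regress each respondent's bridge
# ratings on the bridges' common-scale positions (treated as known here,
# though A-M scaling estimates them jointly) to recover alpha_i and
# beta_i, then invert the transform for a non-bridge candidate.
bridge_names = ["dem_party", "rep_party"]
x = np.array([theta[b] for b in bridge_names])
recovered = {}
for i, r in ratings.items():
    y = np.array([r[b] for b in bridge_names])
    beta_hat, alpha_hat = np.polyfit(x, y, 1)
    recovered[i] = (r["new_candidate"] - alpha_hat) / beta_hat
    print(f"respondent {i}: rescaled position {recovered[i]:.2f} (true 0.4)")
```

Despite the three respondents reporting very different raw numbers, the rescaled positions all land near the common-scale value.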
## Panel Data
There are several years in which the CES has included a panel of voters surveyed across multiple years. I have opted to ignore the panel aspect and treat the waves as if they come from different people. This likely throws out some information, but it seemed like the best option. There are three ways to approach this:
1. Assume that voters have not changed at all across the survey panels. Under this choice there would be one \(\beta\) and one \(\alpha\) for each voter, used repeatedly across panels. This assumes that they have not changed their own political views in this time period, holding roughly the same understanding of liberal-conservative throughout. I thought this assumption was overly restrictive.
2. Use some sort of pooling model where each year's scores would help inform the other years', but not entirely constrain them. This could be implemented with a hierarchical prior or even a dynamic prior over time. Although I think this is interesting, the computational complexity concerned me.
3. Treat them all as independent. This, as I said, ignores potentially useful information but was feasible.
I might return to option 2 at some point; we will see what the future holds.
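For concreteness, the dynamic version of option 2 could look something like a random-walk prior on each panelist's parameters across waves \(t\) (a sketch only; the variance hyperparameters \(\tau_\alpha, \tau_\beta\) are my own notation, not anything estimated here):

\[
\alpha_{i,t} \sim \mathcal{N}\left(\alpha_{i,t-1},\, \tau_\alpha^2\right), \qquad
\beta_{i,t} \sim \mathcal{N}\left(\beta_{i,t-1},\, \tau_\beta^2\right).
\]

As \(\tau_\alpha, \tau_\beta \to 0\) this collapses to option 1 (one fixed \(\alpha_i, \beta_i\) per voter), while large values approach option 3 (effectively independent waves), which is what makes the pooling approach an appealing middle ground.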