13 Comments
David Hugh-Jones:

I don’t think adding more controls could ever fix this design. There are just too many potential confounds. You need an exogenous shock to state politics. There are plenty available - for example, narrow electoral wins are quasi-random, or you might use rainfall on election day.

It’s weird that whole disciplines still believe that you can get an unbiased estimate by just throwing in controls!
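
A minimal sketch of the first idea, assuming a hypothetical dataset of state elections (the file name and column names are made up): compare outcomes just above and below the 50% vote-share cutoff, a regression-discontinuity design, instead of piling on controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per state-election, with a later outcome column.
df = pd.read_csv("state_elections.csv")
df["margin"] = df["gop_vote_share"] - 0.5       # running variable, 0 at the cutoff
df["gop_win"] = (df["margin"] > 0).astype(int)  # treatment indicator
close = df[df["margin"].abs() < 0.02].copy()    # narrow wins and losses only

# Local linear RD: separate slopes on each side of the cutoff;
# the gop_win coefficient estimates the jump at the threshold.
rd = smf.ols("outcome ~ gop_win * margin", data=close).fit()
print(rd.params["gop_win"])
```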

Dr. Nathanial Bork:

Throwing in more related independent variables increases R-squared, but it will almost never get you to complete unbiasedness. An improved model isn't the same thing as a perfect model.
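
A quick simulation makes the point concrete. This is a hedged sketch with made-up data: a noisy proxy for an unobserved confounder is added as a control, R-squared rises and the bias shrinks, but the coefficient never reaches the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * u + rng.normal(size=n)            # "treatment", partly driven by u
c = u + rng.normal(size=n)                  # noisy proxy for u: a plausible control
y = 1.0 * x + 2.0 * u + rng.normal(size=n)  # true effect of x on y is 1.0

def ols(y, *cols):
    X = np.column_stack((np.ones(len(y)),) + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - np.var(y - X @ beta) / np.var(y)
    return beta[1], r2  # coefficient on x, R-squared

for label, cols in [("x only", (x,)), ("x + proxy control", (x, c))]:
    b, r2 = ols(y, *cols)
    print(f"{label:>18}: beta_x = {b:.2f}, R^2 = {r2:.3f}")
# Adding the control raises R^2 (~0.65 -> ~0.74) and shrinks the bias
# (~1.98 -> ~1.61), but beta_x stays well above the true 1.0:
# a better model, not an unbiased one.
```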

Sufeitzy:

Interesting. It’s something I’ve looked at on and off. During Covid I correlated blue-state/red-state mortality curves: blue-state mortality exceeded red-state at the onset, but by July the mortality curve flipped and red states outdistanced blue states. My model predicted the crossover to within the week, which startled me. (Curve-fitted SEIR.)

I’d be curious to correlate behaviors rather than simple mortality, which, as pointed out here, segments fairly severely by factors other than state.

For instance: which states smoke more, have worse obesity, more homelessness, more opiate use, etc.

Sufeitzy:

Ugh! You don’t want my “code” - I built an Excel spreadsheet. I downloaded mortality data every day and reran the model, using standard Excel functions to calculate the four parameters and its built-in solver to iterate until an objective function converged.

I used a number of parametric logistic functions as well as SEIR, and compared symmetric backcast accuracy among the functions over (I think) a rolling six weeks to choose the forecast parameters going forward. It was very stable around mid-May.

My initial forecast was millions dead in the US by 2021, but fortunately we isolated, and the I and R rates then came down fast.

It’s a variant on the planning I help clients with: use a spectrum of forecasting algorithms, seed them, and continually monitor forecast accuracy to choose both the algorithm and the parameters for prospective periods.

It’s quite easy when the forecast depth is one level, the horizon is day/week/month/quarter/year, and the refresh rate is daily.
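
For readers who want the gist outside Excel, here is a hedged Python sketch of the same workflow under my own assumptions (simulated data, a 6-week rolling window, a 1-week backcast); it is not the commenter's actual spreadsheet.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic: K = final size, r = growth rate, t0 = midpoint."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical daily cumulative deaths; a real run would download fresh data.
t = np.arange(120.0)
deaths = logistic(t, 100_000, 0.08, 60) + np.random.default_rng(1).normal(0, 500, t.size)

window, holdout = 42, 7   # ~6-week rolling evaluation, 1-week backcast
errors = []
for end in range(window, t.size):
    fit_t, fit_y = t[:end - holdout], deaths[:end - holdout]
    try:
        popt, _ = curve_fit(logistic, fit_t, fit_y,
                            p0=(max(fit_y[-1], 1.0) * 2, 0.1, fit_t.mean()),
                            maxfev=10_000)
    except RuntimeError:
        continue                       # skip windows where the fit fails
    backcast = logistic(t[end - holdout:end], *popt)
    errors.append(np.mean(np.abs(backcast - deaths[end - holdout:end])))

print(f"median 1-week backcast error: {np.median(errors):,.0f} deaths")
```

The same loop can be run over several candidate functions (other parametric curves, an SEIR fit), keeping whichever has the lowest rolling backcast error for the next forecast period.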

Dr. Nathanial Bork:

I'd be curious to see your model.

Doctor Hammer:

It is kind of shocking that they neglected to include demographics. I would think those would be the prime drivers: not just race but age and such. I'd have thought per capita income, race categories, and some age-distribution ranges would be the obvious first-choice controls to include, but maybe that's just an economist thing.

Dr. Nathanial Bork:

The omission of race was the biggest sin in their article.

José Duarte:

Nice that you posted this.

Nate is too generous toward Montez et al. (not to be confused with the solo Montez essay that Van Bavel and Knowles cited).

It's invalid, and the fact that it covers 50 states is irrelevant; covering 50 states doesn't make something valid. It's invalid for the reasons I go into in my report. We don't know what their gun control variable is, because they don't tell us. It has items like a “Brady law” when that's a federal law in force since the late '90s, an “assault weapon” ban when that was a federal law for a good chunk of their period, etc.

We don't know what most of their nine laws refer to, what the rubric was, what the state ratings were, or what it means to subtract zero from one, which is what they say they did.

There are more reasons it's invalid, like that they didn't control for race.

And as a reminder, they didn't actually find a general effect – Knowles lied about that. (This was the source he scrambled for as a fallback – they never cited it in their paper.) Their only effect was for women, which makes even less sense.

In fact, an effect for men, women, or the general population is mathematically impossible if we assume these arbitrarily chosen gun control laws affected life expectancy by reducing murder and/or suicide rates (it would have to be the bottom-line rates, not just the gun-specific ones). Those would be the obvious implied pathways, but they never actually articulate a theory – not Montez, not Knowles. Gun laws won't reduce cancer or something, so it has to be murder/suicide. But it turns out that even a chunky reduction in those wouldn't move life expectancy by anywhere near the 0.5 years they claim for women. (And women are much less likely to be killed with a gun anyway.)

There aren't nearly enough murders and suicides to drive such an effect, so no dice. The action is elsewhere. This is outlined in my report, with the life table example.
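
Here is a hedged back-of-envelope version of that argument, with round illustrative numbers that are mine, not the report's (the life-table example in the report is the real version):

```python
# Rough rule of thumb: deleting a cause of death changes period life
# expectancy by about (share of deaths from that cause) x (average years
# of remaining life for those who would have died). All numbers below
# are illustrative placeholders, not the report's figures.
female_deaths_per_year = 1_300_000  # rough US total, all causes
female_gun_homicides   = 2_500      # illustrative
female_gun_suicides    = 3_500      # illustrative
avg_years_remaining    = 40         # generous: gun deaths skew young

share = (female_gun_homicides + female_gun_suicides) / female_deaths_per_year
delta_e0 = share * avg_years_remaining
print(f"upper bound: +{delta_e0:.2f} years of female life expectancy")
# ~0.18 years even if EVERY female gun homicide and suicide vanished.
# A partial reduction from a handful of state laws lands far below
# the 0.5 years claimed for women.
```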

You guys all pretty much do that thing where you treat a published study as being valid by default, wanting to be nice to the authors, collegial, etc. No, just because something is published doesn't mean it carries any inherent validity or epistemic standing. The Montez et al. study is nothing; it has no standing. We can't do anything with it. If we don't know what someone did, if we can't see their ratings, data, etc., and they made big errors in what we can see, then it isn't anything. It shouldn't have been published, and it's just polluting the literature and our brains.

As far as your search for "gun" in their paper, try "firearm(s)". It won't help them much, but that's where their false claim is, where they cite the Montez essay.

Joshua Born:

"You guys all pretty much do that thing where you treat a published study as being valid by default, wanting to be nice to the authors, collegial, etc."

I did a double-take when reading this because Lee Jussim recently wrote "~75% of Psychology Claims are False" (https://unsafescience.substack.com/p/75-of-psychology-claims-are-false).

I'm pretty sure the Unsafe Science newsletter is about as far from a "published study is valid by default" mentality as you can get.

José Duarte:

Fair point. I was thinking of specific examples. Lee endorsed a fraudulent and invalid (separate issues) study by David Rand and Gordon Pennycook, and it seemed to be a case of just assuming something is fine from skimming it and from the fact that it's academic work.

They claimed that conservatives posted more misinfo than leftists, or Republicans more than Dems (they claimed both, but measured neither). It turned out they just coded links to different websites – a set of websites they chose – and they coded all links to the Daily Wire and the Daily Signal (Heritage Foundation) as misinfo.

Whereas links to CNN, MSNBC, WaPo, etc. were coded as not-misinfo.

They based this on a stupid survey from 2018 where they had random people rate how trustworthy a domain was (literal domains, like latimes.com), and people rated all the smaller, newer outlets as less trustworthy. They just converted that stupid survey into "misinformation" at the site level.

And they rigged their arbitrary set of websites so that most of the smaller outlets were conservative like DW.

They never identified any misinfo, or even tried to. The whole thing was a fraud, and rigged, and obviously we can't say that a link to a normal website like the Daily Wire is misinfo, i.e. that an article contains any false claims, based on nothing but an old opinion poll about the site.

So in that case, Lee wasn't paying attention. And a lot of what they did was hidden and undisclosed, which is normal in social psych. The field is full of fraud.

Lee Jussim:

Hey, thanks for the detailed post.

1. I will update the guns/firearms business. I have a question for you, though. Putting aside all the other stuff, am I reading the Montez 2020 article correctly? She had data on all 50 states' gun policies and life expectancies, and only reported a comparison between Mississippi and NY? I mean, even using her data (which I realize you believe are useless), she could have correlated gun laws with life expectancy (see the sketch after this comment) ... but just didn't?

2. You are accusing me of treating a study as valid by default? Hmmm, my default is that most studies fall somewhere between complete nonsense and trivially true, with a few rare exceptions. Now, this is true: I will often write a critique that tentatively presumes everything in the study is perfectly valid, and show that, even if so, the study STILL does not show what the authors claim. That, then, is the best-case scenario for the study; it goes downhill from there.
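
For reference, the correlation described in point 1 would be a one-liner if the data existed in usable form. A minimal sketch, with a hypothetical file and column names (no such dataset was published):

```python
import pandas as pd

# Hypothetical file: one row per state, with the (undisclosed) gun-law
# rating and a life-expectancy figure.
df = pd.read_csv("state_policies.csv")
r = df["gun_law_score"].corr(df["life_expectancy"])  # Pearson r, n = 50
print(f"gun-law score vs. life expectancy: r = {r:.2f}")
```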

José Duarte:

Secondly, the analytical approach is much more complicated than correlation. Nate probably should've talked about it. They used joinpoint and state-level fixed effects. The way we talk about an "effect" is way too crude. These stats are very complicated, and they're layered with epistemic risk. The researchers get to make so many decisions that affect the results, many of them undisclosed, and no one is checking the analyses or even the existence of the data. I think ultimately we shouldn't even be talking about studies for which no data (and code) are posted.

In this case, they say they coded the state laws for every single year from 1970 through 2014, but we don't know how they did it or what their variables mean (e.g. "dealer background checks", "Brady law", etc.).

And you have to think about how to properly test an effect of laws on life expectancy. When is a law supposed to kick in with respect to life expectancy? It depends on the law. I think they treated all laws the same here, but they don't explain it in any detail.

When you think about the outcome variable, across all these years, in relation to the predictors, it's extremely complicated. It's not clear how to do it right, or how fragile their joinpoint method is. This isn't like any normal regression, and they excluded virtually all the obvious covariates.
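
For readers who haven't seen this kind of model, here is a hedged sketch of a generic state fixed-effects panel regression (not their joinpoint specification, which is undisclosed), with a hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per state-year, 1970-2014.
df = pd.read_csv("state_year_panel.csv")

# Two-way fixed effects: state dummies absorb stable state traits, year
# dummies absorb national trends. Even this controls for nothing that
# varies within a state over time, which is where the confounds live.
model = smf.ols("life_expectancy ~ gun_law_score + C(state) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(result.params["gun_law_score"], result.bse["gun_law_score"])
```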

José Duarte:

Yes, in her solo essay she only has the table with descriptives for two states, and that's the only thing Van Bavel and Knowles cited in their junk paper. In her essay there is no other data presented; there are no analyses, significance tests, models, etc.

In the essay, she cites her Montez et al. study (2020, same year) but doesn't refer to firearms specifically, and doesn't provide any details. She says: "Using more robust statistical methods, a recent study merged annual data on states’ life expectancy and 18 policy domains, such as labor and abortion, from 1970 to 2014, showing key differences in the characteristics of states and their populations.[27] Its findings suggest that changes in state policy contexts suppressed gains in US life expectancy during the 1980s and again after 2010. It estimated that after 2010, the trend of US life expectancy gains would have been 25% steeper among women and 13% steeper among men if state policies had not changed in the way that they did."

It's not mathematically possible for the gun control laws to boost life expectancy by even 0.1 years through a chunky reduction in suicide and murder rates (indeed, the effect of some gun control laws seems to be to boost murder rates).

I don't know about the other law categories, but the study is invalid. Note that she has **physician-assisted suicide** in one of her categories, as a good "liberal" law that would increase life expectancy...
