Post by Bdub on Aug 18, 2012 22:59:45 GMT -5
Bias in PolitiFact’s Ratings: Pants on Fire vs. False
The self-styled "liberal counterpart" to my criticisms of PolitiFact delivered a critique approximately as inept as expected. Karen Street apparently failed to understand the study topic and ignored many of its key descriptions in her ill-informed, summary dismissal of the study.
Street:
It added up all the Falses and Pants on Fire from PolitiFact’s beginning through the end of 2011, then took a percentage of the Pants on Fire of the False + Pants on Fire, then compared that percentage between Democrats and Republicans as a way to show how many more Pants on Fire are given to Republicans.
The above is sort of accurate, though I described a number of refinements of the data that Street fails to mention (I did not count obvious party on party claims, for example) and the study shows the greater likelihood of Republicans receiving a specific subjectively-determined rating from PolitiFact, not the mere difference in the number of "Pants on Fire" ratings.
A serious review does not miss those types of points. They are very important.
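The metric Street describes amounts to simple arithmetic: for each party, divide the "Pants on Fire" count by the combined total of "False" and "Pants on Fire" ratings, then compare the two shares. A minimal sketch, using made-up tallies purely for illustration (these are not the study's actual numbers):

```python
def pof_share(pants_on_fire: int, false_only: int) -> float:
    """Share of a party's (False + Pants on Fire) ratings that are Pants on Fire."""
    return pants_on_fire / (pants_on_fire + false_only)

# Hypothetical counts -- NOT the study's real data.
dem_share = pof_share(pants_on_fire=20, false_only=80)  # 20 / 100 = 0.20
rep_share = pof_share(pants_on_fire=35, false_only=65)  # 35 / 100 = 0.35

print(f"Democrats: {dem_share:.0%}, Republicans: {rep_share:.0%}")
```

The point of normalizing by the combined False-or-worse total, rather than counting raw "Pants on Fire" ratings, is that it controls for how many false claims from each party PolitiFact chose to rate in the first place.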
It appears that PolitiFact Bias’ Bryan White may have gotten the idea from Eric Ostermeier’s much lauded study where he also took a look at a similar percentage
Ostermeier's study is quite different from mine. His looks at the possibility of selection bias based on PolitiFact's descriptions of its process and the resulting numbers for a set period in time. My study looks at the bias in the application of a rating according to PolitiFact's descriptions to see if the resulting subjectivity affects one party more adversely than the other. I did not get the idea for my study approach from Dr. Ostermeier. It's as likely that I got the idea for my study by translating a set of golden plates revealed by an angel. Selection bias isn't really much of a factor in my study, the exception occurring if PolitiFact selects more ridiculous stories from one party specifically to try to appear more fair.
I’m not sure how he could place such confidence or reliance in such a number as being “a clear bias against Republicans” or how in the world could he expect that “neutral judgment ought to result in approximately equal proportions of unfair Pants on Fire ratings.”
Street's confusion appears to result directly from her failure to appreciate the core proposition of the study: All "Pants on Fire" ratings are subjective and unfair. It directly follows that any substantial difference in application of a subjective standard between one group and another (any two groups) indicates a bias.
Street does not accept the obvious conclusion:
(I)t could be liberal bias, it could be Republicans lie more, it could be the rise of Fox News or the Tea Party, it could be a lot of things—but I don’t think just adding up the Pants on Fire as a percent of all claims found false validates any conclusion of liberal bias. There's no PolitiFact "Achilles' Heel" here as White wants his readers to believe.
It could be liberal bias, indeed. But the data are not explained by "Republicans lie more" because lying is not a subjective thing. The "Pants on Fire" rating is explicitly defined in terms of ridiculousness, and the study takes pains to make that clear, though apparently it wasn't clear enough for Street. As for "Fox News or the Tea Party," neither is in charge of determining which statements PolitiFact finds "ridiculous" in addition to merely false. PolitiFact is apparently solely responsible. It's nonsensical to place the blame elsewhere. Suggesting that the "Pants on Fire" rating is actually an objective determination (contrary to PolitiFact's descriptions) represents the sole reasonable explanation of the data apart from the one suggested in the study. Arguing for that explanation, of course, obligates the critic to suggest the nature of the objective criteria and somehow excuse PolitiFact for describing its criterion as though it were purely subjective.
All he has found is that when PolitiFact National makes a False ruling on a Republican, they are 74% more likely to make the False Pants on Fire, than they are for a Democrat. This doesn’t mean they’re doing it because they are liberally biased.
The study doesn't say the results mean the PolitiFact National staff is liberally biased. The study says the results show a bias, namely this one: when PolitiFact National makes a False ruling on a Republican, it is 74 percent more likely to rate the false statement "Pants on Fire" than it is for a Democrat. That bias, in turn, contributes to an inference to the best explanation: that the staff of PolitiFact National is biased toward the liberal side. That inference is suggested in the study but left to the reader to make.
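The "74 percent more likely" figure is a relative comparison of the two parties' shares, not a raw count difference. A quick sketch of the arithmetic, again with hypothetical shares chosen only to reproduce a ~74% gap (not the study's actual data):

```python
# Hypothetical per-party Pants-on-Fire shares -- NOT the study's real numbers.
dem_share = 0.200   # assumed share for Democrats
rep_share = 0.348   # assumed share for Republicans

# Relative excess likelihood: how much more often a false Republican claim
# draws "Pants on Fire" compared with a false Democratic claim.
extra = rep_share / dem_share - 1.0

print(f"{extra:.0%} more likely")
```

Note that a 74% higher *likelihood* is a much smaller effect than "74 percentage points more" would be; the ratio form is what the study (and Street's paraphrase of it) uses.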
Street's criticisms represent a parade of her own mistakes. She makes no relevant criticism of the study, and that's why I don't clutter my regular blogs by responding to this type of nonsense.