
Why Ranking Critics by the Numbers Doesn't Tell You Squat

By Sam Adams | Criticwire April 30, 2014 at 11:40AM

A new study divides critics into shills and haters based on how often they agree with the herd.

We live in the era of Big Data, where there's no quality that can't be quantified. Nate Silver's FiveThirtyEight, which devotes itself to "data-based journalism," may be the first site organized around a methodology rather than an area of interest or a point of view -- the idea being that Silver's brand of ostensibly objective number-crunching is a selling point regardless of how or to what it's applied.

In "Screening for Hacks," Vocativ's Adam K. Raymond and Matan Gilat turns the tools of Big Data to America's film critics, ranking their top 50 critics against the average Metacritic score of the movies they reviews to determine, in their words, "Who are the most shameless cheerleaders -- and plain old hacks -- who consistently give movies better reviews than the rest of the pack?  Who are the nastiest grouches who rarely seem to like anything (Spoiler: They’re exactly who you think they are), and who are the straight shooters who reliably deliver reviews in the critical dead center?"

These are their results, delivered, of course, in handy infographic form.

Infographic

I could quarrel with Raymond and Gilat's methodology in a dozen different ways, beginning with the fact that Metacritic's scores are notoriously unreliable gauges of an individual review's assessment: It's not uncommon for a critic to write what they think is a lukewarm review only to have it slapped with a Metacritic score of 80, or vice versa, and though the site will sometimes alter the score if the critic writes in to complain that it doesn't reflect what they wrote, no working critic has time to police every rating. Then there's the fact that these 50 critics, selected by such nebulous qualifiers as "reach and reputation, the frequency of their reviews, and whether or not they’re still in the game, with a focus on those who write for newspapers, magazines and websites people actually read," are measured against a movie's overall Metascore, a weighted average calculated according to a proprietary formula to which the Vocativ authors have no access -- which is why the dividing line between shills and haters falls two-thirds of the way down their list rather than in the middle.
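
For a rough sense of the kind of comparison being made, here is a minimal sketch with invented critics and numbers, using a plain unweighted average; Vocativ's data and Metacritic's weighting formula aren't public, so this is an assumption-laden illustration, not their actual method.

```python
# Illustrative only: a naive version of the "shill vs. hater" comparison.
# The critics and scores below are made up, and the Metascore values are
# treated as given inputs, since Metacritic's real weighting is proprietary.
from statistics import mean

# Each entry: (critic's Metacritic-assigned score, film's overall Metascore)
reviews = {
    "Critic A": [(90, 74), (55, 60), (80, 71)],
    "Critic B": [(40, 74), (50, 60), (63, 71)],
}

def average_deviation(pairs):
    """Mean gap between a critic's score and the film's Metascore:
    positive means the critic rates above the pack, negative means below."""
    return mean(score - metascore for score, metascore in pairs)

for critic, pairs in reviews.items():
    print(f"{critic}: {average_deviation(pairs):+.1f}")
```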

But let's stipulate, for a moment, that Raymond and Gilat's methodology is sound, and that it provides an accurate record of which critics regularly rank big-budget movies above their peers. That tells us... what, exactly? In exploring the data, and awarding such titles as "Least Discerning Critic in America," they take it as read that critics who stick out from the herd are less valuable than those who vote with it. Owen Gleiberman, we're informed, not only liked "Confessions of a Shopaholic," but "gave it a higher grade than 'Goodfellas,'" while his former Entertainment Weekly colleague looked kindly on Britney Spears' "Crossroads." (Funny how their analysis denigrates female-focused films while holding up Martin Scorsese's tough-guy opus as the inarguable gold standard, innit?) Meanwhile, the Wall Street Journal's Joe Morgenstern is singled out as the "biggest hater" for the sin of rating movies 13 percent lower than his colleagues and nearly 23 percent lower than the films' average Rotten Tomatoes user ratings.

What strikes me looking over the breakdown is that, among the critics I consider the most perceptive and fun to read, there's no meaningful distinction between those who fall above the line and those below it. I never would have guessed that Wesley Morris rates movies 10 percent higher than A.O. Scott, and knowing it now doesn't change the way I think of either of them in the slightest. What Raymond and Gilat's method divulges is not really which critics are more or less discerning, but which run closest to the middle of the pack -- which is to say, who's the most average. No offense to the Chicago Reader's "straight shooter" J.R. Jones, who's singled out for most consistently agreeing with a movie's Metascore, but that's not actually a compliment.

Reducing a piece of criticism, which, when done right, is as much an argument as an evaluation, to a numerical score is problematic in the first place. It takes the art that critics have spent their careers honing and boils it down to the one part any idiot can do: The web hasn't proven that "anyone can be a critic," but it's certainly proven that anyone can slap a grade onto a work of art. If you want to know what everyone thinks, the user ratings on IMDb or Rotten Tomatoes are the best place to look (although it's fairly obvious that their user base skews heavily white and male). If you want to know what an individual critic thinks -- and, more importantly, how they think -- you turn to the review, which is to say an articulate and informed argument, and not simply the letter grade or star rating that sits atop it. (One of my pet peeves is the substitution of "good" for "positive" in describing a review, as if it weren't possible to write a "good" review whose overall judgment is negative.) It's worth noting that the bulk of the 15 critics below Raymond and Gilat's dividing line don't employ any kind of rating system, which might sooner suggest that Metacritic tends to assign unrated reviews lower scores than that those critics are a bunch of Debbie Downers.

For the most part, "Screening for Hacks" is harmless fluff, of interest only to those for whom a critic's "reliability" and her or his proximity to the mean are one and the same. But in perpetuating the idea that a critic's function can be reduced to a series of points on a graph, it devalues the actual work they do, at a time when critics need to underline that distinction at every turn. No one ever spent four hours slaving over a star rating.
