
Why Ranking Critics by the Numbers Doesn't Tell You Squat

Features
by Sam Adams
April 30, 2014 11:40 AM

We live in the era of Big Data, where there's no quality that can't be quantified. Nate Silver's FiveThirtyEight, which devotes itself to "data-based journalism," may be the first site to be organized around a methodology rather than an area of interest or a point of view -- the idea being that Silver's brand of ostensibly objective number-crunching is a selling point regardless of how or to what it's applied.

In "Screening for Hacks," Vocativ's Adam K. Raymond and Matan Gilat turns the tools of Big Data to America's film critics, ranking their top 50 critics against the average Metacritic score of the movies they reviews to determine, in their words, "Who are the most shameless cheerleaders -- and plain old hacks -- who consistently give movies better reviews than the rest of the pack?  Who are the nastiest grouches who rarely seem to like anything (Spoiler: They’re exactly who you think they are), and who are the straight shooters who reliably deliver reviews in the critical dead center?"

These are their results, delivered, of course, in handy infographic form.

I could quarrel with Raymond and Gilat's methodology in a dozen different ways, beginning with the fact that Metacritic's scores are notoriously unreliable gauges of an individual review's assessment: It's not uncommon for a critic to write what they think is a lukewarm review only to have it slapped with a Metacritic score of 80, or vice versa, and though the site will sometimes alter the score if the critic writes in to complain that it doesn't reflect what they wrote, no working critic has time to police every rating. Then there's the fact that these 50 critics, selected by such nebulous qualifiers as "reach and reputation, the frequency of their reviews, and whether or not they’re still in the game, with a focus on those who write for newspapers, magazines and websites people actually read," are measured against a movie's overall Metascore, a weighted average calculated according to a proprietary formula to which the Vocativ authors do not have access -- which is why the dividing line between shills and haters falls two-thirds of the way down their list rather than in the middle.
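Out of curiosity about the mechanics, here's a rough sketch of the kind of number-crunching the Vocativ piece seems to describe -- my reconstruction in Python, not their actual code, with invented sample data and the assumption that per-review scores and Metascores share Metacritic's 0-100 scale:

from collections import defaultdict

# Each record: (critic, critic_score, metascore), all on Metacritic's
# 0-100 scale. The data here is made up for illustration.
reviews = [
    ("Critic A", 80, 65),
    ("Critic A", 70, 72),
    ("Critic B", 40, 60),
    ("Critic B", 55, 58),
]

deltas = defaultdict(list)
for critic, score, metascore in reviews:
    # Positive delta = the critic scored the film above the consensus.
    deltas[critic].append(score - metascore)

# Rank by mean deviation: the top of the list is the "cheerleaders,"
# the bottom the "haters." Note that the zero line has no reason to fall
# mid-list, since the Metascore is a weighted average no one critic sets.
ranking = sorted(
    ((sum(d) / len(d), critic) for critic, d in deltas.items()),
    reverse=True,
)
for mean_delta, critic in ranking:
    print(f"{critic}: {mean_delta:+.1f} points vs. Metascore")

Run over real Metacritic data, a tally like this would reproduce the shills-to-haters ordering the infographic presents -- and the dividing line lands wherever the weighted averages happen to put it.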

But let's stipulate, for a moment, that Raymond and Gilat's methodology is sound, and provides an accurate record of which critics regularly rank big-budget movies above their peers. That tells us... what, exactly? In exploring the data, and awarding such titles as "Least Discerning Critic in America," they take it as read that critics who stick out from the herd are less valuable than those who vote with it. Owen Gleiberman, we're informed, not only liked "Confessions of a Shopaholic," but "gave it a higher grade than 'Goodfellas,'" while his former Entertainment Weekly colleague looked kindly on Britney Spears' "Crossroads." (Funny how their analysis denigrates female-focused films while holding up Martin Scorsese's tough-guy opus as the inarguable gold standard, innit?) Meanwhile, the Wall Street Journal's Joe Morgenstern is singled out as the "biggest hater," for the sin of rating movies 13 percent lower than his colleagues and nearly 23 percent lower than the average Rotten Tomatoes user rating.

What strikes me looking over the breakdown is that, among the critics I consider the most perceptive and fun to read, there's no distinction between those who fall above the line and those below it. I never would have guessed that Wesley Morris ranks movies 10 percent higher than A.O. Scott, and knowing it now doesn't change the way I think of them in the slightest. What Raymond and Gilat's method divulges is not really which critics are more and less discerning, but which run closest to the middle of the pack -- which is to say, who's the most average. No offense to the Chicago Reader's "straight shooter" J.R. Jones, who's singled out for most consistently agreeing with a movie's Metascore, but that's not actually a compliment.

Reducing a piece of criticism, which, when done right, is as much an argument as an evaluation, to a numerical score is problematic in the first place. It takes the art that critics have spent their careers honing and boils it down to the one part that any idiot can do: The web hasn't proven that "anyone can be a critic," but it's certainly proven that anyone can slap a grade onto a work of art. If you want to know what everyone thinks, the user ratings on IMDb or Rotten Tomatoes are the best place to look (although it's fairly obvious that their user base skews heavily white and male). If you want to know what an individual critic thinks -- and, more importantly, how they think -- you turn to a review, which is to say, an articulate and informed argument, and not simply the letter grade or star rating that sits atop it. (One of my pet peeves is the substitution of "good" for "positive" in describing a review, as if it weren't possible to write a "good" review whose overall judgment was negative.) It's worth noting that the bulk of the 15 critics below Raymond and Gilat's dividing line don't employ any kind of rating system, which might sooner suggest that Metacritic tends to tag unrated reviews with lower scores than that those critics are a bunch of Debbie Downers.

For the most part, "Screening for Hacks" is harmless fluff, of interest only to those for whom a critic's "reliability" and her or his proximity to the mean are one and the same. But in perpetuating the idea that a critic's function can be reduced to a series of plots on a graph, it devalues the actual work that they do at a time when critics need to heighten that distinction at every turn. No one ever spent four hours slaving over a star rating.


12 Comments

  • Dave (or Fred, whatever, it's all good) | May 7, 2014 5:47 PM

    "...But in perpetuating the idea that a critic's function can be reduced to a series of plots on a graph, it devalues that actual work that they do..."

    But isn't that what critics do themselves? Reduce months/years of work into dismissive paragraphs of hyperbole, conjecture and opinion? Just a thought, here...

  • Fred (or Dave, either way) | May 8, 2014 11:22 AM

    Wait, "lousy" critics? Then what exactly does a "good" critic do, if not sit in judgement under the guise of "informative consumerism"?

  • Sam Adams | May 7, 2014 5:50 PM

    If they're lousy critics, then yes.

  • Thomas Prieto | May 1, 2014 8:25 PM

    I think you guys meant J.R. Jones, not his Chicago Reader colleague, Ben Sachs.

  • Michael Denvir | April 30, 2014 8:19 PM

    I'm agnostic as to the value of this infographic, but I will point out that it actually measures the numbers assigned to critics' reviews rather than the reviewers themselves, so it might be helpful as a guide to evaluating the Metacritic score assigned to a particular review from one of the outlying reviewers.

  • Sam Adams | April 30, 2014 11:48 PM

    I think that's right. As I mention at the end, it seems to suggest Metacritic assigns lower scores to reviews published without explicit ratings, although you'd have to do a more purposeful examination than I have the energy for to find out how true that is.

  • Greg Cwik | April 30, 2014 5:21 PM

    Lumping together all critics from all forms of media is innately silly; an online-based writer, like Scott Tobias or Keith Phipps or Dana Stevens, has far more freedom and a more malleable word count with which to work. Their audience is more nebulous, at once niche and arguably wider-ranging. Writers at EW (I refuse to write out the words, as "ew" feels far more apt for what that magazine has become) or the Tampa Bay whatever are writing for different readers than The Dissolve, Slate, AV Club and Indiewire (the last two sadly absent from the faux-study). In short, this is stupid, and I'm a 24-year-old curmudgeon.

  • Sean Axmaker | April 30, 2014 1:23 PM

    Wouldn't the "least discerning" critic, if you are using statistics rather than content as your guide, be the critic who gives more positive reviews to films at the bottom of the Metacritic (or Rotten Tomatoes or whatever bar you're measuring by) scale than any other critic? It's a meaningless experiment however you crunch the numbers because numbers don't measure critical engagement or quality of writing.

  • Fernanda | April 30, 2014 12:47 PM

    We will some day come to realize that anyone can be a critic. "Anyone" meaning any person who can construct an intelligible argument about their sensory-intellectual experience. And we will come to realize that all our pretty theories about the supposed value of certain objects (and assuming films have value is QUITE a presupposition) are completely ridiculous and the result of desperate clinging to a very distant tradition, which is kept alive in a way very similar to Poe's Valdemar. Words like beauty, genius and value are nothing but the empty vessels of a bygone era and its greatest hopes and fears. The conditions of possibility for such wordings are, now, non-existent.
    The future of art criticism as "specialized work" -- based on class privilege, ideological entrapment, and a very clear disdain for the "stupid masses" -- is very much like the future Nietzsche dreamed for philology: disappearance.

  • Louisa | April 30, 2014 12:47 PM

    Owen Gleiberman, an excellent critic in my opinion, was recently fired by Entertainment Weekly after working there for two decades so the magazine can publish unpaid reviews by unknown critics. Disgusting! I will never read "EW" again.

  • Keith Phipps | April 30, 2014 12:46 PM

    I used to be the critical consensus (or "most conformist" or whatever) the last time someone crunched these numbers. (A 2009 Slate piece called "Can A Film Critic Be Too Contrarian," which I'm not being allowed to link to.)

    Is my ascent to generosity progress? I don't know!

  • Vince Mancini | April 30, 2014 12:05 PM

    I can't even fathom being so dumb and boring that your idea of a "good critic" is someone who's the nipple on the bell curve. You certainly don't need a graph to find yourself a boring, predictable film critic.
