
The World's "Most Reliable" Movie Critic Speaks

By Sam Adams | Criticwire | May 9, 2014 at 10:04AM

The Chicago Reader's J.R. Jones, at the dead center of a study ranking critics by how often they agreed with their peers, speaks out against turning criticism into math.
Photo: Chicago Reader

Last week, the website Vocativ posted a ranking of English-language film critics according to how frequently they agreed with a movie's overall Metacritic score, using the results to place them on a continuum ranging from "Least Discerning" to "Haters." Smack dab in the middle was Chicago Reader critic J.R. Jones, who was honored with the title of "Most Reliable."

In the new Reader, Jones takes stock of his dubious distinction, grudgingly accepting that as the new "Mr. Reliable" he'll have to give up his traditional aisle seat for the middle of the theater's middle row. But he also registers a forceful objection to the conversion of thoughtful criticism into a numerical value, and to the pseudo-scientific analysis that inevitably follows.

Over the years, as contributors to the Reader's movie section have come and gone, I've always tried to impress on them that, as long as they write well, I couldn't care less whether they like a movie or not. Manny Farber, one of the greats, was known for the ambiguity of his reviews; from his work comes the ideal, imparted to me by my predecessor, Jonathan Rosenbaum, that precise, colorful description of a movie can be more helpful to the reader and more revealing of the writer than opinion, because opinions are like -- well, you know what they're like. Rosenbaum always fought like hell against the suggestion that every capsule review carry some sort of rating, though eventually the paper instituted the little backward R as a "recommended" icon throughout all the arts sections. Thumbs up, thumbs down; four stars, two stars, no stars; 59 out of a hundred -- there's no respect for words in this racket.

As a critic, I've reconciled myself to giving ratings, whether in terms of stars or letter grades, when I write for outlets that use them, but I still dislike the practice, which unavoidably lessens the value of the review itself. No matter the argument behind it, a starred or graded review will be remembered first and foremost by the rating attached to it, reducing the hard work of good criticism to the one part of the process any idiot can do. Readers, I've been told, prefer ratings, although that seems to be more an article of faith than the product of actual research.

I believe, in part because I have to, that intelligent readers will look past the stars and the grades to the substance of a review, and not end up quibbling that a B review sounds more like a B+. But ratings travel in a way arguments and insight do not, and when those ratings are remapped onto Metacritic's 100-point scale and those numbers are treated as hard data in turn, the signal is vastly overpowered by noise. 
