Why, Peter Suderman wonders, are music reviews so often positive, while film reviews seem to be equally split between hosannas and hatchet jobs? Peter suggests it has to do with the different cultures that have grown up around movie and music reviewing—which in one sense almost has to be true as a proximate explanation, but for that very reason doesn’t count as much of an explanation. Rather, it defers the question: Why are the cultures different? It can’t just be a question of historical path dependence. Suderman cites the formative influence of folks like Roger Ebert and Pauline Kael, but they’re pikers in the poison pen game next to the likes of Lester Bangs or Robert Christgau.
I think Tyler Cowen gets nearer the mark with the suggestion that movie reviews are geared more toward helping readers decide what to see, while music reviews cater to an audience already heavily subdivided by genre and subgenre tastes. He’s also probably right to expect that reviews will trend more positive as a function of cultural fragmentation.
What Tyler doesn’t say outright—maybe because he assumes it’s obvious—is that there’s a basic difference in the economics of film and music that’s driving all this. To wit: Movies are a lot more expensive to make than records, and so you see a whole lot fewer of them, aimed at a relatively broad audience, and backed by mass-media marketing campaigns.
So the movie reviewer faces a small pool of films you’ve probably heard about already, and basically rates all of them so you can decide which of the ones you’ve been considering are worth your while.
The music reviewer, by contrast, can sample far more albums, at least on a first pass, than a movie reviewer can see films, but can only actually write up a tiny fraction of that enormous output, even within a fairly circumscribed niche. And the music review reader is much less likely to come to the review knowing much about a given album, again because albums aren’t as heavily advertised, and because of the sheer quantity of them.
Given the underlying dynamics, it’s no surprise that the distribution of positive and negative reviews looks different, because the two types of reviews are serving different functions. And in a way, I think it’s almost the reverse of what Tyler lays out: Someone reading a movie review is probably moderately interested in the specific film already, based on trailers or ads or buzz, and wants to know whether it’s worth taking the plunge. Someone reading a music review may be interested in the broad genre (or set of genres) covered by, say, Pitchfork—but is mostly looking to be alerted to something novel. And there’s another sort of symmetry here: The moviegoer has probably seen a promising clip, and consults the review to see whether the movie is likely to deliver on that promise. The album shopper wants the universe of freely available music clips narrowed down—once that’s done, he can make the decision to buy based on his own listening experience.
6 responses so far
1 dhex // Nov 24, 2008 at 4:25 pm
i think volume has a lot to do with it. stuff that doesn’t hit the right way is simply not going to be reviewed.
pitchfork has a lot of negative reviews, though it largely appears to be flavored by trendiness and (occasionally) what seems to be spite.
2 Kevin B. O'Reilly // Nov 24, 2008 at 4:33 pm
Very good explanation. If you limited the pool of music reviews to those of the most popular, best-selling artists — IOW, those that are being reviewed out of obligation due to heavy marketing/consumer awareness — you’d see the negatives go up significantly.
3 Julian Sanchez // Nov 24, 2008 at 7:04 pm
Absolutely. And, vice versa, when you see a mainstream/mass-market movie critic review some obscure little indie film, it’s almost always a positive review — something the critic wants to get noticed.
4 Sandy // Nov 24, 2008 at 9:15 pm
To further Kevin’s point, from a time when there was less niche marketing of music, one of the most memorable negative reviews I ever read was a music review in (IIRC) Audiophile:
SADE: STRONGER THAN PRIDE
Review: …and faster than Sominex.
5 wph // Dec 3, 2008 at 9:27 pm
I don’t dismiss JS’ approach, but I think part of the problem is that it’s just more difficult to judge music. I think it was Steve Martin who said that writing about music is like dancing about architecture. Most music reviews don’t really review the music with any level of intelligence; they are more likely to describe what some of the lyrics are about.
The Grammys are broadly understood to reward the most popular artists, with popularity standing in for quality. The Oscars certainly have their flaws, but they don’t just nominate the most popular movies and call them the best.
6 shakti // Apr 2, 2010 at 7:00 am
I agree with you. Critics can create hierarchies of their top and bottom films of all time. These are called the Top 100 and Bottom 100 lists, though they aren’t limited to 100 films, nor do they need 100 films to exist. EaC automatically creates other lists based on them: for each critic, hierarchies of the actors, directors, and writers who appear or work in the films on that critic’s Top and Bottom 100, along with the number of listed films each one appears in. There are also several lists on EaC that reflect the overall community rankings, such as The Top 200 Movies and The Bottom 200 Movies, which are based on the community average.