by Pat Holt
Friday, July 13, 2001
BRINGING THE 'HARD NUMBERS' TO REGIONAL BESTSELLER LISTS
Writing about Bookscan, the data-collection system that's supposed to take the guesswork out of national bestseller lists (see #249), reminded me of the importance of regional bestseller lists and the lure of "hard data."
Shortly before I left the Chronicle, there was talk at the executive level about using "real numbers" when compiling the Sunday Book Review's Bestseller List.
"Every bookstore's got electronic inventory control, right?" an editor said. "Why can't you require these booksellers to report EXACTLY how many books they've sold of each title? Otherwise, what they're sending you now is some by-guess-and-by-golly ranking system, and when you put it all together, it's a lot of voodoo."
Oh, well, grumble, grumble. I had explained that the Chronicle compiled its bestseller list as most newspapers do: We asked about 15 stores (chains and independents) within our metropolitan area to rank their in-house bestsellers in the categories of Hardcover and Paperback. Their lists were faxed to us, and then we weighted each bookstore's "vote" according to sales volume - so a big city store might be a 5 and a smaller suburban store a 2, for example.
So right there were two examples of the by-guess-and-by-golly factor, the editor growled. "The booksellers do not provide hard numbers of actual copies sold to prove each book's place in the ranking. Hell, for all you know, they could be making it all up."
Correct, they could. "And then you assign them a subjective number according to your guess about their size, at which point you add up all these subjective figures and come out with a Bestseller List people think is scientifically compiled."
Tee hee, oh well, SCIENCE, now there's a word we can debate until sundown. If you think "hard numbers" will offer anything but quicksand, you're nuts.
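For the curious, the weighted tally described above can be sketched in a few lines. This is a minimal illustration only: the store names, weights, and points-per-rank scheme are invented, since the column describes the weighting idea but not the Chronicle's actual formula.

```python
# Hypothetical sketch of the weighted-vote tally described above.
# Store names, weights, and the points-per-rank scheme are invented
# for illustration; the Chronicle's actual formula isn't given.

from collections import defaultdict

# Each store reports its in-house bestsellers, best-selling first.
reports = {
    "Big City Books":  ["Title A", "Title B", "Title C"],
    "Suburban Corner": ["Title B", "Title C", "Title D"],
}

# Subjective weight the editors assign each store (e.g., a 5 vs. a 2).
weights = {"Big City Books": 5, "Suburban Corner": 2}

def tally(reports, weights, list_length=3):
    scores = defaultdict(int)
    for store, ranked in reports.items():
        for position, title in enumerate(ranked):
            # A title earns more points the higher it ranks in a store,
            # scaled by that store's editor-assigned weight.
            scores[title] += weights[store] * (list_length - position)
    # Highest combined score first = the published bestseller list.
    return sorted(scores, key=scores.get, reverse=True)

print(tally(reports, weights))
```

Note that both inputs are judgment calls - the stores' rankings and the editors' weights - which is exactly the "by-guess-and-by-golly" point the editor was making.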
Well, let's go back a bit, to 1982 when I felt, like many book editors at the time, that bestseller lists were popularity contests that made lemmings of us all: The New York Times Bestseller List was considered the most influential list in the land.
But week after week, as my colleague Bill Chleboun phoned the bookstore managers who reported their bestsellers to us (and talked to them long enough to make sure they weren't 'making it all up'), the more I could see how glorious it was (and is) for a newspaper to keep close tabs on local bookstores and create a kind of community mirror in which we all (readers and writers) saw ourselves reflected. So different was our list from the New York Times' list (which we ran each week as a comparison), and so beautifully did it define the character and appetite of our voracious regional book-buying public, that it began to tell a story of its own.
For one thing, new titles were showing up on the Chronicle list a LOT sooner than they would hit the New York Times' list - for example, an A.S. Byatt title was on the Chronicle list for six weeks before it got on the NYTBR.
For another, the Chronicle list began to reveal a shift in marketing trends nationwide, especially after Los Angeles replaced New York as the #1 book market in the country.
Instead of books selling first along the Atlantic Seaboard and traveling to the West Coast, which had been the traditional journey for decades, the trends were starting first in California and moving East.
Just watch the Chronicle Bestseller List, I would say - it's so prophetic, it's uncanny. It demonstrates how important ANY regional list can be in identifying the titles that aren't obvious big sellers, don't have money behind them yet are hitting a nerve with readers who buy hardcover books by the ton.
I argued with the Chronicle management that I wouldn't ask our reporting stores for "hard figures" because it would take them too long to find and report - and heck, many of the booksellers would not comply anyway. It was hard enough to convince managers of chain stores to contribute at all, and many independents felt it was AS MUCH ART AS SCIENCE to compile their own in-house lists as they gave them to us.
Further, if you want any confirmation that our list "works," I said, just watch the New York Times list a few weeks later and see how accurate Bay Area stores can be in reporting the hot ones that will go on to be national hits.
Finally, let's say we had a Bookscan that newspapers could use (fat chance - it's going to be so expensive that only the larger houses will be able to afford it): Somebody's got to sort through the "hard figures" and take out all the bibles, cookbooks, gardening catalogs, baseball abstracts, war statistics, government publications, academic works, monographs, course-adopted novels and hundreds of other titles to list the kind of books we already find on bestseller lists that are put out with data from opinionated booksellers and subjective book review editors.
And I say all of this knowing the first letter-writer below is right, too.
Dear Holt Uncensored:
I have to take issue with your unqualified support for Geoff Shandler's ostrich stance on Bookscan, and real sales figures in general. First, Shandler claims that "the charts got worse" after the introduction of Soundscan to the record industry. Leaving aside the question of why we should let Geoff Shandler tell us what kind of music we should like, the truth is that the charts got *real*. He's allowed to dislike best-selling music, but what he can't do is deny that it *was* selling the best.
The major winners under Soundscan were country music -- previously considered "hick music" and ignored by the cognoscenti -- and various types of "black" or urban music. There had been a racist undertone to the music industry's reporting for decades, which Soundscan broke -- suddenly, it was clear that hip-hop was selling in huge numbers, both to the supposedly unimportant urban audience and to suburban white kids. The teen pop movement, which seems to bear the brunt of Shandler's scorn, is entirely post-Soundscan, and has been primarily driven by demographics -- it would have happened anyway, and similar teen booms have happened several times in the past.
More importantly, I doubt Shandler actually has any golden pre-Soundscan age in mind, when only records he personally liked were selling well and everything was right with the world. Perhaps he's a huge grunge music fan -- if I recall my history right, that's what was in vogue right before Soundscan was implemented -- but I suspect that's not his point. I think he's saying that he doesn't like real sales numbers, because he can't finagle them. But that's the whole point of Soundscan/Bookscan; it gives people real numbers that aren't subject to manipulation.
Who knows what the winners would be under Bookscan? I suspect they would be, first, evangelical Christian works of various types, then, romances and, possibly, other things that none of us would even suspect. Both he and you seem to believe that you already know who the winners and losers would be, but the lesson of Soundscan is that the real bestsellers were a *surprise*. It's equally likely that the winners under Bookscan would be a surprise.
I also find it strange that Shandler claims that "businesspeople often settle for the least creative interpretation and manipulation of data" -- isn't *he* one of those very businesspeople that would interpret Bookscan information provided to his company? Is he claiming that he can't think up more creative solutions, and will simply have to sign up books he despises? Or is this merely another Dilbert-esque "managers are idiots, so you can't ever give them information" position? If you assume that everyone involved in the industry is an idiot, any policy or plan will sound bad.
In general, I don't see how ignoring the real world can ever be useful. If a particular book is selling, its publisher already knows it. But *other* publishers don't have that information. What Bookscan does that's so radical is to essentially share every publisher's proprietary information with every other publisher -- so, instead of having to do an educated guess at what the other guy's books are doing, you can *know*. Bookscan, as I understand it, also will not simply provide a single, massive list -- it has geographic and sales channel breakdowns. I would give my eye teeth to have a resource as useful as Bookscan has been promoted as being -- and, even if it only does half of what it's supposed to, it would be wonderful.
After the last few years, in which every bestseller list-maker tortured its entry criteria to keep the riff-raff out ("Oh, God, no! We can't have *Harry Potter* contaminating our precious list! That's a *children's* book! Send it away! Send it away!"), I have very little faith in the conventional lists and their continual manipulations.
What I want is a list, such as Bookscan promises, that doesn't assume that I'm a moron who has to be protected from certain categories of books. I want a list that's honest, which none of the current lists are. I want to know what books are really selling to readers, where those books are selling, and through which channels. As an editor who believes in his books and authors -- and who works in a genre that gets no glory and lots of abuse from people with elitist attitudes -- I want to know what's working and what isn't.
Those against Bookscan mostly seem to be protesting that if we know that their darlings are failures, they won't be able to buy any more darlings. Well, if those books really are losing great pots of cash, I assume the parent company already knows it. I don't see Bookscan changing that, unless it's to show the publishing community that *everybody's* darlings are losing great pots of cash.
My apologies for the length of this letter; my enthusiasms got away from me.
Dear Holt Uncensored:
"Shandler believes Bookscan data could be dangerous if publishers misuse it. Since the New York Times Best Seller List lost its hegemony in the mid-'90s (when it chose Barnes & Noble as its "exclusive online bookseller" and many independent bookstores stopped reporting to it), a free-for-all set in: Amazon and the chain stores stopped using it and started their own. The independents created the Book Sense list, as Cader notes. The Wall Street Journal and USA Today created their own as well.
"Now, since no single bestseller list dominates, Bookscan can move in and look awfully attractive to those who want it to do the work of publishing for them. As Shandler indicates, there's something alluring about the way ‘hard data’ seems to measure everything scientifically.
"It's much easier to say we have ‘real’ evidence that proves our decisions are right than admit what has always been a sticky truism about the book biz - that publishing is a crap shoot and that literature is better off because of it."
There has to be a reasonable and ethical alternative to the abuse of data, one that doesn't require suppressing or demonizing it.
What concerned me most when Bookscan was first announced was a different aspect of the issue: that access to this cash-register information would be available only to the larger publishers who could afford to pay an exclusive provider for it.
Now, in open-market ethics there is nothing wrong with Bookscan selling its services at whatever prices it can get. What does seem to run counter to the ethic of the free flow of information, and to booksellers' independence from any set of publishers, is this: booksellers are granting a single exclusive provider access to sales reports on all books from all publishers, with no assurance that the arrangement won't hand competitive and strategic advantages to the few who can pay, using data generated by the books of the many.
Holt Uncensored provides this forum for the free and uncensored exchange of thoughts and ideas from writers of all callings. The opinions expressed here are not necessarily those of Pat Holt or the Northern California Independent Booksellers Association.
"Holt Uncensored" is an online column by Pat Holt
To subscribe, send a blank email to:
To unsubscribe, send a blank email to: