AAN Publishers Seek Best Way to Identify Readers

Ratings Services Use Surveys to Discover Audience's Demographic Characteristics and Spending Habits

A Media Audit rep is telling the sales staff of an AAN paper how its print and website readers rank against the audience for everything from local shoppers to television stations—and how that ought to help them sell ads.

The rep scrolls through what seems like hundreds of charts, showing how many readers pick up the competitor (good news—the number is small) and the local business weekly (good news—the number is large, and those readers rank among the richest). The AAN paper’s readers even outnumber the local rock radio behemoth’s listeners in some segments of the population that advertisers always want to reach, the rep points out. “If you guys were a radio station, you’d be Godzilla,” he says. “It’s just an awesome audience.”

But simply knowing how many of the weekly’s readers have big incomes won’t help when ad reps pitch the paper to luxury car dealers, he maintains. “Isn’t the stereotype of your reader somebody who is usually pierced and tattooed, has no job? If you just schlep in the ranker [report], [the car dealers] will throw it away the moment you walk out the door: ‘Get a load of this, the dope-smoker paper, number one!’” So he demonstrates half a dozen other convincing ways to present the numbers. The sales staffers eye each other, obviously pleased.

Great tool, the ad director murmurs to a colleague shortly afterwards. If only my people would use it more often.

Publishers of even the most successful papers, from largest to smallest, say they depend on market comparisons and reader surveys to make the case that ads are reaching the particular audience a client is seeking. At the same time, publishers doubt many things about the research tools—from their reach to their accuracy. Who sits still for printed surveys anymore? With cell phones rampant, replacing landlines especially among the youngest adults—and with busy workers often away from home or screening their calls—who exactly are phone surveys finding? Are researchers talking to enough people, let alone the right people?

Several demographic research firms vie for AAN papers’ business

The media ratings service most AAN papers use is The Media Audit, a product of International Demographics in Houston, Texas. Through phone surveys in 82 of North America’s largest markets (covering the territory of more than 110 AAN papers), Media Audit collects data on demographics as well as consumer habits and plans. It then compares audiences among all local media. Subscribers to its syndicated findings can compare their readership to that of individual sections of the local daily, for instance, or rank their readers’ interest in particular consumer products against that of the readers (or listeners or viewers) of other local media. With such a broad reach and such in-depth data, Media Audit has made itself nearly indispensable in many markets.

AAN papers also turn to companies like Mediamark Research Inc. (MRI), Scarborough Research and Simmons Market Research Bureau, which provide market-wide assessments of an area’s consumer demographics (offering a core sample of every stratum in a local market) or paper-specific reader surveys (connecting more closely with a publication’s actual fan base). Verified Audit Circulation also studies reader demographics for some AAN members. Other papers, particularly those in Canada or in smaller cities and college towns not covered by Media Audit, rely on in-house surveys or the talents of local companies or universities instead. Such services are also easier to fit into the budget of a small paper.

Alternative Weekly Network, which solicits national ads for groups of AAN papers as well as some nonmember papers, asked Media Audit to compile a composite picture of the readership of the entire alternative paper industry. The results, gathered in 2002, covered 111 papers in 75 markets. The alternative weekly audience is slightly more male (52 percent) than female, while more than half are single and 39 percent fall within the age range most coveted by advertisers: 18-34. Add a mere 15 years to the top end of that age range and 72 percent of weekly readers are included. Seventy percent are college graduates or have attended college.

Media Audit also found that, compared to the general population, the AAN audience is more likely to whip out its collective wallet for a number of items guaranteed to make an advertiser salivate. Readers of alt-weeklies have much bigger plans than the average North American consumer to buy such things as stereo equipment (51 percent more) and video equipment (41 percent more), for instance. They are also habitual purchasers of beer, wine, Internet access, tapes, CDs, movie tickets and airplane tickets, at rates about a third higher than the average person’s.

Media Audit findings for individual AAN papers can vary widely, of course. As reported in the latest AAN directory, the median age for Reno News and Review readers is 46; for Omaha Weekly Reader, it’s 30. Median household income for Charleston City Paper readers is $42,988 and it’s $71,061 for the Hartford Advocate audience. Only 36 percent of L.A. Weekly readers hold professional or managerial jobs while that’s true for 60 percent of Houston Press readers.

Publishers question some research results

Though John Weiss, Colorado Springs Independent publisher, has not agreed with every one of Media Audit’s findings about his paper’s audience over the years, “it is the only resource that we have to compare [ourselves to] different media in town,” he says.

Media Audit pollsters, Weiss says, “over-sampled people one year who lived in the county and not in the city. They did not control for the urban-rural orientation.” The company worked with the Independent, Weiss says, to straighten out that particular kink.

He’s pleased with Media Audit’s results now. Still, he adds: “I wish they used a bigger sample, particularly for markets our size.” Currently, the company surveys 750 people in the Colorado Springs area; the paper’s circulation is 36,000. If only a quarter of those surveyed read the Independent, Weiss points out, and they are divided evenly between the sexes, that means fewer than 100 male and 100 female readers are included in the sample.
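
Weiss’s arithmetic is easy to check. A rough back-of-the-envelope sketch in Python (all figures illustrative; the reader share is an assumption, not a Media Audit number) shows how quickly a 750-person market sample shrinks once it is narrowed to one paper’s readers, and how wide the uncertainty on the resulting subgroups becomes:

    # Back-of-the-envelope sketch of Weiss's concern; figures are illustrative.
    import math

    market_sample = 750    # people Media Audit surveys in Colorado Springs
    reader_share = 0.25    # assumed: a quarter of respondents read the paper

    readers_in_sample = market_sample * reader_share   # about 187 people
    per_sex = readers_in_sample / 2                    # fewer than 100 each

    # 95% margin of error for a proportion estimated from ~94 people,
    # using the worst case p = 0.5: roughly +/- 10 percentage points.
    moe = 1.96 * math.sqrt(0.5 * 0.5 / per_sex)
    print(f"readers in sample: {readers_in_sample:.0f}, per sex: {per_sex:.0f}")
    print(f"95% margin of error per subgroup: +/-{moe:.0%}")

On those assumptions, any claim about, say, male Independent readers rests on fewer than 100 interviews, with a margin of error around ten percentage points; that is the substance of Weiss’s wish for a bigger sample.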

“But we come out very strong in this market compared to the daily and other media, so we like it,” Weiss says.

Not one AAN member would admit to using an audit or survey to judge editorial content, as opposed to its placement. “The best use [of it] is to train my reps as to what kind of characters we should go after,” Weiss says.

Chuck Leishman, publisher and acting ad director of Birmingham Weekly, found parts of Media Audit’s most recent demographic report “literally impossible,” since it showed worse figures for 2002 than for 2000—even though Birmingham had increased circulation. “But overall,” he says, “I think Media Audit has a good methodology in getting numbers out of markets.”

Ben Eason, publisher of The Weekly Planet in Tampa, and president/CEO of Creative Loafing Inc., calls himself “a big fan” of Media Audit. Eason says its findings have helped his staff target the unlikeliest of advertisers—local suppliers of snow skiing equipment—after the Planet’s audit found a high percentage of skiers among its readership. “No [sales] rep is going to walk into a ski shop throwing around a lot of bullshit” without the Media Audit backing the rep up, he says.

“I think the big pitfall is to think any piece of research is the truth,” says Jane Levine, chief operating officer of Chicago Reader Inc., which also owns Washington City Paper.

For many years Levine’s papers have done both types of reader research—market portraits and readership polls—first through Simmons and, more recently, through MRI as part of the Ruxton Group of alternative newsweeklies seeking national ads. Chicago Reader alone uses Media Audit to profile its readers; Washington City Paper employs Scarborough.

Surveys and market comparisons “each paint a slightly different picture but they don’t contradict each other,” she says. In Chicago, for instance, self-selected readers who returned surveys recently were a little bit whiter, richer and more educated than readers revealed by syndicated market research—but both results still skewed in the same direction.

With all that research at her disposal, is she satisfied with the data?

“I suppose not really. It’s not that I dislike the numbers we get. I just don’t feel that enough people are answering surveys. It used to be we would get 15 to 18 percent response. Now we’re getting 10 percent response. And it’s not because of the paper.” Its readership remains strong, she says. “I’m not distressed by the results themselves. I’m distressed by the state of the research industry,” Levine concludes, wondering whether its practitioners are still reaching the people they need to reach.

Rating firms use different methods to get results

“We were probably the first [media rating] company to go after the alternative press,” says Bob Jordan, cochairman and cofounder of The Media Audit. “I assume they have been fairly successful in using our data.”

As a sales tool for national ads, Media Audit is invaluable, says Mark Hanzlik, executive director of the Alternative Weekly Network. Media Audit’s aggregate data, covering 100 AAN papers, makes the sample size “pretty large” and also allows Hanzlik to pinpoint readership in college towns only, for example.

Jim Wolf, vice president of national advertising for Village Voice Media, uses Media Audit for its six papers, adding market assessments by Scarborough for New York and Los Angeles because it is well known to national advertising agencies. “I find a very strong reason for using both of them,” he says.

Media Audit works entirely by phone, surveying an average of 1,000 randomly chosen people in each market and weighting the results (according to updated census data) to reflect each market’s demographics. The company tosses out the responses of anyone who refuses to give an age, since it balances and weights by that criterion, Jordan says. The rating service does no estimating: the practice, used by some other companies, of applying certain responses of people in one demographic bracket to nonresponders in the same bracket.
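
A minimal sketch of the census weighting Jordan describes, assuming hypothetical age brackets and shares (these are not Media Audit’s actual cells): each respondent is weighted by the ratio of a bracket’s census share to its share of the sample, so under-sampled groups count more and over-sampled groups count less.

    # Minimal sketch of weighting survey results to census demographics.
    # Brackets and shares are hypothetical, not Media Audit's actual cells.
    census_share = {"18-34": 0.35, "35-49": 0.30, "50+": 0.35}  # census data
    sample_share = {"18-34": 0.25, "35-49": 0.33, "50+": 0.42}  # survey tally

    # Weight for each respondent in a bracket = census share / sample share.
    weights = {age: census_share[age] / sample_share[age] for age in census_share}
    for age, w in weights.items():
        print(f"{age}: weight {w:.2f}")
    # 18-34: 1.40, 35-49: 0.91, 50+: 0.83

This also suggests why a refused age is disqualifying: a respondent who cannot be placed in an age bracket cannot be weighted at all.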

“Every statistic in [a market evaluation] is an estimate, since it’s subject to standard deviation,” Jordan allows. What he calls “standard deviation” is a measure of sampling error: how closely figures drawn from a sample are likely to match the actual population. That error varies with the sample size and the market size, of course.
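
In survey terms the relevant quantity is the margin of error, which shrinks as the sample grows and, slightly, as the sample becomes a larger fraction of the market. A short sketch with illustrative numbers (the market size and sample sizes here are assumptions, not Media Audit’s):

    # Illustrative: sampling error versus sample size and market size.
    import math

    def margin_of_error(n, market, p=0.5, z=1.96):
        """95% margin of error for a proportion, with finite population correction."""
        fpc = math.sqrt((market - n) / (market - 1))
        return z * math.sqrt(p * (1 - p) / n) * fpc

    for n in (750, 1000, 2000):
        print(f"n={n}: +/-{margin_of_error(n, market=400_000):.1%}")
    # n=750: +/-3.6%, n=1000: +/-3.1%, n=2000: +/-2.2%

On these assumptions, a 750-person sample in a mid-sized market carries a margin of error of roughly three to four percentage points on any market-wide figure, before the sample is sliced into subgroups.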

But how does the company know people are answering their questions accurately? “Well, you don’t, really,” he says. “You assume that there’s no reason to lie. Then when you compare [Media Audit’s findings] with other data sources, you get a feel for how accurate you are.”

Scarborough Research—run by the same outfit that owns the radio research firm Arbitron and television’s ACNielsen—uses a mix of phone and print surveys in 75 markets, reaching 2,000 to 10,000 people per market, says Cynthia Methvin, Scarborough’s account director for print sales. An initial short phone survey (which includes questions about newspaper reading habits) is followed by a print survey assessing consumer activity, another phone survey about radio listening habits and a seven-day television-viewing diary.

Scarborough offers small cash incentives ($2 to $10) and multiple reminders via phone and mail to urge people to complete the process, particularly Hispanics and males 18 to 34, the industry’s toughest customers, says Gary Meo, Scarborough’s senior vice president for print and Internet sales.

The advantage of the combined phone/paper survey, opines Meo, is that Scarborough can get more information than Media Audit—about two-thirds more, he says. The disadvantage is that 53 percent of people on average don’t complete the mailed survey after taking the phone survey. “The choice we make is to ascribe respondent data to nonresponders” based on demographics—what the industry calls imputation. “You’re assuming that the responses of the non-ascribers would be the same as the ascribers. But that’s a fair assumption,” particularly because their findings are borne out by other independent assessments of the same markets.
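
A minimal sketch of the ascription Meo describes, with made-up demographic cells and answers: a nonresponder’s missing answer is filled in from responders in the same cell.

    # Minimal sketch of ascription ("imputation"): fill a nonresponder's
    # missing answer from responders in the same demographic cell.
    # All cells and answers are made up for illustration.
    from collections import Counter

    responders = [
        {"cell": "male 18-34", "reads_weekly": True},
        {"cell": "male 18-34", "reads_weekly": True},
        {"cell": "male 18-34", "reads_weekly": False},
        {"cell": "female 35-49", "reads_weekly": False},
    ]

    def impute(cell):
        # Most common answer among responders in the same demographic cell.
        answers = [r["reads_weekly"] for r in responders if r["cell"] == cell]
        return Counter(answers).most_common(1)[0][0]

    nonresponder = {"cell": "male 18-34", "reads_weekly": None}
    nonresponder["reads_weekly"] = impute(nonresponder["cell"])
    print(nonresponder)  # {'cell': 'male 18-34', 'reads_weekly': True}

The “fair assumption” Meo defends is exactly the step in the middle: that people who stopped after the phone survey would have answered the way demographically similar people who finished did.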

Both Meo and Jordan say unreachable cell phones are not yet a problem for their companies’ work—since the number of people using cell phones exclusively is still tiny—but they will be. Scarborough is developing Web-based and email surveys for a limited number of markets in the second half of 2004 and expects to go national in 2005, sending questions to respondents via email along with invitations to answer via the Web. Jordan believes the industry may move toward demographically representative panels of people for its online surveys.

Some papers turn to readership surveys

Smaller AAN papers that rely on readership surveys—via print or phone—say the surveys have advantages over market comparisons, and pitfalls as well.

In Nova Scotia, Canada, for instance, The Coast publisher Christine Oreskovich has so little competition for the dollars of Haligonians that she doesn’t even bother with consumer-trend questions on the Coast’s locally commissioned survey. Ithaca Times publisher Jim Bilinski says the paper’s independent survey got the same answers as one administered previously by Simmons.

Bruce Mitchell, publisher of The Athens News in Ohio, uses a two-page custom survey overseen by Dr. Ashok Gupta, professor of marketing at Ohio University, and “a group of gung-ho graduate students.” Gupta says he got about 200 responses regarding this 18,000-circulation paper in a market of 60,000.

Mitchell is happy with the results, he says; he has included some of them with advertisers’ bills and run others in the paper. Is such data convincing?

“We’ve done other surveys asking advertisers whether they believe our stuff or not, and that continues to be a challenge,” Mitchell says. He’s concerned that in surveys, just as in sweepstakes, women respond in disproportionately greater numbers than men. “We could not get an accurate sex breakdown using a survey.… Even on phone surveys, women answer home phones more often.”

Jeff vonKaenel, president/CEO of Chico News & Review in California, and of two other newsweeklies, is very happy with his phone survey, which uses the local University of California branch to call 400 people—1 percent of his circulation and 0.2 percent of the market. “I think that’s more accurate than having people send [a survey] in,” he says, then cautions: “They’re both poison—which one do you want to pick?”

He faces the same problem as Media Audit—finding cell phone users—but is confident of his survey results: “I’ve had the paper for 20 years. I’ve probably done six surveys. If they’re wrong, they’re consistently wrong.” And the time is long past when advertisers believed some cliché image of alt-weeklies: “[That] we say we reach a hell of a lot of people, and that they are younger than [readers of] the dailies, is not that shocking. It quantifies something people already feel.” Twenty years ago, he says, the claim was considered a little more suspect.

According to Niels von Doepp, director of sales for Verified Audit Circulation, advertisers look most to surveys for the old standbys—readers per copy, pass-along rate, retention. Only secondarily do they look at purchasing habits, incomes and demographics.

Doubters wonder about researchers’ methods

A few of the AAN publishers interviewed conclude that using any assessment requires “a leap of faith.” That includes Tim Keck, publisher of The Stranger in Seattle, who supplements Media Audit with a reader survey by MRI. “On the broad brushstrokes they kind of agree, but they could both be wrong in the same way. I always wonder, who fills out these things? I read The Stranger but I don’t respond to surveys on the phone.”

Some actual findings make Keck doubt the accuracy of market research, he says. In fall 2001, Media Audit reported that The Stranger’s readership (including pass-along copies) was 185,000; in spring 2002, the company found 207,000 total readers. Keck’s paper was back down to 166,000 readers in fall 2002, Media Audit said, then up to 216,000 by spring 2003. Obviously, such swings of 22,000 to 50,000 readers are not happening, Keck says.
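
Ordinary sampling error could plausibly produce swings of that size even if the real audience never budged. A rough illustration with hypothetical figures (the market and sample sizes below are assumptions, not Media Audit’s Seattle numbers):

    # Rough illustration: survey-to-survey swings as sampling noise.
    # Market and sample sizes are assumptions, not Media Audit's figures.
    import math

    market_adults = 2_000_000   # assumed metro adult population
    sample = 1000               # assumed per-survey sample size
    readership = 200_000        # roughly The Stranger's reported total readers
    p = readership / market_adults

    moe_share = 1.96 * math.sqrt(p * (1 - p) / sample)  # ~1.9 points
    moe_readers = moe_share * market_adults             # ~37,000 readers
    print(f"95% margin of error: +/-{moe_readers:,.0f} readers per survey")

On those assumptions, two consecutive estimates could differ by tens of thousands of readers without the paper gaining or losing a single one, consistent with Keck’s sense that the swings reflect the research rather than the readership.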

“Things have changed,” says Thomas Yoder, treasurer of Chicago Reader Inc. “We did our first [survey] with Simmons in 1981. Somebody puts a survey in front of you in 1981, that was a pretty unique thing and you felt, hmmm, this is interesting.”

Over the past two decades, Yoder says, survey response rates have dipped. Media Audit and firms with similar methodologies arose after 1981, but Yoder isn’t fond of their techniques:

“You can’t be sure you’re really talking to readers of your paper. … When you put a survey in a newspaper, you can at least be sure that person saw one copy.”

Plus, he says, “I think you need to look further than the reported reach of papers that have sort of a generic name to them—say, a City Paper.” Surveys gauging the readership of a distinctively named paper such as Willamette Week, by contrast, are more likely to produce realistic numbers.

Premiums for reader responses may also affect the data, Yoder suspects. “How can we be offering a premium? That’s got to skew the results. Basically Simmons and MRI say ‘You’re not going to get any response at all [if you don’t]’ or ‘I don’t think they skew results.’”

“There are other problems,” Yoder says. “People also tend to say yes to questions they think they ought to say yes to.” And, he adds, “the surveys that we put out tend to be too hard. They take too long to answer.” Yoder once tried to fill out his own survey when he found it inside a Chicago Reader on the street. “I couldn’t believe how long it took me to fill out the survey—and I’m the one who’d written it. I said, ‘This is a bitch. This is hard.’”

The readership research industry still has some refining to accomplish, he concludes: “It’s a complicated, hard thing. There aren’t clear, regular answers.”

Marty Levine is the news editor of Pittsburgh City Paper.
