
Control tweak: Making the rules could lower online advertising costs

Associate Professor of Information Systems Sang Pil Han has found a work-around that may give advertisers insights for less cost. With a simple approach and some back-end analysis, advertisers can avoid the ad-valuation service charge from a demand-side platform and still know if what they’re paying for advertising is paying off.

By Betsy Loeff

Why do ads for goods you’ve been researching online seem to follow you from screen to screen? How does Jane, the ad buyer for that shoe brand you like, know when your eyes will be reading The New York Times online so she can serve up an ad just for you? She doesn’t. Online ads get sold through real-time auctions that take place through intermediary organizations called demand-side platforms (DSPs). The demand is from advertisers seeking access to eyeballs, while websites and online media offer up the screen-based real estate they have available to sell.

Meanwhile, DSPs charge advertisers a premium for performance-optimization and ad-valuation services. But Sang Pil Han, associate professor of information systems, has found a work-around: with a simple approach and some back-end analysis, advertisers can skip the DSP’s ad-valuation fee and still know whether their advertising spend is paying off.

Up for auction

What is a demand-side platform? It’s a broker of sorts that uses software to link advertisers with websites and online media. “Say there are two businesses that are interested in displaying a banner ad on my laptop screen,” explains Han. “One is willing to pay 20 cents for the ad, and one is willing to pay 30 cents,” he continues. The DSP runs the marketplace, determines each advertiser’s willingness to pay, and sells the ad to the highest bidder.

All of this happens on a per-screen basis reflecting the individual IP addresses of each person looking at the website. It also happens instantaneously. “There’s no human touch at all,” Han says. “Everything is automatic.”
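As a rough illustration of the mechanics Han describes (a toy sketch, not how any particular DSP is implemented), the snippet below resolves a single per-impression auction. The advertiser names and bid amounts are hypothetical, echoing the 20-cent versus 30-cent example above.

```python
# Toy sketch of one per-impression auction. Advertiser names and bids are
# hypothetical; real DSP auctions also fold in targeting data and run
# automatically, in milliseconds, for every screen that loads the page.

def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return the winning advertiser and winning bid for one impression."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# One business bids 20 cents for the banner slot, another bids 30 cents.
print(run_auction({"advertiser_a": 0.20, "advertiser_b": 0.30}))
# -> ('advertiser_b', 0.3): the DSP sells the impression to the higher bidder.
```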

Advertisers can control their bidding in these real-time auctions through decision rules based on target-audience settings and metrics such as a viewer’s propensity to click on ads, past buying behavior, and the likelihood of a sale to that viewer. At the same time, a DSP may offer premium services, such as ad-valuation research or performance optimizers: automated machine-learning algorithms designed to steer bidding toward goals like maximizing clicks or conversion probability.
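To make the idea of decision rules concrete, here is one plausible shape such a rule could take on the advertiser’s side. The field names, thresholds, and bid amounts are hypothetical illustrations, not drawn from Han’s study or any particular DSP’s interface.

```python
# Hypothetical advertiser-side bidding rule: gate on target audience, then
# scale a base bid by the viewer's predicted click and purchase propensities.
# All field names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Viewer:
    in_target_audience: bool
    click_propensity: float      # predicted probability of a click
    purchase_likelihood: float   # predicted probability of a sale
    past_purchases: int

def decide_bid(viewer: Viewer, base_bid: float = 0.20, max_bid: float = 0.50) -> float:
    """Return a bid in dollars, or 0.0 to sit out the auction."""
    if not viewer.in_target_audience:
        return 0.0
    # Weight the bid by how likely this viewer is to click or buy.
    score = 0.5 * viewer.click_propensity + 0.5 * viewer.purchase_likelihood
    if viewer.past_purchases > 0:
        score *= 1.2  # existing customers may be worth slightly more
    return round(min(base_bid * (1 + score), max_bid), 2)

print(decide_bid(Viewer(True, 0.08, 0.02, 1)))  # -> 0.21
```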

How much?

More than a century ago, retailer John Wanamaker said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” This quote remains popular because the problem persists, which is why DSPs offer ad valuation. The service provides research that helps advertisers determine how many ads a target customer should optimally see in a set period.

To pinpoint the value of ad impressions, the DSP uses test groups and control groups, each of which sees different numbers of the advertiser’s actual creative. Sometimes the control group sees a public service announcement from some nonprofit organization instead of the advertiser’s ad. Sometimes that control cohort sees “ghost” ads, which are promotions for products or services like those offered by the advertiser.

Whether they use PSAs or ghost ads, control groups show advertisers whether an ad placement was worth buying, because both tools let advertisers measure the lift in performance. With PSAs, if there’s no difference in performance between the real-ad group and the PSA group, the additional ad exposure wasn’t necessary. Ghost ads give a more apples-to-apples comparison between the test group and the control group: because DSPs may show the control group the second-highest bidder’s ad, the ghost ad may well be promoting a competing product, so if that group still clicks or buys, the placement is offering something akin to the advertiser’s product and adding value.
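A back-of-the-envelope way to read such an experiment is to compare conversion rates between the test group, which saw the real ad, and the control group, which saw the PSA or ghost ad. The sketch below computes that lift with invented numbers, not data from Han’s study or any DSP’s reporting.

```python
# Hypothetical lift calculation for an ad-valuation experiment: compare the
# conversion rate of users shown the real ad against a control group shown a
# PSA or ghost ad. The counts below are invented for illustration.

def conversion_rate(conversions: int, users: int) -> float:
    return conversions / users if users else 0.0

test_rate = conversion_rate(conversions=240, users=10_000)     # saw the real ad
control_rate = conversion_rate(conversions=200, users=10_000)  # saw PSA/ghost ad

absolute_lift = test_rate - control_rate
relative_lift = absolute_lift / control_rate if control_rate else float("inf")

print(f"Test: {test_rate:.2%}, Control: {control_rate:.2%}")
print(f"Lift: {absolute_lift:.2%} absolute, {relative_lift:.0%} relative")
# A lift near zero suggests the extra exposure wasn't worth paying for.
```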

Valuation experiments provide handy insights, but they don’t come cheap. As Han notes, the fees DSPs charge for such valuation exercises can be prohibitive for small-scale advertisers. Not only do advertisers pay the DSP for the testing, but they also pay for ghost ads. “It’s not pay-as-you-go for the advertiser. It’s pay-double-as-you-go,” he jokes.

Time and again

This is why Han wondered if there was a better way to determine the optimal number of ad impressions, and he recently tested an approach of his own.

First, Han and his research team had the DSP turn off its performance-optimizer feature. This was crucial to the experiment because, as he explains, “DSPs have goals, and DSP goals are not always aligned with those of advertisers.”

As an example, he points to an advertiser who wants to reach more prospective customers and sets a goal of reaching 1 million unique users. But what if the DSP gets paid on clicks or conversions to a sale? “If very few of those new users click on the ad or make a purchase in response to ad exposure, the DSP makes less money,” Han says.

That’s one of the reasons advertisers may want to work independently from the DSP's performance optimizers.

After turning those premium services off, Han’s team instructed the DSP to put a daily cap on the number of times given individuals would see the test ads: some people would see the ads three times a day, some six, some nine, and so on. Finally, after running the experiments, Han’s team used regression models on the data to determine what happens to outcomes like click and conversion likelihood as ad exposure increases.
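The article doesn’t spell out Han’s model specification, so the sketch below is only a plausible version of that back-end step: a logistic regression of conversions on daily exposures, with a squared term to allow diminishing returns, fit to simulated data. The caps, coefficients, and statsmodels-based workflow are assumptions for illustration, not the team’s actual analysis.

```python
# Hypothetical sketch of the back-end analysis: regress conversion outcomes on
# daily ad exposures (plus a squared term for diminishing returns), then
# compare predicted response at each candidate frequency cap.
# Data and coefficients are simulated, not Han's results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
exposures = rng.choice([3, 6, 9, 12], size=n)           # assigned daily caps
# Simulated "true" response: rises with exposure, then flattens out.
logit = -3.0 + 0.35 * exposures - 0.02 * exposures**2
converted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([exposures, exposures**2]))
model = sm.Logit(converted, X).fit(disp=False)

# Predict conversion probability at each candidate cap and compare.
caps = np.array([3, 6, 9, 12])
probs = model.predict(sm.add_constant(np.column_stack([caps, caps**2])))
for cap, p in zip(caps, probs):
    print(f"{cap:>2} exposures/day -> predicted conversion {p:.2%}")
```

In this simulated example the predicted conversion rate peaks around nine exposures per day, which is the kind of “optimal frequency” answer the regression step is meant to surface.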

The team found that this approach was as effective at determining the optimal number of ad impressions per day as the pricey valuation exercises DSPs use. What’s more, Han’s approach was more cost-effective because advertisers don’t pay for the performance optimizers or the valuation testing the DSP would otherwise run.

Han’s approach is more flexible, too. With this methodology, the advertiser can test things like whether keeping ads in front of a user’s eyes after a sale leads to additional sales. Likewise, advertisers can test whether opportunities are lost if ads stop appearing for website users who’ve already made a purchase. Han compares his approach to buying a box of Legos and building your own sculpture versus buying a pre-built Lego Eiffel Tower: it gives advertisers more control over how they construct experiments.

“The big question we were aiming to answer is, ‘Can advertisers evaluate ad effectiveness on their own?’ Our approach is a response to that question,” Han says. The answer is, “Yes, they can.”
