Thanks to the numerous new technologies designed to measure whether a digital advertisement can actually be seen “on screen,” the true “viewability” of this advertising is now being questioned. For example, was your ad placed below the fold, in a position that required the user to scroll down to see it? If so, did that user actually scroll down? It turns out that only about half of online advertising is “viewable.” Imagine that. Fifty percent of your advertising is wasted, and now we know which half (see the prior John Wanamaker article: http://bit.ly/1RSG9ed).
These systems have grown quickly, in part to address the significant fraud that occurs in the digital advertising world. Most fraud is attributed to low-quality, quick-buck con artists who produce MFA (made for AdSense) sites and artificially inflate page impressions through spiders and bots: computer-coded robots designed to continuously read and load pages. Good spiders and bots exist to do useful things, such as reading your pages so they can be indexed in a search engine. Google’s spiders crawl your site frequently, and you want them to. But, as with any technology, in the wrong hands it becomes malicious.
How Technology Can Turn Malicious
Fraudsters use spiders and bots to load pages, generate false ad impressions and fake ad clicks, artificially driving up traffic figures and payments. This is referred to as “NHT,” or non-human traffic. We need systems to police for NHT because fraud happens, sometimes even unintentionally, on quality websites. For the most part, though, heavy NHT is limited to the seedy part of the Internet that would not make it onto a well-vetted medical media plan.
Viewability technologies are designed to ride along in the ad tag of each ad impression, measuring whether each specific ad unit was “viewable” when displayed and determining whether it was “viewed” by a human or by non-human traffic. Any Internet transaction requiring server hops creates technical and latency issues that cause discrepancies between the different measurement systems, even when they are designed to measure the exact same thing. Every additional server call also adds its own latency to the loading of an ad. That performance is measured in milliseconds, and it must stay efficient or risk hurting the user experience.
If you run Adobe (Omniture) analytics and Google Analytics on the exact same pages, for example, you will get different results from each. The same goes for viewability measurement tools and an ad server designed to count delivered impressions. If you get your campaign impression counts from a publisher’s ad server and those same counts from your agency’s ad server, you will get different numbers. It’s not uncommon for mature technologies (like analytics and ad servers) to show discrepancies of about 7% or more. Ten percent is considered “within an acceptable range” and not necessarily problematic or in need of deep investigation.
How Viewability Discrepancies—and Costs—Balloon
The industry has long chalked up these discrepancies as a cost of doing business and, for billing purposes, erred in favor of the advertiser (who has the lowest counts, since the agency ad server doesn’t start counting until the impression fully renders on the page). These discrepancies can balloon, however, as you layer systems together. If there is already a 10% discrepancy with a third-party ad server, and you then add a 20% discrepancy from a viewability tool, these numbers are mostly additive, so the discrepancy between advertiser and publisher counts can approach 30% in this example.
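To see how these discrepancies stack, here is a quick sketch using the hypothetical 10% and 20% figures above. The impression counts are invented for illustration only; each measurement layer only “sees” what the previous layer delivered to it, so the losses chain multiplicatively and land just under the simple additive total.

```python
# Hypothetical illustration of how measurement discrepancies stack.
# All numbers are assumptions for the example, not industry benchmarks.

publisher_count = 1_000_000      # impressions counted by the publisher's ad server

ad_server_discrepancy = 0.10     # third-party ad server counts 10% fewer
viewability_discrepancy = 0.20   # viewability tool measures 20% fewer still

# Each layer measures only what the previous layer passed along.
third_party_count = publisher_count * (1 - ad_server_discrepancy)
viewable_measured = third_party_count * (1 - viewability_discrepancy)

total_gap = 1 - viewable_measured / publisher_count

print(f"Third-party count:  {third_party_count:,.0f}")   # 900,000
print(f"Viewability count:  {viewable_measured:,.0f}")   # 720,000
print(f"Publisher-to-final gap: {total_gap:.0%}")        # 28%, close to the additive 30%
```

Chained this way the gap is 28% rather than a strict 30%, which is why the discrepancies are described as “mostly” additive.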
The Interactive Advertising Bureau (IAB) has defined a viewable impression as one where at least 50% of the pixels of the ad unit are “on screen” for at least one second. This is an essentially arbitrary definition, designed to give the industry a common target toward which to adjust its measurement tools and processes. Without a standard, we have chaos.
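The standard above boils down to a two-condition check. Here is a minimal sketch of that rule; the function and parameter names are illustrative only, not any vendor’s actual API, and real tools must also handle the harder problem of actually measuring on-screen pixels and continuous exposure time.

```python
# A minimal sketch of the IAB display viewability rule: an impression is
# "viewable" if at least 50% of the ad unit's pixels are on screen for at
# least one second. Names here are illustrative, not a real vendor API.

def is_viewable(pixels_on_screen_pct: float, seconds_on_screen: float) -> bool:
    """Return True if the impression meets the IAB viewable-impression standard."""
    return pixels_on_screen_pct >= 0.50 and seconds_on_screen >= 1.0

print(is_viewable(0.80, 2.5))   # mostly visible, long enough -> True
print(is_viewable(0.80, 0.4))   # on screen too briefly       -> False
print(is_viewable(0.30, 5.0))   # too few pixels visible      -> False
```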
What Is Considered Viewable
But why only 50% of the ad? If you were to see a partial brand logo, especially one of a large, well-known brand, wouldn’t you recognize it? After all, that’s part of the power of branding, right? And why only one second? In the early ’70s, subliminal advertising, or exposure to an advertisement for as little as 1/25th of a second, was deemed “contrary to the public interest” because of its impact on consumer behavior. So why are we suddenly calling micro-exposures of less than a second worthless or non-viewable?
The viewability tools produced by companies such as comScore, DoubleVerify, Moat and IAS are, by their own admission, still imperfect. There are things they cannot yet measure, code that prevents their code from working properly, and device types and systems that don’t permit them to measure at all, so many of these systems “fill in the gaps” with assumptions and extrapolations.
Additionally, using these tools adds steps to the already complex and error-ridden ad trafficking process. One mistake by a junior trafficker and it can look like your campaign has gone dark when, in fact, it’s running fine; it simply isn’t being measured because of the trafficking error. This is especially problematic, and creates exposure to liability, if billing is tied to viewability in any way.
Both publishers and advertisers agree that these new tools are valuable additions to our arsenal and excellent instruments for planning and optimization. Viewability should be considered when managing your media, and it is a metric publishers should strive to improve while balancing those improvements against the user experience.
However, chasing the aspirational goal of 100% viewability is a red herring. It is no more achievable than forcing everyone who reads a magazine to open to the page where your ad appears. Some do, some don’t. Still, the fact that viewability can be measured at all will give digital advertising a further advantage over traditional media in the long run.
The industry debate is now centered on agencies pushing to pay only for viewable impressions. As the value of non-viewable inventory drastically decreases, the value of, and prices for, viewable inventory will inevitably increase. This is good for quality medical publishers, which already have high viewability. But tying viewability to payment right now, given the immature state of the technology, is arguably premature. The technology is simply not ready, and the operational tools to make it work efficiently are not yet in place.