
Measuring Online Video Quality: Labels vs. Signals 

Picnic
Press team

There is a version of online video advertising that works exactly as promised. Sound on. Player front and centre. User watching. Ad seen, heard, considered. Brand value delivered. 

Video buying on the open web has been organised around two classifications: Instream and Outstream. Instream became the golden child and the default label for quality: safer, more premium, worth the higher CPM. Outstream became the catch-all for… everything else.

Most buyers narrowed to Instream-only strategies and assumed the label was doing the work. That ideal experience does exist; it just has nothing to do with the label.

Labels Measure Moments. Users Experience Journeys.

The IAB Tech Lab's Instream definition was built around what a video player looks like at page load: is video the primary content? Is sound on by default? Is the player properly integrated into the editorial environment?

These are all reasonable questions, but they describe a single moment, not what happens next. Users scroll, content shifts and attention wanders. The signals that actually determine whether an ad had a genuine opportunity to be seen and heard only become visible once the session is underway. This is the gap labels can't close.

Beyond the Label: The Signals That Define Video Quality

PIQ set out to measure video quality across the open web and didn't just find a quality gap but an entire canyon. It turns out that low-quality video inventory shares the same characteristics regardless of how it's labelled. The three most telling signals: ads playing out of view, concurrent players competing for the same attention, and audio that defaults to muted. Here's what the data shows:


Playing out of view: 23.85% of low-quality video impressions continue playing after the user has scrolled them entirely out of view. Technically the ad is serving, the completion event is firing, but nobody is watching. And it still counts as a completed view.

That metric has another problem. Many buyers rely on it as a proxy for engagement, but sticky video units, players that follow users down the page in a fixed corner after they scroll past the original placement, complete whether the user wants them to or not. Neither a genuine view nor a welcome one.

Concurrent video ads: 19.72% of low-quality sites run several video ads on the same page. Multiple players competing for the same attention on the same screen is not an environment built for viewing; it's one built to maximise sell-side revenue.

And if several video ads are playing simultaneously, how could advertisers possibly know which one captured attention, or whether any did? There is no reliable way to measure or attribute an outcome. Meanwhile completion rates and viewability scores still register. The confidence they imply doesn't.

Muted by default: 18.36% of low-quality video impressions default to muted. Sound-on is one of the central quality signals the Instream label is meant to guarantee. In practice, it's absent in nearly 1 in 5 impressions, before Chrome's browser-level muting policies even enter the picture.

Across high-quality inventory, the picture is completely different: out-of-view playing drops to 4.46%, concurrent ads to 3.73% and muted-by-default to 6.23%. Same label, yet completely different user experience. That gap is the difference between an ad that had a chance to work and one that never did. Between budget well invested and ad spend wasted.

See all of PIQ's Video Signals here.

Instream-Only is Not a Quality Strategy

There's another problem with leaning on Instream as a proxy for quality: it excludes around 90% of available supply. When buyers narrow to Instream-only, demand concentrates on a fraction of inventory. Auctions intensify, CPMs rise and those higher prices get interpreted as evidence of quality, when they are simply a function of scarcity.

Meanwhile, genuinely high-quality Outstream inventory (environments where ads load well, play in view and run without competition from other players) sits excluded by default. Not because the experience is poor or the impression lacks value for advertisers, but simply because the label says Outstream.

The Mislabelling Problem

As if that’s not enough, there is the mislabelling problem: labels that aren’t just imprecise, but deliberately wrong.

Instream inventory typically attracts double-digit CPMs, while Outstream sits in the single digits. The gap is wide enough that some publishers and SSPs do not always declare their inventory accurately. The Media Rating Council has even advised measurement vendors to treat intentional mislabelling as invalid traffic.

DSPs are in a difficult position: they rely on the labelling that publishers and SSPs declare in bid requests, and they can't see what's actually happening on the page pre-bid. They can apply filters, bidding only when sound is declared on or the player is above a certain size, but they can't determine whether the placement actually behaves as declared. The label is the only signal available, and it's frequently wrong. So the party applying the label (the publisher or SSP) is the one with the incentive to inflate it, the party relaying it (the DSP) can't verify it, and the party buying it (the buyer) has no idea what they actually bought.
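As a rough illustration, a declared-signals filter might look like the sketch below. The field names and values follow OpenRTB's video object (`plcmt`, where 1 means instream; `playbackmethod`, where 1 means autoplay with sound on; `w` for player width), but the threshold is invented for illustration, and the filter can only ever be as honest as the declaration it reads.

```python
# Illustrative pre-bid filter over *declared* OpenRTB video fields.
INSTREAM = 1            # OpenRTB 2.6 plcmt: 1 = instream
AUTOPLAY_SOUND_ON = 1   # playbackmethod: 1 = autoplay, sound on

def passes_pre_bid_filter(video: dict, min_width: int = 640) -> bool:
    """Accept only bids declared as sound-on instream in a large player."""
    return (
        video.get("plcmt") == INSTREAM
        and AUTOPLAY_SOUND_ON in video.get("playbackmethod", [])
        and video.get("w", 0) >= min_width
    )

# A declaration can tick every box and still be mislabelled on the page:
declared_instream = {"plcmt": 1, "playbackmethod": [1], "w": 1280}
muted_outstream = {"plcmt": 2, "playbackmethod": [2], "w": 300}

print(passes_pre_bid_filter(declared_instream))  # True (as declared, at least)
print(passes_pre_bid_filter(muted_outstream))    # False
```

The filter rejects obviously poor declarations, but a mislabelled placement sails straight through, which is exactly the problem described above.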

Time to ditch the label?

Complexity Covering for Waste

For advertisers navigating online video buying, it’s natural to reach for simplicity. Buy Instream. Apply verification tools. Trust the label. Move on.

The problem is that the complexity of the video landscape, the proliferation of formats, players, labelling standards and supply paths, has made it genuinely difficult to know what’s working. Verification tools sample impressions and extrapolate. DSP filters catch some mislabelling but miss behavioural signals. Brand safety scores assess content but not player behaviour.

The result is that a significant portion of video spend disappears into environments that look acceptable but deliver little in practice. Industry estimates put programmatic waste at roughly 25–30% of spend. For video, it's likely no better.

This is not primarily a fraud problem; most of it is entirely legal. It's a quality problem, and quality is not something labels were ever designed to measure.

Behaviour Is the Signal. Labels Are the Assumption.

Fixing this doesn't require abandoning what's already in place. It requires extending it beyond the page load moment and measuring how inventory actually behaves throughout the session.

The signals that matter are measurable and directly connected to viewing experience. Does the player maintain its size, or collapse into a sticky corner mid-session? Does the ad play in view throughout, or drift off-screen while the completion counter keeps running? Is sound on by default, or muted until the user happens to notice? Reasonable questions. Yet the industry simply hasn’t had the tools to answer them.
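To make those questions concrete, here is a hypothetical sketch of how periodic observations of a player could be aggregated into session-level signals. The sampling model, field names and thresholds are all invented for illustration; this is not PIQ's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class PlayerSample:
    """One periodic observation of a video player during a session."""
    in_view_fraction: float  # share of the player area visible in the viewport
    muted: bool
    width: int
    height: int
    concurrent_players: int  # other video players running at the same moment

def session_signals(samples, in_view_threshold=0.5):
    """Aggregate per-sample observations into session-level quality signals."""
    first, last = samples[0], samples[-1]
    return {
        # share of the session the ad spent at least half in view
        "in_view_ratio": sum(
            s.in_view_fraction >= in_view_threshold for s in samples
        ) / len(samples),
        "ever_muted": any(s.muted for s in samples),
        "max_concurrent": max(s.concurrent_players for s in samples),
        # did the player shrink mid-session, e.g. collapse to a sticky corner?
        "collapsed": last.width * last.height < 0.5 * first.width * first.height,
    }

# A session where the user scrolls past and the player sticks to a corner:
session = [
    PlayerSample(1.0, False, 640, 360, 0),
    PlayerSample(0.2, False, 320, 180, 1),
    PlayerSample(0.0, False, 320, 180, 1),
]
print(session_signals(session))
```

The point of the sketch is simply that every one of these signals is observable during the session, after the page-load moment that the label describes.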

Quality Video, built on PIQ's Inventory Intelligence, answers them directly. Rather than filtering by format label, it scores video supply against real behavioural signals, evaluating how ad units perform throughout the user journey. That means buyers can identify and curate genuinely high-quality inventory across both Instream and Outstream, filter out poor supply before spend begins, and base buying decisions on what actually determines viewing opportunity.

But video doesn't exist in isolation. A video player can behave perfectly and still sit on a page that undermines the surrounding experience. Quality Video evaluates both the video itself and the broader page context, using PIQ’s full suite of quality signals.

The result is not a narrower media mix. It's a better one, wider in reach, more accurate in quality. By evaluating inventory based on how it actually behaves rather than how it's labelled, Quality Video opens up high-quality environments that Instream-only strategies routinely exclude. Publishers that don't carry the right label but offer genuine viewing opportunities become visible. Static site lists get challenged. And buyers can now match inventory selection to specific campaign objectives, rather than defaulting to the same old filtered pool everyone else is buying from. More reach. Better reach. Not dependent on a label that the sell side controls and the buy side can't verify.

Don't Buy a Label. Buy Quality.

Instream was designed to identify premium video contexts. It was never designed to guarantee them. The label describes intent at page load but intent isn’t experience, and experience is what determines value. The open web's video quality problem is fixable. But not by applying more pressure to a classification system that is structurally incentivised to mislead. It requires measuring what actually happens on the page, how the video behaves, whether the ad is in view, whether anyone is watching.

That is what quality means. And it’s what labels, for all their convenience, have never been able to tell you.

Find out more about Quality Video here.
