Frequently Asked Questions
Is this another tag-based solution?
There is no required tag or technology to use our services. We can run tagless or layer onto any existing tag on your campaign. Our technology is used internally to inform the human review of sites and apps, but we like to keep things simple by not handing off yet another technology to you.
Why do we need another safety / security vendor?
We believe a multi-layer approach is the only way to stay ahead of the bad guys. Planning a campaign around metrics alone can expose a brand to poor-quality, unsafe, and potentially fraudulent activity. By incorporating Trust Metrics at the planning stage, we ensure your brand is in the right environment before you target for viewability metrics and the like.
We work off a blacklist already, why do I need you?
Blacklisting is insufficient. The universe of poor-quality and unsafe sites is growing exponentially while the number of good sites remains relatively static. Even if you create a blacklist of 1,000,000 domains, there will always be more bad sites out there. We believe brands need to be exclusive, not inclusive, and define their world of acceptable inventory as a Whitelist at the planning stage.
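The exclusive-versus-inclusive distinction can be sketched as simple set logic. All domain names and lists below are hypothetical, purely for illustration:

```python
# Hypothetical domain lists. A blacklist chases an open-ended universe of
# bad sites; a whitelist is a closed set of pre-approved inventory.
blacklist = {"junk-site.example", "spam-farm.example"}
whitelist = {"news-site.example", "sports-site.example"}

bid_requests = [
    "news-site.example",
    "brand-new-fraud-site.example",  # never seen before, so not on the blacklist
    "spam-farm.example",
]

# Blacklist approach: anything not yet flagged slips through.
blacklist_passed = [d for d in bid_requests if d not in blacklist]
# → ['news-site.example', 'brand-new-fraud-site.example']

# Whitelist approach: only pre-approved inventory is eligible.
whitelist_passed = [d for d in bid_requests if d in whitelist]
# → ['news-site.example']
```

The new fraud site passes the blacklist check simply because nobody has flagged it yet; the whitelist rejects it by default.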
But don’t Whitelists reduce scale?
Yes and no. While it is true that reach may decrease with a Whitelist, we would argue that the scale removed is junk, NHT, or otherwise inflated numbers. A Whitelist may seem to inhibit scale, but in reality it ensures that a much higher percentage of your impressions are served to your target and will often reduce a campaign's effective CPM.
People are so used to inflated numbers that the truth can be hard to swallow. If you'd like to send us an impression log, we will happily run a free audit and show you where your money is being spent.
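The effective-CPM point can be made concrete with some arithmetic. All spend, impression, and validity figures below are hypothetical, not audit data:

```python
def effective_cpm(spend, impressions, valid_fraction):
    """Cost per 1,000 impressions that actually reached a real audience."""
    valid_impressions = impressions * valid_fraction
    return spend / valid_impressions * 1000

# Open buy: cheap on paper, but only 60% of impressions are valid.
open_buy = effective_cpm(spend=10_000, impressions=5_000_000, valid_fraction=0.60)

# Whitelisted buy: fewer, pricier impressions, but 95% valid.
whitelisted = effective_cpm(spend=10_000, impressions=4_000_000, valid_fraction=0.95)

print(round(open_buy, 2))     # 3.33
print(round(whitelisted, 2))  # 2.63
```

The whitelisted buy delivers 1,000,000 fewer raw impressions, yet each valid thousand costs less — which is the sense in which scale lost to a Whitelist was never real scale.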
Do you check for NHT?
While we do not have a technology that detects NHT directly, we end up eliminating well over 90% of it by being exclusive upstream with Whitelists. Fraudulent domains that serve implausible traffic numbers often look completely fine to a technology, since there are no major safety flags, but they quickly fail human review. These sites frequently have outdated layouts, poor design, scraped or aggregated content, and heavy ad clutter, so they would be caught upstream as low-quality publishers. Common sense can then determine whether the site could conceivably carry such traffic spikes (for example: if this site has a million unique users per month, why is there no UGC?).
At the end of the day, a certain amount of NHT is inevitable, and not all NHT is malicious; you wouldn't want to blacklist ESPN or NYTimes just because they carry some. For example, Trust Metrics is a crawler-based technology, so our own evaluation of domains will register as NHT to bot detection. NHT is definitely a problem, but being proactive and defining a Whitelist is a great first step toward addressing it.
Do you look at every site on the web?
No, we are not complete gluttons for punishment. Our database has a little over 750,000 domains and 30,000 mobile apps, representing most of the US-based, English-language ad-serving inventory available through networks, exchanges, and DSPs. We are constantly adding new inventory through network submissions, but we don't actively seek out inventory that will never be approved on a campaign (pornography, copyright infringement, sites that don't serve ads, etc.) unless there is client demand for it.
Can you create a list that is highly viewable?
We believe viewability is a measurement, not a target. We often see lists with viewability numbers through the roof, only to find that many of those sites are gaming the metric with heavy ad clutter. The best approach is to first weed out the bad apples by creating a Whitelist, then layer on viewability metrics to produce a highly viewable list.