Brand Safety Policy
Adbustr operates a brand-safety framework aligned with the IAB Tech Lab's Content Taxonomy 3.0 and the GARM Brand Safety Floor + Suitability Framework. The content categories listed below are restricted by default across our exchange, and contextual classification is performed at the impression level before a bid request is surfaced to buyers.
Restricted categories (default block)
- Adult / sexual content (incl. nudity in non-news contexts)
- Hate speech, harassment, and content targeting protected classes
- Graphic violence, gore, and shock content
- Illegal goods, drug paraphernalia, and unregulated weapons
- CSAM and exploitation — zero tolerance, immediate enforcement
- Misinformation about health, elections, or major civic events
- Terrorism, extremism, and recruitment content
- Piracy, IP infringement, and counterfeit goods
Contextual analysis methodology
For each candidate impression, we score:
- Page / app context against the IAB Content Taxonomy
- Sentiment and risk classification of surrounding editorial content
- Publisher historical brand-safety incident rate
Advertisers may layer additional suitability tiers (Limited / Standard / Expanded) on top of the default-blocked categories above; a simplified illustration of how these signals and tiers combine is shown below.
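To make this concrete, the following is a minimal, illustrative sketch (in Python) of how the three scored signals and an advertiser's suitability tier could combine into a pre-bid eligibility decision. All category names, weights, and tier thresholds here are hypothetical assumptions for illustration only; they do not describe Adbustr's production classifiers or values.

```python
# Illustrative sketch only: a simplified pre-bid brand-safety gate combining the
# three signals described above. Category names, weights, and thresholds are
# hypothetical placeholders, not production values.
from dataclasses import dataclass

# Default-blocked categories (abbreviated; see the restricted list above).
DEFAULT_BLOCKED = {"adult", "hate_speech", "graphic_violence", "illegal_goods",
                   "csam", "misinformation", "extremism", "piracy"}

# Hypothetical maximum risk tolerated per advertiser suitability tier.
TIER_MAX_RISK = {"limited": 0.2, "standard": 0.4, "expanded": 0.6}

@dataclass
class Impression:
    taxonomy_categories: set[str]   # IAB Content Taxonomy labels for the page / app
    editorial_risk: float           # 0.0 (safe) .. 1.0 (high risk) from the sentiment/risk model
    publisher_incident_rate: float  # historical brand-safety incident rate, 0.0 .. 1.0

def is_eligible(imp: Impression, tier: str = "standard") -> bool:
    """Return True if the impression may be surfaced to bidders."""
    # 1. Default block: any restricted category removes the impression outright.
    if imp.taxonomy_categories & DEFAULT_BLOCKED:
        return False
    # 2. Blend contextual risk with the publisher's historical incident rate
    #    (weights are illustrative assumptions).
    risk = 0.7 * imp.editorial_risk + 0.3 * imp.publisher_incident_rate
    # 3. Apply the advertiser's chosen suitability tier.
    return risk <= TIER_MAX_RISK[tier]

# Example: a news page with moderate editorial risk on a low-incident publisher.
example = Impression({"news", "politics"}, editorial_risk=0.25, publisher_incident_rate=0.05)
print(is_eligible(example, tier="standard"))  # True under these assumed thresholds
```

The point this sketch illustrates is ordering: default-blocked categories are excluded outright before any suitability threshold is consulted, so tier selection only ever relaxes or tightens treatment of content that has already cleared the floor.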
Third-party verification
Brand-safety classification is independently verified through our Pixalate integration. Quarterly aggregate brand-safety metrics will be published in our Transparency Center beginning Q4 2026.
Reporting concerns
Brand-safety issues should be reported to compliance@adbustr.com with the originating impression ID where available. We commit to a first response within 24 hours for reported category violations.