
The Anti-Fraud Checklist for YouTube Promotion: How to Vet Traffic Sources Without Killing Your Recommendations

9 min read

Cheap views can quietly poison your channel faster than a bad video can.

People have known for a long time that YouTube does not just count views on a video; it also learns from how those views behave. Bad early traffic sets the wrong expectations about who watches your content, and those expectations are then used to test whether your later videos will succeed with the wrong crowd. Everyone wants a quick boost to get things moving, and creators are often tempted to try something like a YouTube promotion package as a limited pilot or "experiment" (say, pushing a niche hip-hop beats video), but a purchase like this deserves procurement-level planning and measurement before you even decide whether it's a good idea.

Every musician I know gets a thrill on the first day of a push, then spends the rest of the week confused about why the release stopped getting Suggested impressions. The reason is that the initial push delivers a lot of very low-intent clicks that bounce within 10-20 seconds, which warps YouTube's model of who your viewers are. You can spend hours trying to fix this, but the training data is already baked into your analytics history, and it is very hard for YouTube to "unlearn" it.

Why bad traffic breaks Suggested

YouTube's recommendations are a feedback loop. Promoting a video to a large audience of casual, low-intent viewers typically hurts in two ways: it depresses performance signals (watch time and the like), and it skews who the platform's algorithms think should watch your channel next. The second effect is often overlooked, but it shapes how your channel performs over time. I watched one team drive a newly launched channel to roughly 5 million views of low-engagement content, only to see Browse and Suggested traffic sit flat for weeks afterward, because the algorithm had learned to keep showing the video to that poor-fit audience.

Also consider the policy implications. If a vendor is faking engagement, you are the one left holding the bag. YouTube explicitly calls out invalid traffic and deceptive practices in its creator policies, which are worth rereading at least once a year because they keep changing and getting more detailed.

The vetting checklist (questions that make vendors squirm)

I approach promotion sourcing the same way I approach buying email lists: if the vendor won't show their work, walk away. You are not asking for trade secrets; you just want enough transparency to confirm that real humans are choosing to watch your video.

  • Traffic source transparency: find out exactly where the promotion runs, i.e. whether it is displayed on YouTube itself, in an app, embedded in a website, placed on a partner site, posted by an influencer, or sent to an email list. "A network" is not an acceptable answer; you want to know where your money is going. And if the vendor quotes an opportunity-to-see (OTS) style metric, ask whether it actually reflects engaged viewing.
  • Targeting details: ask for a written description of how viewers will be selected, including the keywords, topics, channels, geographies, languages, and device categories that match your goals. Note that an "all countries" target usually means the vendor is buying the cheapest inventory available.
  • Expected watch patterns: ask what typical viewing behaviour looks like in your niche. A music video has a very different drop-off point than a tutorial, and neither should produce a wall of flat 0:10 watch times.
  • Click path and landing context: ask how viewers actually arrive at the video. Tossing someone directly onto a video with no context (or with autoplay doing the work) typically produces brutal early exits.
  • Frequency and pacing: ask how many times the same person will see the promotion. Over-frequency breeds annoyed clicks and short sessions.
  • Exclusions: can they exclude existing subscribers or people who have already viewed the video? If not, the story that you are reaching a new audience begins to fall apart.
  • Brand safety: will the promotion run in places prone to accidental playback (kids' apps, low-quality sites, forced browser redirects)?
  • Refund policy: how do they define and refund invalid traffic? A vendor that can't define "invalid" can't be held accountable for it.

What to demand in reporting

For the sake of your sanity, you should be able to reconcile the vendor's claims against your own YouTube Analytics. At minimum, demand a report that breaks down their delivery against your normal baseline over a comparable time frame (same days of the week, similar time of day, videos uploaded around the same time). And insist on exportable data, not just a screenshot. A rough reconciliation sketch follows the list below.

  • UTM-style labeling or clear campaign identifiers (so you can isolate the test in your analytics).
  • Breakdown by geo and device, even if it is messy.
  • Traffic sources on YouTube: External, Suggested, Browse, Search, Channel pages, Notifications. If everything lands in External and nothing else moves, the push probably produced no meaningful downstream discovery.
  • Average view duration and retention curve snapshots. A single average number hides a lot.
  • New vs. returning viewers, and subscriber change during the window.
  • A list of placements or publisher IDs for any off-platform delivery.
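
Here is a minimal sketch of that reconciliation in Python. The daily numbers below are made up for illustration; in practice you would load them from an Analytics export, matching the baseline days to the campaign days as described above.

```python
# A minimal sketch of reconciling a vendor's report against your own
# YouTube Analytics data. All numbers are hypothetical illustrations.
from statistics import mean

def compare_to_baseline(baseline_days, campaign_days):
    """Each argument is a list of daily average-view-duration values (seconds)."""
    baseline_avd = mean(baseline_days)
    campaign_avd = mean(campaign_days)
    return {
        "baseline_avd_sec": round(baseline_avd, 1),
        "campaign_avd_sec": round(campaign_avd, 1),
        # a ratio well under 1.0 suggests the push delivered low-intent traffic
        "avd_ratio": round(campaign_avd / baseline_avd, 2),
    }

# Same weekdays, comparable upload timing -- per the baseline advice above.
baseline = [118, 124, 121, 119, 126]   # a normal week, in seconds
campaign = [52, 49, 61, 55, 58]        # the push window
print(compare_to_baseline(baseline, campaign))
# {'baseline_avd_sec': 121.6, 'campaign_avd_sec': 55.0, 'avd_ratio': 0.45}
```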

From many years of selling on Amazon, I can tell you that when a vendor claims their report data is proprietary, it usually means one of two things: either the quality of the traffic behind the report is poor, or the mix of low-quality traffic is bad enough to confuse the very algorithms you are trying to train and turn them against you. You don't need access to their ad account to decide how to spend your budget; you just need enough information to determine whether the campaign will help or hurt you.

Red flags of botted behavior

Fraud on YouTube is not static; it evolves. That is why you should learn your audience's typical viewing habits and watch for anything that deviates from the pattern. Look for:

  • View bumps at times of day that are unusual for your account.
  • Views that cluster tightly together (e.g., several views landing at exactly the same moment).
  • View sessions of exactly the same length.
  • Comments that read like they were generated by a bot.
  • A view spike with zero secondary actions (no likes, no saves, no playlist adds).
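
Two of those patterns, timestamp clustering and identical session lengths, can be checked mechanically. Here is a toy Python sketch: YouTube does not expose per-view logs, so this assumes hypothetical event data from your own landing page or link shortener, and the thresholds are arbitrary illustrations.

```python
# A toy check for two red flags: tight timestamp clusters and
# identical session lengths. Input data and thresholds are hypothetical.
from collections import Counter

def flag_suspicious_views(views, cluster_window_sec=2, dup_share_limit=0.30):
    """views: list of (unix_timestamp, watch_seconds) tuples."""
    flags = []

    # 1) Many views landing within a couple of seconds of each other
    times = sorted(t for t, _ in views)
    tight_pairs = sum(1 for a, b in zip(times, times[1:]) if b - a <= cluster_window_sec)
    if tight_pairs > len(times) * 0.5:
        flags.append("timestamp clustering")

    # 2) One exact watch length dominating the sessions
    durations = Counter(d for _, d in views)
    top_share = durations.most_common(1)[0][1] / len(views)
    if top_share > dup_share_limit:
        flags.append("identical session lengths")

    return flags

# 100 back-to-back views, all exactly 10 seconds long -- a classic bot trace
sample = [(1_700_000_000 + i, 10) for i in range(100)]
print(flag_suspicious_views(sample))
# ['timestamp clustering', 'identical session lengths']
```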

The scariest pattern an analyst can see on these videos is high views with low retention and no lift in returning viewers. That is the signature of content that was shown to people who did not want it. I also keep an eye on the impressions-to-views ratio, because I increasingly suspect the YouTube algorithm does not treat these "views" like normal ones.

Always compare the retention drop-off to your last 3-5 videos. Promoted traffic will look a little worse; that is expected. But a cliff that hits at the exact same timestamp across thousands of views is very fishy: a drop-off at, say, 0:30 in a 3-minute video is the classic signature of scripted viewing.
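
To make the same-timestamp check concrete, here is a rough sketch. Each curve is a hypothetical list of audience-retention fractions sampled every 10 seconds, loosely mirroring the Analytics retention chart; the single-step drop threshold of 0.25 is an assumption, not a YouTube number.

```python
# A rough sketch of the "same-timestamp cliff" check over hypothetical
# retention curves sampled every 10 seconds.
def cliff_position(curve, drop=0.25):
    """Return the first sample index where retention falls by `drop` in one step."""
    for i in range(1, len(curve)):
        if curve[i - 1] - curve[i] >= drop:
            return i
    return None

baseline_curve = [1.0, 0.82, 0.74, 0.68, 0.63, 0.60, 0.57]  # normal gradual decay
pushed_curve   = [1.0, 0.95, 0.93, 0.40, 0.38, 0.37, 0.36]  # synchronized cliff

print(cliff_position(baseline_curve))  # None -- no single-step cliff
print(cliff_position(pushed_curve))    # 3 -- i.e. ~0:30 at 10-second sampling
```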

Run a small, instrumented test

If you do decide to experiment, keep it limited and measurable. Start with 5-10% of your budget, for 5-7 days, on one video. Make sure the video has decent packaging (title, thumbnail, opening 15 seconds) before you spend money on it; promoting a bad video just buys you cleaner evidence that it fails.

Define your pass/fail criteria before you start analyzing. Does the video retain viewers at least as well as a proper baseline? Is the share of returning viewers growing? Is there at least some bleed into normal YouTube surfaces over time, rather than just a spike in External traffic? A big External spike that is click-heavy and produces little meaningful watch time is a stop sign, not a "maybe we need more budget" flag.
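
As a sketch, here is what writing those gates down might look like. Every threshold and field name is an illustrative assumption, not YouTube guidance; derive yours from your own channel's baseline.

```python
# A minimal sketch of pre-registered pass/fail gates for a promotion test.
# All thresholds and metric names are illustrative assumptions.
def evaluate_test(m):
    checks = {
        # retention during the push held at roughly 90%+ of baseline
        "retention_ok": m["campaign_avd_sec"] >= 0.9 * m["baseline_avd_sec"],
        # share of returning viewers did not shrink
        "returning_ok": m["returning_pct"] >= m["baseline_returning_pct"],
        # some discovery bled into normal surfaces, not just External
        "discovery_ok": m["suggested_plus_browse_views"] > 0,
    }
    return ("pass" if all(checks.values()) else "stop", checks)

print(evaluate_test({
    "campaign_avd_sec": 95, "baseline_avd_sec": 120,
    "returning_pct": 0.08, "baseline_returning_pct": 0.11,
    "suggested_plus_browse_views": 40,
}))
# ('stop', ...) -- retention and returning viewers both failed their gates
```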

When running a controlled test, you ideally want a provider who can speak to targeting and supply real insight and reporting on results. Some shops treat promotion as a magic button; it is really a bounded experiment, judged on your own results together with how much transparency the provider offers. PromosoundGroup is a name that has come up in shops I've worked with in the past, which is probably why I'm aware of them now.

One critical factor usually gets missed: pacing. A slower rollout, with a pause here and there, looks more natural and gives you time to figure out why something isn't quite right. Patterns create understanding, so in the first two weeks especially, try not to feed YouTube garbage that it can remember for life.

The platforms are also increasingly vocal about how their policies are developed and enforced, and "we did not know" is not a satisfactory answer. Their statements on policy openness and enforcement goals are worth perusing; they make clear what the platform considers manipulation versus legitimate marketing activity.

Procurement-style hard stops

Add these to your internal checklist so you are not arguing about scope with your account team on Slack later. Hard stops: no traffic source information, no targeting information, reporting limited to "views delivered", heavy pressure to scale within the first 48 hours, and any mention of incentivized viewing such as "rewards", "tasks", or "earn points".
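
If it helps to make those hard stops mechanical, here is a small sketch of them as a pre-flight screen. The answer keys are hypothetical fields you would fill in from the vendor's actual responses.

```python
# A small sketch of the hard stops as a pre-flight vendor screen.
# The answer keys are hypothetical; any False kills the deal.
HARD_STOPS = [
    ("discloses_traffic_sources", "no traffic source information"),
    ("discloses_targeting", "no targeting information"),
    ("reports_beyond_views", 'reporting limited to "views delivered"'),
    ("no_pressure_to_scale_fast", "pressure to scale within 48 hours"),
    ("no_incentivized_viewing", "incentivized viewing (rewards/tasks/points)"),
]

def screen_vendor(answers):
    """answers: dict mapping each hard-stop key to True/False."""
    failures = [label for key, label in HARD_STOPS if not answers.get(key, False)]
    return ("walk away", failures) if failures else ("proceed to a small test", [])

print(screen_vendor({
    "discloses_traffic_sources": True,
    "discloses_targeting": True,
    "reports_beyond_views": False,  # vendor only promises "views delivered"
    "no_pressure_to_scale_fast": True,
    "no_incentivized_viewing": True,
}))
# ('walk away', ['reporting limited to "views delivered"'])
```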

But for the love of all things good, don't let a test turn into a dumpster fire for your channel's audience model. Keep it small, keep it transparent, and judge effectiveness on watch-time outcomes rather than on vanity metrics that flatter you.

Last Updated: April 20, 2026

