The Email Attribution Problem
Email attribution sounds simple: someone clicks a link in your email, then buys something — credit the email. But two layers of complexity make this extremely difficult to do accurately.
The first layer is technical contamination: automated systems are clicking your email links at scale, and those clicks look identical to human clicks in your ESP's reporting. The second layer is model complexity: even with clean click data, attributing a purchase to a specific email requires decisions about attribution windows, channel weighting, and what counts as a "conversion."
Most senders focus on the model complexity problem while ignoring the technical contamination problem — which is actually easier to fix and has a larger impact on data accuracy.
The result: email programs are routinely claiming revenue credit for purchases they didn't directly cause, while simultaneously failing to properly measure the email touchpoints that actually influenced purchase decisions. This leads to bad decisions about send frequency, list segmentation, and email investment.
How Bot Clicks Corrupt Revenue Attribution
Not every link click your ESP records is a human click. Corporate email security systems — Microsoft SafeLinks, Barracuda Email Security Gateway, Proofpoint URL Defense, Cisco Secure Email, and others — automatically scan every link in incoming emails by following (clicking) those links to check for malware, phishing, and other threats.
This scanning happens milliseconds to seconds after the email is delivered to the inbox, before any human has seen it. The scanner visits each URL, checks the destination for malicious content, and returns. From your ESP's perspective, this looks like a link click — complete with a timestamp, a user-agent string, and an IP address.
How to Identify Bot Clicks
Bot clicks have distinct patterns that separate them from human behavior:
- Timing: Occur within 0–30 seconds of email delivery, before most recipients would have even seen the email in their inbox
- IP ranges: Come from data center IP ranges, not residential or business ISP ranges — Microsoft SafeLinks clicks originate from Microsoft Azure IP ranges, for example
- Click pattern: Click every single link in the email simultaneously — bots don't choose which link to click based on interest
- No downstream behavior: The click doesn't result in session time, further page views, or purchases — the scanner hit the URL and left immediately
- User-agent strings: Security scanners typically use unusual or generic user-agent strings that don't match real browser environments
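The signals above can be combined into a simple scoring heuristic. This is a minimal sketch, not InboxEagle's actual detection logic: the field names (`clicked_at`, `user_agent`, `is_datacenter_ip`) are illustrative placeholders, not any ESP's real schema, and the two-signal threshold is an assumption you would tune against your own data.

```python
from datetime import datetime, timedelta

def looks_like_bot_click(click, delivered_at, links_in_email, clicks_same_message):
    """Flag a click as likely automated using the heuristics above.

    click: dict with 'clicked_at' (datetime), 'user_agent' (str),
           'is_datacenter_ip' (bool)
    delivered_at: datetime the email reached the inbox
    links_in_email: number of distinct links in the message
    clicks_same_message: total clicks recorded for this recipient/message
    """
    signals = 0

    # Timing: clicks within ~30 seconds of delivery precede human reading.
    if click["clicked_at"] - delivered_at <= timedelta(seconds=30):
        signals += 1

    # IP: data-center ranges (e.g. cloud provider ASNs) rather than ISP ranges.
    if click["is_datacenter_ip"]:
        signals += 1

    # Click pattern: every link in the email hit at once.
    if links_in_email > 1 and clicks_same_message >= links_in_email:
        signals += 1

    # User-agent: empty, generic, or scanner-identifying strings.
    ua = click.get("user_agent", "").lower()
    if not ua or "bot" in ua or "scan" in ua:
        signals += 1

    # Require at least two independent signals before discarding the click.
    return signals >= 2
```

Requiring two independent signals reduces false positives: a fast human click from a corporate VPN should not be discarded on timing alone.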
The attribution chain breaks at the bot click
When a security scanner clicks a link 5 seconds after delivery, your ESP records a click event tied to that recipient's email address. If that person later makes a purchase within your attribution window (1–7 days), the ESP attributes it to the email click — even though the "click" was a bot and the purchase may have been driven by a social ad, organic search, or direct visit. The InboxEagle bot finder identifies and filters these events before they pollute your attribution data.
Scale of the Problem
For B2B senders with a corporate audience, bot click rates of 20–40% of total recorded clicks are common. In highly securitized industries (finance, healthcare, technology companies), it can be even higher. Consumer email to personal Gmail/Yahoo addresses tends to have lower bot click rates (5–15%), but still meaningful contamination. A click rate of "5%" that includes 30% bot clicks is actually a 3.5% real click rate — a meaningful difference when optimizing campaigns.
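The adjustment from the example above is a one-line calculation — a sketch, assuming you have an estimate of your bot-click share from filtering:

```python
def real_click_rate(reported_rate, bot_share):
    """Reported click rate discounted by the estimated share of bot clicks."""
    return reported_rate * (1 - bot_share)

# A reported 5% click rate with 30% bot contamination:
adjusted = real_click_rate(0.05, 0.30)  # roughly 0.035, i.e. a 3.5% human click rate
```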
Attribution Models for Email
Even after filtering out bot clicks, you still need to decide which email interactions get credit for a conversion. The major attribution models each have different implications for how you measure email performance:
Last-Click Attribution
The most common model: the last email click before a purchase gets 100% credit. Simple to implement but deeply flawed — it ignores the role of earlier touchpoints (awareness emails, product announcement emails) in driving the purchase decision. It also over-credits the most recent email, which may have been a low-value promotional send rather than the email that actually moved the needle.
First-Click Attribution
The first email click in a purchase journey gets all the credit. Better for measuring which emails introduce customers to products or trigger initial interest — but ignores the closing touchpoints that converted that interest into a purchase.
Linear Attribution
Equal credit distributed across all email touchpoints in the conversion path. More realistic than single-touch models but requires tracking the full email journey, which most ESPs don't do by default.
Time-Decay Attribution
More credit given to touchpoints closer to the purchase date. A reasonable middle ground — acknowledges that recent emails likely played a larger role in the immediate conversion decision while still crediting earlier awareness touchpoints.
Data-Driven Attribution
Uses machine learning to assign fractional credit based on the actual conversion paths in your data. Requires high volume to work reliably (typically 400+ conversions per month for statistically stable estimates). Google Analytics 4 offers a version of this for multi-channel attribution.
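The multi-touch models above differ only in how they split one conversion's credit across the emails in the path. Here is a minimal sketch of linear and time-decay credit assignment; the 7-day half-life is an illustrative assumption, not a standard:

```python
def linear_credit(touchpoints):
    """Equal fractional credit across all email touchpoints in the path."""
    n = len(touchpoints)
    return {email: 1.0 / n for email in touchpoints}

def time_decay_credit(days_before_purchase, half_life_days=7.0):
    """More credit to touchpoints closer to the purchase (exponential decay).

    days_before_purchase: dict of email_id -> days between click and purchase
    half_life_days: a touchpoint this many days out gets half the weight
                    of one on the purchase day (assumed tuning parameter)
    """
    weights = {
        email: 0.5 ** (days / half_life_days)
        for email, days in days_before_purchase.items()
    }
    total = sum(weights.values())
    # Normalize so the fractional credits sum to 1.0 for the conversion.
    return {email: w / total for email, w in weights.items()}
```

With a 7-day half-life, an email clicked on the purchase day gets twice the credit of one clicked a week earlier — a concrete version of the "reasonable middle ground" described above.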
Building Accurate Attribution
Accurate email attribution requires three things: clean click data (bot-filtered), a consistent attribution model, and proper UTM parameter implementation.
Step 1: Filter Bot Clicks at the Source
InboxEagle's bot finder analyzes your email click data and identifies automated clicks based on timing, IP geolocation, click patterns, and user-agent analysis. Bot clicks are flagged and excluded from engagement metrics and attribution calculations. What remains is clean click data representing real human interest.
Step 2: Implement Consistent UTM Parameters
UTM parameters in email links allow your analytics platform (Google Analytics, Mixpanel, etc.) to attribute traffic and conversions to specific campaigns, even when the conversion happens in a later session. Use consistent naming:
- utm_source=email
- utm_medium=email
- utm_campaign=[campaign-name]
- utm_content=[link-name-or-position]
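A small helper keeps this naming consistent across every link in every send. The function name, base URL, and example values below are illustrative; only the UTM parameter names follow the convention above:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def add_utm(url, campaign, content):
    """Append consistent UTM parameters to an email link."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # preserve any existing parameters
    query.update({
        "utm_source": "email",
        "utm_medium": "email",
        "utm_campaign": campaign,
        "utm_content": content,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# e.g. add_utm("https://example.com/product", "spring-sale", "hero-cta")
```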
Step 3: Set a Realistic Attribution Window
Match your attribution window to your actual purchase cycle. Most ESPs default to a 7-day click window — meaning any purchase within 7 days of a click is credited to that email. For low-consideration purchases (e-commerce), 1–3 days may be more appropriate. For high-consideration purchases (SaaS subscriptions, B2B services), a longer window might be warranted but should be used in conjunction with multi-touch attribution.
Step 4: Compare ESP Revenue vs. Analytics Revenue
Run your ESP's reported email revenue alongside your analytics platform's email channel revenue for the same period. Large discrepancies usually indicate either bot click inflation (ESP overcounts) or session/cookie tracking gaps (analytics undercounts). The truth is usually somewhere in between — use the discrepancy to calibrate your expectations.
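A sketch of that comparison, assuming you export both revenue figures for the same period; the 20% threshold is an arbitrary starting point, not a benchmark:

```python
def compare_email_revenue(esp_revenue, analytics_revenue, threshold=0.20):
    """Relative gap between ESP-reported and analytics-reported email revenue.

    Returns (gap, interpretation). A large positive gap suggests ESP
    overcounting; a large negative gap suggests analytics undercounting.
    """
    if analytics_revenue <= 0:
        raise ValueError("analytics revenue must be positive")
    gap = (esp_revenue - analytics_revenue) / analytics_revenue
    if gap > threshold:
        note = "ESP likely overcounting (bot clicks, generous window)"
    elif gap < -threshold:
        note = "analytics likely undercounting (cookie/session gaps)"
    else:
        note = "within expected range"
    return gap, note
```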
Attribution Windows Explained
The attribution window defines how long after an email interaction a subsequent purchase gets credited to that email. This is one of the most impactful — and most arbitrary — decisions in email analytics.
| Window | Best For | Risk |
|---|---|---|
| 1 day | Highly promotional emails, flash sales | Undercounts influence of awareness campaigns |
| 3 days | Standard e-commerce, considered purchases | Reasonable middle ground for most consumer email |
| 7 days | Complex purchases, subscription products | Can overcount for high-frequency senders (multiple emails in window) |
| 30 days | B2B, high-consideration products | Significant overcounting risk; most purchases not email-driven |
For senders using a 7-day window with daily sending, a single purchase could theoretically be credited to 7 different emails simultaneously, with each campaign report claiming 100% of the credit under last-click attribution. This makes your email program's reported revenue dramatically higher than reality. Reduce the window or switch to a de-duplication approach where only one email can claim credit for a given conversion.
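The de-duplication approach can be sketched as follows: each conversion is awarded to at most one email, the most recent click inside the window. The data shapes here are illustrative assumptions, not an ESP API:

```python
from datetime import datetime, timedelta

def deduplicate_credit(clicks, purchase_time, window_days=7):
    """Credit a conversion to at most one email: the latest qualifying click.

    clicks: list of (email_id, clicked_at) tuples, clicked_at as datetime
    Returns the credited email_id, or None if no click falls in the window.
    """
    window = timedelta(days=window_days)
    eligible = [
        (clicked_at, email_id)
        for email_id, clicked_at in clicks
        if timedelta(0) <= purchase_time - clicked_at <= window
    ]
    if not eligible:
        return None
    # Most recent click wins; all earlier emails get zero credit,
    # so the conversion is counted exactly once across campaigns.
    return max(eligible)[1]
```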
What Your ESP's Attribution Misses
ESP attribution reports have several structural gaps that lead to systematic overcounting:
Cross-Device Journeys
Someone opens your email on their phone, decides to buy, then completes the purchase on their laptop later that day. The click event is tied to their email address, but the purchase session on the laptop may not be linked back to the email click — especially if they didn't click a UTM-tagged link in the session where they purchased. Result: the email gets no credit even though it drove the purchase.
Multi-Channel Influence
A customer saw your email, clicked away without buying, then saw a retargeting ad, then came back and purchased via the retargeting ad's link. Most ESPs will give the email zero credit because no "click-through purchase" happened — even though the email initiated the consideration process.
Cookie Expiration
If a subscriber clicks your email link, visits your site, leaves, and returns 8 days later to buy — and your attribution window was 7 days — the email gets no credit even though it introduced the customer to the product.
These gaps are real and they partially offset the overcounting from bot clicks. A mature attribution strategy acknowledges both directions of error and focuses on using clean, consistent data to make directionally sound decisions about email investments — rather than treating any single number as ground truth.
Get Clean Email Attribution Data
InboxEagle's bot finder identifies and filters automated link clicks — giving you clean engagement data so your revenue attribution, send frequency decisions, and segment analysis reflect actual human behavior.