5 Instances of App Install Mis-Attribution and How To Avoid Them

Published on September 07, 2016 | Last edited on September 07, 2016 | 6 min read

AUTHOR
Team Braze

Data-driven app marketers rely on accurate measurement to guide them through the rough waters of today’s mobile advertising ocean. The good news is that mobile measurement and app install attribution have come of age, giving app marketers a whole lot of data to work with and optimize against. However, there are still instances in which attribution can be inaccurate. When does this occur, and how can marketers prevent it? We take a look at five common cases:

1. Double attribution

What happens when two (or more…) different media sources take credit for the same install? Well, the marketer is charged twice (or three times) for the same action. Does that make sense? Would you pay twice for the same pair of shoes?

Double attribution is still a common threat to app marketers who work directly with ad networks: they add the SDK of each media partner and start measuring. But these media companies do not have a bird’s-eye view of the user’s path to conversion, and they are essentially blind to the activity of other networks. So when a user clicks on network A’s ad and downloads an app, network A charges the advertiser for the install under the cost per install (CPI) model. If that same user also clicked on network B’s ad, network B charges for the very same install. Yet in an ecosystem dominated by last-click attribution, a marketer should only pay for… ONE last click.
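To make the last-click rule concrete, here’s a minimal sketch in Python (with invented click and install records, not any real vendor’s data format) of how a measurement layer that sees every network’s clicks would credit only the final click before the install:

```python
from datetime import datetime, timedelta

# Hypothetical click records reported by two networks for the same device.
clicks = [
    {"network": "Network A", "device_id": "abc-123", "clicked_at": datetime(2016, 9, 1, 10, 15)},
    {"network": "Network B", "device_id": "abc-123", "clicked_at": datetime(2016, 9, 1, 11, 40)},
]

install = {"device_id": "abc-123", "installed_at": datetime(2016, 9, 1, 11, 55)}

ATTRIBUTION_WINDOW = timedelta(days=7)  # assumed lookback window

def last_click_winner(install, clicks, window=ATTRIBUTION_WINDOW):
    """Credit the single most recent click that preceded the install within the window."""
    eligible = [
        c for c in clicks
        if c["device_id"] == install["device_id"]
        and timedelta(0) <= install["installed_at"] - c["clicked_at"] <= window
    ]
    if not eligible:
        return None  # no qualifying click: treat the install as organic
    return max(eligible, key=lambda c: c["clicked_at"])["network"]

print(last_click_winner(install, clicks))  # "Network B": only one network gets the credit
```

Without that shared view, each network sees only its own click and bills for the same install.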

Beyond the significant waste of budget on double and triple charges, and the impact multiple SDKs have on an app’s performance, double attribution can also inflict significant damage on a marketer’s optimization efforts. After all, cost, lifetime value (LTV), and return on investment (ROI) figures calculated from network-provided attribution are, to put it simply, often wrong, and so is any media buying optimized against them.

Independent mobile measurement companies are in a position to rule on the last click because they do have a bird’s-eye view of the ecosystem: they are integrated with hundreds or even thousands of media partners.

2. Biased attribution

The mobile attribution analytics space can be divided into two groups. First, there are end-to-end (or multi-solution) firms: companies that buy media, measure campaigns, and optimize them. This group includes networks that charge advertisers based on measurement of their own campaigns, as well as tracking companies that have a media buying arm. The second group is made up of independent providers that only deal with attribution and do not engage in media buying in any way. The former are biased because of the inherent conflicts of interest a media-measurement marriage can bring. The latter are unbiased and are therefore in a position to assume the role of ecosystem regulators, raising the flag of neutrality, transparency, and reliability.

3. Facebook mis-measurement

Don’t rely on deep links alone for Facebook attribution. There are only two ways to accurately measure app promotion campaigns on Facebook: embed its SDK or work with the social network’s official mobile measurement partners (MMPs). Even though it’s possible to use data on a deep link to know where a user came from, it’s not possible to apply time-based attribution like last click or first click, because there is no timestamp on a deep link. It’s as simple as that.
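As a quick illustration (a hypothetical deep link, not Facebook’s actual link format), the parameters carried on a deep link can tell you where a user came from, but they give time-based models nothing to order clicks by:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical deep link a user lands on after tapping an ad.
deep_link = "myapp://promo?utm_source=facebook&utm_campaign=fall_sale"

params = parse_qs(urlparse(deep_link).query)
print(params.get("utm_source"))   # ['facebook']: we know the source...
print(params.get("click_time"))   # None: ...but there's no click timestamp,
                                  # so last-click or first-click logic has nothing to work with.
```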

4. Data discrepancies

Because different platforms use different timezones, discrepancies happen, especially if data is sliced by hour or day. What often happens is that data tracked by one platform spills over to the previous or following day (or hour) on the other. For example, let’s say platform A records timestamps in GMT, while platform B is set to PST (GMT-8). In that case, data from 12:00am-7:59am GMT will be marked as 4:00pm-11:59pm PST, on the previous day!

If alignment between different platforms cannot be achieved, it is recommended to use a wider time frame (but not too wide) so the hourly difference between timezones becomes negligible–two weeks can be good.
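As a minimal sketch (assuming Python 3.9+ and its standard zoneinfo module), normalizing every platform’s timestamps to a single timezone before bucketing by day is one way to keep daily counts comparable:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; some systems also need the tzdata package

# The same hypothetical event, as each platform would report it.
platform_a_event = datetime(2016, 9, 7, 1, 30, tzinfo=timezone.utc)              # GMT/UTC
platform_b_event = platform_a_event.astimezone(ZoneInfo("America/Los_Angeles"))  # Pacific time

print(platform_a_event.date())  # 2016-09-07 on platform A
print(platform_b_event.date())  # 2016-09-06 on platform B: the "previous day" spillover

def day_bucket(ts, tz="UTC"):
    """Bucket a timezone-aware timestamp into a calendar day in one agreed-upon zone."""
    return ts.astimezone(ZoneInfo(tz)).date()

# After normalization, the same event lands in the same daily bucket on both sides.
assert day_bucket(platform_a_event) == day_bucket(platform_b_event)
```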

Another common cause of discrepancy is the use of different tracking methods. When different services or platforms define actions differently, they may also attribute those actions differently: last engagement versus first engagement, for example, or one provider attributing in-app purchases to the first time a user engaged with an ad (even before he or she was acquired) while another attributes them to the app install event.

Eliminating all data discrepancies is not a realistic outcome. Minimizing them definitely is. The key is using raw data reports that let marketers drill down from aggregated data to the user level. Ideally, when a suspected discrepancy surfaces, you can create a list of timestamped events with a unique key identifier that is common to the two (or more) platforms in question, and then identify which events were recorded on one platform but missing from the other (the delta).
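A rough sketch of that reconciliation, using made-up event IDs and timestamps in place of real raw-data exports, might look like this:

```python
# Hypothetical raw-data exports: timestamped events keyed by an identifier
# that both platforms share (e.g., a device ID or transaction ID).
platform_a = {
    "evt-001": "2016-09-07T10:02:11Z",
    "evt-002": "2016-09-07T10:05:47Z",
    "evt-003": "2016-09-07T10:09:30Z",
}
platform_b = {
    "evt-001": "2016-09-07T10:02:13Z",
    "evt-003": "2016-09-07T10:09:32Z",
}

# Events recorded on one platform but missing from the other: the delta.
missing_from_b = sorted(set(platform_a) - set(platform_b))
missing_from_a = sorted(set(platform_b) - set(platform_a))

print("Missing from platform B:", missing_from_b)  # ['evt-002']
print("Missing from platform A:", missing_from_a)  # []
```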

If different tracking methods do not explain why the delta data points are missing, one could deduce that it’s a technical issue. In that case, the raw data serves as evidence that events were (or were not) tracked and can be handed to the technical teams you work with for further investigation.

5. Install fraud

CPI is the most common pricing model in app marketing, and as a result, install fraud is the most prevalent type of fraud. It happens when install bots mimic human behavior to automate an install, or even a re-install if the app was already installed. Savvy fraudsters can also make their actions harder to detect by randomizing device IDs or click locations.

Another type of install fraud is stolen attribution, which is like cookie stuffing on the desktop. In this case, fraudsters misappropriate the attribution of an install, when in fact that install came from another source of traffic like SEM, organic traffic, or another affiliate. Fraudsters do this through tactics like background redirects to other App Store apps and spoofed or faked clicks.

Last but not least, there are fake installs that originate from desktops, where a fraudster spoofs aspects of the device to make it appear to be mobile. Fraudsters can also use virtual machines and proxies to generate this type of fraud.

Combatting fraud takes a couple of approaches. Generally speaking, it’s about working with trusted media partners, demanding transparency from them all the way down to the publisher level, and using direct publishers so marketers know exactly where their ads are running. It’s also important to work with a fraud specialist and/or ensure that a measurement partner has double-layered protection that includes both prevention (via automatic IP filtering) and detection. Finally, having raw data access enables marketers to dive into their data and find anomalies.
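As one simplified example of what digging into raw data for anomalies can look like (with invented install records and an arbitrary threshold, not a production-grade fraud filter), this sketch flags IP addresses that produce an unusually high number of installs:

```python
from collections import Counter

# Hypothetical raw install log: (device_id, ip_address) pairs from a raw data report.
installs = [
    ("dev-001", "203.0.113.7"),
    ("dev-002", "203.0.113.7"),
    ("dev-003", "203.0.113.7"),
    ("dev-004", "198.51.100.23"),
    ("dev-005", "203.0.113.7"),
]

SUSPICIOUS_THRESHOLD = 3  # assumed cutoff; tune it against your own traffic baseline

def flag_suspicious_ips(installs, threshold=SUSPICIOUS_THRESHOLD):
    """Return IPs that generate an unusually high number of installs."""
    counts = Counter(ip for _, ip in installs)
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(flag_suspicious_ips(installs))  # {'203.0.113.7': 4}
```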

The attribution bottom line

In a space shifting from acquisition to engagement, and from quantity to quality, post-install marketing analytics tied to attribution are at the heart of app marketing. That’s why marketers must guard against these instances of mis-attribution and ensure their marketing activities are accurately credited, so they can properly calculate their campaign results.
