Why do pixels or tags break?

Monita
Nov 8, 2022


Introduction

If you’re an analytics manager or digital marketer, you know the pain of pixels either not firing or sending bad data. For example, when Facebook pixels start sending null price fields or incomplete customer ID fields, the result is reduced attribution, smaller audiences, and therefore lost leads and sales revenue. But how do these issues typically happen?

Background

We can break down pixel failure using the people, process, and technology framework.

People

Most often, the deployment of martech/analytics code comes down to one or a few individuals who are responsible for deploying large amounts of JavaScript on behalf of product managers or marketing teams.

A key issue is that these individuals are very often siloed from the much larger web development teams responsible for building the customer-facing website.

Given this discrepancy and the sheer number of people in web development teams, major site code changes (HTML/CSS/JS) that affect analytics or ad vendor tracking often go uncommunicated. More often than not, changes are made, nobody tells the martech developer, and they find out from other stakeholders that data is not populating in a Google Analytics dashboard (for example).

The solution to this problem is to ensure martech developers are always included in web development teams and sprints, and have the ability to test the impact of site changes on analytics and ad tracking before going live.

Process

A lack of process around deployment and testing of code to

  1. Content Management Systems (CMS): WordPress, Webflow, Sitecore, Drupal, Shopify, etc.
  2. Tag Management Systems (TMS): Google Tag Manager, Adobe Launch, Tealium, etc.

is very often the central reason for pixel failure. Below is a diagram of two processes. The first is most typical, even in large enterprises. The second is found in highly mature analytics teams, where data is the business and is required to generate revenue (e.g. ad businesses that sell segments of customers for retargeting).

An important thing to note is that one approach is not necessarily better or worse than the other. Rather, there is a spectrum, with the two extremes described below. In general, there is a tradeoff between speed and quality, and each business eventually strikes its own equilibrium.

Siloed Approach

In the siloed extreme, martech devs don’t talk much to web devs and operate fairly independently, each reacting to the other’s changes with very little communication. Where the frequency of deployment to the website is low, this may be a fine approach, but as the frequency of deployments to both the CMS and the TMS increases, collaboration on code releases, especially around the data layer, becomes pivotal in minimising JS errors and hence maximising tag data quality.
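As a loose illustration of what that data layer collaboration can look like, here is a minimal sketch of the kind of push web devs and martech devs might agree on as part of a release; the event and field names are hypothetical, not a standard.

```javascript
// Hypothetical data layer contract agreed between web devs and martech devs.
// The event and field names are illustrative only.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: 'purchase_complete',   // the trigger name the TMS listens for
  transactionId: 'T-10293',     // must never be null or empty
  value: 49.99,                 // numeric price, not a formatted string like "$49.99"
  currency: 'AUD',
  customerId: 'C-88412'         // populated upstream so it is never incomplete
});
```

If both teams treat this shape as part of the release, a CMS change that drops or renames a field gets caught in review rather than in a broken report.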

Coordinated Approach

In the coordinated extreme, web devs and martech devs work in tandem on ANY code release, whether it is TMS or CMS related. Given TMS changes are often less frequent and smaller than CMS changes, martech devs would form part of the sprints and be called upon to contribute to the overall release. The advantage is more testing and hence fewer errors; the disadvantage is the relative slowness (as opposed to a ‘break fast, fix fast’ style of release).

There are pros and cons to both approaches, and neither is right all the time, but it is important for teams to strike a balance and find a process that works and provides the best business outcomes.

Technology

Websites are made up of many complex layers. When martech developers create their event hooks (triggers in Google Tag Manager or events in Adobe Launch), these are based on HTML elements, CSS classes, or server-driven events. When web devs change the HTML, CSS, or server, any pixel triggered by those hooks can stop firing. This is most often the cause of failure from a technological perspective.
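To make this concrete, here is a rough sketch of what a click trigger keyed to a CSS selector effectively amounts to under the hood; the class and event names are hypothetical.

```javascript
// Sketch: roughly what a TMS click trigger keyed to a CSS selector does.
// Class and event names are hypothetical.
window.dataLayer = window.dataLayer || [];
document.addEventListener('click', function (e) {
  // Trigger condition: "Click Element matches CSS selector .btn-checkout"
  if (e.target.closest('.btn-checkout')) {
    window.dataLayer.push({ event: 'begin_checkout' });
  }
  // If a redesign renames the button class to .checkout-button, this condition
  // never matches again and every tag behind it silently stops firing.
});
```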

A key challenge is that, given the vast number of events and variables, it is very difficult to detect errors in a timely manner. Because breaks are often inevitable, it is crucial to have technology able to proactively detect issues.
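As one possible sketch of what proactive detection could look like (the required-field rules and the reporting endpoint below are assumptions, not any particular product’s API), a thin wrapper around data layer pushes can flag missing or null fields the moment they occur:

```javascript
// Sketch: validate data layer pushes at runtime and report anomalies,
// rather than waiting for someone to notice an empty dashboard.
// The required-field rules and /monitoring endpoint are hypothetical.
window.dataLayer = window.dataLayer || [];
var REQUIRED_FIELDS = { purchase_complete: ['transactionId', 'value', 'currency'] };

var originalPush = window.dataLayer.push.bind(window.dataLayer);
window.dataLayer.push = function (payload) {
  var missing = (REQUIRED_FIELDS[payload && payload.event] || []).filter(function (field) {
    return payload[field] === undefined || payload[field] === null;
  });
  if (missing.length) {
    // Alert on the break as it happens instead of discovering it weeks later.
    navigator.sendBeacon('/monitoring/tag-errors', JSON.stringify({
      event: payload.event,
      missing: missing,
      url: location.href
    }));
  }
  return originalPush(payload);
};
```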

Over the past couple of years we’ve realised that there is a huge gap in this space: marketers don’t really have a system that can alert them to data breaks relevant to their tags and pixels.

We’ve also realised that teams suffer from time-expensive testing procedures (including the use of Selenium-based test frameworks) and from a lack of tooling around martech testing. This gap means that martech testing is often overlooked and runs reactively rather than proactively.
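For comparison, an automated pre-release check does not have to be heavyweight; the sketch below uses Puppeteer rather than Selenium, and the URL and expected event name are made up for the example.

```javascript
// Sketch: a headless-browser check that an expected data layer event exists
// on a page. URL and event name are hypothetical.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.example.com/checkout/thank-you');

  // Read the event names currently in the data layer.
  const events = await page.evaluate(() =>
    (window.dataLayer || []).map((entry) => entry.event)
  );

  await browser.close();

  if (!events.includes('purchase_complete')) {
    throw new Error('purchase_complete missing from dataLayer');
  }
  console.log('Tag check passed');
})();
```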

Conclusion

Overall, people will fail, processes will fail, and tech will fail. Since all of these are true, there must be a way to keep time to detection as small as possible so that high-severity issues can be dealt with quickly. As investment in first-party data increases, so too does reliance on tracking.

If you’re interested in chatting about monitoring and observability for martech, please feel free to reach out or leave a comment!

Andrew
