If you’re in charge of content marketing, performance marketing, web analytics or tag management at an enterprise company or digital agency, you probably want to read the following.
We’ve put together an overview of tag auditing: its pros and cons, and how people do it at the moment.
How do people audit tags now? (client side)
At present, if you want to audit a website’s client side tag implementation or ensure it is operating effectively, you have a few options, ranging from reactive to proactive:
1. Be reactive and do nothing until things go wrong
Definitely not recommended. Although this has the pro of saving you time, it definitely does not earn you brownie points when client stakeholders notice their web analytics report has dropped off a cliff. The outcome is a complete loss of data for the entire period in which your tags were not firing. In some cases it takes a couple of days, and up to three weeks, before anyone realises a tag is not firing.
2. Proactively check dashboards in your third party systems
You may frequently log into the third-party systems which rely on your tags to operate, such as Google Marketing Platform, Google Analytics, Facebook Ad Manager and LinkedIn, to ensure tags are firing effectively. On the positive side, these systems are catching up, providing dashboards for your pixels, such as the one below by Facebook.
Over time, however, this becomes completely unscalable as the number of third-party systems and clients increases. At that point you realise you need an overview dashboard of all your tags.
3. Crawling — Implement a tag auditing tool to crawl pages on your website to ensure that tags are firing. (There are several SaaS tools available)
There are several of these; feel free to google ‘Tag Auditing’. They provide several benefits: you can audit a specified ‘journey’ or path on your website using a GUI, and you can ensure data layer variables are operating to your desired standards. But there is a major caveat: this is all done via a web crawler, which requires constant updating and maintenance and which produces simulated data. We will explain why this is not so good in the next section.
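To make the mechanics concrete, here is a minimal sketch of what a crawler-based check does under the hood, written with Puppeteer. The page URL, the request filter and the dataLayer inspection are illustrative assumptions, not any vendor’s actual implementation:

```typescript
// Minimal crawler-style tag check: load a page, watch for analytics
// requests, and inspect the dataLayer. All names here are illustrative.
import puppeteer from "puppeteer";

async function auditPage(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Record analytics requests fired while the page loads.
  const tagRequests: string[] = [];
  page.on("request", (req) => {
    if (req.url().includes("google-analytics.com")) {
      tagRequests.push(req.url());
    }
  });

  await page.goto(url, { waitUntil: "networkidle0" });

  // Pull the dataLayer to verify expected variables are present.
  const dataLayer = await page.evaluate(() => (window as any).dataLayer ?? []);

  console.log(`${tagRequests.length} analytics hits on ${url}`);
  console.log("dataLayer snapshot:", dataLayer);

  await browser.close();
}

auditPage("https://example.com/checkout").catch(console.error);
```

Every journey you care about needs a script (or GUI configuration) along these lines, which is where the maintenance burden discussed later comes from.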
4. Implement a realtime tag monitoring tool
We think of auditing as a point-in-time activity, whereas monitoring happens all the time.
Tag monitoring tools monitor tags firing in realtime on your website, as triggered by your customers. We believe this is by far the best option. The benefits here are significant: real tags fired by your customers instead of a simulated crawler, low maintenance, and full test coverage, given you can visualise and analyse every tag fire. (Check the next section.)
If you want to implement realtime monitoring you have two options:
a) Implement a custom solution — such as the one found here in Simo Ahava’s blog. This may suit the tinkerers, but over time the solution is relatively non-scalable, as it requires custom setup for every site you implement it on. It also costs a lot more on GCP than a SaaS platform such as Monita. (A rough sketch of the collector side of such a setup follows this list.)
b) Use a SaaS solution — such as Monita. (We are a bit biased here; this field is still quite early days, so please feel free to suggest other tools in the comments and I will add them here over time.)
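For a feel of what the custom route involves, here is a rough sketch of the collector side: tags report their status to an endpoint you host, and the beacons are stored for later analysis. This is in the spirit of Simo Ahava’s setup rather than a copy of it; the field names and in-memory storage are our own assumptions:

```typescript
// Sketch of a tag-fire collector endpoint. In production the beacons
// would land in a warehouse such as BigQuery rather than an array.
import express from "express";

interface TagFireBeacon {
  eventName: string;
  tagId: string;
  status: "success" | "failure";
  executionTimeMs: number;
  pageUrl?: string;
  userAgent?: string;
}

const app = express();
app.use(express.json());

const beacons: TagFireBeacon[] = []; // stand-in for real storage

app.post("/collect", (req, res) => {
  const beacon = req.body as TagFireBeacon;
  beacon.userAgent = req.get("user-agent");
  beacons.push(beacon);
  res.sendStatus(204); // fire-and-forget: no response body needed
});

app.listen(8080, () => console.log("collector listening on :8080"));
```

Hosting, securing and scaling this (plus the storage behind it) for every site is where the GCP costs mentioned above come in.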
Before getting onto the benefits of a SaaS platform, we’ll first assess some issues with crawling.
What’s wrong with crawling?
1. High setup and maintenance costs
Crawler-based tools rely on users to specify the paths or ‘journeys’ to monitor on your website. Every time you update your website or your tagging implementation, you must also update your audits to ensure they cover the new tags or journeys. In many cases this means a dedicated tag QA resource whose sole job is managing and updating audits.
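As an illustration, a journey specification for a crawler-based tool might look something like the sketch below. It is not any vendor’s real schema, just a representative shape:

```typescript
// Hypothetical audit configuration for a crawler-based tool.
const checkoutAudit = {
  name: "Checkout journey",
  schedule: "daily",
  steps: [
    { action: "goto", url: "https://example.com/products/widget" },
    { action: "click", selector: "#add-to-cart" },
    { action: "goto", url: "https://example.com/checkout" },
  ],
  expectations: [
    { tag: "GA4 purchase", mustFire: true },
    { dataLayerVariable: "transactionId", mustBePresent: true },
  ],
};
// Rename a button, change a URL or add a new tag, and this file
// (and every file like it) must be updated by hand.
```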
2. Crawlers have limited visibility
Crawlers cannot holistically audit your site without you explicitly specifying the journeys. This means that new tags or journeys set up on your website are not monitored if they are not specified in the audit setup.
3. Results are not representative of real users
Given a website crawler is by definition a simulation executed on your site, it does not mimic real-world customer behaviour, only the behaviour the tester specified. The results produced are beneficial only insofar as they represent the cases your customers actually experience: different browsers, devices and operating systems, as well as issues such as broken URLs, which may prevent certain tags from firing. Ultimately the number of scenarios becomes overwhelming to plan and test for, making the solution unscalable.
The solution — Realtime Tag Monitoring
We feel the solution to this problem is to monitor tags firing in realtime instead of crawling. The reasons why it is more efficient are below:
1. Little to no setup and maintenance
Once tag monitoring is set up, it begins monitoring your tags firing in realtime as triggered by your customers (not a crawler). It also begins recording data points associated with your tags, such as data layer variables. Users can view this data in their visualisation platform of choice.
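As a rough sketch, the client side of such a setup can be as small as a beacon sent on every tag fire. The endpoint and payload shape below are assumptions for illustration, not Monita’s actual API:

```typescript
// Report a tag fire without blocking the page. sendBeacon queues the
// request so it survives navigation and page unload.
function reportTagFire(
  tagId: string,
  status: "success" | "failure",
  executionTimeMs: number
): void {
  const payload = JSON.stringify({
    tagId,
    status,
    executionTimeMs,
    pageUrl: location.href,
    // Snapshot of the dataLayer variables you care about, if present.
    dataLayer: (window as any).dataLayer ?? [],
    timestamp: Date.now(),
  });
  navigator.sendBeacon("https://collector.example.com/collect", payload);
}
```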
2. Get full visibility of every scenario on your site
Given you can now monitor all or some of your tags firing in realtime, you no longer need to specify paths/journeys. Instead you can simply check a specific tag’s volume over a time period to ensure that it was working effectively (see the sketch after point 3).
3. Results are fully representative of your customers
All tag fires are recorded, including user agent details such as device, browser and OS. Checking on a scenario is as simple as filtering a report to view tags fired by a specific user agent or URL on your website. You can also see extra details such as tag execution time, which is useful for optimising site load time.
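Here is a small sketch of the checks points 2 and 3 describe, run over recorded tag-fire events. The record shape mirrors the beacon sketch above and is an assumption, not a specific product’s schema:

```typescript
// Shape of a recorded tag fire (assumed for illustration).
interface TagFireRecord {
  tagId: string;
  status: "success" | "failure";
  executionTimeMs: number;
  pageUrl: string;
  browser: string;
  timestamp: number; // ms since epoch
}

// Point 2: daily fire volume for one tag over a period.
function dailyVolume(events: TagFireRecord[], tagId: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.tagId !== tagId) continue;
    const day = new Date(e.timestamp).toISOString().slice(0, 10);
    counts.set(day, (counts.get(day) ?? 0) + 1);
  }
  return counts;
}

// Point 3: average execution time for a tag in a specific browser.
function avgExecutionTime(
  events: TagFireRecord[],
  tagId: string,
  browser: string
): number {
  const subset = events.filter((e) => e.tagId === tagId && e.browser === browser);
  if (subset.length === 0) return 0;
  return subset.reduce((sum, e) => sum + e.executionTimeMs, 0) / subset.length;
}
```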
4. Added benefits
- Get realtime alerting in Slack or your communication platform of choice when tags don’t operate as expected (a rough sketch follows this list).
- Visualise this data in any system of your choice
Check out the dashboard below for an overview of how monitoring may look for you. Note this is fully customisable.
View here: https://datastudio.google.com/s/oigf5d1rjH0
- View tag success and failure broken down by URL or any other dimension collected.
- Leverage machine learning on this data to show you when anomalies occur
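To sketch how the alerting and anomaly bullets could fit together: compare today’s fire count for a tag against its recent average, and post to a Slack incoming webhook when it drops sharply. The webhook URL is a placeholder, and the simple z-score check stands in for whatever model a real tool would use:

```typescript
// Alert when a tag's daily volume drops far below its recent norm.
// The z-score threshold and webhook URL are illustrative assumptions.
async function alertIfAnomalous(
  tagId: string,
  recentDailyCounts: number[],
  todayCount: number
): Promise<void> {
  const n = recentDailyCounts.length;
  const mean = recentDailyCounts.reduce((a, b) => a + b, 0) / n;
  const variance =
    recentDailyCounts.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const stdDev = Math.sqrt(variance) || 1; // avoid dividing by zero
  const zScore = (todayCount - mean) / stdDev;

  if (zScore < -3) {
    // Slack incoming webhooks accept a simple JSON body with a text field.
    await fetch("https://hooks.slack.com/services/XXX/YYY/ZZZ", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `Tag ${tagId} fired ${todayCount} times today vs an average of ${mean.toFixed(0)}: possible breakage.`,
      }),
    });
  }
}
```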
Conclusion
Given we’re a startup, we’re really open to your praise or criticism. We feel it is really early days in this industry, and advancements like server-side tagging in GTM and Adobe Launch will really benefit companies’ monitoring and data validation activities.
As you have probably realised, we really think that monitoring is objectively better than crawling, but we’re open to discussion here :)
If you want to find out more about us feel free to reach out to andrew@getmonita.io