The Demystified Library is the world's only freely available source of tips and tricks critical to the success of every digital analyst.
Analysis entry for Adobe submitted on 10/31/2017 7:31:03 PM by Cathy Morse
|
I recently set up some alerts in Adobe Analytics Workspace, and what I learned is that you can monitor a lot without much effort. The low-hanging fruit is to track all your events in one alert. The setup is fast: you just open a new alert and drag in all the metrics you want to track. This includes the Instances of your eVars, which will at least tell you that your eVars are firing as often as they have in the past.
Now, it won't monitor the values being passed in your eVars or props. You will need to set up more complicated alerts for that. But you can get really far just using metrics.
What I did was set up one alert for my commerce site and then apply segments for the different regions, since each region has its own code base. Then I set up one for my content sites and applied relevant segments there as well. The benefit of adding segments is that the calculations will occur within those respective data sets, giving you even more confidence in your data.
I chose the “anomaly exceeds 95%” test. This checks whether your metric counts are outside the normal range for the lookback window (which depends on your granularity). You can select the confidence interval you would like. If you have a few metrics that are especially important, you may want to set tighter ranges for those.
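Adobe builds the forecast and confidence band for you under the hood, but the underlying idea is easy to see in a few lines. Here is a minimal, hypothetical Python sketch that flags a day's metric count when it falls outside a ~95% band computed from a trailing lookback window; the window length, the z-score, and the numbers are illustrative assumptions, not Adobe's actual algorithm:

import statistics

def flag_anomaly(history, today, z=1.96):
    # history: daily metric counts for the lookback window
    # today:   the count being tested
    # z:       z-score for the confidence band (1.96 ~= 95%)
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    lower, upper = mean - z * stdev, mean + z * stdev
    return not (lower <= today <= upper), (lower, upper)

# Example: a 30-day lookback of daily orders, then a suspiciously low day
lookback = [120, 135, 128, 140, 131, 125, 138] * 4 + [129, 133]
is_anomaly, band = flag_anomaly(lookback, 61)
print(is_anomaly, band)  # True, band of roughly (118, 144)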
I’ve had these set up for about a week and it’s working well. When I get an alert email, there’s enough information in the email itself that I can decide whether I want to drill deeper. If I do, I can click straight into the Workspace alert results and see the data in graphical form.
There’s so much more you can do with Adobe Alerts, but this will get you off to a running start with minimal effort.
|
Analysis entry for Adobe submitted on 9/9/2017 6:37:11 PM by Cathy Morse
|
App data isn’t always real-time. If you enable offline app tracking, you will inevitably get late hits, which means yesterday’s data isn’t yet complete. But how long should you wait before assuming that data is “complete” enough? In other words, you want to know the distribution of when late hits come in.
A quick primer on how offline hits work. If a user is offline using an app, the app will store the hit data in memory until the user comes back online. Often that is in a subsequent session. But is that 2 days, 1 week or longer? It depends on the type of app you have. Apps with high repeat frequency will see late hits come in sooner than apps with lower use. It also depends on how much functionality a user gets while being offline. Users can use a fitness or game app offline a lot more readily than an ecommerce or news app.
One way to track this curve is to pick one day to track, let’s say September 1. Then, each day from September 1 onward, look at September 1’s traffic numbers until you feel they have “settled out.” Then assume that September 1 is representative of all your days.
Or you can look at Adobe raw data to measure the time between two key timestamps: 1) the time the hit was collected by the app (while online or offline) and 2) the time the hit arrived at Adobe’s servers (which can only happen once the app is back online).
In order to pull Adobe raw data, you have to be an Admin; you will see the “Data Feeds” link under the Admin menu. I won’t go into details here, but a key thing to note is that if you want to open the data in Excel, you will want to try to keep the data under 15MB.
Here’s what you want to pull in:
1. post_cust_hit_time_gmt (the time the device collected the hit)
2. hit_time_gmt (this is the time Adobe received the hit)
Then convert the timestamps from Unix to Excel using this formula:
=(((COLUMN_ID_HERE/60)/60)/24)+DATE(1970,1,1)
Then use this formula to subtract the two and get the number of days elapsed:
=DATEDIF(column1,column2,"d")
Make sure you put the earlier of the two dates in the column1 position and the later date in column2.
Now you can create a pretty Excel chart that shows you the distribution of hit latency.
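If you’d rather skip Excel, here is a minimal, hypothetical Python sketch of the same calculation. It assumes you’ve saved the data feed extract as a tab-delimited file with a header row containing the two timestamp columns (Unix epoch seconds); the file name and the header row are assumptions, since raw data feeds normally ship the column names in a separate file:

import csv
from collections import Counter

latency_days = Counter()

with open("hit_data.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        collected = int(row["post_cust_hit_time_gmt"])  # when the device recorded the hit
        received = int(row["hit_time_gmt"])             # when Adobe received the hit
        days_late = max(0, (received - collected) // 86400)
        latency_days[days_late] += 1

# Distribution of how many days late hits arrive
for days, hits in sorted(latency_days.items()):
    print(f"{days} day(s) late: {hits} hits")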
|
Analysis entry for Adobe submitted on 9/9/2017 6:34:09 PM by Cathy Morse
|
One thing that can be difficult to detect is whether your campaign tracking codes are “falling off.” They can break or fall off for a variety of reasons: an error in the query string syntax, a mistake in the publishing process, or a redirect your IT team adds to the landing page URL (which can drop the query string).
One way to detect if your campaign codes might be falling off is to look at the % of your traffic that comes from the Typed/Bookmark bucket (aka, "No Referral" Traffic). Here’s my theory.
You can’t artificially deflate Typed/Bookmark traffic, but it can be artificially inflated when campaign codes fall off. So look at a historical trend of the % of your traffic that comes from Typed/Bookmark. As a % of total traffic, it shouldn’t increase drastically. Even if there’s a big PR or social media campaign, referrals from news and social media sites should still rise more than Typed/Bookmark traffic.
Note: I’m assuming that all PR events nowadays generate referral traffic from news/social media sites.
How often, anymore, do you have a purely offline PR event that drives online traffic directly to your site? If you are using any social media listening tools, you should be able to detect that PR somewhere. Therefore, you can reasonably assume that only paid campaigns with a query string can artificially inflate your Typed/Bookmark traffic as a % of total traffic. (Again, normalizing against total traffic is key to this theory.)
So, my conclusion is this: Typed/Bookmark traffic should not spike as a % of total traffic. If it does, then I would start looking into whether your campaign codes are dropping off somewhere. Test your campaign landing pages and make sure they don’t redirect. Check with your marketing team to see examples of campaigns “in the wild” and test the campaign links to make sure they are working correctly.
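Here is a minimal sketch of that monitoring idea, assuming you already export daily totals of visits and Typed/Bookmark (“no referrer”) visits to a CSV; the file name, column names, and the 10-point spike threshold are illustrative assumptions:

import csv

SPIKE_THRESHOLD = 0.10  # flag if the share jumps more than 10 points above the trailing average

rows = []
with open("traffic_by_day.csv", newline="") as f:
    for row in csv.DictReader(f):
        share = int(row["typed_bookmark_visits"]) / int(row["total_visits"])
        rows.append((row["date"], share))

for i in range(7, len(rows)):
    date, share = rows[i]
    baseline = sum(s for _, s in rows[i - 7:i]) / 7  # trailing 7-day average share
    if share - baseline > SPIKE_THRESHOLD:
        print(f"{date}: Typed/Bookmark share {share:.1%} vs. baseline {baseline:.1%} -- "
              "check whether campaign codes are dropping off")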
|
Analysis entry for Google submitted on 8/9/2017 5:27:05 PM by Michele Kiss
|
Analytics tools like Google Analytics can automatically detect some things about your incoming traffic (for example, that a visitor came from a search engine or a previous website). However, this information is fairly rudimentary. Therefore, analytics solutions provide a way for you to TELL them how the traffic is getting there. For GA, this is UTM (aka campaign) tracking. You can read about GA's solution here: http://bit.ly/ga-url-builder ( Link 1)
The most important thing is that your marketers are consistent. (For example, that they all use "social" as the Medium - not "socialmedia" or "sm" or "Social" - spelling and case differences will lead to multiple Mediums that all mean the same thing!) So whatever you choose to name things, keep using them consistently.
A shared spreadsheet like this one can be a helpful start. You can add drop downs and data validation to force even more consistency! Just make a copy, and customise to your heart's content. http://bit.ly/ga-url-builder-sheet ( Link 1)
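For illustration, here is a minimal, hypothetical Python sketch of building a campaign-tagged URL from fixed lists of allowed values - the same consistency the spreadsheet's drop-downs enforce. The allowed value lists and the function name are assumptions, not part of GA's tooling:

from urllib.parse import urlencode

# Hypothetical whitelists -- agree on these with your marketers up front.
ALLOWED_MEDIUMS = {"email", "social", "cpc", "display", "affiliate"}
ALLOWED_SOURCES = {"newsletter", "facebook", "twitter", "google", "bing"}

def build_campaign_url(base_url, source, medium, campaign):
    # Append GA utm_* parameters, rejecting values outside the agreed lists.
    source, medium = source.lower(), medium.lower()
    if medium not in ALLOWED_MEDIUMS:
        raise ValueError(f"Unknown medium: {medium!r}")
    if source not in ALLOWED_SOURCES:
        raise ValueError(f"Unknown source: {source!r}")
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign.lower(),
    })
    return f"{base_url}?{params}"

print(build_campaign_url("https://www.example.com/landing", "Facebook", "social", "Fall_Sale"))
# https://www.example.com/landing?utm_source=facebook&utm_medium=social&utm_campaign=fall_sale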
RELATED:
Link 1
|
Analysis entry for Other submitted on 8/8/2017 6:31:39 PM by John Lovett
|
A Digital Analytics Measurement Plan is a vital tool for any organization. Some organizations build measurement plans that capture everything under the Marketing sun, but this model is designed to take a small bite out of individual initiatives. If used correctly, this Measurement Plan will become a critical tool in your analytics planning process. And, it will also help project owners articulate their strategies and empower them to request data that will provide the results they need.
The Digital Analytics Measurement Plan offered here answers a number of key questions such as:
· What is this initiative designed to do?
· Why is this important to your organization?
· What are your desired outcomes for this initiative?
· Are there specific questions that you are trying to answer?
· How will you measure success?
· What tools will be used for data collection and analysis?
· What new data collection requirements exist?
· How will results be reported?
I encourage you to use my template ( Link 1) as a starting point and to customize it for your company's needs. There are many questions that you can ask, but I've found that, in my experience, beginning with the questions above helps Project Owners to organize their thoughts and to ensure that their project aligns with corporate goals. It also helps Analysts to gain the information they need to document measures of success, to determine what additional tracking parameters (or variables) are needed, and to determine a starting point for analysis. This pre-work, which really only requires a small bit of planning, is often the difference between an initiative that provides key insights and one that leaves everyone wondering why they spent the energy and effort.
Feel free to reach out to john@analyticsdemystified.com if you have questions about this Measurement Plan.
RELATED:
Link 1
|
Analysis entry for Google submitted on 8/3/2017 10:57:49 AM by Tim Wilson
|
This is a web-based app that works with Google Analytics data to explore site search usage on a site. There are three main components of what it does:
* "Stemming" of site search terms -- Sébastien Brodeur did a demo at Superweek 2017 of how he collapsed the variations of search terms into a single "stemmed" term. This makes for more meaninful frequency counts.
* Selective removal of terms -- many sites have some "dominant" search terms that are valid...but that dwarf the ability to get to the really interesting stuff. This app allows the user to simply type in words to remove them from the frequency counts and word cloud.
* Questions in search -- this was something Nancy Koons presented a few years ago -- filter down to just the searches that start with a "question word." These are searches well out on the long tail, but they can be very insightful. (A rough sketch of the stemming and question-filtering ideas follows this list.)
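The app itself is written in R (links below), but the stemming and question-filtering ideas are easy to illustrate outside of it. Here is a minimal, hypothetical Python sketch, assuming a simple list of raw search terms; the crude suffix-stripping stemmer and the question-word list are my own illustrative stand-ins, not what the app actually uses:

from collections import Counter

QUESTION_WORDS = ("how", "what", "why", "where", "when", "who", "can", "do", "does", "is")

def naive_stem(word):
    # Very crude suffix stripping -- a stand-in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ers", "er", "ies", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

searches = [
    "return policy", "returns policy", "returning an item",
    "how do i return an item", "shipping", "shipping costs",
    "what is the shipping cost", "gift cards",
]

# 1) Stemmed frequency counts -- variations collapse to one term
stemmed_counts = Counter(
    " ".join(naive_stem(w) for w in s.lower().split()) for s in searches
)
print(stemmed_counts.most_common())

# 3) Question-style searches -- the long tail that reveals user intent
questions = [s for s in searches if s.lower().split()[0] in QUESTION_WORDS]
print(questions)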
Link 1 included here shows the first two items -- a word cloud and how "dominant but uninteresting" terms can be removed to make a more meaningful word cloud.
Link 2 shows the second item -- how "questions" get surfaced by filtering for specific terms.
A more complete description of this approach is available at: http://analyticsdemystified.com/google-analytics/exploring-site-search-help-r/ ( Link 3)
If you have access to a Google Analytics account that is configured for site search tracking, you can try this tool out without doing any coding at: https://gilligan.shinyapps.io/ga-site-search/.
If you would like to download the R code to run it locally or make modifications, it is available on Github: https://github.com/gilliganondata/site-search-wordcloud.
RELATED:
Link 1 · Link 2 · Link 3
|