Website analytics can tell us a lot about our audience and how they interact with our site. Oftentimes, we rely heavily on these analytics for reporting. But what if I told you that Google Analytics provides data that can be used as a strategy tool?
In this post, we are going to quickly look at three very specific, very actionable Google Analytics views for uncovering SEO opportunities.
Track Core Web Vitals
Google has verified that Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS) are now part of the Page Experience ranking factor. These metrics together make up Core Web Vitals. This topic has already been covered many times in the SEO industry, and Google itself has covered the topic along with how to measure the metrics, so we won’t dive too deep into the metrics themselves.
In the documentation provided by Google, they break down how you can pull LCP, FID, and CLS data into Google Analytics. This can be done by setting up custom events using the code found on GitHub.
Upon setting up those events, you’ll be able to see all of the Core Web Vitals metrics in Google Analytics. They will show up when you go to Google Analytics > Behavior > Events > Top Events and toggle over to Event Action. To get further insight into how each page is performing in each category, use a secondary dimension of Page.
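If you want to see what those custom events look like, here is a TypeScript sketch of the pattern from the GoogleChrome/web-vitals repository, assuming gtag.js is already installed on the page. Because the metric name is sent as the Event Action, the data lands exactly where described above (Behavior > Events > Top Events > Event Action).

```typescript
// Sketch based on the pattern in the GoogleChrome/web-vitals README (v1/v2 API).
// Assumes gtag.js is already loaded on the page.
import { getCLS, getFID, getLCP, Metric } from 'web-vitals';

declare function gtag(...args: unknown[]): void;

function sendToGoogleAnalytics({ name, delta, id }: Metric): void {
  gtag('event', name, {
    event_category: 'Web Vitals',
    // The id groups multiple deltas from the same page load in reporting.
    event_label: id,
    // GA event values must be integers; CLS is scaled up to preserve precision.
    value: Math.round(name === 'CLS' ? delta * 1000 : delta),
    // Keep these events from affecting bounce rate.
    non_interaction: true,
  });
}

getCLS(sendToGoogleAnalytics);
getFID(sendToGoogleAnalytics);
getLCP(sendToGoogleAnalytics);
```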
To find underperforming pages, use advanced filters to surface pages that fall short of Google’s “good” thresholds (LCP over 2.5 seconds, FID over 100 milliseconds, or CLS above 0.1).
Using this data, you can tackle Core Web Vitals head-on and keep a close eye on performance as you make changes.
Find and fix 404s
The last thing you want is for people to finally come to your site just to be sent to an “Oops” page. This can happen for a variety of reasons: a mis-shared link, a forgotten redirect, a misspelled word in the URL, etc. It’s important to find these pages early and set up a fix right away to create the best possible experience for users.
The easiest way I’ve found to identify these URLs is to navigate to a page I know doesn’t exist on my website. For example, you might type in example.com/roger-rocks; then, when the 404 page loads, grab its title tag. Now navigate to Google Analytics > Behavior > All Pages and toggle over to Page Title. Once here, search for the title tag of your 404 page.
You’ll see one row with all of the stats for your 404 page. Click on the title name and you’ll be presented with a new screen listing every URL that resulted in a 404 page. These are the URLs to research: determine why people are reaching them, then decide what needs to be fixed.
Again, those fixes may require creating or fixing a redirect, fixing a link (internal or external), creating content for that URL, and so on.
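If you end up with a long list of URLs to triage, a quick script can check which ones still return a 404 and which have since been fixed or redirected. Here’s a rough TypeScript sketch — the input file name, its one-full-URL-per-line format, and a Node 18+ runtime (for the built-in fetch) are all assumptions:

```typescript
import { readFileSync } from 'fs';

// Assumed input: a plain-text export with one full URL per line
// (prepend your domain first if your Google Analytics export contains paths only).
const urls = readFileSync('404-candidates.txt', 'utf8')
  .split('\n')
  .map((line) => line.trim())
  .filter(Boolean);

async function triage(): Promise<void> {
  for (const url of urls) {
    // fetch follows redirects by default, so a URL you've since redirected reports its final status.
    const res = await fetch(url, { redirect: 'follow' });
    const verdict = res.status === 404 ? 'still 404 — needs a fix' : `now returns ${res.status}`;
    console.log(`${url}: ${verdict}`);
  }
}

triage().catch(console.error);
```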
Find and capitalize on easy traffic opportunities
Search Console is a great tool for SEOs, as it gives us insight into how we’re performing in the search engine results pages. The drawback of Search Console is that its filtering options make it tough to manipulate the data — this isn’t the case with Google Analytics.
In Google Analytics, under Acquisition, you’ll find Search Console. If you have correctly connected your Google Analytics account with Search Console, your position, CTR, query, and landing page data should all be there.
So, if you go to Google Analytics > Acquisition > Search Console > Query, you can use the advanced search bar to help you find the data you want. In this case, let’s include Average Position less than 10, include Average Position greater than 3, and include CTR of less than 5%.
After applying this search filter, you'll find a list of keywords you currently rank well enough for, but that could use just a little boost. Increasing the CTR may be as simple as testing new title tags and meta descriptions. A higher CTR may lead to an increase in rankings, but even if it doesn’t, it will lead to an increase in traffic.
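If you prefer working from an exported report rather than the in-browser filter, the same “striking distance” logic is easy to reproduce. Below is a minimal TypeScript sketch; the row shape is an assumption about how you’ve parsed your export, not an official format:

```typescript
interface QueryRow {
  query: string;
  avgPosition: number;
  ctr: number; // expressed as a fraction, e.g. 0.03 for 3%
}

// Keywords ranking between positions 3 and 10 with CTR under 5% —
// close enough to prominence that a better title/description could move the needle.
function findStrikingDistance(rows: QueryRow[]): QueryRow[] {
  return rows
    .filter((r) => r.avgPosition > 3 && r.avgPosition < 10 && r.ctr < 0.05)
    .sort((a, b) => a.avgPosition - b.avgPosition);
}

// Example usage with made-up numbers: only the first row qualifies.
const sample: QueryRow[] = [
  { query: 'seo reporting templates', avgPosition: 6.2, ctr: 0.021 },
  { query: 'what is seo', avgPosition: 2.1, ctr: 0.18 },
];
console.log(findStrikingDistance(sample));
```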
Pro tip: track your changes
The only way to know what is affecting your traffic is to track your changes. If you update a page, fix a link, or add a new resource, it may be enough to change your rankings.
I find that tracking my changes in the annotations section in Google Analytics allows me to deduce potential effects at a glance. When a date has an annotation, there is a small icon on the timeline to let you know a change was made. If you see a bigger (or smaller) than usual peak after the icon, it could be a hint that your change had an impact.
But remember, correlation does not always equal causation! As Dr. Pete would say, run your own tests. This is just meant to be a quick reference check.
In conclusion
Google Analytics is often used for reporting and tracking. But that same data can also be used to put a strategy into action.
By taking your analytics just a step further, you can unlock serious opportunities.
On February 19, 2021, we measured a dramatic drop in Featured Snippets on Google SERPs in the US. Like any responsible data scientist, I waited to make sure it wasn't a fluke, did my homework, and published when I was sure I was onto something. Then, this happened (30-day view):
C'MON, GOOGLE! I did all these beautiful analyses, found a lovely connection between Featured Snippet losses, YMYL queries, and head terms, and then you go and make me look like a chump?!
Is there anything we can learn from this strange turn of events? Do I really need this stress? Should I just go pour myself a cocktail? Stay tuned for none of these answers and more!
You want more data? Ok, fine, I guess...
Could this recovery be a fluke of the 10,000-keyword MozCast data set? It's unlikely, but let's dot our i's and cross our t's. Here's the Featured Snippet data from the same time period across roughly 2.2M US/desktop keywords in the STAT data set:
So, this gets a lot messier. We saw a significant drop on February 19, followed by a partial recovery, followed by an even larger drop, finally landing (for now) on a total recovery.
Our original study of the drop showed dramatic differences by query length. Here's a breakdown by four word-count buckets for the before and after Featured Snippet prevalence (the data points are February 18, February 19, and March 12):
You can plainly see that the bulk of the losses were in one-word queries, with longer queries showing minor but far less dramatic drops. All query lengths recovered by March 12.
Who really came back from holiday?
If you take two kids on vacation and come back with two kids, it's all good, right? What if the kids who came back weren't the same? What if they were robots? Or clones? Or robot clones?
Is it possible that the pages that were awarded Featured Snippets after the recovery were different from the ones from before the drop? A simple count doesn't tell us the whole story, even if we slice-and-dice it. This turns out to be a complicated problem. First of all, we have to consider that — in addition to the URL of the Featured Snippet changing — a keyword could gain or lose a Featured Snippet entirely. Consider this comparison of pre-drop and post-recovery:
Looking at the keywords in MozCast that had Featured Snippets on February 18, 79% of those same keywords still had Featured Snippets on March 12. So, we're down 21% already. If we narrow the focus to keywords that retained their Featured Snippets and displayed the same page/URL in those Featured Snippets, we're down to 60% of the original set.
That seems like a big drop, but we also have to consider that three weeks (22 days, to be precise) passed between the drop and recovery. How much change is normal for three weeks? For comparison's sake, let's look at the Featured Snippet stability for the 22 days prior to the drop:
While these numbers are a bit better than the post-recovery numbers, we're still seeing about three out of 10 keywords either losing a Featured Snippet or changing the Featured Snippet URL. Keep in mind that Featured Snippets are pulled directly from page-one organic results, so they're constantly in flux as the algorithm and the content of the web evolve.
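For anyone who wants to run the same retention math on their own rank-tracking data, it boils down to set logic over two snapshots. Here's a simplified TypeScript sketch (the snapshot shape is an assumption for illustration, not the MozCast or STAT schema):

```typescript
// One snapshot: keyword -> URL of the Featured Snippet it displayed (if any).
type SnippetSnapshot = Map<string, string>;

function compareSnapshots(before: SnippetSnapshot, after: SnippetSnapshot) {
  let kept = 0;    // keyword still has a Featured Snippet
  let sameUrl = 0; // ...and it's still the same URL
  for (const [keyword, url] of before) {
    const afterUrl = after.get(keyword);
    if (afterUrl !== undefined) {
      kept++;
      if (afterUrl === url) sameUrl++;
    }
  }
  return {
    keptPct: (100 * kept) / before.size,       // e.g. ~79% in the MozCast comparison
    sameUrlPct: (100 * sameUrl) / before.size, // e.g. ~60%
  };
}
```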
Are Featured Snippets staying home?
It's impossible to say whether the original drop was deliberate on Google's part, an unintentional consequence of another (deliberate) change, or entirely a bug. Honestly, given the focus of the drop on so-called "head" queries and YMYL (Your Money, Your Life) queries, I thought this was a deliberate change that was here to stay. Without knowing why so many Featured Snippets went away, I can't tell you why they came back, and I can't tell you how long to expect them to stay around.
What we can assume is that Google will continue to evaluate Featured Snippet quality, especially for queries where result quality is critical (including YMYL queries) or where Google displays Knowledge Panels and other curated information. Nothing is guaranteed, and no tactic is future-proof. We can only continue to measure and adapt.
Google must be one of the most experimental enterprises the world has ever known. When it comes to the company’s local search interfaces, rather than rolling them all out as a single, cohesive whole, they have emerged in piecemeal fashion over two decades with different but related feature sets, unique URLs, and separate branding. Small wonder that confusion arises in dialog about aspects of local search. You, your agency coworkers, and your clients may find yourselves talking at cross-purposes about local rankings simply because you’re all looking at them on different interfaces!
Such is certainly the case with Google Maps vs. the object we call the Google Local Finder. Even highly skilled organic SEOs at your agency may not understand that these are two different entities which can feature substantially different local business rankings.
Today we’re going to clear this up, with a side-by-side comparison of the two user experiences, expert quotes, and a small, original case study that demonstrates and quantifies just how different rankings are between these important interfaces.
Methodology
I manually gathered both Google Maps and Local Finder rankings across ten different types of geo-modified, local intent search phrases and ten different towns and cities across the state of California. I looked at differences both across search phrase and across locale, observing those brands which ranked in the top 10 positions for each query. My queries were remote (not performed within the city nearest me) to remove the influence of proximity and establish a remote baseline of ranking order for each entry. I tabulated all data in a spreadsheet to discover the percentage of difference in the ranked results.
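For readers who want to replicate this at home, one reasonable way to turn two top-10 lists into a “percentage of difference” is to count how many Local Finder results appear nowhere in the Maps top 10. The TypeScript sketch below illustrates that approach with made-up business names; my spreadsheet math followed the same idea, though you could reasonably define the comparison slightly differently.

```typescript
// Share of the Local Finder top 10 that doesn't appear anywhere in the Maps top 10.
function percentDifference(localFinderTop: string[], mapsTop: string[]): number {
  const maps = new Set(mapsTop.map((name) => name.toLowerCase().trim()));
  const missing = localFinderTop.filter((name) => !maps.has(name.toLowerCase().trim()));
  return (100 * missing.length) / localFinderTop.length;
}

// Made-up business names: one of the three differs, so the difference is ~33%.
console.log(
  percentDifference(
    ['Taco Casa', 'El Gordo Taqueria', 'Casa Maria'],
    ['Taco Casa', 'Taqueria Luna', 'Casa Maria'],
  ),
);
```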
Results of my study of Google Maps vs. the Local Finder
Before I roll out the results, I want to be sure I’ve offered a good definition of these two similar but unique Google platforms. Any user performing a local search (like “best tacos san jose”) can take two paths for deep local results:
Path one starts with a local pack, typically made up of three results near the top of the organic search results. If clicked on, the local pack takes the user to the Local Finder, which expands on the local pack to feature multiple listings, accompanied by a map. These types of results exist on google.com/search.
Path two may start on any Android device that features Google Maps by default, or it can begin on a desktop device by clicking the “Maps” tab above the organic SERPs. These types of results look quite similar to the Local Finder, with their list of ranked businesses and associated map, but they exist on google.com/maps.
Here’s a side-by-side comparison:
At first glance, these two user experiences look fairly similar with some minor formatting and content differences, but the URLs are distinct, and what you might also notice in this screenshot is that the rankings, themselves, are different. In this example, the results are, in fact, startlingly different.
I’d long wanted to quantify for myself just how different Maps and Local Finder results are, and so I created a spreadsheet to track the following:
Ten search phrases of different types including some head terms and some longer-tail terms with more refined intent.
Ten towns and cities from all parts of the big state of California, covering a wide population range. Angels Camp, for example, has a population of just 3,875 residents, while LA is home to nearly 4 million people.
I found that, taken altogether, the average difference in Local Finder vs. Maps results was 18.2% across all cities. The average difference was 18.5% across all search phrases. In other words, nearly one-fifth of the results on the two platforms didn’t match.
Here’s a further breakdown of the data:
Average percentage of difference by search phrase
burgers (11%)
grocery store (19%)
pediatrician (12%)
personal injury attorney (18%)
house cleaning service (10%)
electric vehicle dealer (16%)
best tacos (11%)
cheapest tax accountant (41%)
nearby attractions (8%)
women’s clothing (39%)
Average percentage of difference by city
Angels Camp (28%)
San Jose (15%)
San Rafael (24%)
San Francisco (4%)
Sacramento (16%)
Los Angeles (25%)
Monterey (14%)
San Diego (16%)
Eureka (25%)
Grass Valley (15%)
While many keyword/location combos showed 0% difference between the two platforms, others featured degrees of difference of 20%, 30%, 50%, 70%, and even 100%.
It would have been lovely if this small study surfaced any reliable patterns for us. For example, looking at the fact that the small, rural town of Angels Camp was the locale with the most diverse SERPs (28%), one might think that the smaller the community, the greater the variance in rankings. But such an idea founders when you observe that the city with the second-most variability is LA (25%).
Similarly, looking at the fact that a longer-tail search like “cheapest tax accountant” featured the most differences (41%), it could be tempting to theorize that greater refinement in search intent yields more varied results. But then we see that “best tacos” results were only 11% different across Google Maps and the Local Finder. So, to my eyes, there is no discernible pattern from this limited data set. Perhaps narratives might emerge if we pulled thousands of SERPs.
For now, all we can say with confidence is that the rankings a business enjoys in Google’s Local Finder frequently won’t match its rankings in Google Maps. Individual results sets for keyword/locale combos may vary not at all, somewhat, substantially, or totally.
Maps vs. Finders: What’s the diff, and why?
The above findings from our study naturally lead to the question: why are the results for the same query different on the two Google platforms? For commentary on this, I asked three of my favorite local SEOs for theories on the source of the variance, and any other notable variables they’ve observed.
Local search expert Mike Blumenthal theorizes:
“I think that the differences are driven by the subtle differences of the 'view port' aspect ratio and size differences in the two environments. The viewport effectively defines the cohort of listings that are relevant enough to show. If it is larger, then there are likely more listings eligible, and if one of those happens to be strong, then the results will vary.”
Here’s an illustration of what Mike is describing. When we look at the results for the same search in the Local Finder and Google Maps, side by side, we often see that the area shown on the map is different at the automatic zoom level:
Uberall Solutions Engineer Krystal Taing confirms this understanding, with additional details:
“Typically when I begin searches in Maps, I am seeing a broader area of results being served as well as categories of businesses. The results in the Local Finder are usually more specific and display more detail about the businesses. The Maps-based results are delivered in a manner that show users desire discovery and browsing. This is different from the Local Finder in that these results tend to be more absolute and about Google pushing pre-determined businesses and information to be evaluated by the user.”
Krystal is a GMB Gold Product Expert, and her comment was the first time I’d ever heard an expert of her caliber define how Google might view the intent of Maps vs. Finder searchers differently. Fascinating insight!
Sterling Sky Founder Joy Hawkins highlights further differences in UX and reporting between the two platforms:
“What varies is mainly the features that Google shows. For example, products will show up on the listing in the Local Finder but not on Google Maps and attribute icons (women-led, Black-owned, etc.) show up on Google Maps but not in the Local Finder. Additionally, searches done in the Local Finder get lumped in with search in Google My Business (GMB) Insights whereas searches on Maps are reported on separately. Google is now segmenting it by platform and device as well.”
In sum, Google Maps vs. Local Finder searchers can have a unique UX, at least in part, because Google may surface a differently-mapped area of search and can highlight different listing elements. Meanwhile, local business owners and their marketers will discover variance in how Google reports activity surrounding these platforms.
What should you do about the Google Maps vs. Local Finder variables?
As always, there is nothing an individual can do to cause Google to change how it displays local search results. Local SEO best practices can help you move up in whatever Google displays, but you can’t cause Google to change the radius of search it is showing on a given platform.
That being said, there are three things I recommend for your consideration, based on what we’ve learned from this study.
1. See if Google Maps is casting a wider net than the Local Finder for any of your desired search phrases.
I want to show you the most extreme example of the difference between Maps and the Local Finder that I discovered during my research. First, the marker here locates the town of Angels Camp in the Sierra foothills in east California:
For the search “personal injury attorney angels camp”, note the area covered by the map at the automatic zoom level accompanying the Local Finder results:
The greatest distance between any two points in this radius of results is about 100 miles.
Now, contrast this with the same search as it appears at the automatic zoom level on Google Maps:
Astonishingly, Google is returning a tri-state result for this search in Maps. The greatest distance between two pins on this map is nearly 1,000 miles!
As I mentioned, this was the most extreme case I saw. Like most local SEOs, I’ve spent considerable time explaining to clients who want to rank beyond their location that the further a user gets from the brand’s place of business, the less likely they are to see it come up in their local results. Typically, your best chance of local pack rankings begins with your own neighborhood, with a decent chance for some rankings within your city, and then a lesser chance beyond your city’s borders.
But the different behavior of Maps could yield unique opportunities. Even if what’s happening in your market is more moderate, in terms of the radius of results, my advice is to study the net Google is casting for your search terms in Maps. If it is even somewhat wider than what the Local Finder yields, and there is an aspect of the business that would make it valuable to bring in customers from further afield, this might indicate that some strategic marketing activities could potentially strengthen your position in these unusual results.
For example, one of the more distantly-located attorneys in our example might work harder to get clients from Angels Camp to mention this town name in their Google-based reviews, or might publish some Google posts about Angels Camp clients looking for the best possible lawyer regardless of distance, or publish some website content on the same topic, or look to build some new relationships and links within this more distant community. All of this is very experimental, but quite intriguing to my mind. We’re in somewhat unfamiliar territory here, so don’t be afraid to try and test things!
As always, bear in mind that all local search rankings are fluid. For verticals which primarily rely on the narrowest user-to-business proximity ratios for the bulk of transactions, more remote visibility may have no value. A convenience store, for example, is unlikely to garner much interest from faraway searchers. But for many industries, any one of these three criteria could make a larger local ranking radius extremely welcome:
The business model is traditionally associated with traveling some distance to get to it, like hotels or attractions (thinking post-pandemic here).
Rarity of the goods or services being offered makes the business worth driving to from a longer distance. This is extremely common in rural areas with few nearby options.
The business has implemented digital shopping on its website due to the pandemic and would now like to sell to as many customers as possible in a wider region with either driver delivery or traditional shipping as the method of fulfillment.
If any of those scenarios fits a local brand you’re marketing, definitely look at Google Maps behavior for focus search phrases.
2. Flood Google with every possible detail about the local businesses you’re marketing
As Joy Hawkins mentioned above, there can be many subtle differences between the elements Google displays within listings on its two platforms. Look at how hours are included in the Maps listing for this taco shop but absent from the Finder. The truth is, Google changes the contents of the various local interfaces so often that even the experts are constantly asking themselves and one another whether some element is new.
The good news is, you don’t need to spend a minute worrying about minutiae here if you make a few key commitments:
Fill out every field you possibly can in the Google My Business dashboard
Add to this a modest investment in non-dashboard elements like Google Questions and Answers which exist on the Google Business Profile
Be sure your website is optimized for the terms you want to rank for
Earn publicity on the third-party websites Google uses as the “web results” references on your listings
I realize this is a tall order, but it’s also basic, good local search marketing and if you put in the work, Google will have plenty to surface about your locations, regardless of platform variables.
3. Study Google Maps with an eye to the future
Google Maps, as an entity, launched in 2005, with mobile app development spanning the next few years. The Local Finder, by contrast, has only been with us since 2015. Because local packs default to the Local Finder, it’s my impression that local SEO industry study has given the lion’s share of research to these interfaces, rather than to Google Maps.
I would suggest that 2021 is a good year to spend more time looking at Google Maps, interacting with it, and going down its rabbit holes into the weird walled garden Google continues to build into this massive interface. I recommend this, because I feel it’s only a matter of time before Google tidies up its piecemeal, multi-decade rollout of disconnected local interfaces via consolidation, and Maps has the history at Google to become the dominant version.
Summing up
We’ve learned today that Google Maps rankings are, on average, nearly 20% different from Local Finder rankings, that this may stem, in part, from unique viewport ratios, that Google may view the intent of users on the two platforms differently, and that there are demonstrable differences in the listing content Google displays when we look at two listings side by side. We’ve also looked at some scenarios in which verticals that could benefit from a wider consumer radius would be smart to study Google Maps in the year ahead.
I want to close with some encouragement for everyone participating in the grand experiment of Google’s mapping project. The above photo is of the Bedolina Map, which was engraved on a rock in the Italian Alps sometime around 500 BC. It is one of the oldest-known topographic maps, plotting out pathways, agricultural fields, villages, and the people who lived there. Consider it the Street View of the Iron Age.
I’m sharing this image because it’s such a good reminder that your work as a local SEO linked to digital cartography is just one leg of a very long journey which, by nature, requires a willingness to function in an experimental environment. If you can communicate this state of permanent change to clients, it can decrease stress on both sides of your next Zoom meeting. Rankings rise and fall, and as we’ve seen, they even differ across closely-related platforms, making patience essential and a big-picture view of overall growth very grounding. Keep studying, and help us all out on the mapped path ahead by sharing what you learn with our community.
Looking to increase your general knowledge of local search marketing? Read The Essential Local SEO Strategy Guide
We condemn the horrific acts of hate and violence targeting the Asian American and Pacific Islander (AAPI) community, which culminated in the tragic mass shooting in Georgia on March 17th. We mourn the loss of life and grieve with the families that have been broken by this latest racist, misogynistic hate crime.
This is not an isolated incident. We must acknowledge the widespread examples of violence and prejudice, bigotry, and intolerance that have been building for some time. We've seen attacks on elders in the Asian community. Children face bullying from peers. There has been workplace discrimination, street harassment, violence, and vandalism. Since the beginning of the pandemic, hate crimes against Asians have increased tremendously. Anti-Asian racism is not new, but it's been fueled by dangerous false rhetoric surrounding COVID-19. I challenge myself and my community to recognize the painful history of anti-Asian racism, to learn and understand the experience of AAPI individuals, and to use the power and privilege we have to stand up to bigotry.
Why are we discussing this now?
To do the work of combating hate in every corner of our society, we need to hold conversations about these issues, loudly and often. At Moz, we have a platform that allows us to shine a light on the darkness we're facing. We have privilege that allows us to confront the uncomfortable. Silence allows hatred to flourish; discussion and accountability weeds it from the root.
What can we all do to combat AAPI hate and support the AAPI community?
Hatred shrinks from bravery. If you witness someone experiencing anti-Asian sentiment or discrimination, use bystander intervention training to inform your response. Intervene and educate friends and family that perpetuate harmful stereotypes, letting them know hatred cannot be tolerated. Seek out resources to educate yourself and share with your circle of influence. Show compassion and empathy to your AAPI friends, family, and coworkers, offering space before it's asked. Listen to and amplify AAPI voices. Find and patronize local AAPI-owned small businesses — Intentionalist is a fantastic tool to use here. Support organizations fighting to make the world a fairer, safer place for all — we'll share a few in the Resources section below.
Perhaps most importantly, have courage. We cannot allow hate to go unchecked. Be brave. Be loud. Say no to hate.
Resources
Many thanks to Kim Saira and Annie Wu Henry for compiling resources and education on this topic.
Marketers can get caught up in specific metrics, focusing on the data points that make them look good in reporting but don’t help them understand their performance.
In this week’s episode of Whiteboard Friday, Dr. Pete discusses the vanity we bring to the metrics we track, and how to take a better, more realistic view of your results.
Click on the whiteboard image above to open a high resolution version in a new tab!
Video Transcription
Hi, everybody. Welcome to another edition of Whiteboard Friday. I'm Dr. Pete, the Marketing Scientist for Moz, and I want to talk to you today about vanity metrics.
So I think we all have an intuition of what that means, but what I want to discuss today is I think we get caught up in this being about specific metrics. To me, the problem isn't the metrics themselves. The problem is the vanity. So I want to talk about us and what we bring to metrics, and how to do better no matter what the metric is.
SEO metric funnel
So I want to start with this kind of simplistic SEO funnel of metrics, starting with ranking.
Ranking
Ranking via click-through rate delivers traffic. Traffic via conversion rate delivers leads or sales or conversions or whatever you want to call them, the money. Then beyond that, we might have some more advanced metrics, like lifetime value, that kind of get into revenue over time or profit over time. Naturally, over time we've moved down this funnel and kind of put our attention more at the bottom, at the bottom line and the dollars.
That makes sense. I think it's good that we've gotten away from metrics like hits. In the early days, when a page counted more because it had 200 images and 73 JavaScript files, that's not so great, right? We know now that's probably bad in some cases. But it's possible to hold that mirror up to any of these metrics and get caught up in the vanity.
I know we're used to this with rankings and traffic. We've all had customers that wanted to go after certain very specific head terms or vanity terms as we call them, that really weren't delivering results or maybe cost a lot or were very competitive.
Traffic
Traffic, okay, traffic is good. But if you've ever had a piece of viral content that went really big but ended up not driving any conversions because it had nothing to do with your site, you know that's not so great.
In fact, traffic by itself could be bad. You could be overloading your server. You could be stopping legitimate customers from buying. So bringing people to your site for no reason or the wrong people isn't that great.
Sales and lifetime value
So I know it's easy to look at this and say, "Okay, but come on, sales. The bottom line is the bottom line." Well, I'll give you an example.
Let's say you have a big sale and you set everything to 50% off, and you bring in a ton of new sales and a ton of revenue. But let's say I tell you that your profit margins were 20%. Is that a good thing? You just cost yourself a lot of money. Now maybe you had another agenda and you're hoping to bring them back, or there's a branding aspect. But by itself we don't know necessarily if that's a great thing.
Just making more revenue isn't so great. Even profit or something like lifetime value, this is an example based in real life, but I'm going to change it a little bit to protect the innocent. Let's say you were a small company and you owned some kind of an asset. You owned some intellectual property, or you owned a piece of physical property and you sold that one year at significant profit, big margins.
Then you look and you say, "Wow, this year we made 50% profits, and next year we're going to try to make 70% based on that number." That would be a really terrible idea because that was a one-time thing, and you're not taking that into account. This is a bit of a stretch. But it's possible even to take profit or something like lifetime value or EBITDA even out of context, and even though it's a more complex metric or it's farther down the funnel, you could miss something important about what that number really means.
The three Rs
So that's the first thing. Is this a real result? Is that number going up necessarily good by itself? Without the context, you can't know that. The second thing where I think we really need to look at the entire funnel and not get focused too far down is repairs, fixing what's broken.
So let's say you track sales. Sales are going great. Everything is going well. Everybody is happy. The dollar bills are coming in. Then it stops, or it starts to drop significantly. If you don't know what happened above this, you can't do anything to fix it.
So if you don't know that your traffic dropped, if you don't know that your click-through rate dropped, and let's say your traffic dropped, you don't know why it dropped, which pages, which keywords, what rankings were affected, did you have lower rankings, or did you have rankings on less keywords, you can't go back and fix this and figure out what happened. So tracking that bottom line number isn't enough.
At that point, that has become a vanity metric. That's become something that you're celebrating, but you're not really understanding how you got there. I think we're all aware of that to a point. Maybe we don't do it, but we know we should. But the other thing I miss I think sometimes and that we miss is something I'm going to refer to as replication.
Yes, I tried a little too hard to get three R's in here. But this is repeating success. If something works and you get a bunch of sales, even if it's high margin, you get profitable sales, but you don't know what you did, you don't know what really drove that, where did the traffic come from, what was the source of that, was it specific pieces of content, was it specific keywords, what campaign was that tied to, you can't replicate that success.
So it's not just about fixing something when it's broken and when the dollars start to dry up, but when things go well, not just celebrating, but going back and trying to work up the funnel and figuring out what you did right, because if you don't know what you did right, you can't do it again.
So three R's. Results, consider the context of the metric. Repairs, be able to work up the funnel and know what's broken. If things go well, replication. Be able to repeat your successes and hopefully do it again.
So again, vanity, it's not in the metric. It's in us. You can have vanity with any of these things. So don't get caught up in any one thing. Consider the whole funnel.
I hope you can avoid the mistakes, and I hope you can repeat your successes. Thanks a lot, and I'll see you next time. Bye-bye.
The written content on your website serves to not only inform and entertain readers, but also to grab the attention of search engines to improve your organic rankings.
And while using SEO keywords in your content can help you get found by users, focusing solely on keyword density doesn’t cut it when it comes to creating SEO-friendly, reader-focused content.
This is where LSI keywords come in.
LSI keywords serve to add context to your content, making it easier to understand by search engines and readers alike. Want to write content that ranks and wows your readers? Learn how to use LSI keywords the right way.
What are LSI keywords?
Latent Semantic Indexing (LSI) keywords are terms that are conceptually related to the main keyword you’re targeting in your content. They help provide context to your content, making it easier for readers and search engines to understand what your content is about.
Latent semantic analysis
LSI keywords are based on the concept of latent semantic analysis, a natural language processing technique that analyzes the relationships between the words in a body of text in order to make sense of the overall content.
Search engine algorithms use latent semantic analysis to understand web content and ultimately determine what content best fits what the user is actually searching for when they use a certain keyword in their search.
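To build some intuition for what “analyzing the relationships between words” means, here is a deliberately tiny TypeScript sketch: each term is represented by the documents it appears in, and two terms are considered related when those document vectors overlap. Real latent semantic analysis goes further (weighting the term-document matrix and applying a singular value decomposition), but co-occurrence is the heart of the idea.

```typescript
// Toy illustration: terms that appear in the same documents are treated as related.
const docs = [
  'leather sofa with a sleeper sectional option',
  'comfortable couch and sectional sofa designs',
  'waterproof rubber rain boots for men',
];

// Binary term-document vector for a word: 1 if the doc contains it.
function termVector(term: string): number[] {
  return docs.map((d) => (d.toLowerCase().includes(term.toLowerCase()) ? 1 : 0));
}

// Cosine similarity between two term vectors: 1 = always co-occur, 0 = never.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

console.log(cosine(termVector('sofa'), termVector('sectional')));  // high: related terms
console.log(cosine(termVector('sofa'), termVector('waterproof'))); // 0: unrelated
```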
Why are LSI keywords important for SEO?
The use of LSI keywords in your content helps search engines understand your content and therefore makes it easier for search engines to match your content to what users are searching for.
Exact keyword usage is less important than whether your overall content fits the user’s search query and the intention behind their search. After all, the goal of search engines is to showcase content that best matches what users are searching for and actually want to read.
LSI keywords are not synonyms
Using synonyms in your content can help add context, but synonyms are not the same as LSI keywords. For example, a synonym for the word “sofa” could be “couch”, but LSI keywords for “sofa” would be terms like “leather”, “comfortable”, “sleeper”, and “sectional”.
When users search for products, services, or information online, they are likely to add modifiers to their main search term in order to refine their search. A user might type something like “red leather sofa” or “large sleeper sofa”. These phrases still contain the primary keyword “sofa”, but with the addition of semantically-related terms.
How to find LSI keywords to use in your content
One of the best ways to find LSI keywords is to put yourself in the mind of someone who is searching for your primary keyword. What other details might they be searching for? What terms might they use to modify their search?
Doing a bit of brainstorming can help set your LSI keyword research off on the right track. Then, you can use a few of the methods below to identify additional LSI keywords, phrases, and modifiers to use in your content.
Google autocomplete
Use Google to search for your target keyword. In most cases, Google’s autocomplete feature will fill the search box with semantically-related terms and/or related keywords.
For the keyword “sofa”, we can see some related keywords (like “sofa vs couch”) as well as LSI keywords like “sofa [bed]”, “[corner] sofa”, and “[leather] sofa”.
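If you’d like to collect these suggestions in bulk rather than one search at a time, the sketch below queries Google’s unofficial suggest endpoint. That endpoint is undocumented, may change or be rate-limited at any time, and the response shape noted in the comment is an assumption based on its current behavior — treat this strictly as an illustration, run server-side (Node 18+) to avoid CORS issues.

```typescript
// Pulls the same suggestions you see in the search box via Google's unofficial
// suggest endpoint. Undocumented and subject to change; illustration only.
async function autocomplete(term: string): Promise<string[]> {
  const url =
    'https://suggestqueries.google.com/complete/search?client=firefox&q=' +
    encodeURIComponent(term);
  const res = await fetch(url);
  // Assumed response shape: [query, [suggestions...]]
  const data = (await res.json()) as [string, string[]];
  return data[1];
}

autocomplete('sofa').then((suggestions) => console.log(suggestions));
```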
Competitor analysis
Search for your target keyword and click on the first few competing pages or articles that rank highest in the search results. You can then use your browser’s find function to locate your primary keyword on the page and identify the LSI keywords that appear alongside it.
For example, a search for “digital marketing services” may yield several competitor service pages. You can then visit these pages, find the phrase “digital marketing services”, and see what semantically-related keywords are tied in with your target keyword.
Some examples might include:
“Customizable”
“Full-service”
“Results-driven”
“Comprehensive”
“Custom”
“Campaigns”
“Agency”
“Targeted”
“Effective”
You can later use these LSI keywords in your own content to add context and help search engines understand the types of services (or products) you offer.
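To speed up this kind of competitor analysis, you can script the “find the phrase and note the words around it” step. The TypeScript sketch below fetches a page, crudely strips the HTML, and lists the words that appear within a few words of the target phrase; the example URL is a placeholder, and a production crawler would parse the DOM properly rather than regex-stripping tags.

```typescript
// Rough sketch: fetch a competitor page, strip the HTML, and list words that
// appear near the target phrase. Assumes Node 18+ for the built-in fetch.
async function termsAroundPhrase(url: string, phrase: string, span = 5): Promise<string[]> {
  const html = await (await fetch(url)).text();
  // Crude cleanup for illustration only; a real crawler would parse the DOM.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ')
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')
    .replace(/<[^>]+>/g, ' ')
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, ' ');
  const words = text.split(/\s+/).filter(Boolean);
  const target = phrase.toLowerCase().split(/\s+/);
  const found = new Set<string>();
  for (let i = 0; i <= words.length - target.length; i++) {
    if (target.every((w, j) => words[i + j] === w)) {
      words
        .slice(Math.max(0, i - span), i)
        .concat(words.slice(i + target.length, i + target.length + span))
        .forEach((w) => found.add(w));
    }
  }
  return [...found];
}

// Placeholder URL for illustration only.
termsAroundPhrase('https://example.com/services', 'digital marketing services')
  .then((terms) => console.log(terms));
```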
LSI keyword tools
If conducting manual LSI keyword research isn’t your forte, you can also use dedicated LSI keyword tools. LSIGraph and Ubersuggest both enable you to find semantic keywords and related keywords to use in your content.
LSIGraph is a free LSI keyword tool that helps you “Generate LSI keywords Google loves”. Simply search for your target keyword and LSIGraph will come up with a list of terms you can consider using in your content.
In the image above, you can see how LSIGraph searched its database to come up with a slew of LSI keywords. Some examples include: “[reclining] sofa”, “sofa [designs]”, and “[discount] sofas”.
Content optimization tools
Some on-page optimization tools include LSI keyword analysis and suggestions directly within the content editor.
Surfer SEO is one tool that provides immediate LSI keyword recommendations for you to use in your content and analyzes the keyword density of your content in real-time.
Here we see that Surfer SEO makes additional keyword suggestions related to the primary term “rainboots”. These LSI keywords include: “little”, “pair”, “waterproof”, “hunter”, “rubber”, “men’s”, and so on.
Using LSI keywords to improve SEO
You can use any or all of the LSI keywords you identified during your research as long as they are applicable to the topic you are writing about and add value to your content. Using LSI keywords can help beef up your content, but not all of the terms you identify will relate to what you are writing about.
For example, if you sell women’s rain boots, including LSI terms like “men’s” or “masculine” may not tie in to what you’re offering. Use your best judgment in determining which terms should be included in your content.
In terms of using LSI keywords throughout your content, here are a few places you can add in these keywords to improve your SEO:
Title tags
Image alt text
Body content
H2 or H3 subheadings
H1 heading
Meta description
LSI keywords made simple
Identifying and using LSI keywords is made simple when you take a moment to consider what your target audience is searching for. They aren’t just searching for your primary keyword, but are likely using semantically-related terms to refine their search and find the exact service, product, or information they are searching for.
You can also use data-driven keyword research and content optimization tools to identify LSI keywords that are showing up in other high-ranking articles and web pages. Use these terms in your own content to improve your on-page SEO and attract more users to your website.
One of the great things about doing SEO at an agency is that you're constantly working on different projects you might not have had the opportunity to explore before. Being an SEO agency-side allows you to see such a large variety of sites that it gives you a more holistic perspective on the algorithm, and to work with all kinds of unique problems and implementations.
This year, one of the most interesting projects that we worked on at Go Fish Digital revolved around helping a large media company break into Google’s Top Stories for major single-day events.
When doing competitor research for the project, we discovered that one way many sites appear to be doing this is through use of a schema type called LiveBlogPosting. This sent us down a pathway of fairly deep research into what this structured data type is, how sites are using it, and what impact it might have on Top Stories visibility.
Today, I’d like to share all of the findings we’ve made around this schema type, and draw conclusions about what this means for search moving forward.
Who does this apply to?
With regard to LiveBlogPosting schema, the most relevant sites will be those where getting into Google’s Top Stories is a priority. These will generally be publishers that regularly post news coverage. Ideally, AMP will already be implemented, as the vast majority of Top Stories URLs are AMP-compatible (this is not required, however).
Why non-publisher sites should still care
Even if your site isn’t a publisher eligible for Top Stories results, the content of this article may still provide you with interesting takeaways. While you might not be able to directly implement the structured data at this point, I believe we can use the findings of this article to draw conclusions about where the search engines are potentially headed.
If Google is ranking articles that are updated with regular frequency and even providing rich-features for this content, this might be an indication that Google is trying to incentivize the indexation of more real-time content. This structured data may be an attempt to help Google “fill a gap” that it has in terms of providing real-time results to its users.
While it makes sense that “freshness” ranking factors would apply most to publishers, there could be interesting tests that other non-publishers can perform in order to measure whether there is a positive impact to your site’s content.
What is LiveBlogPosting schema?
The LiveBlogPosting schema type is structured data that allows you to signal to search engines that your content is being updated in real-time. This provides search engines with contextual signals that the page is receiving frequent updates for a certain period of time.
The LiveBlogPosting structured data can be found on schema.org as a subtype of “Article” structured data. The official definition from the site says it is: “A blog post intended to provide a rolling textual coverage of an ongoing event through continuous updates.”
Imagine a columnist watching a football game and creating a blog post about it. With every single play, the columnist updates the blog with what happened and the result of that play. Each time the columnist makes an update, the structured data also updates indicating that a recent addition has been made to the article.
Articles with LiveBlogPosting structured data will often appear in Google’s Top Stories feature. In the top left-hand corner of the thumbnail image, there will be a “Live” indicator to signal to users that live updates are getting made to the page.
Two Top Stories Results With The “Live” Tag
In the image above, you can see an example of two publishers (The Washington Post and CNN) that are implementing LiveBlogPosting schema on their pages for the term “coronavirus”. It’s likely that they’re utilizing this structured data type in order to significantly improve their Top Stories visibility.
Why is this Structured Data important?
So now you might be asking yourself, why is this schema even important? I certainly don’t have the resources available to have an editor continually publish updates to a piece of content throughout the day.
We’ve been monitoring Google’s usage of this structured data specifically for publishers. Stories with this structured data type appear to have significantly improved visibility in the SERPs, and we can see publishers aggressively utilizing it for large events.
For instance, the below screenshot shows you the mobile SERP for the query “us election” on November 3, 2020. Notice how four of the seven results in the carousel are utilizing LiveBlogPosting schema. Also, beneath this carousel, you can see the same CNN page is getting pulled into the organic results with the “Live” tag next to it:
Now let’s look at the same query for the day after the election, November 4, 2020. We still see that publishers heavily utilize this structured data type. In this result, five of the seven first Top Stories results use this structured data type.
In addition, CNN gets to double dip and claim an additional organic result with the same URL that’s already shown in Top Stories. This is another common result of LiveBlogPosting implementation.
In fact, this type of live blog post was one of CNN’s core strategies for ranking well for the US Election.
Here is how they implemented this strategy:
Create a new URL every day (to signal freshness)
Apply LiveBlogPosting schema and continually make updates to that URL
Ensure each update has its own dedicated timestamp
Below you can see some examples of URLs CNN posted during this event. Each day a new URL was posted with LiveBlogPosting schema attached:
https://www.cnn.com/politics/l...
Here’s another telling result for “us election” on November 4, 2020. We can see that The New York Times is ranking in the #2 position on mobile for the term. While the ranking page isn’t a live blog post, we can see underneath the result is an AMP carousel. Their strategy was to live blog each individual state’s results:
It’s clear that publishers are heavily utilizing this schema type for extremely competitive news articles that are based around big events. Oftentimes, we’re seeing this strategy result in prominent visibility in Top Stories and even the organic results.
How do you implement LiveBlogPosting schema?
So you have a big event that you want to optimize around and are interested in implementing LiveBlogPosting schema. What should you do?
1. Get whitelisted
The first thing you’ll need to do is get whitelisted by Google. If you have a Google representative that’s in contact with your organization, I recommend reaching out to them. There isn’t a lot of information out there on this and we can even see that Google has previously removed help documentation for it. However, the form to request access to the Live Coverage Pilot is still available.
This makes sense, as Google might not want news sites with questionable credibility to access this feature. This is another indication that this feature is potentially very powerful if Google wants to limit how many sites can utilize it.
2. Technical implementation
Next, with the help of a developer, you’ll need to implement LiveBlogPosting structured data on your site. There are several key properties you’ll need to include such as:
articleBody: The full description of the blog update
datePublished: The time when the update was originally posted
dateModified: The time when the update was adjusted
To make this a little easier to conceptualize, consider CNN’s November 3, 2020 coverage of the election: each individual update was nested under the “liveBlogUpdate” property, with its own body text and timestamps.
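The sketch below shows roughly what that nesting looks like in TypeScript: the page builds the JSON-LD object and injects it as a script tag. The headlines, body text, and timestamps are invented for illustration — this is not CNN’s actual markup.

```typescript
// Hypothetical LiveBlogPosting markup with two nested updates (invented content),
// serialized into a JSON-LD script tag the way many publishers inject it client-side.
const liveBlog = {
  '@context': 'https://schema.org',
  '@type': 'LiveBlogPosting',
  headline: 'Election Day 2020: live results and analysis',
  coverageStartTime: '2020-11-03T08:00:00-05:00',
  coverageEndTime: '2020-11-04T02:00:00-05:00',
  liveBlogUpdate: [
    {
      '@type': 'BlogPosting',
      headline: 'Polls open on the East Coast',
      articleBody: 'Voting is underway in several East Coast states...',
      datePublished: '2020-11-03T08:05:00-05:00',
      dateModified: '2020-11-03T08:15:00-05:00',
    },
    {
      '@type': 'BlogPosting',
      headline: 'First results begin to come in',
      articleBody: 'Early returns are being reported...',
      datePublished: '2020-11-03T19:02:00-05:00',
      dateModified: '2020-11-03T19:10:00-05:00',
    },
  ],
};

// Inject the structured data into the page head.
const script = document.createElement('script');
script.type = 'application/ld+json';
script.textContent = JSON.stringify(liveBlog);
document.head.appendChild(script);
```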
Case study
As I previously mentioned, many of these findings were discovered during research for a particular client who was interested in improving visibility for several large single-day events. Because of how agile the client is, they were actually able to get LiveBlogPosting structured data up and running on their site in a fairly short period of time. We then tested to see if this structured data would help improve visibility for very competitive “head” keywords during the day.
While we can’t share too much information about the particular wins we saw, we did see significant improvements in visibility for the competitive terms the live blog post was mapped to. Looking in Search Console, we saw lifts of between 200% and 600%+ in year-over-year clicks and visibility for many of these terms. During our spot checks throughout the day, we often found the live blog post ranking in the top one to three results (the first carousel) in Top Stories. The implementation appeared to be a major success in improving visibility for this section of the SERPs.
Google vs. Twitter and the need for real-time updates
So the question then becomes, why would Google place so much emphasis on the LiveBlogPosting structured data type? Is it the fact that the page is likely going to have really in-depth content? Does it improve E-A-T in any way?
I would interpret the success of this feature as demonstrating one of the weaknesses of a search engine, and how Google is trying to adjust accordingly. One of the primary issues with a search engine is that it’s much harder for it to be real-time. If “something” happens in the world, it’s going to take search engines a bit of time to deliver that information to users. The information not only needs to be published, but Google must then crawl, index, and rank it.
However, by the time this happens, the news might already be readily available on platforms such as Twitter. One of the primary reasons that users might navigate away from Google to the Twitterverse is because users are seeking information that they want to know right now, and don’t feel like waiting 30 minutes to an hour for it to populate in Google News.
For instance, when I’m watching the Steelers and see one of our players have the misfortune of sustaining an injury, I don’t start to search Google hoping the answer will appear. Instead, I immediately jump to Twitter and start refreshing like crazy to see if a sports beat writer has posted any news about it.
What I believe Google is creating is a schema type that signals a page is being updated in real time. This gives Google the confidence that a trusted publisher has created a piece of content that should be crawled much more frequently and served to users, since the information is more likely to be up to date and accurate. By giving rich features and increased visibility to articles using this structured data, Google is further incentivizing the creation of real-time content that keeps searchers on its platform.
This evidence also suggests that signaling to search engines that content is fresh and regularly updated may be an increasingly important factor for the algorithm. When I talked to Dan Hinckley, CTO of Go Fish Digital, he proposed that search engines might need to give preference to articles that have been updated more recently, since Google might not be able to “trust” that older articles still contain accurate information. Thus, ensuring content is updated may be important to a search engine’s confidence in the accuracy of its results.
Conclusion
You really never know what types of paths you’re going to go down as an SEO, and this was by far one of the most interesting ones during my time in the industry. Through researching just this one example, we not only figured out a piece of the Top Stories algorithm, but also gained insights into the future of the algorithm.
It’s entirely possible that Google will continue to incentivize and reward “real-time” content in an effort to better compete with platforms such as Twitter. I’ll be very interested to see any new research that’s done on LiveBlogPosting schema, or on Google’s continued preference for updated content.