Ranking on Google is not ranking in a vacuum. Ranking is outranking your competitors. With only limited space on the first page of the SERPs, you need to be doing better than your competitors.
In today's Whiteboard Friday, Lidia Infante shows you her recommended strategies for successful SEO gap analysis.
Click on the whiteboard image above to open a high resolution version in a new tab!
Video Transcription
Howdy, Moz fans, and welcome to a new edition of Whiteboard Fridays. My name is Lidia Infante, and I'm the Senior SEO Manager at sanity.io. Today, I'm going to be talking to you about SEO gap analysis, and yes, I know it's a very unsexy topic, but bear with me because it's worth it.
SEO gap analysis takes us to the first principles of what we do in SEO because ranking on Google is not ranking in a vacuum. Ranking is outranking your competitors. You've got a very limited space on the first page of the SERPs, and you need to be doing better than your competitors to be able to rank there. That means, then, you need to know what your competitors are doing and how you're going to do it better.
Once you have your set of competitors ready, you're going to proceed to benchmark yourself against them, and we're going to be doing this across the three pillars of SEO.
So we're going to be looking at content, we're going to be looking at links, and we're going to be looking at tech SEO. We're going to look at how our competitors perform from each of those and how we compare.
Content
So when it comes to content, the very first thing we want to look at is the estimated traffic by type that our competitors and we have. By traffic by type, I mean: Are they getting branded versus unbranded traffic, product traffic, editorial traffic? It's going to be very different depending on the vertical you're in, so adapt it to make it yours. We're also going to look at the number of editorial URLs they have and how much traffic each of those editorial URLs gets on average. And lastly, we're going to look at the number of keywords they're ranking for. We're not going to look at all of the keywords; we're going to aim for positions 1 to 30. Again, you can make this yours. You know your market better, and you know what's relevant, but that should narrow the entire pool down to keywords that are actually relevant to your competitors.
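If you export competitor data from your SEO tool of choice, these content benchmarks are easy to script. Here's a minimal sketch; the field names and figures are illustrative, not taken from any particular tool's export format:

```python
# Hypothetical benchmarking sketch. Field names and figures are illustrative,
# not taken from any particular SEO tool's export format.

def content_benchmark(competitor):
    """Summarize one competitor's content footprint."""
    url_count = len(competitor["editorial_urls"])
    avg_traffic = competitor["editorial_traffic"] / max(url_count, 1)
    # Keep only keywords ranking in positions 1-30, as suggested above.
    top_30 = [kw for kw in competitor["keywords"] if 1 <= kw["position"] <= 30]
    return {
        "editorial_url_count": url_count,
        "avg_traffic_per_editorial_url": round(avg_traffic, 1),
        "keywords_top_30": len(top_30),
    }

competitor = {
    "editorial_urls": ["/blog/a", "/blog/b", "/blog/c", "/blog/d"],
    "editorial_traffic": 12_000,  # estimated monthly visits to editorial pages
    "keywords": [
        {"keyword": "seo audit", "position": 4},
        {"keyword": "seo checklist", "position": 27},
        {"keyword": "seo tools", "position": 55},  # outside 1-30, excluded
    ],
}

print(content_benchmark(competitor))
```

Run this once per competitor and once for your own site, and the gaps become easy to compare side by side.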
Links
Then, we're going to be looking at links. We're going to begin with link gap analysis. That is, we're going to look at how many links your competitors have and how many referring domains are pointing to them. Then, we're going to use this to measure link growth. We're going to look at how many links your competitors had 6 months ago, or 12 months ago if your market is a little slower, and we're going to get a percentage of growth out of that. That's going to indicate whether your search market is very aggressive with link building and you need to make an effort to keep up, or whether it's a little bit more relaxed. Then, we're going to be looking at branded search. How many people are looking for your competitors' brands versus how many people are looking for your brand? That's going to indicate the level of brand awareness that you have within your target audience in comparison to your competitors.
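The link growth calculation above is simple percentage change, and it's worth scripting so you compute it the same way for every competitor. A minimal sketch with illustrative figures:

```python
def link_growth_pct(links_then, links_now):
    """Percentage growth in links (or referring domains) over the window."""
    if links_then == 0:
        return float("inf")  # no baseline to compare against
    return (links_now - links_then) / links_then * 100

# Example: a competitor grew from 1,200 to 1,500 referring domains in 6 months.
print(f"{link_growth_pct(1_200, 1_500):.1f}% growth")  # prints "25.0% growth"
```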
And we're going to take it one step further and look again at branded traffic. There should be a very close correlation between branded search and branded traffic: if you're first for branded search, you should be first for branded traffic, and so on. If there isn't, it might be an indicator that you don't have content on your site that responds to users' queries about your brand. That's definitely a very quick win that you could action right now.
Technical SEO
Lastly, we're going to be looking at tech SEO, and this is incredibly difficult to measure because the requirements in tech SEO vary from website to website and from vertical to vertical. I am personally in the SaaS market, so my requirements for tech SEO are essentially to make it readable and make sure that JavaScript is not blocking anything, classic crawling and rendering issues, and that's about it. But if you're in e-commerce, you're likely dealing with faceted navigation and filter management, and it's a little bit more demanding. So the best way that I have found to measure tech SEO changes and performance is Core Web Vitals scores. We're going to go to the Chrome UX Report on Data Studio, look at the three main Core Web Vitals, grab the percentage of good URLs according to Google, and then average them out into one score. Then we're going to look at page speed. You can do this with PageSpeed Insights, and we're going to look at the scores for mobile versus desktop. I don't average these out because I think they provide really useful information about what issues your industry is running into when it comes to mobile usability. And then lastly, we're going to do some manual checks. Take a look at the robots.txt, take a look at the sitemap, how they manage canonicalization, and that's going to inform you better on how you could outperform your competitors.
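The Core Web Vitals averaging described above is just a three-way mean of the "good URLs" percentages. A minimal sketch, assuming LCP, FID, and CLS as the three metrics and hypothetical percentages pulled from the Chrome UX Report:

```python
def cwv_score(lcp_good, fid_good, cls_good):
    """Average the share of 'good' URLs across the three Core Web Vitals
    into a single comparable score, as described above."""
    return round((lcp_good + fid_good + cls_good) / 3, 1)

# Hypothetical percentages of 'good' URLs from the Chrome UX Report.
print(cwv_score(82.0, 91.5, 76.0))
```

One number per competitor makes the tech SEO column of your benchmark easy to sort and compare.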
And if this seems very complicated, don't worry. I have provided a free template for you so that you can make it yours.
Thank you so much for watching my Whiteboard Friday. My name is Lidia Infante, and you can find me on Twitter @LidiaInfanteM. You can find me on my website at lidia-infante.com and see you soon.
Today, I’m doing a quick follow-up to the manual portion of our earlier study in an effort to quantify and illustrate this abrupt alteration.
A total sea change in local pack headers
Between July and November of 2022, 83% of our previously-queried local pack headers underwent a complete transformation of nomenclature. Only 17% of the local pack headers were still worded the same way in autumn as they had been in the summertime. Here is a small set of examples:
In our manual analysis of 60 queries in July, we encountered 40 unique local pack headers - a tremendous variety. Now, all specificity is gone. For all of our queries, headings have been reduced to just 3 types: in-store availability, places, and businesses.
Entity relationships remain mysterious
What hasn’t changed is my sense that the logic underpinning which businesses receive which local pack header remains rather odd. In the original study, we noted the mystery of why a query like “karate” fell under the heading of “martial arts school” but a query for “tai chi” got a unique “tai chi” heading, or why “adopt dog” results were headed “animal rescue services” but “adopt bunny” got a pack labeled “adopt bunny”. The curious entity relationships continue on, even in this new, genericized local pack header scenario. For example, why is my search for “tacos” (which formerly brought up a pack labeled “Mexican restaurants”) now labeled this:
But my search for “oil change” gets this header:
Is there something about a Mexican restaurant that makes it more of a “place” and an oil change spot that makes it more of a “business”? I don’t follow the logic. Meanwhile, why are service area businesses, as shown in my search for “high weed mowing”, being labeled “places”?
Surely high weed mowing is not a place…unless it is a philosophical one. Yet I saw many SABs labeled this way instead of as “businesses”, which would seem a more rational label, given Google’s historic distinction between physical premises and go-to-client models. There are many instances like this of the labeling not making much horse sense, and with the new absence of more specific wording, it feels like local pack headers are likely to convey less meaning and be more easily overlooked now.
Why has Google done this and does it matter to your local search marketing?
Clearly, Google decided to streamline their classifications. There may be more than three total local pack header types, but I have yet to see them. Hotel packs continue to have their own headings, but they have always been a different animal:
In general, Google experiments with whatever they think will move users about within their system, and perhaps they felt the varied local pack headers were more of a distraction than an aid to interactivity with the local packs. We can’t know for sure, nor can we say how long this change will remain in place, because Google could bring back the diverse headings the day after I publish this column!
As to whether this matters to your local search campaigns, unfortunately, the generic headers do obscure former clues to the mind of Google that might have been useful in your SEO. I previously suggested that local businesses might want to incorporate the varied local pack terms into the optimization of the website tags and text, but in the new scenario, it is likely to be pointless to optimize anything for “places”, “businesses”, or “in-store availability”. It’s a given that your company is some kind of place or business if you’re creating a Google Business Profile for it. And, your best bet for featuring that you carry certain products is to publish them on your listing and consider whether you want to opt into programs like Pointy.
In sum, this change is not a huge deal, but I’m a bit sorry to see the little clues of the diversified headers vanish from sight. Meanwhile, there’s another local pack trend going on right now that you should definitely be paying attention to…
A precipitous drop in overall local pack presence
In our original study, Google did not return a local pack for 18% of our manual July queries. By November, the picture had significantly changed. A startling 42% of our queries suddenly no longer displayed a local pack. This is right in line with Andrew Shotland’s documentation of a 42.3% drop from peak local pack display between August and October. Mozcast, pictured above, captured a drop from 39.6% of queries returning local packs on October 24th to just 25.1% on October 25th. The number has remained in the low-to-mid 20s in the ensuing weeks. It’s enough of a downward slope to give one pause.
Because I’m convinced of the need for economic localism as critical to healing the climate and society, I would personally like Google to return local packs for all commercial queries so that searchers can always see the nearest resource for purchasing whatever they need, but if Google is reducing the number of queries for which they deliver local results, I have to try to understand their thinking.
To do that, I have to remember that the presence of a local pack is a signal that Google believes a query has a local intent. Likely, they often get this right, but I can think of times when a local result has appeared for a search term that doesn’t seem to me to be obviously, inherently local. For example, in the study Dr. Pete and I conducted, we saw Google not just returning a local pack for the keyword “pickles” but even giving it its own local pack header:
If I search for pickles, am I definitely looking for pickles near me, or could I be looking for recipes, articles about the nutritional value of pickles, the history of pickles, something else? How high is Google’s confidence that vague searches like these should be fulfilled with a local result?
After looking at a number of searches like these in the context of intent, my current thinking is this: for some reason unknown to us, Google is dialing back presumed local intent. Ever since Google made the user the centroid of search and began showing us nearby results almost by default for countless queries, we users became trained not to have to add many (or any) modifiers to our search language to prompt Google to lay out our local options for us. We could be quite lazy in our searches and still get local results.
In the new context of a reduced number of searches generating local packs, though, we will have to rehabituate ourselves to writing more detailed queries to get to what we want if Google no longer thinks our simple search for “pickles” implies “pickles near me”. I almost get the feeling that Google wants us to start being more specific again because its confidence level about what constitutes a local search has suffered some kind of unknown challenge.
It’s also worth throwing into our thinking what our friends over at NearMedia.co have pointed out:
“The Local Pack's future is unclear. The EU's no “self-preferencing” DMA takes effect in 2023. The pending AICOA has similar language.”
It could be that Google’s confidence is being shaken in a variety of ways, including by regulatory rulings, and local SEOs should always expect change. For now, though, local businesses may be experiencing some drop in their local pack traffic and CTR. On the other hand, if Google is getting it right, there may be no significant loss. If your business was formerly showing up in a local pack for a query that didn’t actually have a local intent, you likely weren’t getting those clicks anyway because a local result wasn’t what the searcher was looking for to begin with.
That being said, I am seeing examples in which I feel Google is definitely getting it wrong. For instance, my former searches for articles of furniture all brought up local packs with headings like “accent chairs” or “lamps”. Now, Google is returning no local pack for some of these searches and is instead plugging an enormous display of remote, corporate shopping options. There are still furniture stores near me, but Google is now hiding them, and that disappoints me greatly:
So here’s today’s word to the wise: keep working on the organic optimization of your website and the publication of helpful content. Both will underpin your key local pack rankings, and as we learned from our recent large-scale local business review survey, 51% of consumers are going to end up on your site as their next step after reading reviews on your listings. 2023 will be a good year to invest in the warm and inclusive welcome your site is offering people, and the investment will also stand you in good stead however local pack elements like headers, or even local packs themselves, wax and wane.
Despite the resources they can invest in web development, large e-commerce websites still struggle with SEO-friendly ways of using JavaScript.
And even though 98% of all websites use JavaScript, it’s still common for Google to have problems indexing pages that rely on it. While it's okay to use JavaScript on your website in general, remember that it requires extra computing resources to be processed into HTML code understandable by bots.
At the same time, new JavaScript frameworks and technologies are constantly arising. To give your JavaScript pages the best chance of getting indexed, you'll need to learn how to optimize them for the sake of your website's visibility in the SERPs.
Why is unoptimized JavaScript dangerous for your e-commerce?
By leaving JavaScript unoptimized, you risk your content not getting crawled and indexed by Google. And in the e-commerce industry, that translates to losing significant revenue, because products are impossible to find via search engines.
It’s likely that your e-commerce website uses dynamic elements that are pleasant for users, such as product carousels or tabbed product descriptions. This JavaScript-generated content very often is not accessible to bots. Googlebot cannot click or scroll, so it may not access all those dynamic elements.
Consider how many of your e-commerce website users visit the site via mobile devices. JavaScript is slower to load, and the longer it takes to load, the worse your website’s performance and user experience become. If Google realizes that it takes too long to load JavaScript resources, it may skip them when rendering your website in the future.
Top 4 JavaScript SEO mistakes on e-commerce websites
Now, let’s look at some top mistakes when using JavaScript for e-commerce, and examples of websites that avoid them.
1. Page navigation relying on JavaScript
Crawlers don’t act the same way users do on a website ‒ they can’t scroll or click to see your products. Bots must follow links throughout your website structure to understand and access all your important pages fully. Otherwise, relying only on JavaScript-based navigation may mean bots see only the products on the first page of pagination.
Guilty: Nike.com
Nike.com uses infinite scrolling to load more products on its category pages. And because of that, Nike risks its loaded content not getting indexed.
For the sake of testing, I entered one of their category pages and scrolled down to choose a product triggered by scrolling. Then, I used the “site:” command to check if the URL is indexed in Google. And as you can see on a screenshot below, this URL is impossible to find on Google:
Of course, Google can still reach your products through sitemaps. However, finding your content in any other way than through links makes it harder for Googlebot to understand your site structure and dependencies between the pages.
To make it even more apparent to you, think about all the products that are visible only when you scroll for them on Nike.com. If there’s no link for bots to follow, they will see only 24 products on a given category page. Of course, for the sake of users, Nike can’t serve all of its products on one viewport. But still, there are better ways of optimizing infinite scrolling to be both comfortable for users and accessible for bots.
Winner: Douglas.de
Unlike Nike, Douglas.de uses a more SEO-friendly way of serving its content on category pages.
They provide bots with page navigation based on <a href> links to enable crawling and indexing of the next paginated pages. As you can see in the source code below, there’s a link to the second page of pagination included:
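A crawlable pagination pattern generally looks something like this. This is a generic sketch, not Douglas.de's actual markup, and the URLs are hypothetical:

```html
<!-- Plain <a href> links: bots can discover and follow these
     without executing any JavaScript. -->
<nav aria-label="Pagination">
  <a href="/category/shoes?page=1">1</a>
  <a href="/category/shoes?page=2">2</a>
  <a href="/category/shoes?page=3">3</a>
  <a href="/category/shoes?page=2" rel="next">Next</a>
</nav>
```

The key point is that each paginated page has its own URL in a real `href` attribute, rather than being loaded by a click handler or scroll event.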
Moreover, the paginated navigation may be even more user-friendly than infinite scrolling. The numbered list of category pages may be easier to follow and navigate, especially on large e-commerce websites. Just think how long the viewport would be on Douglas.de if they used infinite scrolling on the page below:
2. Generating links to product carousels with JavaScript
Product carousels with related items are one of the essential e-commerce website features, and they are equally important from both the user and business perspectives. Using them can help businesses increase their revenue as they serve related products that users may be potentially interested in. But if those sections over-rely on JavaScript, they may lead to crawling and indexing issues.
Guilty: Otto.de
I analyzed one of Otto.de’s product pages to identify if it includes JavaScript-generated elements. I used the What Would JavaScript Do (WWJD) tool that shows screenshots of what a page looks like with JavaScript enabled and disabled.
Test results clearly show that Otto.de relies on JavaScript to serve related and recommended product carousels on its website. And from the screenshot below, it’s clear that those sections are invisible with JavaScript disabled:
How may it affect the website’s indexing? When Googlebot lacks resources to render JavaScript-injected links, the product carousels can’t be found and then indexed.
Let’s check if that’s the case here. Again, I used the “site:” command and typed the title of one of Otto.de’s product carousels:
As you can see, Google couldn’t find that product carousel in its index. And the fact that Google can’t see that element means that accessing additional products will be more complex. Also, if you prevent crawlers from reaching your product carousels, you’ll make it more difficult for them to understand the relationship between your pages.
Winner: Target.com
In the case of Target.com’s product page, I used the Quick JavaScript Switcher extension to disable all JavaScript-generated elements. I paid particular attention to the “More to consider” and “Similar items” carousels and how they look with JavaScript enabled and disabled.
As shown below, disabling JavaScript changed the way the product carousels look for users. But has anything changed from the bots' perspective?
To find out, check what the HTML version of the page looks like for bots by analyzing the cache version.
When scrolling, you’ll see that the links to related products can also be found in its cache. If you see them here, it means bots don’t struggle to find them, either.
However, keep in mind that the links to the exact products you can see in the cache may differ from the ones on the live version of the page. It’s normal for the products in the carousels to rotate, so you don’t need to worry about discrepancies in specific links.
But what exactly does Target.com do differently? They take advantage of dynamic rendering: they serve the initial HTML, including the links to products in the carousels, as static HTML that bots can process.
However, you must remember that dynamic rendering adds an extra layer of complexity that may quickly get out of hand with a large website. I recently wrote an article about dynamic rendering that’s a must-read if you are considering this solution.
Also, the fact that crawlers can access the product carousels doesn’t guarantee these products will get indexed. However, it will significantly help them flow through the site structure and understand the dependencies between your pages.
3. Blocking important JavaScript files in robots.txt
Blocking JavaScript for crawlers in robots.txt by mistake may lead to severe indexing issues. If Google can’t access and process your important resources, how is it supposed to index your content?
Guilty: Jdl-brakes.com
It’s impossible to fully evaluate a website without a proper site crawl. But looking at its robots.txt file can already allow you to identify any critical content that’s blocked.
This is the case with the robots.txt file of Jdl-brakes.com. As you can see below, they block the /js/ path with the Disallow directive. It makes all internally hosted JavaScript files (or at least the important ones) invisible to all search engine bots.
This disallow directive misuse may result in rendering problems on your entire website.
To check if it applies in this case, I used Google’s Mobile-Friendly Test. This tool can help you navigate rendering issues by giving you insight into the rendered source code and the screenshot of a rendered page on mobile.
I headed to the “More info” section to check if any page resources couldn’t be loaded. Using the example of one of the product pages on Jdl-brakes.com, you may see it needs a specific JavaScript file to get fully rendered. Unfortunately, it can’t happen because the whole /js/ folder is blocked in its robots.txt.
But let’s find out if those rendering problems affected the website’s indexing. I used the “site:” command to check if the main content (product description) of the analyzed page is indexed on Google. As you can see, no results were found:
This is an interesting case where Google could reach the website's main content but didn’t index it. Why? Because Jdl-brakes.com blocks its JavaScript, Google can’t properly see the layout of the page. And even though crawlers can access the main content, it’s impossible for them to understand where that content belongs in the page’s layout.
Let’s take a look at the Screenshot tab in the Mobile-Friendly Test. This is how crawlers see the page’s layout when Jdl-brakes.com blocks their access to CSS and JavaScript resources. It looks pretty different from what you can see in your browser, right?
The layout is essential for Google to understand the context of your page. If you’d like to know more about this crossroads of web technology and layout, I highly recommend looking into a new field of technical SEO called rendering SEO.
Winner: Lidl.de
Lidl.de proves that a well-organized robots.txt file can help you control your website’s crawling. The crucial thing is to use the disallow directive consciously.
Although Lidl.de blocks a single JavaScript file with the Disallow directive /cc.js*, it seems it doesn’t affect the website’s rendering process. The important thing to note here is that they block only a single JavaScript file that doesn’t influence other URL paths on a website. As a result, all other JavaScript and CSS resources they use should remain accessible to crawlers.
Having a large e-commerce website, you may easily lose track of all the added directives. Always include as many path fragments of a URL you want to block from crawling as possible. It will help you avoid blocking some crucial pages by mistake.
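The difference between the two approaches can be shown in a simplified robots.txt sketch. The paths here are hypothetical, not copied from either site:

```
# Risky: one broad directive hides every internally hosted
# JavaScript file from crawlers and can break rendering site-wide.
#   Disallow: /js/

# Safer: block only a single, non-critical script,
# leaving all other JavaScript and CSS resources crawlable.
User-agent: *
Disallow: /assets/tracking/cc.js
```

The more of the path you spell out, the smaller the chance a directive accidentally matches a resource Google needs for rendering.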
4. JavaScript removing main content from a website
If you use unoptimized JavaScript to serve the main content on your website, such as product descriptions, you block crawlers from seeing the most important information on your pages. As a result, your potential customers looking for specific details about your products may not find such content on Google.
Guilty: Walmart.com

As you can see above, the product description section disappeared with JavaScript disabled. I decided to use the “site:” command to check if Google could index this content. I copied a fragment of the product description I saw on the page with JavaScript enabled. However, Google didn’t show the exact product page I was looking for.
Will users go out of their way to find that particular product via Walmart.com? They may, but they can just as easily head to any other store selling this item instead.
The example of Walmart.com proves that main content depending on JavaScript to load makes it more difficult for crawlers to find and display your valuable information. However, it doesn’t necessarily mean they should eliminate all JavaScript-generated elements on their website.
To fix this problem, Walmart has two solutions:
Implementing dynamic rendering (prerendering), which is, in most cases, the easiest from an implementation standpoint.
Implementing server-side rendering. This is the solution that will solve the problems we are observing at Walmart.com without serving different content to Google and users (as in the case of dynamic rendering). In most cases, server-side rendering also helps with web performance issues on lower-end devices, as all of your JavaScript is being rendered by your servers before it reaches the client's device.
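To make the dynamic rendering option concrete: it boils down to a user-agent check on the server, where known bots get prerendered static HTML and regular visitors get the JavaScript app. A minimal sketch; the bot list is abbreviated and the HTML strings stand in for a prerender cache and a JS app shell:

```python
# Minimal dynamic-rendering sketch. The bot list is abbreviated, and the
# HTML strings are stand-ins for a prerender cache and a JS app shell.

BOT_SUBSTRINGS = ("googlebot", "bingbot", "duckduckbot")

def is_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(bot in ua for bot in BOT_SUBSTRINGS)

def serve(user_agent: str) -> str:
    if is_bot(user_agent):
        # Bots receive fully rendered static HTML (e.g. from a prerender cache).
        return "<html><body><h1>Product</h1><p>Full description</p></body></html>"
    # Users receive the JavaScript application, rendered client-side.
    return '<html><body><div id="app"></div><script src="/bundle.js"></script></body></html>'

print(serve("Mozilla/5.0 (compatible; Googlebot/2.1)"))
```

Server-side rendering removes this fork entirely: everyone gets the rendered HTML, which is why it avoids the risk of serving different content to Google and users.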
Let’s have a look at the JavaScript implementation that’s done right.
Winner: IKEA.com
IKEA proves that you can present your main content in a way that is accessible for bots and interactive for users.
When browsing IKEA.com’s product pages, their product descriptions are served behind clickable panels. When you click on them, they dynamically appear on the right-hand side of the viewport.
Although users need to click to see product details, IKEA also serves that crucial part of its pages even with JavaScript off:
This way of presenting crucial content should make both users and bots happy. From the crawlers’ perspective, serving product descriptions that don’t rely on JavaScript makes them easy to access. Consequently, the content can be found on Google.
Wrapping up
JavaScript doesn’t have to cause issues if you know how to use it properly. As an absolute must-do, you need to follow the best practices of indexing. It may allow you to avoid basic JavaScript SEO mistakes that can significantly hinder your website’s visibility on Google.
Take care of your indexing pipeline and check if:
You allow Google access to your JavaScript resources,
Google can access and render your JavaScript-generated content. Focus on the crucial elements of your e-commerce site, such as product carousels or product descriptions,
With recent shake-ups to the Google algorithm, Lily Ray joins us for this week’s episode to walk you through three of the most important types of search engine updates that can affect your SEO strategies.
Click on the whiteboard image above to open a high resolution version in a new tab!
Video Transcription
Hey, everyone. I'm Lily Ray, and today I'm going to be talking about a few different types of Google updates.
Helpful content update
So we're going to start with the helpful content update, which was announced and rolled out in August of 2022. The helpful content update introduced a new concept in the SEO space, which is a sitewide classifier that will be identifying unhelpful content. This is content that Google decided or determined is primarily written for search engines and is not exactly helpful for users.
This new sitewide classifier that they're using with the helpful content update will be applied to sites that Google believes are doing this type of thing at scale. So if the majority of the content on the website is considered unhelpful, it's written primarily for search engines and not for users, the helpful content update classifier can be applied to those sites, which can have the impact of affecting the rankings for the whole site. So it's not just one or two pages. It's potentially the entire site, even if the website has some content that's actually helpful.
So this was introduced in about mid-2022, and Google has explained that it's going to be using machine learning with the helpful content update classifier, which means that the classifier is learning and growing and evolving over time. As it begins to understand different patterns and different signals about sites that do provide unhelpful content, it can continue to impact those sites over time.
So while they told us the update rolled out in August, lasted about two weeks, and then concluded, we also know that Google will likely be leveraging the helpful content update classifier all the time or in future updates. They told us that if there's a big, significant change to how they use this update, they'll let us know, but otherwise, we should assume that it's operating in the background. So this was a new development for 2022.
Product reviews update
The product reviews update, there have been a variety of them. They started in 2021, and this was also a new type of update from Google in which Google is telling us that if you're a website that provides product review content, you need to meet certain criteria for content quality that they're looking for. The backstory behind this and the reason that I believe Google rolled out these product review updates is because there are many, many websites that have reviews of products, that have affiliate links, that are making money through SEO, through having these affiliate websites, but they don't add a lot of value. They don't tell you a lot of insights about the product that's different than what Google has already seen before. Google has received a lot of feedback from users that it's not particularly helpful when they read a product review that's just saying the same thing the manufacturer said about the products or that other sites have already said about the products.
So there's been a variety of different product review updates because I believe that they're refining this set of algorithms to basically elevate the best product reviews on the internet. Google has told us that they should be written by experts or enthusiasts that know the products very well. So people that are obsessed with tech devices, like smart watches or TVs or whatever, they need to prove that they've spent a lot of time analyzing these products, that they have an obsession with it, maybe they studied it, maybe they have pictures of themselves using it, anything that gives the user and search engines evidence that they've actually spent a lot of time with the product.
This is another very important concept from these updates. Google has specifically said, if you're providing product reviews, we need evidence. We need photos of you using the product. We need videos of you using it. We need anything that shows us that you're not just rehashing what everybody else has already said online. We need proof that you've actually spent the time doing it.
So a lot of sites are starting to adapt their product review strategy to meet Google's expectations of what makes a good product review. As a result, with almost every product review update that rolls out, you'll see a lot of volatility in the search results because some product review sites are winning from these updates and some are losing. Then there are other sites being impacted by these ranking changes, such as e-commerce websites, which might see gains or losses in rankings because maybe the product review site that Google was previously ranking was affected by the update, so the e-commerce site wins out a little bit more. This has been a big change for a lot of sites in this category. We've seen a lot of ranking volatility with product review sites.
Core updates
The third type of update that we're all probably very familiar with, and that has existed for a very long time, is Google's core updates. These are nothing new, but the nature of them changes over time. They pretty much happen quarterly. That's not 100% true for every year, but it's pretty much every few months that Google will roll out a big core update. They've started to basically just name them after the month, so you might have the September 2022 core update, for example. What makes these tricky is that Google doesn't give us a lot of specificity about what changed each time one is rolled out.
They almost always reference back to the same article about what site owners should know about Google core updates. That article gives 25 or so questions that the reader or the content creator should work through regarding what makes a good page and a good website. Does the website demonstrate E-A-T? Does the website have good quality content? These are all questions you should consider if you've been affected by core updates. Even if you haven't been affected, you should read them, because doing so positions you to do well when the next core update is rolled out.
Another concept that a lot of people don't understand about core updates is that they often operate on a sitewide level, similar to the helpful content update, which means if Google has determined a large-scale pattern of either great quality content or not good quality content, or perhaps a lack of E-A-T, expertise, authoritativeness, and trustworthiness in certain areas, a core update can impact the rankings of almost all your content at scale. So that's not necessarily to say that there's one individual article that dropped in rankings because that article is bad. You could actually just be impacted by the core update as a whole because Google decided that your site, in general, shouldn't be ranking as well as it is. So people don't always understand that core updates operate in a sitewide fashion.
Content quality is extremely important during core updates. So if you read Google's questions about the core update, almost all of them tie back to: How much is this website meeting the expectations of users? How much does the content offer something unique that I couldn't get from other people's websites? Is the spelling good? Is the grammar good? Is the usability good? All of this points back to quality.
Technical SEO is also part of content quality. If your website is easy for users and search engines to crawl through and to navigate without terrible page speeds or a bad user interface or things like that, this all factors into their quality evaluations. So it's not just content. It's also technical SEO. It's also performance, usability, website navigation. All these things factor into content quality.
Then intent is the last point I want to make, because one thing that I've noticed in my core update analyses is that Google tends to be getting better at understanding user intent. Sometimes the intent is obvious: if somebody types "I want to go to this store," that's a pretty clear intent. But when you type a keyword like "dogs," there are a lot of different intents the user might have. They might be looking to adopt a dog. They might be looking to feed a dog. They might be looking to take a dog on a walk. There are so many different things. Google has so much data that they understand the intent behind every keyword better and better.
When they launch a core update, you often see that the types of results that are ranking will change. So you might see a dictionary website start to rank during a core update. So let's say the example is dogs. After a core update happens, perhaps a dictionary takes the number one position. That's because Google determined most users want to define what the word "dog" means. If that happens, it's very hard to say that your site did something right or wrong. It's just that Google got better at understanding intent. So that's very important to understand with core updates. It doesn't always mean your site did anything wrong. It could just be that Google is getting smarter.
So with all of that, these updates will probably continue to happen going forward, so you should get a good understanding of how they work, and best of luck to you in your rankings.
Common sense is a useful asset, and as it turns out, it’s a fairly reliable guide when it comes to navigating the big world of online local business reputation. However, for the very first time, thanks to the recent report, The Impact of Local Business Reviews on Consumer Behavior, I was able to test my intuition against original, hard data revealing the habits of real review readers, review writers, and successful owner responses.
I highly recommend reading the full survey analysis, but today, I want to distill that mass of data down into three simple descriptions that emerged through the considerable work of analysis. These three descriptions codify dominant traits, characteristics and behaviors. They are meant to help you envision both the public and practices in an approachable manner, with the proviso that some people and industries will certainly fall outside these norms. For the bulk of local businesses, however, it’s my hope that this synthesis enables you to form a useful mental picture of who and what you’re working with when it comes to growing and managing your reputation.
Review readers are:
Habituated, very trusting unless faced with obvious signals of spam or low quality, much more trusting of other customers than of brands, still highly reliant on real world WOM recommendations, eager for a substantial amount of recent sentiment including negative sentiment, extremely forgiving when problems are resolved, and just one step away from interacting directly with your brand.
The data:
Review reading is now a given; 96% of the working age public will read reviews this year to navigate their local landscape. 56% of review readers are highly active daily or weekly readers. Even less active review readers (31%) will turn to reviews monthly or multiple times per year to get local business information.
Review readers spend the majority of their time reading Google-based reviews, but they cite at least a dozen other places where they are regularly reading reviews.
With 86% of consumers citing reviews as either the most important or somewhat important signal of whether a business can be trusted, reviews are the most influential sales copy review readers will encounter. In fact, only 11% of consumers say they trust what a business says about itself more than they trust what customers say. 83% of review readers trust reviews as much or more than they did 3 years ago.
When choosing between businesses, review readers evaluate the following elements in order of importance: star rating, text content, recency, overall number of reviews, and the presence of owner responses.
Review readers are not as demanding as you might think. Only 13% of review readers require a perfect 5-star rating in order to choose a business. In fact, 44% cite flawless ratings as suspicious. 85% will consider a business with an overall 3 to 4-star rating.
Review readers filter for recent and negative sentiment first.
Review readers want a substantial amount of reading material. 70% will look at 5-20 reviews before considering a business.
Review readers’ trust can be lost at a glance. When a local business reviews itself or has suspect profiles reviewing it, or when its star rating or review count is notably low compared to competitors’, trust is eroded and review readers may look elsewhere.
Reviews exist on platforms over which businesses have only partial control, but a review reader's next step lands them back in the brand's own court most of the time, with a combined 91% of readers ending up on the website, at the place of business, or contacting the business directly as their next step. In other words, reviews have added to, but not replaced, traditional shopping behaviors.
The tradition of your brand’s good name being on people’s lips also hasn’t changed. 67% of review readers cite the real-world recommendations of friends and family as being their top alternative resource to reading reviews.
Review writers are:
Civic-minded, appreciative, often self-motivated but more frequently in need of prompting, prone to forget to write when they are busy, highly likely to review you if asked via email, text, or face-to-face, active on multiple review platforms, deeply offended by rude service, bad products and incorrect online local business information, very willing to update what they’ve written and give a business a second chance when a complaint is resolved, and a key source of both sales and quality control.
The data:
Writing reviews is already a way of life for 41% of your customers who write reviews on a daily, weekly or monthly basis. An additional 44% who will write reviews several times a year may need to be asked, prompted and reminded.
66% spend most of their time writing Google-based reviews, but review writers list at least a dozen other platforms where many spend time leaving sentiment.
Review writers say 65% of the negative reviews they write stem from bad/rude customer service. 63% cite a bad product, 52% cite false or incorrect online business info on assets like local business listings, 38% cite low-quality work on a job, 28% cite the failure of the business to resolve complaints in-person, and 28% cite inadequate safety protocols.
73% of review writers are civic-minded, leaving sentiment to benefit their community, 63% write to express appreciation to local businesses, and 38% write to tell a local business that it needs to improve.
39% of review writers haven’t been directly asked to write a review in the past 5 years. If asked, 85% will always, usually or at least sometimes write a review. Just 4% never write reviews in response to requests.
54% of review writers like to be approached via email, 45% prefer person-to-person, and 29% prefer texting.
38% of review writers simply forget to review your business when they have free time. 30% find the review writing process too confusing, 26% don’t believe the business will care enough to read what is written, and 19% are not being directly asked to write a review.
Successful owner responses should:
Happen within a two-hour to two-day time frame to please most reviewers, resolve stated complaints, avoid any type of acrimony, offer thanks for positive feedback and apologies for negative experiences, and be written with exceptional care because they influence 90% of customers to a moderate or extreme degree.
The data:
Owner responses influence 90% of customers to a moderate or extreme degree.
60% of customers expect a response to their review within 2 days or less; 11% expect a response within 2 hours, 21% expect a response within 24 hours, and 28% expect a response within 48 hours; 24% say they expect a reply within a week.
54% of customers will definitely avoid a business that is failing to provide a solution to a problem, 46% will definitely avoid a business with an owner who argues with customers in reviews, 47% of consumers will definitely avoid the business when an owner response offers no apology.
Only 40% of customers have come to expect thanks for positive reviews. 64% of customers expect a response to negative reviews.
67% of negative reviewers had an improved opinion of a brand when the owner responded well. 62% of negative reviewers would give a business a second chance after an owner response solves their problem. 63% of consumers will update their negative review or low-star rating once an owner response resolves their complaint.
In conclusion
Any local business which is founded on a customer-centric and employee-centric model already has a built-in advantage when it comes to managing the offline experiences that form the online brand narrative. Shoppers and staff simply want to be treated fairly and well. Local companies that meet these criteria in-store are capable of utilizing the same skills online, where digital sentiment has become like the front porch on a general store – a meeting, greeting, and helping spot for the community.
Local business owners and their marketers may need to invest in a few new tools to hang out on that porch effectively - think of them as the awning or wood stove you install to facilitate maximum comfort for everybody. But the skills that bring these tools to life are the ones the best local entrepreneurs already know - respect, attentiveness, accountability, empathy, responsiveness. Now we have the data to prove that the common sense approach of treating everyone well is actually very good business.
These days, Google algorithm updates seem to come in two main flavors. There’s very specific updates — like the Page Experience Update or Mobile-Friendly Update — which tend to be announced well in advance, provide very specific information on how the ranking factor will work, and finally arrive as a slight anti-climax. I’ve spoken before about the dynamic with these updates. They are obviously intended to manipulate the industry, and I think there is also a degree to which they are a bluff.
This post is not about those updates, though, it is about the other flavor. The other flavor of updates is the opposite: they are announced when they are already happening or have happened, they come with incredibly vague and repetitive guidance, and can often have cataclysmic impact for affected sites.
Coreschach tests
Since March 2018, Google has taken to calling these sudden, vague cataclysms “Core Updates”, and the type really gained notoriety with the advent of “Medic” (an industry nickname, not an official Google label), in August 2018. The advice from Google and the industry alike has evolved gradually over time in response to changing Quality Rater guidelines, varying from the exceptionally banal (“make good content”) to the specific but clutching at straws (“have a great about-us page”). To be clear, none of this is bad advice, but compared to the likes of the Page Experience update, or even the likes of Panda and Penguin, it demonstrates an extremely woolly industry picture of what these updates actually promote or penalize. To a degree, I suspect Core Updates and the accompanying era of “EAT” (Expertise, Authoritativeness, and Trust) have become a bit of a Rorschach test. How does Google measure these things, after all? Links? Knowledge graphs? Subjective page quality? All the above? Whatever you want to see?
If I am being somewhat facetious there, it is born out of frustration. As I say, (almost) none of the speculation, or the advice it results in, is actually bad. Yes, you should have good content written by genuinely expert authors. Yes, SEOs should care about links. Yes, you should aim to leave searchers satisfied. But if these trite vagaries are what it takes to win in Core Updates, why do sites that do all these things better than anyone lose as often as they win? Why does almost no site win every time? Why does one update often seem to undo another?
Roller coaster rides
This is not just how I feel about it as a disgruntled SEO — this is what the data shows. Looking at sites affected by Core Updates since and including Medic in MozCast, the vast majority have mixed results.
Meanwhile, some of the most authoritative original content publishing sites in the world actually have a pretty rocky ride through Core Updates.
I should caveat: this is in the MozCast corpus only, not the general performance of Reuters. But still, these are real rankings, and each bar represents a Core Update where they have gone up or down. (Mostly, down.) They are not the only ones enjoying a bumpy ride, either.
The reality is that pictures like this are very common, and it’s not just spammy medical products like you might expect. So why is it that almost all sites, whether they be authoritative or not, sometimes win, and sometimes lose?
The return of the refresh
SEOs don’t talk about data refreshes anymore. This term was last part of the regular SEO vocabulary in perhaps 2012.
Weather report: Penguin data refresh coming today. 0.3% of English queries noticeably affected. Details: http://t.co/Esbi2ilX
This was the idea that major ranking fluctuation was sometimes caused by algorithm updates, but sometimes simply by data being refreshed within the existing algorithm — particularly if this data was too costly or complex to update in real time. I would guess most SEOs today assume that all ranking data is updated in real time.
Yet Google's own guidance on core updates says: “Content that was impacted by one might not recover—assuming improvements have been made—until the next broad core update is released.”
Sounds a bit like a data refresh, doesn’t it? And this has some interesting implications for the ranking fluctuations we see around a Core Update.
If your search competitor makes a bunch of improvements to their site, then when a Core Update comes round, under this model, you will suddenly drop. This is no indictment of your own site; it's just that SEO is often a zero-sum game, and suddenly a bunch of improvements to other sites are being recognized at once. And if they go up, someone must come down.
This kind of explanation sits easily with the observed reality of tremendously authoritative sites suffering random fluctuation.
Test & learn
The other missing piece of this puzzle is that Google has acknowledged that its updates also function as tests of algorithmic changes.
This sounds, at face value, like it is incompatible with the refresh model implied by the quote in the previous section. But, not necessarily — the tests and updates referred to could in fact be happening between Core Updates. Then the update itself simply refreshes the data and takes in these algorithmic changes at the same time. Or, both kinds of update could happen at once. Either way, it adds to a picture where you shouldn’t expect your rankings to improve during a Core Update just because your website is authoritative, or more authoritative than it was before. It’s not you, it’s them.
What does this mean for you?
The biggest implication of thinking about Core Updates as refreshes is that you should, essentially, not care about immediate before/after analysis. There is a strong chance that you will revert to mean between updates. Indeed, many sites that lose in updates nonetheless grow overall.
The below chart is the one from earlier in this post, showing the impact of each Core Update on the visibility of www.reuters.com (again — only among MozCast corpus keywords, not representative of their total traffic). Except, this chart also has a line showing how the total visibility nonetheless grew despite these negative shocks. In other words, they more than recovered from each shock, between shocks.
Under a refresh model, this is somewhat to be expected. Whatever short term learning the algorithm does is rewarding this site, but the refreshes push it back to an underlying algorithm, which is less generous. (Some would say that that short term learning could be driven by user behavior data, but that’s another argument!)
The other notable implication is that you cannot necessarily judge the impact of an SEO change or tweak in the short term. Indeed, causal analysis in this world is incredibly difficult. If your traffic goes up before a Core Update, will you keep that gain after the update? If it goes up, or even just holds steady, through the update, which change caused that? Presumably you made many, and equally relevantly, so did your competitors.
Experience
Does this understanding of Core Updates resonate with your experience? It is, after all, only a theory. Hit us up on Twitter, we’d love to hear your thoughts!
Typically, when SEOs think about on-page optimizations, they’re thinking about core places to include their target keywords within their content. But how can you take your on-page optimizations to the next level and get beyond some of those basic tactics? In today’s episode of Whiteboard Friday, Chris Long shows you how.
Click on the whiteboard image above to open a high resolution version in a new tab!
Video Transcription
Howdy, Moz fans. I'm Chris Long with Go Fish Digital, and today we're going to talk about advanced on-page optimizations. Commonly in SEO, when we think about on-page optimizations, we're typically thinking about core places to include the keywords, such as the title, the H1, the URL within the content. But some people might be wondering, how can you take your on-page optimizations to the next level and get beyond some of those basic tactics? So that's what I want to cover today.
Entities
So one of the best ways I've found to shift away from the keyword mindset is actually to shift to more of an entity mindset. So, for example, if you're going to optimize a page for the term "retire early," instead of using the term "retire early" a bunch of times on the page, you could use tools like IBM Watson or Google Natural Language. Both of those have public-facing tools that you can run any text document through. And if you ran a top-ranking result like Investopedia's page through them, you might see entities such as Vicki Robin and Joe Dominguez, two of the top authors on retiring early, associated with "retire early."
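To make the entity mindset concrete, here's a deliberately crude sketch of the idea. The real tools named above (Google Cloud Natural Language, IBM Watson NLU) use trained models and return salience scores; this toy just counts capitalized phrases as rough entity candidates, and the sample text is invented for illustration.

```python
# Toy proxy for an entity-first content audit: count capitalized phrases
# as rough entity candidates. Real NLP tools do this far more accurately.
import re
from collections import Counter

def candidate_entities(text: str, top_n: int = 5):
    """Return the most frequent capitalized phrases in a text."""
    # Grab runs of capitalized words, e.g. "Vicki Robin", "Joe Dominguez".
    phrases = re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", text)
    # Skip very short matches to reduce noise from sentence starters.
    counts = Counter(p for p in phrases if len(p) > 3)
    return counts.most_common(top_n)

sample = (
    "Vicki Robin and Joe Dominguez wrote Your Money or Your Life. "
    "Vicki Robin is one of the best-known authors on early retirement."
)
print(candidate_entities(sample))
```

Running the same pass over a competitor's page and your own quickly shows which people, books, and concepts they mention that you don't, which is the heart of the entity mindset.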
As well, when you take a look at competitor pages and have more of an entity mindset in mind, instead of just thinking about how many times they're using a keyword on the page, you're thinking more strategically about common topics and themes you want to integrate within your own website content.
E-A-T
Another great way to take your on-page optimization to the next level is this concept of E-A-T — expertise, authoritativeness, and trustworthiness. One of the best ways to improve the expertise of your site content is simply to look at your author biographies. A lot of sites still get this wrong. When you look at author biographies, you should always be thinking about where you can highlight your years of experience, your education, your previous roles, and your thought leadership directly within those biographies to better demonstrate your expertise to both Google and users.
As well, another thing I love to think about with on-page optimizations is this concept of information gain scores. It's one of my favorite patents, analyzed by Bill Slawski, where he talks about the fact that Google looks to reward content that adds to the search results and doesn't just repeat what's already out there. So think about where you can leverage your own unique expertise, data, and insights to benefit from this concept of information gain scores.
Another great way to improve the E-A-T of your site's content is to actually cite sources. The Wirecutter is phenomenal at doing this. Any time they cite an individual fact, they note where they got that fact and link to external, trusted, accredited sources to verify where they're finding that information. That's another great way to improve the trustworthiness of your content and take your on-page optimizations to the next level.
Freshness
Another strategy that I think is highly, highly underrated is this concept of freshness. We've actually run tests on our own site, and we pretty consistently see that when we do things like update timestamps or just refresh content, we see noticeable upticks in rankings, visibility, and traffic. And I think this makes sense from multiple perspectives when you really start to think about it. From a trustworthiness standpoint, if Google thinks the content is outdated, it's hard for it to trust that the information in the article is actually accurate. As well, from a competitive standpoint, it's very hard for Google to compete on real-time results. That's why users might go to platforms like Twitter instead of Google. However, in recent years, Google has been making a push to include live-blog-style URLs in Top Stories. I think they're trying to incentivize publishers to update their content in real time, to set the expectation that users can get real-time information on Google instead of just Twitter.
Historical competitor changes
Another great way of thinking about your on-page optimizations is this concept of historical competitor changes. Oftentimes, when we think about our on-page optimizations, we're only thinking about what competitors are doing in the given moment, but we're not telling the story of how they've changed their on-page optimizations to get to that point. You can do this type of analysis for really competitive queries. What I like to do is find a strong competitor that's actually improved in the rankings in recent years, take that page, run it back through the Wayback Machine, and see which on-page changes have been made over time: what content they're adding, what they're removing, and what they're keeping the same. That can help you isolate the most prominent on-page changes competitors have made.
Another great strategy to use is to use a text diff compare tool. You can actually take an old version of text and then compare that against the current version of text, run that through a tool, and the tool will actually highlight all of the changes competitors are making. That makes it very easy for you to find what on-page strategies your competitors are utilizing.
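A minimal version of that diff-compare workflow can be done with Python's standard library. In this sketch, the "old" text would come from an archived Wayback Machine copy of a competitor's page and the "new" text from the live page; both snippets below are invented for illustration.

```python
# Compare an archived copy of a page against the current copy and
# highlight what changed, line by line.
import difflib

old_text = [
    "How to Retire Early",
    "Retiring early takes discipline.",
    "Save at least 10% of your income.",
]
new_text = [
    "How to Retire Early, According to Experts",
    "Retiring early takes discipline.",
    "Save at least 25% of your income.",
    "Authors like Vicki Robin recommend tracking every expense.",
]

# Lines prefixed "-" were removed; lines prefixed "+" were added.
diff = list(difflib.unified_diff(old_text, new_text, lineterm=""))
for line in diff:
    print(line)
```

Reading the `+` and `-` lines over a few years of snapshots tells the story of a competitor's on-page strategy far faster than eyeballing two pages side by side.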
Keyword segmentation
The final aspect of advanced on-page optimizations I want to talk about is this concept of keyword segmentation. We segment our traffic data in Google Analytics all the time, but we don't segment our keyword data in the same way. So using tools like STAT, we can actually create keyword segments any time we do some type of on-page optimization. If we update entities, if we update freshness, if we update E-A-T, we can create keyword segments in each of those instances. And then, over time, we can compare the segments against each other and measure which optimizations mattered most. That will actually give you better data about what type of on-page optimizations work best for your specific sites.
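If you don't have a rank tracker that supports segments, the same comparison can be sketched in a few lines. This is a hypothetical example: the keywords, segment labels, and rank positions below are invented, and a tool like STAT would track real positions over time for you.

```python
# Tag each tracked keyword with the type of on-page change made to its
# landing page, then compare average positions gained per segment.
from collections import defaultdict

# (keyword, segment, rank_before, rank_after) -- lower rank is better.
tracked = [
    ("retire early", "entities", 8, 5),
    ("early retirement plan", "entities", 12, 7),
    ("fire movement", "freshness", 6, 4),
    ("retirement calculator", "freshness", 9, 9),
    ("best index funds", "eat", 15, 10),
]

def avg_gain_by_segment(rows):
    """Average positions gained (before - after) for each segment."""
    gains = defaultdict(list)
    for _, segment, before, after in rows:
        gains[segment].append(before - after)
    return {seg: sum(g) / len(g) for seg, g in gains.items()}

print(avg_gain_by_segment(tracked))
```

With enough keywords per segment, this kind of summary shows whether, say, your freshness updates or your entity work moved rankings more, which is exactly the measurement the segmentation approach is after.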
So, hopefully, that's been helpful, and you'll walk out of here with some more strategies and concrete takeaways so you can improve your on-page optimizations and take them to the next level. Thanks a lot, Moz fans.