Monday, March 2, 2020

2020 Google Search Survey: How Much Do Users Trust Their Search Results?

Posted by LilyRayNYC

While Google’s mission has always been to surface high-quality content, over the past few years the company has worked especially hard to ensure that its search results are also consistently accurate, credible, and trustworthy.

Reducing false and misleading information has been a top priority for Google since concerns over misinformation surfaced during the 2016 US presidential election. The search giant is investing huge sums of money and brain power into organizing the ever-increasing amounts of content on the web in a way that prioritizes accuracy and credibility.

In a 30-page whitepaper published last year, Google delineates specifically how it fights against bad actors and misinformation across Google Search, News, Youtube, Ads, and other Google products.

In this whitepaper, Google explains how Knowledge Panels — a common organic search feature — are part of its initiative to show users "context and diversity of perspectives to form their own views." With Knowledge Panels, Google answers queries directly in its organic search results (often without including a link to a corresponding organic result), potentially eliminating the need for users to click through to a website. While this feature benefits users by answering their questions more quickly, it carries the risk of surfacing quick answers that are misleading or incorrect.

Another feature with this issue is Featured Snippets, where Google pulls website content directly into the search results. Google maintains specific policies for Featured Snippets, prohibiting the display of content that is sexually explicit, hateful, violent, dangerous, or in violation of expert consensus on civic, medical, scientific, or historical topics. However, this doesn’t mean the content included in Featured Snippets is always entirely accurate.

According to data pulled by Dr. Pete Meyers from a sample set of 10,000 keywords, Google has increased the frequency with which it displays Featured Snippets in search results. At the beginning of 2018, Google displayed Featured Snippets in approximately 12% of search results; in early 2020, that number hovers around 16%.

Google has also rolled out several core algorithm updates in the past two years, with the stated goal of “delivering on [their] mission to present relevant and authoritative content to searchers.” What makes these recent algorithm updates particularly interesting is how much E-A-T (expertise, authoritativeness, and trustworthiness) appears to be playing a role in website performance, particularly for YMYL (your money, your life) websites.

As a result of Google’s dedication to combating misinformation and fake news, we could reasonably expect searchers to agree that Google has improved in its ability to surface credible and trusted content. But does the average searcher actually feel that way? At Path Interactive, we conducted a survey to find out how users feel about the information they encounter in Google’s organic results.

About our survey respondents and methodology

Out of 1,100 respondents, 70% live in the United States, 21% in India, and 5% in Europe. 63% of respondents are between the ages of 18 and 35, and 17% are over the age of 46. All respondent data is self-reported.

For all questions involving specific search results or types of SERP features, respondents were provided with screenshots of those features. For questions about levels of trust or agreement with a statement, respondents answered on a scale of 1-5.

Our findings

Trustworthiness in the medical, political, financial, and legal categories

Given how much fluctuation we’ve seen in the YMYL category of Google with recent algorithm updates, we thought it would be interesting to ask respondents how much they trust the medical, political, financial, and legal information they find on Google.

We started by asking respondents about the extent to which they have made important financial, legal, or medical decisions based on information they found in organic search. The majority (51%) of respondents indicated that they "very frequently" or "often" make important financial decisions based on information found on Google, while 39% make important legal decisions, and 46% make important medical decisions this way. Only 10-13% of respondents indicated that they never make these types of important life decisions based on information they've found on Google.

Medical searches

As it relates to medical searches, 72% of users agree or strongly agree that Google has improved at showing accurate medical results over time.

Breaking down these responses by age, a few interesting patterns emerge:

  • The youngest searchers (ages 18-25) are 94% more likely than the oldest searchers (65+) to strongly believe that Google’s medical results have improved over time.
  • 75% of the youngest searchers (ages 18-25) agree or strongly agree that Google has improved in showing accurate medical searches over time, whereas only 54% of the oldest searchers (65+) feel the same way.
  • Searchers ages 46-64 are the most likely to disagree that Google’s medical results are improving over time.

Next, we wanted to know if Google’s emphasis on surfacing medical content from trusted medical publications — such as WebMD and the Mayo Clinic — is resonating with its users. One outcome of recent core algorithm updates is that Google’s algorithms appear to be deprioritizing content that contradicts scientific and medical consensus (consistently described as a negative quality indicator throughout their Search Quality Guidelines).

The majority (66%) of respondents agree that it is very important to them that Google surfaces content from highly trusted medical websites. However, 14% indicated they would rather not see these results, and another 14% indicated they’d rather see more diverse results, such as content from natural medicine websites. These numbers suggest that more than a quarter of respondents may be unsatisfied with Google’s current health initiatives aimed at surfacing medical content from a set of acclaimed partners who support the scientific consensus.

We asked survey respondents about Symptom Cards, in which information related to medical symptoms or specific medical conditions is surfaced directly within the search results.

Examples of Symptom Cards. Source: https://blog.google/products/search/im-feeling-yucky-searching-for-symptoms/

Our question aimed to gather how much searchers felt the content within Symptom Cards can be trusted.

The vast majority (76%) of respondents indicated they trust or strongly trust the content within Symptom Cards.

When looking at the responses by age, younger searchers once again reveal that they are much more likely than older searchers to strongly trust the medical content found within Google. In fact, the youngest bracket of searchers (ages 18-25) are 138% more likely than the oldest searchers (65+) to strongly trust the medical content found in Symptom Cards.

News and political searches

The majority of respondents (61%) agree or strongly agree that Google has improved at showing high-quality, trustworthy news and political content over time. Only 13% disagree or strongly disagree with this statement.

Breaking the same question down by age reveals interesting trends:

  • The majority (67%) of the youngest searchers (ages 18-25) agree that the quality of Google’s news and political content has improved over time, whereas the majority (61%) of the oldest age group (65+) only somewhat agrees or disagrees.
  • The youngest searchers (ages 18-25) are 250% more likely than the oldest searchers to strongly agree that the quality of news and political content on Google is improving over time.

Misinformation

Given Google’s emphasis on combating misinformation in its search results, we also wanted to ask respondents about the extent to which they feel they still encounter dangerous or highly untrustworthy information on Google.

Interestingly, the majority of respondents (70%) feel that they have encountered misinformation on Google at least sometimes, although 29% indicate they rarely or never see misinformation in the results.

Segmenting the responses by age groups reveals a clear pattern that the older the searcher, the more likely they are to indicate that they have seen misinformation in Google’s search results. In fact, the oldest searchers (65+) are 138% more likely than the youngest searchers (18-25) to say they’ve encountered misinformation on Google either often or very frequently.

Throughout the responses to all questions related to YMYL topics such as health, politics, and news, a consistent pattern emerged that the youngest searchers appear to have more trust in the content Google displays for these queries, and that older searchers are more skeptical.

This aligns with our findings from a similar survey we conducted last year, which found that younger searchers were more likely to take much of the content displayed directly in the SERP at face value, whereas older searchers were more likely to browse deeper into the organic results to find answers to their queries.

This information is alarming, especially given another question we posed asking about the extent to which searchers believe the information they find on Google influences their political opinions and outlook on the world.

The question revealed some interesting trends related to the oldest searchers: according to the results, the oldest searchers (65+) are 450% more likely than the youngest searchers to strongly disagree that information they find on Google influences their worldview.

However, the oldest searchers are also most likely to agree with this statement; 11% of respondents ages 65+ strongly agree that Google information influences their worldview. On both ends of the spectrum, the oldest searchers appear to hold stronger opinions about the extent to which Google influences their political opinions and outlook than respondents from other age brackets.

Featured Snippets and the Knowledge Graph

We also wanted to understand the extent to which respondents found the content contained within Featured Snippets to be trustworthy, and to segment those responses by age brackets. As with the other scale-based questions, respondents were asked to indicate how much they trusted these features on a scale of 1-5 (Likert scale).

According to the results, the youngest searchers (ages 18-25) are 100% more likely than the oldest searchers (ages 65+) to find the content within Featured Snippets to be very trustworthy. This aligns with a similar discovery we found in our survey from last year: “The youngest searchers (13–18) are 220 percent more likely than the oldest searchers (70–100) to consider their question answered without clicking on the snippet (or any) result.”

For Knowledge Graph results, the data is less conclusive when segmented by age. 95% of respondents across all age groups find Knowledge Panel results to be at least "trustworthy."

Conclusion: Young users trust search results more than older users

In general, the majority of survey respondents appear to trust the information they find on Google — both in terms of the results themselves, as well as the content they find within SERP features such as the Knowledge Panel and Featured Snippets. However, there still appears to be a small subset of searchers who are dissatisfied with Google’s results. This subset consists of mostly older searchers who appear to be more skeptical about taking Google’s information at face value, especially for YMYL queries.

Across almost all survey questions, there is a clear pattern that the youngest searchers tend to trust the information they find on Google more so than the older respondents. This aligns with a similar survey we conducted last year, which indicated that younger searchers were more likely to accept the content in Featured Snippets and Knowledge Panels without needing to click on additional results on Google.

It is unclear whether younger searchers trust information from Google more because the information itself has improved, or because they are generally more trusting of information they find online. These results may also be due to older searchers not having grown up with the ability to rely on internet search engines to answer their questions. Either way, the results raise an interesting question about the future of information online: will searchers become less skeptical of online information over time?


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Friday, February 28, 2020

The Rules of Link Building - Best of Whiteboard Friday

Posted by BritneyMuller

Are you building links the right way? Or are you still subscribing to outdated practices? Britney Muller clarifies which link building tactics still matter and which are a waste of time (or downright harmful) in one of our very favorite classic episodes of Whiteboard Friday.

The Rules of Link Building


Video Transcription

Happy Friday, Moz fans! Welcome to another edition of Whiteboard Friday. Today we are going over the rules of link building. It's no secret that links are one of the top three ranking factors in Google and can greatly benefit your website. But there is a little confusion around what's okay to do as far as links and what's not. So hopefully, this helps clear some of that up.

The Dos

All right. So what are the dos? What do you want to be doing? First and most importantly is just to...

I. Determine the value of that link. So aside from ranking potential, what kind of value will that link bring to your site? Is it potential traffic? Is it relevancy? Is it authority? Just start to weigh out your options and determine what's really of value for your site. Our own tool, Moz Link Explorer, can help you evaluate a prospective link's value.

II. Local listings still do very well. These local business citations are on a bunch of different platforms, and services like Moz Local or Yext can get you up and running a little bit quicker. They tend to show Google that this business is indeed located where it says it is. It has consistent business information — the name, address, phone number, you name it. But something that isn't really talked about all that often is that some of these local listings never get indexed by Google. If you think about it, Yellowpages.com is probably populating thousands of new listings a day. Why would Google want to index all of those?

So if you're doing business listings, an age-old tactic that local SEOs have used for a while is to create a page on your site that says where you can find us online. Link to those local listings to help Google get them indexed, and it has a sort of boomerang-like effect on your site. So I hope that helps. If that's confusing, I can clarify down below. Just wanted to include it because I think it's important.

III. Unlinked brand mentions. One of the easiest ways you can get a link is by figuring out who is mentioning your brand or your company and not linking to it. Let's say this article publishes about how awesome SEO companies are and they mention Moz, and they don't link to us. That's an easy way to reach out and say, "Hey, would you mind adding a link? It would be really helpful."

IV. Reclaiming broken links is also a really great way to get back some of your links in a short amount of time with little to no effort. What does this mean? It means that a site links to a page on your site that now 404s: they were sending people to a specific page that you've since deleted or moved somewhere else. Whatever the case, you want to make sure that you 301 redirect that broken URL so it passes the authority elsewhere. Definitely a great thing to do anyway.
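As a sketch of that redirect step, assuming an nginx server and hypothetical URLs (your own paths will differ), a permanent redirect from the deleted page to its replacement might look like this:

```nginx
# Hypothetical URLs: /old-seo-guide was deleted; /blog/new-seo-guide replaced it.
server {
    listen 80;
    server_name example.com;

    # 301 tells crawlers the move is permanent, so link authority
    # pointing at the old URL is passed along to the new one.
    location = /old-seo-guide {
        return 301 /blog/new-seo-guide;
    }
}
```

On Apache, the equivalent would be a `Redirect 301 /old-seo-guide /blog/new-seo-guide` directive in the site config or `.htaccess`.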

V. HARO (Help a Reporter Out). Reporters will notify you of any questions or information they're seeking for an article via this email service. So not only is it just good general PR, but it's a great opportunity for you to get a link. I like to think of link building as really good PR anyway. It's like digital PR. So this just takes it to the next level.

VI. Just be awesome. Be cool. Sponsor awesome things. I guarantee any one of you watching likely has incredible local charities or amazing nonprofits in your space that could use the sponsorship, however big or small that might be. But that also gives you an opportunity to get a link. So something to definitely consider.

VII. Ask/Outreach. There's nothing wrong with asking. There's nothing wrong with outreach, especially when done well. I know that link building outreach in general kind of gets a bad rap because the response rate is so painfully low. I think, on average, it's around 4% to 7%, which is painful. But you can get that higher if you're a little bit more strategic about it or if you outreach to people you already currently know. There's a ton of resources available to help you do this better, so definitely check those out. We can link to some of those below.

VIII. COBC (create original badass content). We hear lots of people talk about this. When it comes to link building, it's like, "Link building is dead. Just create great content and people will naturally link to you. It's brilliant." It is brilliant, but I also think that there is something to be said about having a healthy mix. There's this idea of link building and then link earning. But there's a really perfect sweet spot in the middle where you really do get the most bang for your buck.

The Don'ts

All right. So what not to do. The don'ts of today's link building world are...

I. Don't ask for specific anchor text. All of these things appear so spammy. The late Eric Ward talked about this and was a big advocate for never asking for anchor text. He said websites should be linked to however they see fit. That's going to look more natural. Google is going to consider it to be more organic, and it will help your site in the long run. So that's more of a suggestion. These other ones are definitely big no-no's.

II. Don't buy or sell links that pass PageRank. You can buy or sell links that have a nofollow attached, which signals that the link is paid for, whether it's an advertisement or you simply don't vouch for it. So definitely look into those and understand how that works.

III. Hidden links. We used to do this back in the day, the ridiculous white link on a white background. They were totally hidden, but crawlers would pick them up. Don't do that. That's so old and will not work anymore. Google is getting so much smarter at understanding these things.

IV. Low-quality directory links. Same with low-quality directory links. We remember those where it was just loads and loads of links and text and a random auto insurance link in there. You want to steer clear of those.

V. Site-wide links also look very spammy. Site-wide being whether it's a footer link or a top-level navigation link, you definitely don't want to go after those. They can appear really, really spammy. Avoid those.

VI. Comment links with over-optimized anchor link text, specifically, you want to avoid. Again, it's just like any of these others. It looks spammy. It's not going to help you long-term. Again, what's the value of that overall? So avoid that.

VII. Abusing guest posts. You definitely don't want to do this. You don't want to guest post purely just for a link. However, I am still a huge advocate, as I know many others out there are, of guest posting and providing value. Whether there be a link or not, I think there is still a ton of value in guest posting. So don't get rid of that altogether, but definitely don't target it for potential link building opportunities.

VIII. Automated tools used to create links on all sorts of websites. ScrapeBox is an infamous one that would create the comment links on all sorts of blogs. You don't want to do that.

IX. Link schemes, private link networks, and private blog networks. This is where you really get into trouble as well. Google will penalize or de-index you altogether. It looks so, so spammy, and you want to avoid this.

X. Link exchanges. Back in the day, you would submit a website to a link exchange, and they wouldn't grant you that link until you also linked to them. Super silly. This stuff does not work anymore, but there are tons of opportunities and quick wins for you to gain links naturally and more authoritatively.

So hopefully, this helps clear up some of the confusion. One question I would love to ask all of you is: To disavow or to not disavow? I have heard back-and-forth conversations on either side on this. Does the disavow file still work? Does it not? What are your thoughts? Please let me know down below in the comments.

Thank you so much for tuning in to this edition of Whiteboard Friday. I will see you all soon. Thanks.

Video transcription by Speechpad.com



Wednesday, February 26, 2020

How Low Can #1 Go? (2020 Edition)

Posted by Dr-Pete

Being #1 on Google isn't what it used to be. Back in 2013, we analyzed 10,000 searches and found out that the average #1 ranking began at 375 pixels (px) down the page. The worst case scenario, a search for "Disney stock," pushed #1 all the way down to 976px.

A lot has changed in seven years, including an explosion of rich SERP (Search Engine Results Page) features, like Featured Snippets, local packs, and video carousels. It feels like the plight of #1 is only getting worse. So, we decided to run the numbers again (over the same searches) and see if the data matches our perceptions. Is the #1 listing on Google being pushed even farther down the page?

I try to let the numbers speak for themselves, but before we dig into a lot of stats, here's one that legitimately shocked me. In 2020, over 1,600 (16.6%) of the searches we analyzed had #1 positions that were worse than the worst-case scenario in 2013. Let's dig into a few of these ...

What's the worst-case for #1?

Data is great, but sometimes it takes the visuals to really understand what's going on. Here's our big "winner" for 2020, a search for "lollipop" — the #1 ranking came in at an incredible 2,938px down. I've annotated the #1 position, along with the 1,000px and 2,000px marks ...

At 2,938px, the 2020 winner comes in at just over three times 2013's worst-case scenario. You may have noticed that the line is slightly above the organic link. For the sake of consistency and to be able to replicate the data later, we chose to use the HTML/CSS container position. This hits about halfway between the organic link and the URL breadcrumbs (which recently moved above the link). This is a slightly more conservative measure than our 2013 study.
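For replication purposes, the container-based measurement described above can be approximated in a browser console with `getBoundingClientRect`; the selector below is a hypothetical placeholder, since Google's markup changes frequently.

```javascript
// Absolute page Y-position = viewport position + current scroll offset.
// Kept as a pure helper so the arithmetic is easy to verify.
function absoluteTop(viewportTop, scrollY) {
  return Math.round(viewportTop + scrollY);
}

// In a browser console on a results page (the selector is a hypothetical
// placeholder; Google's class names change frequently):
//   const el = document.querySelector('#search .g');
//   absoluteTop(el.getBoundingClientRect().top, window.scrollY);
```

Because `getBoundingClientRect` is viewport-relative, adding the scroll offset is what yields the page-absolute pixel values reported throughout this study.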

You may also have noticed that this result contains a large-format video result, which really dominates page-one real estate. In fact, five of our top 10 lowest #1 results in 2020 contained large-format videos. Here's the top contender without a large-format video, coming in at fourth place overall (a search for "vacuum cleaners") ...

Before the traditional #1 organic position, we have shopping results, a research carousel, a local pack, People Also Ask results, and a top products carousel with a massive vertical footprint. This is a relentlessly commercial result. While only a portion of it is direct advertising, most of the focus of the page above the organic results is on people looking to buy a vacuum.

What about the big picture?

It's easy — and more than a little entertaining — to cherry-pick the worst-case scenarios, so let's look at the data across all 10,000 results. In 2013, we only looked at the #1 position, but we've expanded our analysis in 2020 to consider all page-one organic positions. Here's the breakdown ...

The only direct comparison to 2013 is the position #1 row, and you can see that every metric increased, some substantially. If you look at the maximum Y-position by rank, you'll notice that it peaks around #7 and then begins to decrease. This is easier to illustrate in a chart ...

To understand this phenomenon, you have to realize that certain SERP features, like Top Stories and video carousels, take the place of a page-one organic result. At the same time, those features tend to be longer (vertically) than a typical organic result. So, a page with 10 traditional organic results will in many cases be shorter than a page with multiple rich SERP features.

What's the worst-case overall?

Let's dig into that seven-result page-one bucket and look at the worst-case organic position across all of the SERPs in the study, a #7 organic ranking coming in at 4,487px ...

Congratulations, you're finally done scrolling. This SERP has seven traditional organic positions (including one with FAQ links), plus an incredible seven rich features and a full seven ads (three are below the final result). Note that this page shows the older ad and organic design, which Google is still testing, so the position is measured as just above the link.

How much do ads matter?

Since our 2013 study, Google has removed right-hand column ads on desktop (in early 2016) and increased the maximum number of top ads from three to four. One notable point about ads is that they have prime placement, above both organic results and SERP features. So, how does this impact organic Y-positions? Here's a breakdown ...

Not surprisingly, the mean and median Y-position increase as ad count increases: on average, the more ads there are, the farther down the page the #1 organic result sits. So why does the maximum Y-position of #1 decrease with ad count? It's because SERP features are tied closely to search intent, and results with more ads tend to be more commercial, which naturally rules out other features.

For example, while 1,270 SERPs on February 12 in our 10,000-SERP data set had four ads on top, and 1,584 had featured snippets, only 16 had both (just 1% of SERPs with featured snippets). Featured snippets naturally reflect informational intent (in other words, they provide answers), whereas the presence of four ads signals strong commercial intent.
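As a quick sanity check on that overlap figure, using the counts quoted above:

```javascript
// Counts quoted above from the 10,000-SERP MozCast data set (Feb 12).
const withFourAds = 1270;
const withSnippets = 1584;
const withBoth = 16;

// Share of featured-snippet SERPs that also carry four top ads.
const overlapShare = withBoth / withSnippets;
console.log(`${(overlapShare * 100).toFixed(1)}% of snippet SERPs have four ads`);
```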

Here's the worst-case #1 position for a SERP with four ads on top in our data set ...

The college results are a fairly rare feature, and local packs often appear on commercial results (as anyone who wants to buy something is looking for a place to buy it). Even with four ads, though, this result comes in significantly higher than our overall worst-case #1 position. While ads certainly push down organic results, they also tend to preclude other rich SERP features.

What about featured snippets?

In early 2014, a year after our original study, Google launched featured snippets, promoted results that combine organic links with answers extracted from featured pages. For example, Google can tell you that I am both a human who works for Moz and a Dr. Pepper knock-off available at Target ...

While featured snippets are technically considered organic, they can impact click-through rates (CTR) and the extracted text naturally pushes down the organic link. On the other hand, Featured Snippets tend to appear above other rich SERP features (except for ads, of course). So, what's the worst-case scenario for a #1 result inside a featured snippet in our data set?

Ads are still pushing this result down, and the bullet list extracted from the page takes up a fair amount of space, but the absence of other SERP features above the featured snippet puts this in a much better position than our overall worst-case scenario. This is an interesting example, as the "According to mashable.com ..." text is linked to Mashable (but not considered the #1 result), but the images are all linked to more Google searches.

Overall in our study, the average Y-position of #1 results with featured snippets was 99px lower/worse (704px) than traditional #1 results (605px), suggesting a net disadvantage in most cases. In some cases, multiple SERP features can appear between the featured snippet and the #2 organic result. Here's an example where the #1 and #2 result are 1,342px apart ...

In cases like this, it's a strategic advantage to work for the featured snippet, as there's likely a substantial drop-off in clicks from #1 to #2. Featured snippets are going to continue to evolve, and examples like this show how critical it is to understand the entire landscape of your search results.

When is #2 not worth it?

Another interesting case that's evolved quite a bit since 2013 is brand searches, or as Google is more likely to call them, "dominant intent" searches. Here's a SERP for the company Mattress Firm ...

While the #1 result has solid placement, the #2 result is pushed all the way down to 2,848px. Note that the #1 position has a search box plus six full site-links below it, taking up a massive amount of real estate. Even the brand's ad has site-links. Below #1 is a local pack, People Also Ask results, Twitter results from the brand's account, heavily branded image results, and then a product refinement carousel (which leads to more Google searches).

There are only five total traditional organic results on this page, made up of the company's website, the company's Facebook page, the company's YouTube channel, a Wikipedia page about the company, and a news article about the company's 2018 bankruptcy filing.

This isn't just about vertical position — unless you're Mattress Firm, trying to compete on this search really doesn't make much sense. They essentially own page one, and this is a situation we're seeing more and more frequently for searches with clear dominant intent (i.e. most searchers are looking for a specific entity).

What's a search marketer to do?

Search is changing, and change can certainly be scary. There's no question that the SERP of 2020 is very different in some ways than the SERP of 2013, and traditional organic results are just one piece of a much larger picture. Realistically, as search marketers, we have to adapt — either that, or find a new career. I hear alpaca farming is nice.

I think there are three critical things to remember. First, the lion's share of search traffic still comes from traditional organic results. Second, many rich features are really the evolution of vertical results, like news, videos, and images, that still have an organic component. In other words, these are results that we can potentially create content for and rank in, even if they're not the ten blue links we traditionally think of as organic search.

Finally, it's important to realize that many SERP features are driven by searcher intent and we need to target intent more strategically. Take the branded example above — it may be depressing that the #2 organic result is pushed down so far, but ask yourself a simple question. What's the value of ranking for "mattress firm" if you're not Mattress Firm? Even if you're a direct competitor, you're flying in the face of searchers with a very clear brand intent. Your effort is better spent on product searches, consumer questions, and other searches likely to support your own brand and sales.

If you're the 11th person in line at the grocery checkout and the line next to you has no people, do you stand around complaining about how person #2, #7, and #9 aren't as deserving of groceries as you are? No, you change lines. If you're being pushed too far down the results, maybe it's time to seek out different results where your goals and searcher goals are better aligned.

Brief notes on methodology

Not to get too deep in the weeds, but a couple of notes on our methodology. These results were based on a fixed set of 10,000 keywords that we track daily as part of the MozCast research project. All of the data in this study is based on page-one, Google.com, US, desktop results. While the keywords in this data set are distributed across a wide range of topics and industries, the set skews toward more competitive "head" terms. All of the data and images in this post were captured on February 12, 2020. Ironically, this blog post is over 26,000 pixels long. If you're still reading, thank you, and may God have mercy on your soul.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Tuesday, February 25, 2020

Are H1 Tags Necessary for Ranking? [SEO Experiment]

Posted by Cyrus-Shepard

In earlier days of search marketing, SEOs often heard the same two best practices repeated so many times it became implanted in our brains:

  1. Wrap the title of your page in H1 tags
  2. Use one — and only one — H1 tag per page

These suggestions appeared in audits and SEO tools, and were the source of constant head shaking. Conversations would go like this:

"Silly CNN. The headline on that page is an H2. That's not right!"
"Sure, but is it hurting them?"
"No idea, actually."

Over time, SEOs started to abandon these ideas, and the strict concept of using a single H1 was replaced by "large text near the top of the page."

Google grew better at content analysis and understanding how the pieces of the page fit together. Given how often publishers make mistakes with HTML markup, it makes sense that they would try to figure it out for themselves.

The question comes up so often, Google's John Mueller addressed it in a Webmaster Hangout:

"You can use H1 tags as often as you want on a page. There's no limit — neither upper nor lower bound.
H1 elements are a great way to give more structure to a page so that users and search engines can understand which parts of a page are kind of under different headings, so I would use them in the proper way on a page.
And especially with HTML5, having multiple H1 elements on a page is completely normal and kind of expected. So it's not something that you need to worry about. And some SEO tools flag this as an issue and say like 'oh you don't have any H1 tag' or 'you have two H1 tags.' From our point of view, that's not a critical issue. From a usability point of view, maybe it makes sense to improve that. So, it's not that I would completely ignore those suggestions, but I wouldn't see it as a critical issue.
Your site can do perfectly fine with no H1 tags or with five H1 tags."

Despite these assertions from one of Google's most trusted authorities, many SEOs remained skeptical, wanting to "trust but verify" instead.

So of course, we decided to test it... with science!

Craig Bradford of Distilled noticed that the Moz Blog — this very one — used H2s for headlines instead of H1s (a quirk of our CMS).

[Screenshot: the Moz Blog headline markup, showing an H2 where an H1 would be expected]

We devised a 50/50 split test of our titles using the newly branded SearchPilot (formerly DistilledODN). Half of our blog titles would be changed to H1s, and half kept as H2. We would then measure any difference in organic traffic between the two groups.

After eight weeks, the results were in:

To the uninitiated, these charts can be a little hard to decipher. Rida Abidi of Distilled broke down the data for us like this:

Change breakdown - inconclusive
  • Predicted uplift: 6.2% (est. 6,200 monthly organic sessions)
  • We are 95% confident that the monthly increase in organic sessions is between:
    • Top: 13,800
    • Bottom: -4,100
The results of this test were inconclusive in terms of organic traffic, therefore we recommend rolling it back.

Result: Changing our H2s to H1s made no statistically significant difference
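The "inconclusive" verdict follows mechanically from that confidence interval: it contains zero, so the true effect could plausibly be negative, zero, or positive. A minimal sketch of the decision rule, using the figures above (the helper name is ours, not SearchPilot's):

```python
def spans_zero(ci_low, ci_high):
    """A 95% CI that contains zero means the true effect could be
    negative, zero, or positive -- i.e. the test is inconclusive."""
    return ci_low <= 0 <= ci_high

# 95% CI for the monthly change in organic sessions from the test
inconclusive = spans_zero(-4_100, 13_800)  # True -> roll the change back
```

Only if the whole interval sat above zero (or below it) could we call the change a win (or a loss) with 95% confidence.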

In line with Google's statements, the algorithms didn't seem to care whether we used H1s or H2s for our titles. Presumably, we'd see the same result if we used H3s, H4s, or no heading tags at all.

It should be noted that our titles still:

  • Used a large font
  • Sat at the top of each article
  • Were unambiguous and likely easy for Google to figure out

Does this settle the debate? Should SEOs throw caution to the wind and throw away all those H1 recommendations?

No, not completely...

Why you should still use H1s

Despite the fact that Google seems to be able to figure out the vast majority of titles one way or another, there are several good reasons to keep using H1s as an SEO best practice.

Georgy Nguyen made some excellent points in an article over at Search Engine Land, which I'll try to summarize and add to here.

1. H1s help accessibility

Screen-reading technology can use H1s to help users navigate your content, both when reading through a page and when searching within it.

2. Google may use H1s in place of title tags

In some rare instances — such as when Google can't find or process your title tag — they may choose to extract a title from some other element of your page. Oftentimes, this can be an H1.

3. Heading use is correlated with higher rankings

Nearly every SEO correlation study we've ever seen has shown a small but positive correlation between higher rankings and the use of headings on a page, such as this most recent one from SEMrush, which looked at H2s and H3s.

To be clear, there's no evidence that headings in and of themselves are a Google ranking factor. But headings, like Structured Data, can provide context and meaning to a page.

As John Mueller said on Twitter:

What's it all mean? While it's a good idea to keep adhering to H1 "best practices" for a number of reasons, Google will more than likely figure things out — as our experiment showed — if you fail to follow strict H1 guidelines.

Regardless, you should likely:

  1. Organize your content with hierarchical headings — ideally H1, H2s, H3s, etc.
  2. Use a large font headline at the top of your content. In other words, make it easy for Google, screen readers, and other machines or people reading your content to figure out the headline.
  3. If you have a CMS or technical limitations that prevent you from using strict H1s and SEO best practices, do your best and don't sweat the small stuff.

Real-world SEO — for better or worse — can be messy. Fortunately, it can also be flexible.



Monday, February 24, 2020

Spot Zero is Gone — Here's What We Know After 30 Days

Posted by PJ_Howland

As you're probably aware by now, recent updates have changed the world of search optimization. On January 22nd, Google, in its infinite wisdom, decided that the URL that earns the featured snippet in a SERP would no longer get an additional organic listing in that SERP. This also means that, from now on, the featured snippet is the true spot-one position.

Rather than rehash what’s been so eloquently discussed already, I’ll direct you to Dr. Pete’s post if you need a refresher on what this means for you and for Moz.

30 days is enough to call out trends, not all of the answers

I’ve been in SEO long enough to know that when there’s a massive shake-up (like the removal of spot zero), bosses and clients want to know what it means for the business. In situations like this, SEOs’ responses are limited to 1) what they can see in their own accounts, and 2) what others are reporting online.

A single 30-day period isn’t enough time to observe concrete trends and provide definitive suggestions for what every SEO should do. But it is enough time to give voice to the breakout trends that are worth observing as time goes on. The only way for SEOs to come out on top is by sharing the trends they are seeing with each other. Without each other’s data and theories, we’ll all be left to see only what’s right in front of us — which is often not the entire picture.

So in an effort to further the discussion on the post-spot-zero world, we at 97th Floor set out to uncover the trends under our nose, by looking at nearly 3,000 before-and-after examples of featured snippets since January 22nd.

The data and methodology

I know we all want to just see the insights (which you’re welcome to skip to anyway), but it's worth spending a minute explaining the loose methodology that yielded the findings.

The two major tools used here were Google Search Console and STAT. While there’s more traffic data in Google Analytics than in GSC, GA only reports traffic at the page level, so we can’t see which keywords actually drove it. For this reason, we used GSC to get the click-through rates of specific keywords on specific pages. This pairs nicely with STAT’s data, which gives us a daily pinpoint of both Google Rank and Google Base Rank for the keywords at hand.

While there are loads of keywords to look at, we found that small-volume keywords — anything under 5,000 global MSV (with some minor exceptions) — produced findings that didn’t have enough data behind them to claim statistical significance. So, all of the keywords analyzed had over 5,000 global monthly searches, as reported by STAT.
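As a rough sketch of that filtering step (the keywords, volumes, and snippet flags below are made up for illustration; the real pipeline ran against STAT exports):

```python
# Hypothetical (keyword, global MSV, had a featured snippet before
# January 22nd) tuples standing in for the study's STAT/GSC export.
rows = [
    ("best mattress", 74_000, True),
    ("mattress sizes chart", 2_400, True),
    ("memory foam density", 8_100, False),
    ("how to clean a mattress", 22_200, True),
]

MIN_MSV = 5_000  # below this, findings lacked statistical significance

kept = [
    keyword
    for keyword, msv, had_snippet in rows
    if msv >= MIN_MSV and had_snippet  # enough volume AND a prior snippet
]
# kept == ["best mattress", "how to clean a mattress"]
```

Applying both filters (volume and an existing snippet) is what narrowed the set down to the 2,773 keywords mentioned below.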

It’s also important to note that all the difficulty scores come from Moz.

Obviously we were only interested in SERPs that had an existing featured snippet serving to ensure we had an accurate before-and-after picture, which narrows down the number of keywords again. When all was said and done, the final batch of keywords analyzed was 2,773.

We applied basic formulas to determine which keywords were telling clear stories. That led us to intimately analyze about 100 keywords by hand, sometimes spending multiple hours on a single keyword (or rather, a single SERP) over a 30-day period. The findings reported below come from these 100 qualitative keyword analyses.

Oh, and this may go without saying, but I’m doing my best to protect 97th Floor’s clients’ data, so I won’t be giving anything incriminating away as to which websites my screenshots are attached to. 97th Floor has access to hundreds of client GSC accounts, and we track keywords in STAT for nearly every one of them.

Put plainly, I’m dedicated to sharing the best data and insight, but not at the expense of our clients’ privacy.

The findings... not what I expected

Yes, I was among the SEOs who said that, for the first time ever, it might actually make sense to shoot for spot 2 instead of spot 1.

I still don’t think I was wrong (as the data below shows), but after this data analysis I’ve come to find that it’s a more nuanced story than the quick and dirty results we all want from a study like this.

The best way to unravel the mystery of the spot-zero demotion is to call out the individual findings from this study as individual lessons learned. So, in no particular order, here are the findings.

Longtime snippet winners are seeing CTR and traffic drops

While the post-spot-zero world may seem exciting for SEOs who have been gunning for a high-volume snippet spot for years, the websites that have held powerful snippet positions for a long time are seeing fewer clicks.

The keyword below represents a page we built years ago for a client that has held the snippet almost exclusively since launch. The keyword has a global search volume of 74,000 and a difficulty of 58, not to mention an average CPC of $38.25. Suffice it to say that this is quite a lucrative keyword and position for our client.

We parsed out the CTR of this single keyword directing to this single page in Google Search Console for the two weeks prior to the January 22nd announcement and the two weeks following it. I’d love to go back farther than two weeks, but if we did, we would have crept into New Year’s traffic numbers, which would have muddled the data.

As you can see, the impressions and average position remained nearly identical for these two periods. But CTR and subsequent clicks decreased dramatically in the two weeks immediately following the January 22nd spot-zero termination.

If this trend continues for the rest of 2020, this single keyword snippet changeup will result in a drop of 9,880 clicks in 2020. Again, that’s just a single keyword, not all of the keywords this page represents. When you incorporate average CPC into this equation that amounts to $377,910 in lost clicks (if those were paid clicks).
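The projection is straightforward extrapolation from the two-week GSC window. The impressions and CTRs below are illustrative placeholders (not the client’s real figures), chosen so the totals line up with the numbers above:

```python
# Placeholder figures for one two-week GSC window; only avg_cpc
# comes from the post itself.
impressions = 10_000   # impressions per two-week window
ctr_before = 0.118     # CTR before January 22nd
ctr_after = 0.080      # CTR after (a 3.8-point drop)
avg_cpc = 38.25        # average CPC reported for the keyword

clicks_lost = impressions * (ctr_before - ctr_after)   # 380 per window
annual_clicks_lost = clicks_lost * 26                  # ~9,880 per year
paid_equivalent = annual_clicks_lost * avg_cpc         # ~$377,910
```

Same arithmetic, any keyword: (CTR drop) x impressions per window, annualized over 26 two-week windows, priced at the keyword's CPC.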

Sure, this is an exaggerated situation due to the volume of the keyword and inflated CPC, but the principle uncovered over and over in this research remains the same: Brands that have held the featured snippet position for long periods of time are seeing lower CTRs and traffic as a direct result of the spot-zero shakeup.

When a double snippet is present, CTR on the first snippet tanks

Nearly as elusive as the yeti or Bigfoot, the double snippet found in its natural habitat is rare.

Sure, this might be expected: when there are two results that are both featured snippets, the first one gets fewer clicks. But the raw numbers left us with our jaws on the floor. In every instance of this phenomenon we encountered, spot one (the #1 featured snippet) lost more than 50% of its CTR when the second snippet was introduced.

This 40,500 global MSV keyword was the sole featured snippet controller on Monday, and on Tuesday the SERP remained untouched (aside from the second snippet being introduced).

This small change brought our client’s CTR to its knees from a respectable 9.2% to a crippling 2.9%.

When you look at how this keyword performed the rest of the week, the trend continues to follow suit.

Monday and Wednesday are single snippet days, while Tuesday, Thursday, and Friday brought the double snippet.
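With the figures above, the relative CTR loss bears out the "more than 50%" claim (the helper function is ours, for illustration):

```python
def relative_ctr_loss(ctr_single, ctr_double):
    """Percent of CTR lost when a second featured snippet appears."""
    return (ctr_single - ctr_double) / ctr_single * 100

# 9.2% CTR as the sole snippet, 2.9% once the second snippet showed up
loss = relative_ctr_loss(9.2, 2.9)  # ~68.5% of the CTR, gone
```

So this particular keyword didn't just clear the 50% bar; it lost over two-thirds of its click-through rate.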

Easy come, easy go (not a true Spot 1)

There’s been a great deal of speculation on this point, but now I can confirm that ranking for a featured snippet doesn’t work the same way as ranking for a true spot 1. In the case below, you can see a client of ours dancing around spots 5 and 6 before taking the snippet. Similarly, when they lose the snippet, they fall back to their original position.

Situations like this were all too common. Most of the time we see URLs losing the snippet to other URLs. Other times, Google removes the snippet entirely only to bring it back the following day.

If you’re wondering what the CTR reporting on GSC was for the above screenshot, I’ve attached that below. But don’t geek out too quickly; the findings aren’t terribly insightful. Which is insightful in itself.

This keyword has 22,200 global volume and a keyword difficulty of 44. The SERP gets significant traffic, so you would think that findings would be more obvious.

If there’s something to take away from situations like this, here it is: Earning the snippet doesn’t inherently mean CTRs will improve beyond what you would be getting in a below-the-fold position.

Seeing CTR bumps below the fold

Much of the data addressed to this point speaks of sites that either held featured snippets or lost them, but what about the sites that haven’t had a snippet before or after this shakeup?

If that describes your situation, you can throw yourself a tiny celebration (emphasis on the tiny), because the data is suggesting that your URLs could be getting a slight CTR bump.

The example below shows a 74,000 global MSV keyword that hovered between spots 5 and 7 for the week preceding and the week following January 22nd.

The screenshot from STAT shows that this keyword has clearly remained below the fold and fairly consistent. If anything, it ranked worse after January 22nd.

The click-through rate improved the week following January 22nd from 3% to 3.7%. Perhaps not enough to warrant any celebration for those that are below the fold, as this small increase was typical across many mid-first-page positions.

“People Also Ask” boxes are here to steal your snippet CTR

Perhaps this information isn’t new, considering that PAA boxes are just one more feature that can lead users down a rabbit hole of information that isn’t about your URL.

On virtually every single SERP (in fact, we didn’t find an instance where this wasn’t true), the presence of a PAA box drops the CTR of both the snippet and the standard results.

The negative effects of the PAA box appearing in your SERP are mitigated when the PAA box doesn’t serve immediately below the featured snippet. It’s rare, but there are situations where the “People Also Ask” box serves lower in the SERP, like this example below.

If your takeaway here is to create more pages that answer questions showing up in relevant PAA boxes, take a moment to digest the fact that we rarely saw instances of clicks when our clients showed up in PAA boxes.

In this case, we have a client that ranks for two out of the first four answers in a high-volume SERP (22,000 global monthly searches), but didn’t see a single click — at least none to speak of from GSC:

While its counterpart page, which served in spot 6 consistently, is at least getting some kind of click-through rate:

If there’s a lesson to be learned here, it’s that ranking below the fold on page one is better than getting into the PAA box (in terms of clicks, anyway).

So, what is the takeaway?

As you can tell, the findings are a bit all over the place. However, the main takeaway that I keep coming back to is this: Clickability matters more than it ever has.

As I was crunching this data, I was constantly reminded of a phrase our EVP of Operations, Paxton Gray, is famous for saying:

“Know your SERPs.”

This stands truer today than it did in 2014 when I first heard him say it.

As I reflected on this pool of frustrating data, I was reminded of Jeff Bezos’s remarks in his 2017 Amazon shareholder letter:

“One thing I love about customers is that they are divinely discontent. Their expectations are never static — they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’.”

And then it hit me: Google wasn’t built for SEOs; it was built for users. Google’s job is our job: giving users the best content. At 97th Floor, our credo is that we make the internet a better place. It sounds a little corny, but we stand by it. Every page we build, every ad we run, every interactive we create, and every PDF we publish for our clients needs to make the internet a better place. And while it’s challenging for us to watch Google’s updates take clicks from our clients, we recognize that it’s for the user. This is just one more step in the elegant dance we perform with Google.

I remember a day when spots 1, 2, and 3 consistently got CTRs in the double digits. Today, we celebrate if we can get spot 1 over a 10% CTR. Heck, I’ll even take 8% for a featured snippet after running this research!

SEO today is more than just putting your keyword in a title and pushing some links to a page. SERP features can have a more direct effect on your clicks than your own page optimizations. But that doesn’t mean SEO is out of our control — not by a long shot. SEOs will pull through, we always do, but we need to share our learnings with each other. Transparency makes the internet a better place after all.

