Wednesday, February 23, 2022

Sneak Peek: The Initial MozCon 2022 Speaker Lineup

Have you been sitting on the edge of your seat, waiting for MozCon to start rolling out its lineup of speakers? Well, my friend, today’s the day, so hold on tight!

We’re thrilled to announce the first 15 extraordinary speakers who will be taking the MozCon 2022 stage in Seattle this July (listed in alphabetical order).

Meet the speakers

Amalia Fowler (she/her)

Founder, Good AF Consulting
@AmaliaEFowler

Amalia is the Founder of Good AF Consulting, a Vancouver, BC consulting firm where she helps companies build great marketing teams. She's also a marketing instructor at the British Columbia Institute of Technology and creator of The Wholehearted Manager, a newsletter for people who believe in leading with their heart, and that all employees deserve a safe space to work. Amalia has been in search marketing for eight years, with her work focused on helping businesses understand marketing strategy, mentoring new marketers, building exceptional marketing teams and advocating for more ethical and transparent practices in our space. Her favorite saying is "It Depends," because context is everything.

Amanda Milligan (she/her)

Head of Marketing, Stacker
@millanda | @stacker

Amanda Milligan is the Head of Marketing at Stacker Studio. With a degree in journalism and a decade in content marketing, she’s spent her career helping brands harness the intersection of content and SEO.

Andy Crestodina (he/him)

Co-founder / CMO, Orbit Media Studios
@crestodina | @orbiteers

Andy Crestodina is the co-founder and CMO of Orbit Media, an award-winning 50-person digital agency in Chicago. Over the past 20 years, Andy has provided digital marketing advice to over 1,000 businesses.

Areej AbuAli (she/her)

Head of SEO, Papier (areejabuali.com)
@areej_abuali | @TechSEOWomen

Areej is Head of SEO at Papier where she focuses on all things technical and on-site SEO. She is the founder of the global Women in Tech SEO community and has been in the digital marketing industry for over eight years.

Crystal Carter (she/her)

Head of SEO Comms, Wix
@CrystalontheWeb

Crystal Carter is an SEO and digital marketer with over 15 years’ experience working with clients like Disney, Tomy, Kikkoman, and more. She shares insights in leading publications like Google Webmaster Central, The Moz Blog, Search Engine Land, and Women in Tech SEO.

Hannah Smith (she/her)

Founder, Worderist
@hannah_bo_banna

With more than 15 years in the SEO industry, Hannah's creative work has won multiple awards, and she's worked with a range of companies including the BBC, Dyson, Expedia, MailChimp, and Salesforce.

Jackie Chu (she/her)

SEO Lead, Intelligence, Uber
@jackiecchu

Jackie Chu is currently the SEO Lead, Intelligence for Uber, driving analytics and tooling for the SEO teams globally. She has deep experience in technical SEO, content SEO, ASO and international SEO spanning both B2B and B2C industries.

Joe Hall (he/him)

SEO Consultant & Principal Analyst, Hall Analysis
@joehall

Joe Hall is an executive SEO consultant focused on analyzing and informing the digital marketing strategies of select clients through in-depth data analysis and SEO audits.

Lidia Infante (she/her)

Senior SEO Manager, BigCommerce (lidia-infante.com)
@LidiaInfanteM | @bigcommerce

Born and raised in Barcelona, Lidia has been working in SEO for over eight years. She's been helping businesses in e-commerce, media and B2B reach their audiences on search across European markets, the US and Australia. She has leveraged her BSc in Psychology and Master's in Digital Business to drive organic growth for e-commerce sites, media, and SaaS.

Lily Ray (she/her)

Senior Director, SEO & Head of Organic Research, Amsive Digital
@lilyraynyc | @amsive_digital

Lily Ray is the Sr. Director, SEO & Head of Organic Research at Amsive Digital, where she provides strategic leadership for the agency’s SEO client programs. Lily began her SEO career in 2010 in a fast-paced start-up environment and moved quickly into the agency world, where she helped grow and establish an award-winning SEO department that delivered high impact work for a fast-growing list of notable clients, including Fortune 500 companies.

Noah Learner (he/him)

Product Director, Two Octobers
@noahlearner | @twooctobers

Noah is a technical marketer, nicknamed the Kraken, who is happiest building SEO tools, automations, data pipelines and communities. When not in the lab, he loves skiing, fly fishing, camping with his family, and walking his dog, Shadow.

Paddy Moogan (he/him)

Co-Founder, Aira
@paddymoogan | @airadigital

Paddy is co-founder of Aira, a digital marketing agency based in the UK and delivering work across SEO, paid media, content marketing, and digital PR. He has been working in SEO since around 2004 when he got bored studying for his law degree.

Dr. Pete Meyers (he/him)

Marketing Scientist, Moz
@dr_pete | @moz

Dr. Pete is Marketing Scientist for Moz, where he works with the marketing and data science teams on product research and data-driven content.

Tom Capper (he/him)

Senior Search Scientist, Moz
@thcapper | @moz

Tom heads up the Search Science team at Moz, providing research and insight for Moz's next generation of tools. Previously, he led the London consulting team for SEO agency Distilled, and worked as a chef in a roadside grill.

Wil Reynolds (he/him)

Founder & Vice President of Innovation, Seer Interactive
@wilreynolds | @SeerInteractive

Wil has been leading the charge to leverage “Big Data” to break down silos between SEO, PPC, and traditional marketing — pulling together data from various sources to see the big picture.

Stay tuned for more MozCon updates!

And we’re just getting warmed up! We’ve got lots more incredible speakers to reveal in the coming weeks, including our community speaker lineup. But don’t wait to snap up your ticket: early bird savings are only available for a limited time, and once they’re gone, they’re gone for good.

Monday, February 21, 2022

The Professional’s Guide to SEO: Link Building Sneak Preview

We’re developing a brand-new SEO guide for you — The Professional’s Guide to SEO, designed as your next-step resource once you’re comfortable with the baseline provided by the Beginner’s Guide to SEO.

When we think of link building, one of the first people we think of is our friend Paddy Moogan at Aira, the architect of the Beginner’s Guide to Link Building, MozCon alumni speaker, and industry thought leader. That’s why we’re jazzed to announce that the link building chapter of our upcoming Professional’s Guide to SEO is authored by none other than the fine folks at Aira. And today we’re sharing an excerpt of that very chapter.

Dive in for a glimpse at our newest guide, and as always, give us a shout on Twitter if you have any thoughts or recommendations — we love hearing from you!

Link Building & Link Earning Tactics

Measuring link building

In this section, we explore approaches to measuring links and understanding which ones may make the biggest difference to traffic and rankings, along with business outcomes such as revenue.

Once your link building strategy is set and you’re underway, you need to measure your work so that you can understand the effectiveness of what you’re doing and demonstrate its value. There are a few ways that you can measure link building, which we’ll cover below, along with the pros and cons of each one.

Volume of links

You can use various tools (such as Link Explorer) to measure the volume of links to a domain and then measure this number over time. You can also manually measure the links that you build via your own outreach and record these over time.

The advantage of this measure is that it’s very easy to do and is usually a direct consequence of the work that you are doing. For example, if you engage in a tactic such as broken link building, you can see exactly how many links you were able to build as a result of that work. This will allow you to understand which activities are most worthwhile and effective at generating volumes of links.

There is a downside to this form of measurement. Not all links are created equal, and the raw volume of links isn’t very useful without important context such as the authority and relevance of those links.

One high-quality, authoritative, relevant link could do more for your site than a hundred low-quality links. If you only measure link volume on its own, then the hundred low-quality links may be seen as successful and effective.

Volume of links can be a useful metric as long as you layer it with other context, which we’ll talk about below.

Quality of links

“Quality” is generally quite subjective, but when it comes to link building, there are some good ways to understand and measure the quality of a link. The two core ways are:

  1. Relevance of a link

  2. Authority of a link

Let’s talk about relevance first.

Link relevance

Relevance isn’t binary. You can’t just look at a link and say off the bat whether it’s relevant or not. For example, imagine you get a link to your website from another website in the same industry that contains content similar to yours. It sounds relevant on the surface, but what if the link points at a piece of content you’ve created that’s on a completely different topic — is that still a relevant link?

What if you somehow get a link from NASA to your coffee beans website? Space exploration isn’t exactly relevant to the grinders and beans that you sell, is it?!

Situations like these make relevance pretty difficult to measure, especially at scale. But it’s possible to do by using your own experience, instincts, and by asking a few basic questions when looking at a link:

  • If the link is on another website, how likely is it that a potential customer for your products would see that link?

  • If the link is to a piece of content that you’ve created, would that content resonate with your ideal customer?

  • If someone at your company who has zero knowledge of SEO and link building saw the link, would they be happy with it?

Questions like these can help you judge whether a link is relevant to your business or not. Although it’s not the most objective measure and is hard to scale, it’s vital to factor relevance into your link building measurement at some point. This is because the more relevant your links are, the more likely they are to put you in front of potential customers and generate real traffic.

Link authority

When it comes to authority, there are many third-party scoring methods that aim to replicate Google PageRank. Domain Authority and Page Authority are proprietary metrics created by Moz. They can give you an approximate idea of how much value a particular link has. They’re not foolproof, and no third-party metric can truly replicate Google PageRank, but they can be good enough to give you a rough idea of how much authority a link has.

Using third-party metrics is a good thing, but you should always keep in mind that these metrics are not used by Google themselves. This means that they’re naturally limited in their usefulness and, similar to link volume, they shouldn’t be used without context.

Rankings and organic search traffic

Ideally, the link building that you carry out should have a positive effect on your organic search rankings and, as a consequence, your traffic as well. As we learned earlier, there are many signals that Google uses to determine organic search rankings, but building the right kinds of links is a strong one and should help you rank better.

However, it can be hard to measure the direct effect of the links you build on organic search rankings and traffic. It’s not as simple as being able to say “we built 10 links and our traffic went up by 20%.” Despite understanding some of the signals, none of us knows exactly how the Google algorithm works, and even if we did, every website and industry is different.

Building 10 links for a brand-new coffee bean website is likely to have a bigger impact than building 10 links for an established website that sells car insurance.

This makes our lives harder as SEOs, but not impossible. It’s also easier if you’re able to control or at least have some influence over other important areas of SEO, such as technical SEO or content.

The main thing that many of us can do, regardless of how much control we have, is to measure organic search rankings and traffic objectively over time and look for consistent improvement. Check your rankings with tools such as Moz Pro or Google Search Console and use Google Analytics to check traffic.

Referral traffic

Another way to measure the effectiveness of link building is to look at how much traffic is sent via the links that you’ve built.

It’s easy to get caught up in measuring rankings and organic traffic and forget that links can generate traffic directly, too. This happens when someone sees your link and clicks on it, resulting in a visit to your website.

Now, not every link that you build will send traffic — and that’s okay. However, one aspect of your approach should include building links that do indeed lead to referral traffic. They may come as a result of launching a great piece of content or building a new, innovative product feature that gets picked up by industry experts.

Setting expectations is important here because, as mentioned, not every link will send lots of traffic. For example, if you get a link on a page because of a great resource that you’ve created, but your link is amongst hundreds of other links on the external website, the chances of someone clicking on yours are significantly reduced.

With that said, it’s a fantastic metric to use and pretty simple if you utilize analytics tools such as Google Analytics.

Want more news about the Professional’s Guide to SEO? Don’t miss any of our future sneak peeks — make sure you’re signed up for Moz Blog email updates!


Friday, February 18, 2022

Core Web Vitals: What Next?

The promised page experience updates from Google that caused such a stir last year are far from being done, so Tom takes a look at where we are now and what happens next for the algorithm’s infamous Core Web Vitals.

whiteboard with Core Web Vitals diagrams

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Happy Friday, Moz fans, and welcome to another Whiteboard Friday. This week's video is about Core Web Vitals. Before you immediately close the tab, pause the video, or press Back: no, we haven't got stuck in time. This isn't a video from 2020. This is looking forwards. I'm not going to cover what the basic metrics are or how they work, that kind of thing, in this video. There is a very good Whiteboard Friday from about a year ago from Cyrus about all of those things, which hopefully will be linked below. What this video is going to be looking at is where we are now and what happens next, because the page experience updates from Google are very much not done. They are still coming. This is still ongoing. This is probably going to get more important over time, not less, even though the hype has kind of subsided a little bit.

Historical context

So, firstly, I want to look at some of the historical context in terms of how we got where we are. So I've got this timeline on this side of the board. You can see in May 2020, which is nearly two years ago now, Google first announced this. This is an extraordinarily long time, really, in SEO and Google update terms. But they announced it, and then we had these two delays and it felt like it was taking forever. I think there are some important implications here because my theory is that — I've written about this before and again, hopefully, that will also be linked below — but my theory is that the reason for the delays was that too few pages would have been getting a boost if they had rolled out when they originally intended to, partly because too few sites had actually improved their performance and partly because Google is getting data from Chrome, the Chrome User Experience (CrUX) data. It's from real users using Chrome.

For lots of pages for a long time, including now really, they didn't really have a good enough sample size to draw conclusions. The coverage is not incredible. So because of that, initially when they had even less data, they were in an even worse position to roll out something. They don't want to make a change to their algorithm that rewards a small number of pages disproportionately, because that would just distort their results. It will make their results worse for users, which is not what they're aiming for with their own metrics.

So because of these delays, we were sort of held up until June last year. But what I've just explained, this system of only having enough sample size for more heavily visited pages, this is important for webmasters, not just Google. We'll come back to it later when we talk about what's going to happen next I think, but this is why whenever we display Core Web Vitals data in Moz Pro and whenever we talk about it publicly, we encourage you to look at your highest traffic or most important pages or your highest ranking pages, that kind of thing, rather than just looking at your slowest pages or something like that. You need to prioritize and triage. So we encourage you to sort by traffic and look at that alongside performance or something like that.

So anyway, June 2021, we did start having this rollout, and it was all rolled out within two or three months. But it wasn't quite what we expected or what we were told to expect. 

What happened after the rollout? 

In the initial FAQ and the initial documentation from Google, they talked about sites getting a boost if they passed a certain threshold for all three of the new metrics they were introducing. Although they kind of started to become more ambiguous about that over time, that definitely isn't what happened with the rollout.

So we track this with MozCast data. So between the start and the end of when Google said they were rolling it out, we looked at the pages ranking top 20 in MozCast that had passes for zero, one, two, or three of the metrics against the thresholds that Google published. 

Hand drawing of average ranking across sites that passed between 0 and all 3 core web vital metrics.

Now one thing that's worth noticing about this chart, before you even look at it any more closely, is that all of these lines trend downwards, and that's because of what I was talking about with the sample sizes increasing, with Google getting data on more pages over time. So as they got more pages, they started incorporating more low-traffic, or in other words low-ranking, pages into the CrUX data, and that meant that the average rank of a page that has CrUX data will go down, because when we first started looking at this, even though this is top 20 rankings for competitive keywords, only about 30% of them even had CrUX data in the first place. It's gone up a lot since then. So it now includes more low-ranking pages. So that's why there's this sort of general downwards shift.

So the thing to notice here is the pages passing all three thresholds, these are the ones that Google said were going to get a big boost, and these went down by 0.2, which is about the same as the pages that were passing one or two thresholds. So I'm going to go out on a limb and say that that was just the general shift caused by incorporating more pages into CrUX data.

The really noticeable thing was the pages that passed zero. The pages that passed zero thresholds, they went down by 1.1. They went down by 1.1 positions. So instead of it being pass all three and get a boost, it's more like pass zero and get a penalty. Or you could rephrase that positively and say the exact same thing, as pass one and get a boost relative to these ones that are falling off the cliff and dropping over one ranking position.

So there was a big impact it seems from the rollout, but not necessarily the one that we were told to expect, which is interesting. I suspect that's because Google perhaps was more confident about the data on the sites performing very badly than about the data on the sites performing very well.

What happens next? 

Desktop rollout

Now, in terms of what happens next, I think this is relevant because in February and March, probably as you're watching this video, Google have said they're going to be rolling out this same page experience update on desktop. So we assume it will work the same way: what you've seen here on smartphones only will be replicated on desktop at the start of this year. So you'll probably see something very similar with very poorly performing sites. By the time you're watching this video, those sites probably have little or no time left to get this fixed before they see a ranking drop, which, if one of them is your competitor, could be good news.

But I don't think it will stop there. There are two other things I expect to happen. 

Increased impact

So one is you might remember with HTTPS updates and particularly with Mobilegeddon, we expected this really big seismic change. But what actually happened was when the update rolled out, it was very toned down. Not much noticeable shifted. But then, over time, Google sort of quietly turned up the wick. These days, we would all expect a very mobile-unfriendly site to perform very poorly in search, even though the initial impact of that algorithm update was very minor. I think something similar will happen here. The slower sites will feel a bigger and bigger penalty gradually building. I don't mean like a manual penalty, but a bigger disadvantage gradually building over time, until in a few years' time we would all intuitively understand that a site that doesn't pass three thresholds or something is going to perform horribly.

New metrics

The last change I'm expecting to see, which Google hinted about initially, is new metrics. So they initially said that they would probably update this annually. You can already see on web.dev that Google is talking about a couple of new metrics. Those are smoothness and responsiveness. So smoothness is to do with the sort of frames per second of animations on the page. So when you're scrolling up and down the page, is it more like a slideshow or a sort of fluid video? Then responsiveness is how quickly the page interacts with or responds to your interactions. So one of the current metrics, First Input Delay, already covers something like this, but, as the name says, it's only the first input. So I'm expecting this to care more about things that happen further into your browsing experience on that page.

So these are things I think you have to think about going forwards through 2022 and beyond for Core Web Vitals. I think the main lesson to take away is you don't want to over-focus on the three metrics we have now, because if you just leave your page that's currently having a terrible user experience but somehow sort of wiggling its way through these three metrics, that's only going to punish you in the long run. It will be like the old-school link builders that are just constantly getting penalized as they find their way around every new update rather than finding a more sustainable technique. I think you have to do the same. You have to aim for a genuinely good user experience or this isn't going to work out for you.

Anyway, that's all for today. Hope the rest of your Friday is enjoyable. See you next time.

Video transcription by Speechpad.com

Wednesday, February 16, 2022

Understanding the Google Ads Auction & Why Ad Rank Is Important

There are 3.5 billion searches on Google every day, and 84% of people use Google at least three times per day to search for information.

When there is a search query on Google, Google Ads runs a quick auction to determine which ads will show for that search query, and what the ad positions should be. This ad auction is repeated every time an ad is eligible to appear for a search term out of the billions searched each day.

To determine if an ad is eligible to be shown in the Google search results, and what the position of the ad will be, Google uses a value called Ad Rank. If an ad does not meet the Ad Rank thresholds, it will not be shown. Ad Rank also determines the CPC (cost per click) that the advertiser pays for a click on their ad.

In this post, I cover the main factors that are used by Google to determine the Ad Rank of an ad during the auction, and what those factors mean for your ad strategy.

Understanding the Google Ads auction

How the Google Ads auction works

When a user makes a search query, Google Ads runs a split-second auction of all the ads whose keywords are relevant to it. This will determine which ads are eligible to be shown, their ad position relative to competing ads, and the CPC that the advertiser will pay for a click on their ad.

When setting up Google Ads pay per click (PPC) marketing campaigns, advertisers identify which keywords they want to bid on and set their max CPC bid. The advertiser also sets up ad groups with keywords and creates related ads.

When there is a search query, the Google Ads auction begins. Here is the auction process according to Google:

  • For every search query, Google Ads finds all the ads whose keywords are relevant to the search terms.

  • The system ignores ads that are not eligible for that location and any disapproved ads.

  • The remaining ads will be evaluated based on their Ad Rank. The Ad Rank is based on the max CPC bid, ad quality, Ad Rank thresholds, search context, and the ad extensions and formats used.

The eligible ads that won the auction are shown on the SERP based on their Ad Rank.

The layout of the Google search results page changes constantly. Currently, Google shows three ads above the organic search results and three ads below the search results on each search page. Depending on the popularity of the search term, and the number of qualified ads, ads may be shown on multiple search pages for the search term.

Here is an example of a search for “eye doctors Dallas” that shows three eligible Google Ads above the organic search results:

Figure 1: Ads shown above organic search results for a search query

What is Ad Rank?

The ad with the highest Ad Rank will be shown in the top position of the search results page for a relevant search term. This is followed by the ad with the second highest Ad Rank and so on. Ads that do not meet the Ad Rank eligibility requirements will not be shown on Google.

Ad Rank calculation

Ad Rank = Max CPC Bid × Quality Score, plus additional factors like the impact of ad extensions and formats, Ad Rank thresholds, search context, and the competitiveness of the auction.

Thus, spending more does not necessarily guarantee you the best Ad Rank. Here is an example of basic Ad Rank calculations for four advertisers competing for ad positions in the Google Search Results:

Figure 2: Calculating Ad Rank

As seen in the example, Advertiser 1 had a lower max CPC bid than the other three advertisers, but was able to qualify for the top ad position because their Quality Score was high. Advertiser 4, in contrast, had the highest max CPC bid but the lowest Quality Score, and ended up in the lowest ad position.
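The arithmetic behind a table like Figure 2 can be sketched in a few lines of Python. The bids and Quality Scores below are hypothetical, chosen so the resulting Ad Ranks come out to 24, 20, 12, and 8; real Ad Rank also folds in thresholds, search context, and ad extension impact:

```python
# Simplified sketch of the core Ad Rank ordering.
# Hypothetical numbers; the real calculation includes more factors.
advertisers = [
    {"name": "Advertiser 1", "max_cpc": 3.00, "quality_score": 8},
    {"name": "Advertiser 2", "max_cpc": 4.00, "quality_score": 5},
    {"name": "Advertiser 3", "max_cpc": 6.00, "quality_score": 2},
    {"name": "Advertiser 4", "max_cpc": 8.00, "quality_score": 1},
]

for ad in advertisers:
    ad["ad_rank"] = ad["max_cpc"] * ad["quality_score"]

# The highest Ad Rank takes the top position on the SERP.
ranked = sorted(advertisers, key=lambda ad: ad["ad_rank"], reverse=True)
for position, ad in enumerate(ranked, start=1):
    print(position, ad["name"], ad["ad_rank"])
```

Note that Advertiser 1 wins the top spot with the lowest bid, purely on the strength of their Quality Score.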

Why should you care about your Ad Rank?

Google sets minimum Ad Rank thresholds that will determine if an ad is shown at all on Google.

In the example in Figure 2 above, there are four advertisers competing in an auction with Ad Ranks of 24, 20, 12, and 8. If the minimum Ad Rank to show above the organic search results is 20, only Advertisers 1 and 2 will show above the search results. If the minimum Ad Rank to show below the search results is 10, only Advertiser 3 will show below the search results. Advertiser 4 will not meet the minimum Ad Rank thresholds, and their ads will not be shown on Google at all.
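Using the Ad Ranks from that example, the eligibility logic is a simple filter. The threshold values below are the illustrative ones from the example; Google's real thresholds are dynamic and not disclosed:

```python
# Placement by Ad Rank thresholds, using the example's numbers.
ad_ranks = {
    "Advertiser 1": 24,
    "Advertiser 2": 20,
    "Advertiser 3": 12,
    "Advertiser 4": 8,
}
TOP_THRESHOLD = 20     # minimum Ad Rank to show above organic results
BOTTOM_THRESHOLD = 10  # minimum Ad Rank to show below organic results

above = [a for a, r in ad_ranks.items() if r >= TOP_THRESHOLD]
below = [a for a, r in ad_ranks.items() if BOTTOM_THRESHOLD <= r < TOP_THRESHOLD]
not_shown = [a for a, r in ad_ranks.items() if r < BOTTOM_THRESHOLD]

print(above)      # Advertisers 1 and 2 show above the organic results
print(below)      # Advertiser 3 shows below the organic results
print(not_shown)  # Advertiser 4 is not shown at all
```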

Advertisers compete to have their ad shown in the top-most position on the SERP since that leads to a higher clickthrough rate (CTR) and results in more leads. Ad CTR changes considerably depending on your ad position.

The average CTR across all ads on Google Ads is 3.17% in search. But that CTR ranges considerably depending on industry and position, with a “good” CTR for position 1 being 6% or higher.

Even those minor differences in percentage can equate to thousands more clicks for higher-ranked ads.

With that in mind, let’s dig into the two main factors determining your Ad Rank a bit more.

What is CPC?

Cost per click (CPC) is the price you pay per click on your ads in your pay-per-click (PPC) marketing campaigns.

When you set up a Google Ads PPC campaign, you set the max CPC bid for the keywords in your account. The max CPC bid can be set up at the keyword level or at the ad group level:

  • The maximum CPC is the maximum amount that you’re willing to pay for a click on your ads.

  • The actual CPC is the final amount you’re charged for a click on your ad. Your actual CPC is determined at the time of the auction and may be less than your max CPC bid.

  • The average CPC is the average amount you’re charged for a click on your ads.

While CPC costs can vary depending on your industry, the average CPC in Google Ads is $2.69 for search and $0.63 for display.

CPC pricing is also called PPC or pay-per-click. Hence, Google Ads is called PPC or pay-per-click advertising.

How Ad Rank affects actual CPC

Ad Rank also affects the actual CPC you pay for a click on your ads.

Google Ads uses a second-price auction system. The actual CPC you pay is calculated at the time of auction based on your Quality Score and the Ad Rank of the advertiser below you, plus $0.01. Because the auction is dynamic, the actual CPC can vary with each auction.

Google does not disclose the details of how they calculate the actual CPC for Google Ads. According to Search Engine Land, the actual CPC you pay for a click on your ad is determined at the time of the auction by the following formula:

Actual CPC = (Ad Rank of the advertiser below you / your Quality Score) + $0.01
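To make the formula concrete, here is a sketch applying it to a hypothetical auction. The Quality Scores and Ad Ranks below are made up for illustration, and the last-place advertiser has no ad below them, so their price isn't computed this way:

```python
# Second-price sketch of the actual-CPC formula:
#   actual CPC = (Ad Rank of the advertiser below you / your Quality Score) + 0.01
advertisers = [  # already sorted by Ad Rank, highest first (hypothetical numbers)
    {"name": "Advertiser 1", "quality_score": 8, "ad_rank": 24},
    {"name": "Advertiser 2", "quality_score": 5, "ad_rank": 20},
    {"name": "Advertiser 3", "quality_score": 2, "ad_rank": 12},
    {"name": "Advertiser 4", "quality_score": 1, "ad_rank": 8},
]

for ad, ad_below in zip(advertisers, advertisers[1:]):
    ad["actual_cpc"] = round(ad_below["ad_rank"] / ad["quality_score"] + 0.01, 2)

# Advertiser 1 pays 20 / 8 + 0.01 = 2.51, well under their max bid,
# thanks to their high Quality Score.
```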

Figure 3: Calculating Actual CPC values at each ad position

What is Quality Score?

The Quality Score is a diagnostic tool that is used to estimate the overall quality of your ad compared to other advertisers.

Ads and landing pages that are considered more relevant and useful to the search query get a higher Quality Score. This helps to ensure that more useful ads are shown at a higher position on the SERP.

Quality Score is measured on a scale of 1-10, and is available for every keyword. It is based on historical impressions for exact searches of your keyword.

Three factors that determine Quality Score

Quality Score is calculated based on the performance of three main factors:

Expected CTR

The expected CTR is a prediction of the ad clickthrough rate when the ad is shown on Google. Expected CTR projections are based on your ads’ historical clickthrough performance, which helps Google decide which ads will perform best when shown for a search query.

CTR is the number of clicks your ad receives divided by the number of times your ad is shown: CTR = clicks / impressions.
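The arithmetic is simple enough to express directly; the click and impression counts below are hypothetical:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Clickthrough rate as a percentage: clicks / impressions * 100."""
    return clicks / impressions * 100

# e.g. 32 clicks on 1,000 impressions:
print(f"{ctr(32, 1000):.2f}%")  # 3.20%
```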

Landing page experience

The landing page experience measures how relevant and useful your website landing page is to the person who clicked on the ad.

Ad Relevance

Ad relevance measures how well your ad matches the user’s search intent. It ensures that only the most useful ads are shown for every search query, and prevents ads that are unrelated to the product or service from being shown for a search query.

Each of the three Quality Score factors is given a rating of “Above Average”, “Average” or “Below Average”.

In addition to the three factors above, Google considers additional factors during the real-time auction such as the type of device used, location of the user, time of day, impact of ad extensions, and more.

How to check your Quality Score in Google Ads

Google Ads provides four Quality Score status columns at the keyword level to check Quality Score:

  • Quality score

  • Landing page experience

  • Expected CTR

  • Ad relevance

To check your Quality Score in your Google Ads account:

1. Log in to your Google Ads account

2. Click on “Keywords” in the left menu

3. Click on the “Columns” icon in the upper right corner of the table

4. Click on “Modify columns for keywords” and scroll to the Quality Score section. Add the following components to your table metrics (see Figure 4):

  • Quality score

  • Landing page experience

  • Expected CTR

  • Ad relevance

Figure 4: Modify columns for keywords to add Quality Score columns

5. Click Apply

6. Once these columns are added, scroll to the right on each keyword in the table to check the Quality Score and its components (see Figure 5).

Figure 5: Quality Score status columns in the keywords table

7. If there is a “-” in the Quality score column, it means that there are not enough searches that exactly match your keywords to determine the Quality Score for that keyword.
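If you prefer to pull these columns programmatically rather than through the UI, the Google Ads API exposes the same keyword-level data via its GAQL query language. The sketch below shows the `quality_info` fields that, to the best of our knowledge, correspond to the four UI columns, plus a small helper that mirrors how the UI shows “-” for a missing score; the API client setup itself is omitted and assumed.

```python
# GAQL query for the keyword-level Quality Score columns (field names
# from the Google Ads API reference; run via a GoogleAdsService client,
# which is not shown here).
GAQL = """
SELECT
  ad_group_criterion.keyword.text,
  ad_group_criterion.quality_info.quality_score,
  ad_group_criterion.quality_info.search_predicted_ctr,
  ad_group_criterion.quality_info.post_click_quality_score,
  ad_group_criterion.quality_info.creative_quality_score
FROM keyword_view
WHERE campaign.status = 'ENABLED'
"""

def format_quality_score(score):
    """Mirror the UI: a keyword without enough exact-match searches has
    no Quality Score, shown as '-' rather than a number."""
    return str(score) if score is not None else "-"

print(format_quality_score(7))     # → 7
print(format_quality_score(None))  # → -
```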

For information on improving your Quality Score, read these tips from Google.

Conclusion

The Google Ads auction is a real-time auction that is triggered with every search on Google to determine which ads will be shown for that search term, and in what position. The Ad Rank and Quality Score of the ads are important factors in the ad auction and help to determine whether an ad is eligible to be shown on Google. By improving the individual components of Ad Rank and Quality Score, you can improve the eligibility and rank of your ads.

Monday, February 14, 2022

Time to Shine: MozCon 2022 Community Speaker Pitches Now Open

MozCon is heading back to Seattle this July, and we’re excited to announce the return of our annual call for up-and-coming community speakers!

Every year, we take great pride in reserving space on our stage for new voices. Are you the person that everyone in your network looks to for digital marketing advice? Perhaps you’ve been honing your voice on podcasts or blogs, all the while dreaming of stepping onto the big stage to share your innovative ideas? Now’s your chance to submit your pitch for the opportunity to join industry leaders on the MozCon stage in front of 1,500 of your peers. (No pressure!)

Not sure what a community speaker is?

At MozCon, we have a speaker selection committee that identifies practitioners at the top of their professional field with a mean speaking game. But these sessions are by invite only, and we know the community is bursting at the seams with hidden gems ready to share groundbreaking research, hot tips, and SEO tests that drive results.

Cue our MozCon community speaker program! We reserve six 15-minute community speaking slots throughout our three-day conference. We encourage anyone in the SEO community to submit their best and most exciting presentation ideas for MozCon. Not only are these sessions incredibly well-received by our attendees, but they’re also a fantastic way to get your foot in the door when it comes to the SEO speaking circuit.

Interested in pitching your own idea? Read on for everything you need to know:

How to submit

To submit a pitch, fill out the community speaker submission form below. Only submit one talk! We want the one you’re most passionate about.

Talks must be related to online marketing and be a topic you can cover in 15 minutes. Submissions close on Friday, February 25th at 5pm PDT — no exceptions!

If chosen, you’ll be required to present your talk July 11-13, 2022 at MozCon in Seattle, WA. Incomplete submissions will not be considered, and all decisions are final. All speakers must adhere to the MozCon Code of Conduct and follow our Covid-19 health protocols including providing proof of full vaccination.

Submit my pitch!

If you submit a pitch, you’ll hear back from us regardless of your acceptance status, so please be patient until you hear back — we’ll work hard to make our decisions as quickly as we can! Please note that due to the volume of submissions we typically receive, we’re unable to provide specific feedback on individual applications.

What do speakers receive?

As a community speaker you will receive:

  • 15 minutes on the MozCon stage for a keynote-style presentation

  • A free ticket to MozCon (we can issue a refund or transfer if you’ve already purchased yours)

  • The complete MozCon 2022 bundle of speaker videos

  • Travel and accommodations during MozCon

  • Support and feedback as you build your final presentation deck to make sure you deliver the talk of your life on our stage.

  • And a few more surprises…

How we select our speakers

We have an internal committee of experienced Mozzers that review every pitch. We analyze each topic to make sure there’s no overlap and to confirm that it’s a good fit for our audience.

Next, we look at the entirety of the pitch to help us get a comprehensive idea of what to expect from your talk on the MozCon stage and how it might be received by the audience. This is where links to previous decks, content, and videos of past presentations are helpful (but aren’t required).

Here’s how to make your pitch stand out:

  • Keep your pitch focused on digital marketing. SEO topics are great, but we also love topics that complement or sit adjacent to SEO. The more actionable the pitch, the better.

  • Be focused and concise. What value does your talk provide? We want to hear the actual takeaways our audience will be learning about and why it’s important — not just a vague reference to them. Remember, we receive a ton of pitches, so the more clearly you can explain the tactical steps and learning objectives for the audience, the better you’ll stand out.

  • Do your research! Review the topics presented at past MozCons, on the Moz Blog, and in our Whiteboard Friday videos for a sense of what resonates with our audience. We’re looking for fresh sessions that round out our agenda.

  • Brush up on how to prepare for speaking.

  • No pitches will be evaluated in advance, so please don’t ask :)

  • Using social media to lobby for your pitch won’t help. Instead, put your time and energy into the pitch itself!

  • Linking to a previous example of a slide deck or presentation isn’t required, but it does help the committee.

Leading up to MozCon

If your pitch is selected, the MozCon team is here to support you along the way. It’s our goal to make sure this is your best talk to date, whether it’s your first time under those bright stage lights or you’re a seasoned speaker who feels perfectly at home in front of a big crowd. We’ll answer any questions you may have and work with you to deliver a talk you’ll be proud of. Here are just a handful of ways that we’re here to help:

  • Topic refinement

  • Helping with your session title and description

  • Reviewing any session outlines and drafts

  • Providing plenty of tips around best practices — specifically with the MozCon stage and audience in mind

  • Comprehensive show guide

  • Being available to listen to you practice your talk

  • Reviewing your final deck

  • A full stage tour on the Sunday before MozCon to meet our A/V crew, see your presentation on the big screen, and get a feel for the show

  • An amazing 15-person A/V team to support your presentation every second it’s on the big screen and beyond

We’ve got our fingers crossed for you. Good luck!

Submit my pitch!

Friday, February 11, 2022

Daily SEO Fix: SEO Reporting — More Specific Use Cases

When rolling out an SEO strategy, an important step is to establish goals you’re able to report on regularly. With Custom Reports in Moz Pro, you have the ability to set up automatic reports to send to key stakeholders regarding the goals you are seeking to achieve. In the previous Daily Fix edition we talked through the basics of creating reports. This time, we’ll dive deeper into some specific use cases that may help you and your team better relay critical information regarding your SEO goals and objectives.

Use Case: Rankings Opportunities

Staying on top of rankings opportunities and how your site stacks up against your competitors may be a critical piece in your SEO strategic puzzle.

In this video, Varad will walk through how you can further supplement the existing Rankings Opportunities template. This report will offer ongoing insight into possible opportunities to tackle next.

Use Case: Competitive Analysis

Setting up a competitive analysis report can help you answer questions from clients and team members regarding how they are performing next to their competitors. This is a common question when working in SEO and having a report ready to go can help set you up for success!

In this next video, Meghan will go over the Custom Report template for competitive analysis and how you can further segment data to bolster your reporting efforts.

If you need help determining who your competitors are, be sure to check out the True Competitor tool.

Use Case: Site Audit

If you are in charge of the technical SEO for your site, or your client’s site, you will likely be asked to regularly report on what issues you’ve resolved and what you’re currently working on. Moz’s Custom Report templates offer a variety of options for Site Audit reports which can help you stay on top of issues and better communicate with interested parties. You can even add in data from the Performance Metrics part of Site Crawl which reports on a tracked site’s Core Web Vitals performance.

Emilie will go over the different options available for Site Audit reports and how to add in that Core Web Vitals data.

Use Case: Traffic Reporting

If increasing organic traffic to your site or to a specific page on your site is one of your goals, it can be helpful to have a report set up to regularly send you updates in a digestible format. With Google Analytics connected to your Moz Pro Campaign, you can create traffic-related reports in the Custom Reports section.

In this next video, Christy will go over which modules you can add to your report so you’re able to keep tabs on organic traffic right alongside your other Campaign data.

Extra Tips & Tricks for Designing Reports

Although this next video isn’t technically a use case, we couldn’t resist adding one more video for you with extra tips and tricks for designing your report. The cherry on top, if you will! In this last video, Varad will go over some ways you can further customize your Custom Reports to elevate them to the next level.

Wednesday, February 9, 2022

Top Stories 7-Pack Tops the SERPs

Back in December of 2021, Google launched a redesigned version of Top Stories on desktop that got relatively little notice. At first glance, it appeared that Google introduced a two-column design, such as this Top Stories pack for “Nerf”:

Over time, SEOs spotted a rarer but more interesting variety, the 7-pack. Here’s one for “snow” (a topic very much on my mind in Chicago as I’m writing this post):

Beyond the redesign itself, this 7-pack occupies a huge amount of screen real estate, especially compared to previous Top Stories lists and carousels that were limited to three stories.

Should we panic yet?

It’s easy to focus on the most extreme examples, but how often is this 7-pack variety actually occurring? Across the MozCast 10,000-keyword daily tracking set on February 3rd, we captured 2,121 page-one SERPs with Top Stories. Here’s the breakdown by story count:

In our data set, the 7-pack is pretty rare (<1%), with a bit under half (44%) of Top Stories packs containing four stories. Interestingly, there is a design break between three and four stories. Top Stories packs with three or fewer stories are presented in list format, like this one from a search for “dog breeds”:

Top Stories packs with four or more stories (on desktop) seem to switch to the newer, two-column format. While we don’t currently have data on the CTR impact, it will be interesting to see how the two formats impact CTRs and other searcher behaviors.
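The breakdown above is just a tally of pack sizes across captured SERPs. If you track your own keyword set, a small sketch like this computes the same distribution; the input data here is purely illustrative, not the MozCast tallies.

```python
from collections import Counter

def pack_size_shares(story_counts):
    """Given the number of stories in each captured Top Stories pack,
    return the share of packs at each size, as percentages."""
    tally = Counter(story_counts)
    total = len(story_counts)
    return {size: round(100 * n / total, 1)
            for size, n in sorted(tally.items())}

# Illustrative sample of five packs (not real tracking data):
print(pack_size_shares([3, 4, 4, 4, 7]))  # → {3: 20.0, 4: 60.0, 7: 20.0}
```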

Is news a search intent?

While the 7-pack is still relatively rare, it represents a qualitatively different SERP — one where news is not just a SERP feature but looks more like a dominant intent for that search. Consider the fuller SERP context of my search for “snow”:

Sorry for the vertical scroll, but these are just the features before the #1 organic result. Obviously, weather SERPs have some unique features, but there’s also a 7-pack of Top Stories, Twitter results, and the new “Local news” pack (launched in December), all suggesting time-sensitive, news-style intent. This is a search where even the most evergreen informational content isn’t going to compete.

Note that, because the news itself is always changing, even the presence of Top Stories packs is very dynamic. Their presence across SERPs follows a cycle that peaks around Wednesday or Thursday and falls off into Sunday and Monday. Some searches may shift intent only on special occasions. For example, consider my search for “groundhog” on February 2nd:

This was not a search for “Groundhog Day” — simply for “groundhog” the animal. Outside of the holiday timeline, this SERP is very likely to be informational. While these dramatic shifts are somewhat unusual, it’s important to remember that search intent is not a static concept.

Is Google testing the waters?

As always, Google giveth and Google taketh away. These Top Stories packs could increase, disappear, or evolve into something entirely new. I do think that Google is testing how searchers interact with news results and trying to separate news as a part of a SERP (when multiple types of content are useful) versus news as a primary intent.

For now, it’s worth monitoring your own results to see where news content may be outshining informational content. In 2022, organic SEO is as much about the searches you don’t pursue as the ones you do, and about putting your time and money where the ROI makes the most sense.