Tuesday, September 8, 2020

Identifying Advanced GSC Search Performance Patterns (and What to Do About Them)

Posted by izzismith

Google Search Console is by far the most used tool in the SEO’s toolkit. Not only does it provide us with the closest understanding we can have of Googlebot’s behavior and perception of our domain properties (in terms of indexability, site usability, and more), but it also allows us to assess the search KPIs that we work so rigorously to improve. GSC is free, secure, easy to implement, and it’s home to the purest form of your search performance KPI data. Sounds perfect, right?

However, the lack of capability for analyzing those KPIs at larger scales means we can often miss crucial points that indicate our pages’ true performance. Being limited to 1,000 rows of data per request, combined with restricted filtering, makes data refinement and growth discovery tedious (or close to impossible).

SEOs love Google Search Console — it has the perfect data — but sadly, it’s not the perfect tool for interpreting that data.

FYI: there’s an API

In order to start getting as much out of GSC as possible, one option is to use the Search Analytics API, which raises the limit to 25,000 rows per pull. The wonderful Aleyda Solis built an actionable Google Data Studio report using the API that’s very easy to set up and configure to your needs.
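If you’d rather script the pulls yourself, here’s a minimal sketch of what that can look like in Python (assuming you’ve already created OAuth credentials with access to the property; the site URL, dates, and function name are placeholders, not something from the original post):

```python
# A minimal sketch of a paginated Search Analytics pull, assuming `creds`
# holds authorized OAuth credentials for the GSC property.
from googleapiclient.discovery import build

def fetch_search_analytics(creds, site_url, start_date, end_date):
    service = build("searchconsole", "v1", credentials=creds)
    rows, start_row = [], 0
    while True:
        response = service.searchanalytics().query(
            siteUrl=site_url,
            body={
                "startDate": start_date,        # e.g. "2020-08-01"
                "endDate": end_date,            # e.g. "2020-08-31"
                "dimensions": ["query", "page"],
                "rowLimit": 25000,              # the API maximum per request
                "startRow": start_row,          # paginate past 25,000 rows
            },
        ).execute()
        batch = response.get("rows", [])
        rows.extend(batch)
        if len(batch) < 25000:                  # last page reached
            break
        start_row += 25000
    return rows  # each row carries keys, clicks, impressions, ctr, position
```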

You can also use something out of the box. In this post, the examples use Ryte Search Success because it makes it much easier, faster, and more efficient to work with that kind of data at scale.

We use Search Success for multiple projects on a daily basis, whether we’re assisting a client with a specific topic or we’re carrying out optimizations for our own domains. So, naturally, we come across many patterns that give a clearer indication of what’s taking place on the SERPs.

However you use GSC search performance data, you can turn it into a masterpiece that ensures you get the most out of your search performance metrics! To help you get started with that, I’ll demonstrate some advanced and, frankly, exciting patterns that I’ve come across often while analyzing search performance data.

So, without further ado, let’s get to it.

Core Updates got you down?

When we analyze core updates, it always looks the same. Below you can see one of the clearest examples of a core update. On May 6, 2020, there is a dramatic fall in impressions and clicks, but what is really important to focus on is the steep drop in the number of ranking keywords.

The number of ranking keywords is an important KPI, because it helps you determine if a site is steadily increasing its reach and content relevancy. Additionally, you can relate it to search volumes and trends over time.
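If you’re working from raw exports, a rough way to track this KPI over time is to count the distinct queries that received impressions each week. This is just an illustrative sketch; the file name and column names (date, query, impressions) are assumptions about your export format:

```python
import pandas as pd

# Hypothetical export with one row per date/query combination.
df = pd.read_csv("gsc_export.csv", parse_dates=["date"])

weekly_ranking_keywords = (
    df[df["impressions"] > 0]
    .groupby(pd.Grouper(key="date", freq="W"))["query"]
    .nunique()
)
print(weekly_ranking_keywords)  # a sudden drop often lines up with a core update
```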

Within this project, we found hundreds of cases that look exactly like the examples below: lucrative terms were climbing up pages two and three (as Google assessed their ranking relevance) before finally making it to the top 10 to be tested.

There is a corresponding uplift in impressions, yet the click-through rate for this important keyword remained at a measly 0.2%. Out of 125K searches, the page only received 273 clicks. That’s clearly not enough for this domain to stay in the top 10, so during the Core Update rollout, Google demoted these significant underperformers.

The next example is very similar, yet the keyword spends longer being tested on page one because it has a lower number of impressions. Google likely aims to gather statistically significant results, so the fewer impressions a keyword has, the longer the tests need to run. As you can see, 41 clicks out of 69K impressions shows that hardly any searchers were clicking through to the site via this commercial keyword, and thus it fell back to pages two and three.
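To illustrate why low-impression keywords get tested for longer, here’s a back-of-the-envelope sketch (my own illustration, not something from Ryte or Google) of how many impressions it takes to tell an observed CTR apart from an expected one, using a simple normal approximation:

```python
from scipy.stats import norm

def impressions_needed(expected_ctr, observed_ctr, alpha=0.05, power=0.8):
    """Rough sample size needed to distinguish two CTRs (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    p = (expected_ctr + observed_ctr) / 2
    return (z_alpha + z_power) ** 2 * p * (1 - p) / (expected_ctr - observed_ctr) ** 2

# A CTR far below expectation is detectable quickly; a borderline one is not.
print(round(impressions_needed(0.02, 0.005)))  # ~430 impressions
print(round(impressions_needed(0.02, 0.018)))  # ~36,600 impressions
```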

This is a typical Core Update pattern that we’ve witnessed hundreds of times. It shows us that Google is clearly looking for these patterns, too, in order to find what might be irrelevant for its users, and what can kiss page one goodbye after an update.

Aim to pass those “Top 10 Tests” with flying colors

We can never know for sure when Google will roll out a Core Update, nor can we ever be fully confident of what results in a demotion. However, we should always try to rapidly detect these telltale signs and react before a Core Update has even been thought of.

Make sure you have a process in place that deals with discovering subpar CTRs, and leverage tactics like snippet copy testing and Rich Results or Featured Snippet generation, which will aim to exceed Google’s CTR expectations and secure your top 10 positions.
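One simple way to build such a process is to benchmark every query’s CTR against what you’d roughly expect for its average position. The sketch below is purely illustrative: the expected-CTR curve and thresholds are placeholder values (not Ryte’s or Google’s), so swap in your own benchmarks:

```python
import pandas as pd

# Placeholder expected-CTR-by-position curve; replace with your own benchmarks.
EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.02}

def flag_ctr_underperformers(df, min_impressions=1000):
    """Return page-one queries whose CTR sits well below expectation for their position."""
    df = df[(df["impressions"] >= min_impressions) & (df["position"] <= 10.5)].copy()
    df["expected_ctr"] = df["position"].round().clip(1, 10).astype(int).map(EXPECTED_CTR)
    return df[df["ctr"] < 0.5 * df["expected_ctr"]]

# Assumed columns from a GSC export: query, clicks, impressions, ctr, position.
```

Anything this kind of check flags is a candidate for snippet copy testing or structured data work before Google runs its own test.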

Of course, we also witness these classic “Top 10 Tests” outside of Google’s Core Updates!

This next example is from our own beloved en.ryte.com subdomain, which aims to drive leads to our services and is home to our vast online marketing wiki and magazine, so it naturally earns traffic for many informational-intent queries.

Here is the ranking performance for the keyword "bing," which is a typical navigational query with tons of impressions (that’s quite a few Google users searching for Bing!). We can clearly spot the top 10 tests where the light blue spikes show a corresponding uplift in impressions.

While that looks like a juicy number of impressions to lure over to our site, in reality nobody is clicking through to us, because searchers want to navigate to bing.com and not to our informational wiki article. This is a clear case of split searcher intent, where Google may surface documents with varying intents to try to cater to searchers outside of its main assumption. Of course, the CTR of 0% proves that this page has no value for those searchers, and we were demoted.

Interestingly enough, this position loss cost us a heck of a lot of impressions. This caused a huge drop in "visibility" and therefore made it look like we had been dramatically hit by the January Core Update. Upon closer inspection, we found that we had just lost this and similar navigational queries like "gmail," which made the overall KPI drop seem worse than it was. Due to the lack of impact this has on our engaged clicks, these are dropped rankings that we certainly won’t lose sleep over.

Aiming to rank high for these high-search-volume terms whose intent you’re unable to cater to is only useful for optimizing for "visibility indexes". Ask yourself if it’s worth your precious time to focus on them, because they’re not going to bring valuable clicks to your pages.

Don’t waste time chasing high volume queries that won’t benefit your business goals

In my SEO career, I’ve sometimes gone down the wrong path of spending time optimizing for juicy-looking keywords with oodles of search volume. More often than not, these rankings yielded little value in terms of traffic quality simply because I wasn’t assessing the searcher intent properly.

These days, before investing my time, I try to better interpret which of those terms will bring my business value. Will the keyword bring me any clicks? Will those clickers remain on my website to achieve something significant (i.e. is there a relevant goal in mind?), or am I chasing these rankings for the sake of a vanity metric? Always evaluate what impact this high ranking will bring your business, and adjust your strategies accordingly.

The next example is for the term “SERP”, which is highly informational and likely only carried out to learn what the acronym stands for. For such a query, we wouldn’t expect an overwhelming number of clicks, yet we attempted to utilize better snippet copy in order to turn answer intent into research intent, and therefore drive more visits.

However, it didn’t exactly work out. We got pre-qualified on page two, then tested on page one (you can see the corresponding uplift in impressions below), but we failed to meet the expectations with a poor CTR of 0.1%, and were dropped back down.

Again, we weren’t sobbing into our fine Bavarian beers about the loss. There are plenty more worthwhile, traffic-driving topics out there that deserve our attention.

Always be on the lookout for those CTR underperformers

Something that we were glad to act on was the "meta keywords" wiki article. Before we have a moment of silence for the fact that "meta keywords" is still heavily searched for, notice how we dramatically jumped up from page four to page one at the very left side of the chart. We were unaware of this keyword’s movement, so its plain snippet was seldom clicked and we fell back down.

After some months, the page one ranking resurfaced, and this time we took action after coming across it in our CTR Underperformer Report. The snippet was reworked to target the searcher’s intent, and the page was enhanced in parallel to give a better direct answer to the main focus questions.

Not only did this have a positive impact on our CTR, but we even gained the Featured Snippet. It’s super important to identify these top 10 tests in time, so that you can still act and do something to remain prominent in the top 10.

We identified this and many other undernourished queries using the CTR Underperformer Report. It maps out all the CTRs from queries, and reports on where we would have expected a higher number of clicks for that keyword’s intent, impressions, and position (much like Google’s models likely aim to do, too). We use this report extensively to identify cases where we deserve more traffic, and in order to ensure we stay in the top 10 or get pushed up even higher.

Quantify the importance of Featured Snippets

Speaking of Featured Snippets, the diagram below demonstrates what it can look like when you’re lucky enough to be in the placement vs. when you don’t have it. The keyword “reset iphone” from a client’s tech blog had a CTR of 20% with the Featured Snippet, while without the Featured Snippet it was at a sad 3%. It can be game changing to win a relevant Featured Snippet due to the major impact it can have on your incoming traffic.

Featured Snippets can sometimes have a bad reputation, due to the risk that they could drive a lower CTR than a standard result, especially when triggered for queries with higher informational intent. Try to remember that Featured Snippets can display your brand more prominently, and can be a great sign of trust to the average searcher. Even if users were satisfied on the SERP, the Featured Snippet can therefore provide worthwhile secondary benefits such as better brand awareness and potentially higher conversions via that trust factor.

Want to find some quick Featured Snippet opportunities for which you need only repurpose existing content? Filter your GSC queries using question and comparison modifiers to find those Featured-Snippet-worthy keywords you can go out and steal quickly.
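As a starting point, a quick filter over an exported query list can surface those modifiers. A sketch (the file name and modifier list are my own placeholders; extend them for your market and language):

```python
import pandas as pd

MODIFIER_PATTERN = r"\b(how|what|why|when|where|which|who|can|does|vs|versus|best)\b"

df = pd.read_csv("gsc_queries.csv")  # assumed columns: query, clicks, impressions, ctr, position
candidates = df[
    df["query"].str.contains(MODIFIER_PATTERN, case=False, regex=True)
    & df["position"].between(1, 10)
]
print(candidates.sort_values("impressions", ascending=False).head(20))
```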

You’re top 10 material — now what?

Another one of our keywords, “Web Architecture”, is a great example of why it’s so crucial to keep discovering new topics as well as underperforming content. We found this specific term was struggling a while ago during ongoing topic research and set out to apply enhancements to push its ranking up to the top 10. You can see the telltale cases of Google figuring out the purpose, quality, and relevance of this freshly renewed document while it climbs up to page one.

We fared well in each of our tests. For example, at positions 10-8, we managed to get a 5.7% CTR, which is good for such a spot.

After passing that test, we got moved up higher to positions 4-7, where we struck a successful 13% CTR. A couple of weeks later we reached an average position of 3.2 with a tasty CTR of 18.7%, and after some time we even bagged the Featured Snippet.

This took just three months from identifying the opportunity to climbing the ranks and getting the Featured Snippet.

Of course, it’s not just about CTR, it’s about the long click: Google’s main metric that’s indicative of a site providing the best possible result for its search users. How many long clicks are there in comparison to medium and short clicks, and how often are you the last click, demonstrating that search intent was successfully fulfilled? We checked in Google Analytics, and out of 30K impressions, people spent an average of five minutes on this page, so it’s a great example of a positive long click.
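GSC won’t show you dwell time, but you can approximate this check by joining landing-page performance from GSC with engagement data from your analytics tool. A rough sketch, assuming you’ve exported both datasets to CSV with a shared page column (the file names, columns, and thresholds are placeholders):

```python
import pandas as pd

gsc = pd.read_csv("gsc_pages.csv")        # page, clicks, impressions, ctr, position
ga = pd.read_csv("ga_landing_pages.csv")  # page, avg_time_on_page (seconds)

merged = gsc.merge(ga, on="page", how="inner")

# Pages that win plenty of clicks but lose visitors almost immediately are
# the opposite of a "long click" and deserve a closer look.
short_click_suspects = merged[(merged["clicks"] > 100) & (merged["avg_time_on_page"] < 30)]
print(short_click_suspects.sort_values("clicks", ascending=False).head(20))
```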

Optimize answers, not just pages

It’s not about pages, it’s about individual pieces of information and their corresponding answers that set out to satisfy queries.

In the next diagram, you can actually see Google adjusting the keywords that specific pages are ranking for. This URL ranks for a whopping 1,548 keywords, but pulling a couple of the significant ones for a detailed individual analysis helps us track Google’s decision making a lot better.
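To run that kind of per-URL analysis yourself, the Search Analytics API lets you filter by the page dimension. A sketch, reusing the `service` object from the API example earlier (the URL and dates are placeholders):

```python
def queries_for_page(service, site_url, page_url, start_date, end_date):
    """Pull the individual queries a single URL is ranking for."""
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "page",
                "operator": "equals",
                "expression": page_url,
            }]
        }],
        "rowLimit": 25000,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    return response.get("rows", [])
```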

When comparing these two keywords, you can see that Google promoted the stronger performer on page one, and then pushed the weaker one down. The strong difference in CTR was caused by the fact that the snippet was only really geared towards a portion of its ranking keywords, which led to Google adjusting the rankings. It’s not always about a snippet being bad, but about other snippets being better, and whether the query might deserve a better piece of information in place of the snippet.

Remember, website quality and technical SEO are still critical

One thing we always like to stress is that you shouldn’t judge your data too quickly, because there could be underlying technical errors dragging you down (such as botched migrations, mixed ranking signals, blocked assets, and so on).

The case below illustrates perfectly why it’s so much better to analyze this data with a tool like Ryte, because with GSC you will see only a small portion of what’s taking place, and with a very top-level view. You want to be able to compare individual pages that are ranking for your keyword to reveal what’s actually at the root of the problem.

You’re probably quite shocked by this dramatic drop, because before the dip this was a high-performing keyword with a great CTR and a long reign in position one.

This keyword was in position one with a CTR of 90%, but then the domain added a noindex directive to the page (facepalm). So, Google replaced that number one ranking URL with the site’s subdomain homepage, which was already ranking number two. However, the subdomain homepage wasn’t the ideal destination for the query, as searchers couldn’t find the correct information right away.

But it got even worse, because they then decided to 301 redirect that subdomain homepage to the top-level domain homepage, so Google was forced to rank a generic page that clearly didn’t have the correct information to satisfy that specific query. As you can see, they then fell completely from that top position, as the page was irrelevant and Google couldn’t retrieve the correct page for the job.

Something similar happened in this next example. The result in position one for a very juicy term with a fantastic CTR suddenly returned a 404, so Google started to rank a different page from that same domain instead, which was associated with a slightly similar but inexact topic. This again wasn’t the correct fit for the query, so the overall performance declined.

This is why it’s so important to look not just at the overall data, but to dig deeper — especially if there are multiple pages ranking for a keyword — so that you can see exactly what’s happening.
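A lightweight way to dig deeper is to routinely check the technical state of the URLs that carry your most valuable rankings. The sketch below (my own illustration, with a placeholder URL) flags non-200 responses, unexpected redirects, and noindex directives:

```python
import requests

def check_ranking_url(url):
    """Very rough health check for a ranking URL: status, final URL, noindex."""
    resp = requests.get(url, timeout=10, allow_redirects=True)
    body = resp.text.lower()
    noindex = (
        "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
        or ('name="robots"' in body and "noindex" in body)
    )
    return {
        "url": url,
        "status": resp.status_code,
        "final_url": resp.url,   # reveals redirects like the cases above
        "noindex": noindex,
    }

print(check_ranking_url("https://example.com/important-ranking-page"))
```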

Got spam?

The final point is not exactly a pattern to consider, but more a wise lesson to wrap up everything I’ve explored in this post.

At scale, Google is testing pages in the top 10 results in order to find the best placement based on that performance. With this in mind, why can’t we ask people to go to the SERPs, click on our results, and reap the tasty benefits of that improved position? Or better yet, why don’t we automate this continually for all of our top-10-tested queries?

Of course, this approach is heavily spammy, against guidelines, and something against which Google can easily safeguard. You don’t have to test this either, because Marcus (being the inquisitive SEO he is!) already did.

One of his own domains on job advertisements ranks for the focus keyword of “job adverts”, and as you can imagine, this is a highly competitive term that requires a lot of effort to score. It was ranking at position 6.6 and had a decent CTR, but he wanted to optimize it even further and climb those SERPs to position one.

He artificially cranked up his CTR using clever methods that ended up earning a "very credible" 36% CTR in position nine. Soon after, in position 10, he had a CTR of 56.6%, at which point Google caught wind of the spammy manipulation and punted him down the SERPs. Lesson learned.

Of course, this was an experiment to understand at which point Google would detect spammy behavior. I wouldn’t encourage carrying out such tactics for personal gain, because it’s in the best interests of your website’s health and status to focus on the quality of your clicks. Even if this test had kept working and rankings had improved, over time your visitors may not resonate with your content, and Google might recall that the lower position was initially in place for a reason. It’s an ongoing cycle.

I encourage you to reach your results organically. Leverage the power of snippet optimization in parallel with ongoing domain and content improvements to increase not only the quantity and quality of your clicks, but also the very experiences on your website that make an impact on your long-term SEO and business growth.

Conclusion

To summarize, don’t forget that GSC search performance data gives you the best insight into your website’s true performance. Rank trackers are ideal for competitor research and SERP snapshots, but their position data is only one absolute ranking from one set of variables like location and device. Use your own GSC data for intrinsic pattern analyses, diagnostics, and growth discovery.

But with great data comes great responsibility. Make sure you’re finding and understanding the patterns you need to be aware of, such as struggling top 10 tests, underperforming snippets, technical faults, and anything else that deprives you of the success you work so hard to achieve.


Ready for more?

You'll uncover even more SEO goodness from Izzi and our other MozCon speakers in the MozCon 2020 video bundle. At this year's special low price of $129, this is invaluable content you can access again and again throughout the year to inspire and ignite your SEO strategy:

  • 21 full-length videos from some of the brightest minds in digital marketing
  • Instant downloads and streaming to your computer, tablet, or mobile device
  • Downloadable slide decks for presentations

Get my MozCon 2020 video bundle


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Friday, September 4, 2020


How to Create 10x Content — Best of Whiteboard Friday

Posted by randfish

Have you ever tried to create 10x content? It's not easy, is it? Knowing how and where to start can often be the biggest obstacle you'll face. In this oldie-but-goodie episode of Whiteboard Friday, Rand Fishkin talks about how you can develop your own 10x content to help your brand stand out.

How to Create 10x Content Whiteboard

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're chatting about how to create 10x content.

Now, for those of you who might need a refresher or who haven't seen previous Whiteboard Fridays where we've talked about 10x content, this is the idea that, because of content saturation and content overload (there's just so much in our streams that standing out is so hard), we can't just say, "Hey, I want to be as good as the top 10 people in the search results for this particular keyword term or phrase." We have to say, "How can I create something 10 times better than what any of these folks are currently doing?" That's how we stand out.

What is 10x content?

10x content is content that is 10 times better than the highest ranking result for a given keyword(s). Here are 119 Examples of 10x Content.

Criteria for 10x content:

  • It has to have great UI and UX on any device.
  • That content is generally some combination of high quality, trustworthy, useful, interesting, and remarkable. It doesn't have to be all of those, but it needs some combination of them.
  • It's got to be considerably different in scope and in detail from other works that are serving the same visitor or user intent.
  • It's got to create an emotional response. I want to feel awe. I want to feel surprise. I want to feel joy, anticipation, or admiration for that piece of content in order for it to be considered 10x.
  • It has to solve a problem or answer a question by providing comprehensive, accurate, exceptional information or resources.
  • It's got to deliver content in a unique, remarkable, typically unexpectedly pleasurable style or medium.

If you hit all of these things, you probably have yourself a piece of 10x content. It's just very hard to do. That's what we're talking about today. What's a process by which we can get to checking off all these boxes?

Step 1 - Gain deep insight.

So let's start here. First off, when you have an issue, let's say you've got a piece of content that you know you want to create, a topic you know you're going to address. We can talk about how to get to that topic in a future Whiteboard Friday, and we've had some in the past certainly around keyword research and choosing topics and that sort of thing. But if I know the topic, I need to first gain a deep, deep insight into the core of why people are interested in this subject.

So for example, let's do something simple, something we're all familiar with.

"I wonder what the most highly-rated new movies are out there." Essentially this is, "Well, okay, how do we get into this person's brain and try and answer the core of their question?" They're essentially asking, "Okay, how do I figure out . . . help me decide what to watch."

That could have a bunch of angles to it. It could be about user ratings, or it could be maybe about awards. Maybe it's about popularity. What are the most popular movies out there? It could be meta ratings. Maybe this person wants to see an aggregated list of all the data out there. It could be editorial or critic ratings. There's a bunch of angles there.

Step 2 - We have to get unique.

We know that uniqueness, being exceptional, not the same as everyone else but different from everyone else out there, is really important.

So as we brainstorm different ways that we might address the core of this user's problem, we might say, "All right, movie ratings, could we do a round-up?"

Well, that already exists at places like Metacritic. They sort of aggregate everything and then put it all together and tell us what critics versus audiences think across many, many different websites. So that's already been done.

Awards versus popularity, again, it's already been done in a number of places that do comparisons of here's the ones that had the highest box office versus here's the ones that won certain types of awards. Well, okay, so that's not particularly unique.

What about critics versus audiences? Again, this is done basically on every different website. Everyone shows me user ratings versus critic ratings.

What about by availability? Well, there's actually a bunch of sites that do this now where they show you this is on Netflix, this is on Hulu, this is on Amazon, this you can watch on Comcast or on demand, this you can see on YouTube. All right, so that's not unique either.

What about which ratings can I trust? Hang on a tick. That might not exist yet. That's a great, unique insight into this problem, because one of the challenges that I have when I want to say, "What should I decide to watch," is who should I trust and who should I believe. Can I go to Fandango or Amazon or Metacritic or Netflix? Whose ratings are actually trustworthy?

Well, now we've got something unique, and now we've got that core insight, that unique angle on it.

Step 3 - Uncover powerful methods to provide an answer.

Now we want to uncover a powerful, hard-to-replicate, high-quality method to provide an answer to that question.

In this case, that could be, "Well, you know what? We can do a statistical analysis." We get a sample set big enough, enough films, maybe 150 movies or so from the last year. We take a look at the ratings that each service provides, and we see if we can find patterns, patterns like: Who's high and low? Do some have different genre preferences? Which one is trustworthy? Does one correlate with awards and critics? Which ones are outliers? All of these are actually trying to get to the "which one can I trust" question.
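To make that concrete, here's a toy sketch of the sort of analysis being described. The numbers are made up for illustration; a real version would use the full sample of films pulled from each ratings site:

```python
import pandas as pd

ratings = pd.DataFrame({
    "film":       ["A", "B", "C", "D", "E"],
    "fandango":   [4.5, 4.0, 4.5, 5.0, 4.0],   # hypothetical star ratings
    "imdb_users": [3.4, 2.9, 3.8, 4.1, 2.5],
    "metacritic": [3.1, 2.4, 3.9, 4.3, 2.2],
})

scores = ratings.drop(columns="film")
print(scores.corr())   # which services move together?
print(scores.mean())   # a suspiciously high average hints at an inflated outlier
```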

I think we can answer that if we do this statistical analysis. It's a pain in the butt.

We have to go to all these sites. We have to collect all the data. We have to put it into a statistical model. We then have to run our model. We have to make sure that we have a big enough sample set. We've got to see what our correlations are. We have to check for outliers and distributions and all this kind of stuff. But once we do that and once we show our methodology, now all we have to do is...

Step 4 - Find a unique, powerful, exceptional way to present this content.

In fact, FiveThirtyEight.com did exactly this.

They took this statistical analysis. They looked at all of these different sites, Fandango and IMDB users versus critics versus Metacritic versus Rotten Tomatoes and a number of other sites. Then they had this one graph that shows essentially the star rating averages across I think it was 146 different films, which was the sample set that they determined was accurate enough.

Now they've created this piece of 10x content, and they've answered this unique take on the question, "Which rating service can I trust?" The answer is, "Don't trust Fandango," basically. But you can see more in there. Metacritic is pretty good. A couple of the other ones are decent.

Step 5 - Expect that you're going to do this 5 to 10 times before you have one hit.

The only way to get good at this, the only way to get good is experimentation and practice. You do this over and over again, and you start to develop a sixth sense for how you can uncover that unique element, how you can present it in a unique fashion, and how you can make it sing on the Web.

All right, everyone, I look forward to hearing your thoughts on 10x content. If you have any examples you'd like to share with us, please feel free to do so in the comments. No problem linking out. That's just fine. We will see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Interested in building your own content strategy? Don't have a lot of time to spare? We collaborated with HubSpot Academy on their free Content Strategy course — check out the video to build a strong foundation of knowledge and equip yourself with actionable tools to get started!

Check out the free Content Strategy course!



Wednesday, September 2, 2020

How Your Brand Can Earn Media Coverage on NBC News, USA Today, CNBC, and More

Posted by amandamilligan

As you might imagine, it’s not easy to get your brand name mentioned in top media outlets.

But if you put in the work to engage in content marketing + digital PR, the benefits are massive:

  • High-quality backlinks to your site
  • A significant boost in brand awareness
  • An increase in your brand’s authority
  • Improved relationships with writers who loved your content

I’ll explain how you can earn this type of coverage and its corresponding benefits for your brand.

Step 1: Create newsworthy content

You probably have an instinctual sense of what qualifies as news, but some of the key newsworthy elements are timeliness, proximity, and significance.

Timeliness is tough. Hard news is usually covered by media outlets automatically anyway. However, there’s a way to create news — and it’s through data journalism.

By doing your own research, conducting your own studies, running your own surveys, and performing your own analyses, you’re effectively creating news by offering brand new stories.

For example, for our client Porch, we used data from the U.S. Census Bureau’s American FactFinder, Yelp, and Zillow to determine which cities are the best for young families.

This project is inherently location-based, which adds the proximity element as well. But even if your content isn’t location-based, explore whether you can take your data and localize it so that you cover multiple geographic areas. (Then, you can pitch local news in addition to national news!)

Significance is also an excellent element to keep in mind, especially during the ideation stage. It basically means: How many people are impacted by this news, and to what degree?

This is especially important if you’re aiming for national news publications, as they tend to have a wide audience. In this case, there are plenty of young families across the country, and CNBC saw that it could connect with this demographic.

When you combine all of these newsworthy elements, you can increase your chances of getting respectable news publications interested.

Step 2: Design and package the content for clarity

You need to present your data in a clear and compelling way. Easier said than done, though, right?

Here are common design pitfalls to watch out for:

  • Over-designing. Instead, experiment with simpler styles that match your branding, and take more creative liberties with headers and where the data naturally lends itself to imagery.
  • Over-branding. If you have your logo on all of the images, it might be a bit too much branding for some publishers. However, if you have a really authoritative brand, it can add authority to the content, too. Test both versions to see what works best for you.
  • Over-labeling. Include all of the text and labels you need to make things clear, but don’t have too much repetition. The more there is to read, the more time it’ll take to understand what’s happening on the graph.

Finally, don’t be afraid to add the most interesting insights or context as callouts to the images. That way people can identify the most pertinent information immediately while still having more to explore if they want the full story.

Take, for example, one of the graphics we created for BestVPN for a project that got coverage on The Motley Fool, USA Today, Nasdaq and more. We don’t assume people will read text in an article to get relevant information, so we put it right on the image.

Here’s another example of a project image we created for Influence.co.

We included the callout at the bottom of the image and featured it in our pitch emails (more on that later) because we knew it was a compelling data point. Lo and behold, it became the headline for the Bustle coverage we secured.

Note: It’s entirely possible a news publication won’t run your images. That’s totally fine! Creating the images is still worth it, because they help everyone grasp your project more quickly (including writers), and when well done, they convey a sense of authority.

When you have all of your data visualized, we recommend creating a write-up that goes along with it. One objective of the article is to explain why you executed the project in the first place. What were you trying to discover? How is this information useful to your audience?

The other objective is to provide more color to the data. What are the implications of your findings? What could it mean to readers, and how can they apply the new knowledge to their lives, if applicable?

Include quotes from experts when appropriate, as this will be useful to publication writers as well.

Step 3: Write personalized pitches

I could create an entirely separate article about how to properly pitch top-tier publishers. But for our purposes, I do want to address two of the most important elements:

Treat writers like people

“You did something PR people never do — but should. Looked at my Twitter feed and made it personal. Nicely done!” — CNBC writer

Building real connections with people takes time and effort. If you’re going to pitch a writer, you need to do the following:

  • Read their past work and fully understand their beat
  • Understand how your work matches their beat
  • Check out their social profiles to learn more about them as people

Some still swear by the templated approach. While it might work sometimes, we’ve found that because writers’ inboxes continue to be inundated with pitches, reaching out to them in a more personalized manner can not only increase our chances of getting emails opened, but also of getting a genuinely appreciative response.

So, start your email with a personal connection. Reach out about something you have in common or something about them you admire. It will go a long way!

Include a list of the most relevant insights

“Wow these findings are super interesting and surprising. I will for sure include if I go ahead with this piece.” — The Wall Street Journal writer

Never assume a writer is going to click through to your project and read the entire thing before deciding if they want to cover it. In the pitch email, you need to spell out exactly what you think is the most interesting part about the project for their readers.

The key word being their readers. Sure, overall you probably have a few main takeaways in mind that are compelling, but there’s often nuance in which specific takeaways will be the most relevant to particular publishers.

We’ve seen this so many times, and it’s reflected in the resulting headlines. For example, for a project we created called Generational Knowledge Gaps, we surveyed nearly 1,000 people about their proficiency in hands-on tasks. Look at the news headlines on REALTOR Magazine and ZDNet, respectively:

While REALTOR Magazine went with a headline that captures the general spirit of the project, ZDNet’s is more honed in on what matters for their readers: the tech side of things. If we’d pitched to them the same way we’d pitched to REALTOR, they might not have covered the project at all.

So, after a personalized opening, include bullet points that spell out the key data points for their particular audience, wrap up the email with a question of whether they’re interested, and send it off.

Conclusion

It’s not an easy process to get the attention of top writers. You have to take time to develop high-quality content — it takes us at least a month — and then strategically promote it, which can also take at least another month to get as much coverage as you can. However, this investment can have major payoff, as you’ll be earning unparalleled brand awareness and high-value backlinks.


To help us serve you better, please consider taking the 2020 Moz Blog Reader Survey, which asks about who you are, what challenges you face, and what you'd like to see more of on the Moz Blog.

Take the Survey


Tuesday, September 1, 2020

Page Authority 2.0 Is Coming This Month: What’s Changing and Why

Posted by rjonesx.

Hey folks,

I'm Russ Jones, Adjunct Search Scientist with Moz, and I'm proud to announce that this month we’ll be releasing a terrific update to our metric, Page Authority (PA).

Although Page Authority hasn't attracted the same attention as its sibling metric Domain Authority, PA has always correlated with SERPs much better than DA, serving as a strong predictor of ranking. While PA has always fluctuated with changes in the link graph, we’re introducing a whole new method of deriving the score.

Learn More About Page Authority 2.0

What's changing

Long gone are the days of just counting backlinks a couple of ways and hoping they correlate well with SERPs. As Moz tends to do, we’re pioneering a new manner of calculating Page Authority to produce superior results. Here are some of the ways we’re changing things up:

The training set

In the past, we used SERPs alone to train the Page Authority model. While this method was simple and direct, it left much to be desired. Our first step in addressing the new Page Authority is redefining the training set altogether.

Instead of modeling Page Authority based on one page's ability to outrank another page, we now train on the cumulative value of a page, derived from a number of metrics including search traffic and CPC. While this is a bit of an oversimplification of what’s going on, this methodology allows us to better compare pages that don't appear in the SERPs together.

For example, imagine Page A is on one topic and Page B is on another topic. Historically, our model wouldn't get to compare these two pages because they never appear on the same SERP. This new methodology provides an abstract value to each page, such that they can be compared with any other page by the machine-learned model.

The re-training set

One of the biggest problems in building metrics is not what the models see, but what the models don't see.

Think about this for a minute: what types of URLs don't show up in the SERPs that the model will use to produce Page Authority? Well, for starters, there won't be many images or other binary files. There also won't be penalized pages. In order to address this problem, we now use a common solution of running the model, identifying outliers (high PA URLs which do not in fact have any search value), and then feeding those URLs back into the training set. We can then re-run the model such that it learns from its own mistakes. This can be repeated as many times as is necessary to reduce the number of outliers.
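To make the idea easier to picture, here is a purely conceptual sketch of that feedback loop. It is not Moz's actual model or feature set; the regressor, features, and thresholds are stand-ins for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def retrain_with_outliers(X, y, search_value, rounds=3):
    """Fit, relabel high-scoring pages that have no real search value, then refit."""
    model = GradientBoostingRegressor()
    for _ in range(rounds):
        model.fit(X, y)
        scores = model.predict(X)
        # "Outliers": pages the model rates highly despite zero search value.
        outliers = (scores > np.percentile(scores, 90)) & (search_value == 0)
        if not outliers.any():
            break
        y = np.where(outliers, 0.0, y)  # feed the mistakes back into training
    return model
```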

Ripping off the Band-Aid

Moz is always cognizant of the impact the changes to our metrics might have on our customers. There is a trade-off between continuity and accuracy. With Page Authority, we’re focusing on accuracy. This may cause larger-than-normal shifts in your Page Authority, so it’s more important than ever to think about Page Authority with respect to your competitors, not as a standalone number.

What actions should we take?

Communicate with stakeholders, team members, and clients about the update

Just like our upgrade to Domain Authority, some users will likely be surprised by changes in their PA. Make sure they understand that the new PA will be more accurate (and more useful!) and that the most important measurement is relative to their competitors. We won't release a Page Authority which isn't better than the previous version, so even if the results are disappointing, understand that you now have better insight than ever before into the performance of your pages in the SERPs.

Use PA as a relative metric, like DA

Page Authority is intrinsically comparative. A PA of 70 means nothing unless you know the PA of your competitors. It could be high enough to allow you to rank for every keyword you like, or it could be terribly low because your competitors are Wikipedia and Quora. The first thing you should do when analyzing the Page Authority of any URL is to set it in the proper context of its competitors' URLs.

Expect PA to keep pace with Google

Just as we announced with Domain Authority, we’re not going to launch the new PA and just let it go. Our intent is to continue to improve upon the model as we discover new and better features and models. This volatility will mostly affect pages with unnatural link profiles, but we would rather stay up-to-date with Google's algorithms even if it means a bit of a bumpy ride.

When is it launching?

We’ll be rolling out the new Page Authority on September 30, 2020. Between now and then, we encourage you to explore our resources to help you prepare and facilitate conversations with clients and team members. Following the launch of the new PA, I’ll also be hosting a webinar on October 15 to discuss how to leverage the metric. We’re so excited about the new and improved PA and hope you’re looking forward to this update too.

If you have any questions, please comment below, reach out to me on Twitter @rjonesx, or email me at russ@moz.com.

To get prepared and learn more about the upcoming change to Page Authority, be sure to dig into our helpful resources:

Visit the PA Resource Center
