Posted by MiriamEllis
[Estimated read time: 6 minutes]
Why proper onboarding matters
Imagine getting three months into a Local SEO contract before realizing that your client's storefront is really his cousin's garage, from which he runs two other "legit" businesses he never mentioned. Or that he neglected to mention the reviews he bought last year. Worse yet, he doesn't even know that buying reviews is a bad thing.
The story is equally bad if you're diligently working to build quality, unique content around a Chicago client's business in Wicker Park, only to realize their address (and customer base) is actually in neighboring Avondale.
What you don't know will hurt you. And your clients.
A hallmark of the professional Local SEO department or agency is its dedication to getting off on the right foot with a new client by getting their data beautifully documented for the whole team from the start. At various times throughout the life of the contract, your teammates and staff from complementary departments will need to access different aspects of a client's core NAP, known challenges, company history, and goals.
Having this information clearly recorded in shareable media is the key to both organization and collaboration, as well as the best preventative measure against costly data-oriented mistakes. Clear, consistent data plays a vital role in Local SEO. Information must not only be gathered, but carefully verified with the client.
This article will offer you a working Client Discovery Questionnaire, an Initial Discovery Phone Call Script, and a useful Location Data Spreadsheet that will be easy for any customer to fill out and for you to then use to get those listings up to date. You're about to take your client discovery process to awesome new heights!
Why agencies donât always get onboarding right
Lack of a clearly delineated, step-by-step onboarding process increases the potential for human error. Your agency's Local SEO manager may be battling allergies on a Monday and simply forget to ask your new client if they have more than one website, if they've ever purchased reviews, or if they have direct access to their Google My Business listings. Or a teammate could gather that information and forget to share it before jumping to a new agency.
The outcomes of disorganized onboarding can range from minor hassles to disastrous mistakes.
Minor hassles would include having to make a number of follow-up phone calls to fill in holes in a spreadsheet that could have been taken care of in a single outreach. It's inconvenient for all teammates when they have to scramble for missing data that should have been available at the outset of the project.
Disastrous mistakes can stem from a failure to fully gauge the details and scope of a clientâs holdings. Suddenly, a medium-sized project can take on gigantic proportions when the agency learns that the client actually has 10 mini-sites with duplicate content on them, or 10 duplicate GMB listings, or a series of call tracking numbers around the web.
It's extremely disheartening to discover a mountain of work you didn't realize would need to be undertaken; the agency can end up putting in extra uncompensated time or returning to the client to renegotiate the contract. It also leads to client dissatisfaction.
Setting correct client expectations depends entirely on being able to properly gauge the scope of a project, so that you can provide an appropriate timeline, quote, and projected benchmarks. In Local, that comes down to documenting core business information, identifying past and present problems, and understanding which client goals are achievable. With the right tools and effective communication, your agency can make a strong start to what you want to become a very successful project.
Professional client discovery made simple
There's a lot you want to learn about a new client up front, but asking (and answering) all those questions right away can be grueling. Information fatigue can also set in, making your client give shorter and shorter answers once they feel they've spent enough time already. Meanwhile, your brain reaches max capacity and you can't put all that valuable information to use because you can't remember it.
To prevent such a disaster, we recommend dividing your Local SEO discovery process into a questionnaire to nail down the basics, a follow-up phone call to help you feel out some trickier issues, and a CSV to gather the location data. And we've created templates to get you started…
Client Discovery Questionnaire
Use our Local SEO Client Discovery Questionnaire to understand your client's history, current organization, and what other consultants they might also be working with. We've annotated each question in the Google Doc template to help you understand what you can learn and potential pitfalls to look out for.
If you want to make collecting and preserving your clients' answers extra easy, use Google Forms to turn that questionnaire into a form like this:
You can even personalize the graphic, questions, and workflow to suit your brand.
Client Discovery Phone Script
Once you've received your client's completed questionnaire and have had time to process the responses and do any necessary due diligence (like using our Check Listings tool to see how aggregators currently display their information), it's time to follow up on the phone. Use our annotated Local SEO Client Discovery Phone Script to get you started.
No form necessary this time, because you'll be asking the client verbally. Be sure to pay attention to the client's tone of voice as they answer, and refer to the notes under each question to see what you might be in for.
Location Data CSV
Sometimes the hardest part of Local SEO is getting all the location info letter-perfect. Make that easier by having the client input all those details into your copy of the Location Data Spreadsheet.
Then use the File menu to download that document as a CSV.
You'll want to proof this before uploading it to any data aggregators. If you're working with Moz Local, the next step is an easy upload of your CSV. If you're working with other services, you can always customize your data collection spreadsheet to meet their standards.
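If you'd rather not proof hundreds of rows entirely by eye, a short script can catch the most common problem (missing or empty NAP fields) before upload. This is a hypothetical sketch: the column names are assumptions, so rename them to match your own spreadsheet and your aggregator's import template.

```python
import csv
import io

# Hypothetical required columns; adjust to match your own Location Data
# Spreadsheet and the target aggregator's import template.
REQUIRED = ["Business Name", "Address", "City", "State", "Zip", "Phone"]

def find_csv_problems(csv_text):
    """Return (row_number, column) pairs for missing or empty required
    fields, so they can be fixed before upload."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        for col in REQUIRED:
            if not (row.get(col) or "").strip():
                problems.append((i, col))
    return problems

sample = (
    "Business Name,Address,City,State,Zip,Phone\n"
    "Acme Plumbing,123 Main St,Chicago,IL,60622,\n"
    "Acme Plumbing II,456 Oak Ave,Chicago,IL,60647,773-555-0199\n"
)
print(find_csv_problems(sample))  # → [(2, 'Phone')]
```

A pass like this won't catch a wrong suite number, but it guarantees no location ships with a blank phone or address field.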
Keep up to date on any business moves or changes in hours by designing a data update form like this one from SEER and periodically reminding your client contact to use it.
Why mutual signals of commitment really matter
There are two sides to every successful client project: one half belongs to the agency and the other to the company it serves. The attention to detail your agency displays via clean, user-friendly forms and good phone sessions will signal your professionalism and commitment to doing quality work. At the same time, the willingness of the client to take the necessary time to fill out these documents and have these conversations signals their commitment to receiving value from their investment.
It's not unusual for a new client to express some initial surprise when they realize how many questions you're asking them to answer. Past experience may even have led them to expect half-hearted, sloppy work from other SEO agencies. But what you want to see is a willingness on their part to share everything they can about their company with you so that you can do your best work.
Anecdotally, I've fully refunded the down payments of a few incoming clients who claimed they couldn't take the time to fill out my forms, because I detected in their unwillingness a lack of genuine commitment to success. These companies have, fortunately, been the exception rather than the rule for me, and likely will be for your agency, too.
It's my hope that, with the right forms and a commitment to having important conversations with incoming clients at the outset, the work you undertake will make heroes of your Local team in the eyes of agency and client alike!
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
Posted by MatthewBarby
[Estimated read time: 10 minutes]
The traditional ways of measuring the success or failure of content are broken. We can't just rely on metrics like the number of pageviews/visits or bounce rate to determine whether what we're creating has performed well.
"The primary thing we look for with news is impact, not traffic," says Jonah Peretti, Founder of BuzzFeed. One of the ways that BuzzFeed has mastered this is with the development of their proprietary analytics platform, POUND.
POUND enables BuzzFeed to predict the potential reach of a story based on its content, understand how effective specific promotions are based on the downstream sharing and traffic, and power A/B tests, to name just a few examples.
Just because you've managed to get more eyeballs onto your content doesn't mean it's actually achieved anything. If that were the case, then I'd just take a few hundred dollars and buy some paid StumbleUpon traffic every time.
Yeah, I'd generate traffic, but it's highly unlikely to result in me achieving some of my actual business goals. Not only that, but I'd have no real indication of whether my content was satisfying the needs of my visitors.
The scary thing is that the majority of content marketing campaigns are measured this way. I hear statements like "it's too difficult to measure the performance of individual pieces of content" far too often. The reality is that it's pretty easy to measure content marketing campaigns on a micro level; a lot of the time, people simply don't want to do it.
Engagement over entrances
Within any commercial content marketing campaign that you're running, measurement should be business goal-centric. By that I mean that you should be determining the overall success of your campaign based on the achievement of core business goals.
If your primary business goal is to generate 300 leads each month from the content that you're publishing, you'll need to have a reporting mechanism in place to track this information.
On a more micro level, you'll want to be tracking and using engagement metrics to enable you to influence the achievement of your business goals. In my opinion, all content campaigns should have robust, engagement-driven reporting behind them.
Total Time Reading (TTR)
One metric that Medium uses, which I think adds a lot more value than pageviews, is "Total Time Reading (TTR)." This is a cumulative metric that quantifies the total number of minutes spent reading a piece of content. For example, if 10 visitors to one of my blog articles each spent 1 minute reading it, the total time reading would be 10 minutes.
"We measure every user interaction with every post. Most of this is done by periodically recording scroll positions. We pipe this data into our data warehouse, where offline processing aggregates the time spent reading (or our best guess of it): we infer when a reader started reading, when they paused, and when they stopped altogether. The methodology allows us to correct for periods of inactivity (such as having a post open in a different tab, walking the dog, or checking your phone)." (source)
The reason this is more powerful than pageviews alone is that it accounts for how engaged your readers are, giving a more accurate picture of how much the content is actually being consumed. You could have an article with 1,000 pageviews that has a greater TTR than one with 10,000 pageviews.
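To make the idea concrete, here's a rough sketch of how TTR might be aggregated from scroll-event timestamps, following the idle-correction idea Medium describes above. The event data and the 30-second cutoff are invented for illustration; Medium's actual pipeline is proprietary.

```python
# Sketch of Total Time Reading (TTR) aggregation from scroll-event
# timestamps. The sessions and the cutoff below are made-up assumptions.
IDLE_CUTOFF = 30  # seconds without a scroll event before we assume inactivity

def total_time_reading(sessions):
    """sessions: one list of scroll-event timestamps (in seconds) per reader.
    Time between consecutive events counts as reading only when the gap is
    under IDLE_CUTOFF, discarding pauses like a tab left open."""
    total = 0
    for events in sessions:
        for a, b in zip(events, events[1:]):
            if b - a < IDLE_CUTOFF:
                total += b - a
    return total

# Three readers: steady reading, a long idle gap (tab left open), a short visit
sessions = [[0, 10, 20, 30], [0, 5, 300, 310], [0, 15]]
print(total_time_reading(sessions) / 60)  # → 1.0 (minutes)
```

Note how the second reader's 295-second gap is discarded: without the cutoff, one abandoned tab would inflate TTR far more than ten attentive readers.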
Scroll depth & time on page
A related and simpler metric to acquire is the average time on page (available within Google Analytics). The average time spent on your webpage gives a general indication of how long your visitors are staying on the page. Combining this with "scroll depth" (i.e. how far down the page a visitor has scrolled) will help paint a better picture of how "engaged" your visitors are. You'll be able to answer the following:
"How much of this article are my visitors actually reading?"
"Is the length of my content putting visitors off?"
"Are my readers remaining on the page for a long time?"
Having the answers to these questions is really important when it comes to determining which types of content are resonating more with your visitors.
BuzzFeed's "Social Lift" metric is a particularly good way of understanding the "virality" of your content (you can see this when you publish a post to BuzzFeed). BuzzFeed calculates "Social Lift" as follows:
(Social Views / Seed Views) + 1
Social Views: Traffic that's come from outside BuzzFeed; for example, referral traffic, email, social media, etc.
Seed Views: Owned traffic that's come from within the BuzzFeed platform; e.g. from appearing in BuzzFeed's newsfeed.
This is a great metric to use when you're a platform publisher, as it helps separate out traffic that's coming from outside of the properties that you own, thus determining its "viral potential."
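The formula is trivial to compute once you've segmented your traffic into "social" and "seed" buckets; the view counts below are invented for illustration:

```python
# BuzzFeed's published Social Lift formula; the traffic numbers are made up.
def social_lift(social_views, seed_views):
    """(Social Views / Seed Views) + 1; scores well above 1 indicate the
    content is spreading beyond the publisher's own properties."""
    return social_views / seed_views + 1

# e.g. 30,000 views from referral/email/social vs. 10,000 from the platform
print(social_lift(30_000, 10_000))  # → 4.0
```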
There are ways to use this kind of approach within your own content marketing campaigns (without being a huge publisher platform) to help get a better idea of its “viral potential.”
One simple calculation is to divide the social shares a piece has earned by its pageviews:
This simple stat can be used to determine which content is likely to perform better on social media, and as a result it will enable you to prioritize certain content over others for paid social promotion. The higher the score, the higher its “viral potential.” This is exactly what BuzzFeed does to understand which pieces of content they should put more weight behind from a very early stage.
You can even take this to the next level by replacing pageviews with TTR to get a more representative view of engagement to sharing behavior.
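A shares-to-pageviews ratio along these lines can be sketched as follows; the counts are made up, and the exact ratio is an assumption based on the description above. Swapping TTR in for pageviews gives the variant just mentioned.

```python
# A sketch of a simple "viral potential" score: social shares relative to
# pageviews. Share and pageview counts below are invented for illustration.
def virality_score(shares, pageviews):
    # Shares per reader reached; a higher ratio suggests stronger
    # "viral potential" regardless of raw traffic volume.
    return shares / pageviews

posts = {"post_a": (1_200, 10_000), "post_b": (300, 50_000)}
ranked = sorted(posts, key=lambda p: virality_score(*posts[p]), reverse=True)
print(ranked)  # → ['post_a', 'post_b']
```

Here post_b has five times the traffic, but post_a earns twenty times the shares per view, so it's the better candidate for paid social promotion.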
The bottom line
Alongside predicting "viral potential" and "TTR," you'll want to know how your content is performing against your bottom line. For most businesses, that's the main reason why they're creating content.
This isn't always easy, and a lot of people get this wrong by looking for a silver bullet that doesn't exist. Every sales process is different, but let's look at the typical process that we have at HubSpot for our free CRM product:
- Visitor comes through to our blog content from organic search.
- Visitor clicks on a CTA within the blog post.
- Visitor downloads a gated offer in exchange for their email address and other data.
- Prospect goes into a nurturing workflow.
- Prospect goes through to a BOFU landing page and signs up to the CRM.
- Registered user activates and invites in members of their team.
This is a simple process, but it can still sometimes be tricky to get a dollar value on each piece of content we produce. To do this, you've got to understand what the value of a visitor is, and this is done by working backwards through the process.
The first question to answer is, "what's the lifetime value (LTV) of an activated user?" In other words, "how much will this customer spend in their lifetime with us?"
For e-commerce businesses, you should be able to get this information by analyzing historical sales data to understand the average order value that someone makes and multiply that by the average number of orders an individual will make with you in their lifetime.
For the purposes of this example, let's say each of our activated CRM users has an LTV of $100. It's now time to work backwards from that figure (all the below figures are theoretical)…
Question 1: "What's the conversion rate of new CRM activations from our email workflow(s)?"
Answer 1: "5%"
Question 2: "How many people download our gated offers after coming through to the blog content?"
Answer 2: "3%"
Knowing this would help me to start putting a monetary value against each visitor to the blog content, as well as each lead (someone that downloads a gated offer).
Let's say we generate 500,000 visitors to our blog content each month. Using the average conversion rates from above, we'd convert 15,000 of those into email leads. From there we'd nurture 750 of them into activated CRM users. Multiply that by the LTV of a CRM user ($100) and we've got $75,000 (again, these figures are all just made up).
Using this final figure of $75,000, we could work backwards to understand the value of a single visitor to our blog content:
Single Visitor Value: $0.15
We can do the same for email leads using the following calculation:
Individual Lead Value: $5.00
Knowing these figures will help you be able to determine the bottom-line value of each of your pieces of content, as well as calculating a rough return on investment (ROI) figure.
Let's say one of the blog posts we're creating to encourage CRM signups generated 500 new email leads; we'd see a $2,500 return. We could then go and evaluate the cost of producing that blog post (let's say it takes 6 hours at $100 per hour, so $600) to calculate an ROI figure of 316%.
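The whole working-backwards calculation can be scripted so it recomputes whenever your conversion rates change. The figures below are theoretical, chosen to be consistent with the conversion rates and the roughly 316% ROI quoted in the example.

```python
# Working backwards through the funnel to value a visitor, a lead, and a
# single post. All figures are theoretical assumptions for illustration.
visitors = 500_000      # monthly blog visitors
lead_rate = 0.03        # visitor -> gated-offer download (email lead)
activation_rate = 0.05  # email lead -> activated CRM user
ltv = 100               # assumed lifetime value of an activated user, in $

leads = visitors * lead_rate                  # 15,000 email leads
activations = leads * activation_rate         # 750 activated users
total_value = activations * ltv               # $75,000 of lifetime value

visitor_value = total_value / visitors        # value of one blog visitor
lead_value = total_value / leads              # value of one email lead

# One post: 500 new leads, 6 hours of production at $100/hour
revenue = 500 * lead_value
cost = 6 * 100
roi = (revenue - cost) / cost * 100           # ROI as a percentage

print(visitor_value, lead_value, int(roi))    # → 0.15 5.0 316
```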
ROI in its simplest form is calculated as: ((Return - Investment) / Investment) × 100.
You don't necessarily need to follow these figures religiously when it comes to content performance on a broader level, especially when you consider that some content just doesn't have the primary goal of lead generation. That said, for the content that does have this goal, it makes sense to pay attention to this.
The link between engagement and ROI
So far I've talked about two very different forms of measurement:
- Engagement
- Return on investment
What you'll want to avoid is thinking about these as isolated variables. Return on investment metrics (for example, lead conversion rate) are heavily influenced by engagement metrics, such as TTR.
The key is to understand exactly which engagement metrics have the greatest impact on your ROI. This way you can use engagement metrics to form the basis of your optimization tests in order to make the biggest impact on your bottom line.
Let's take the following scenario that I faced within my own blog as an example…
The average length of the content across my website is around 5,000 words. Some of my content far surpasses 10,000 words in length, taking an estimated hour to read (my recent SEO tips guide is a perfect example of this). As a result, the bounce rate on my content is quite high, especially among mobile visitors.
Keeping people engaged with a 10,000-word article when they haven't got a lot of time on their hands is a challenge. Needless to say, it makes it even more difficult to ensure my CTAs (aimed at newsletter subscriptions) stand out.
Through some testing, I found that adding my CTAs closer to the top of my content helped improve conversion rates. The main issue I needed to tackle was how to keep people on the page for longer, even when they're in a hurry.
To do this, I worked on the following solution: give visitors a concise summary of the blog post that takes under 30 seconds to read. Once they've read this, show them a CTA offering something they can read in more detail in their own time.
All this involved was the addition of a “Summary” button at the top of my blog post that, when clicked, hides the content and displays a short summary with a custom CTA.
This has not only helped to reduce the number of people bouncing from my long-form content, but it also increased the number of subscribers generated from my content whilst improving user experience at the same time (which is pretty rare).
I figured that more of you might find this a useful feature on your own websites, so I've packaged it up as a free WordPress plugin that you can download here.
The above is just one example of a way to impact the ROI of your content by improving engagement. My advice is to get a robust measurement process in place so that you're able to identify opportunities first, and then run experiments to take advantage of them.
More than anything, I'd recommend that you take a step back and re-evaluate the way that you're measuring your content campaigns to see if what you're doing really aligns with the fundamental goals of your business. You can invest in endless tools that help you measure things better, but if the core metrics you're looking at are wrong, then it's all for nothing.
Posted by KelseyLibert
[Estimated read time: 9 minutes]
In a recent Whiteboard Friday about 10x content, Rand said to expect it to take 5 to 10 attempts before you'll create a piece of content that's a hit.
If you've been at the content marketing game for a while, you probably agree with Rand. Seasoned content marketers know you're likely to see a percentage of content flops before you achieve a big win. Then, as you gain a sense for why some content fails and other content succeeds, you integrate what you've learned into your process. Gradually, you start hitting fewer singles and more home runs.
At Fractl, we regularly look back at campaign performance and refine our production and promotion processes based on what the data tells us. Are publishers rejecting a certain content format? Is there a connection between Domain Authority (DA) and the industry vertical we targeted? Do certain topics attract the most social shares? These are the types of questions we ask, and then we use the related data to create better content.
We recently dug through three years of content marketing campaigns and asked: Which factors increase content's ability to earn links? In this post, I'll show you what we found.
We analyzed campaign data from a sample of 345 Fractl campaigns that launched between 2013 and 2016. To compare linking performance, we set benchmarks based on the industry averages for links per campaign from our content marketing agency survey: high success (more than 100 placements), moderate success (20–100 placements), and low success (fewer than 20 placements).
We looked at the relationship between the number of placements and the content's topic, visual assets, and formatting. "Placement" refers to any time a publisher wrote about the campaign. In terms of links, a placement could mean a dofollow link, co-citation, nofollow link, or text attribution.
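For reference, the benchmark tiers used throughout this analysis can be expressed as a simple classifier; the handling of the boundaries at exactly 20 and 100 placements is an assumption.

```python
def success_tier(placements):
    """Bucket a campaign by the benchmark tiers used in the analysis:
    more than 100 placements = high, 20-100 = moderate, under 20 = low."""
    if placements > 100:
        return "high"
    if placements >= 20:
        return "moderate"
    return "low"

print([success_tier(n) for n in [900, 55, 7]])  # → ['high', 'moderate', 'low']
```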
Which content elements can increase link earning potential?
The chart below highlights the largest differences between our high- and low-success campaigns.
We found the following characteristics were present in content that earned the most links:
- Highly emotional
- Broad appeal
- Pop culture-themed
- Ranking or comparison (contrast)
The data confirmed our assumptions about why some content is better than others at attracting links, as all four of the above characteristics were present in some of our biggest hits. As an example, our Women in Video Games campaign checked all four of those boxes.
It paired a highly emotional topic (body image issues) with a strong visual contrast. It also included a pop culture theme that appealed to a niche audience (video game fans) while also resonating with a broader audience. To date, this campaign has amassed nearly 900 placements, including links from high-authority sites such as BuzzFeed, Huffington Post, MTV, and Vice Motherboard.
Read on for more takeaways on how to increase your contentâs link-earning potential.
Content that evokes a strong emotional response is extremely effective at earning links.
Emotional impact was the greatest differentiator between our most successful campaigns and all other campaigns, with those that secured over 100 placements being 3 times more likely to feature a strong emotional hook than less successful campaigns.
Example: The Truth About Hotel Hygiene
Our Truth About Hotel Hygiene campaign earned more than 700 placements thanks to a high "ick" factor, which gave it emotional resonance paired with universal interest (most people use hotels). We've also found that including an element of surprise helps strengthen content's emotional impact. This study definitely surprised readers with a shocking finding: the nicest hotels had the most germs.
Example: Perceptions of Perfection
In our Perceptions of Perfection campaign, audiences were surprised to see how drastically designers altered a woman's photo to fit their country's standards of beauty. The surprise factor added an additional layer of emotionality to the already emotional topic of women's body image issues, which helped this campaign earn nearly 600 placements.
Choose content topics with wide appeal to increase potential for high-quality links.
So we've proven emotionally provocative content can attract a lot of links, but what about high-quality links? We found a correlation between high average domain authority and content topics with mass appeal. Broad topics appeal to a greater range of publishers, thus increasing the number of relevant high-authority sites your content can be placed on.
Some verticals may have an advantage when it comes to link quality too. Campaigns for our travel, entertainment, and retail clients tend to have a high average domain authority per placement since these verticals naturally lend themselves to content ideas with mass appeal.
Some examples of campaign topics with a DA-per-placement average above 55:
- Cities That Hate Tourists
- Most Googled Brands in Each State
- Data Breaches by State and Sector
- Airline Hygiene Exposed
- Deadliest Driving States
Pro tip: A site's influence matters more than the type of link you'll acquire from it. Don't fear nofollow links; for two of our best-performing campaigns of all time, the initial links were nofollows from high-authority sites. A nofollow link on a high-authority site can lead to syndication on hundreds of other sites that will give dofollow links.
Use rankings and comparisons to fuel online discussion.
Contrast was a recurring theme in our high-performing campaigns, with strong contrasts achieved through visual or numerical comparisons. More than half of our highest-performing campaigns centered around a ranking or comparison, compared to just a third of our lowest-performing campaigns. Pitting two or more things against one another fuels discussion around the content, which can lead to more placements.
Example: Comparing Siri, Cortana, and Google Now
This hands-on study had participants give a command to their virtual assistant and rate their satisfaction with the response. Comparing the three most widely used smartphone assistants attracted the attention of techies (especially Apple fans) as well as the broader public, since most people have one of these assistants on their smartphone.
Example: Airport Rankings
The Airport Rankings campaign looked at which airports offered the best and worst experiences, based on data including the volume of canceled flights, delays, and lost luggage. Local publishers loved this campaign; many focused on the story around how their regional airport fared in the rankings. Since most travelers have lived through at least one terrible airport experience, the content was extremely relatable too.
Pro tip: Side-by-side visualizations pack a high-contrast visual punch that helps drive linking and social shares. This type of contrasting imagery is extremely powerful visually since it's easy to process. It helps evoke an immediate response that quickly engages viewers.
Incorporate a geographic angle to earn international or regional links.
Did you notice a majority of the broad-topic campaigns with a high domain authority listed above also had a geographic angle? In addition to broad appeal, geography-focused topics help attract interest from international and regional publishers, thus securing additional links.
Example: Most Popular Concert Drugs
The Most Popular Concert Drugs campaign, one of our most successful to date with nearly 1,900 placements, examined the connection between music festivals and drug mentions on Instagram. Because it covered festivals worldwide, many global sites featured the story, including publishers in the U.K., France, Italy, Australia, and Brazil. Had we limited our selection to U.S. festivals, it's doubtful this campaign would have attracted as much attention.
Example: Most Instagrammed Locations
As with the example above, pairing a geographic angle with Instagram data proved to be a winning formula for the Most Instagrammed Locations campaign. We featured the most Instagrammed places in both the U.S. and Canada, which helped the campaign secure additional coverage from Canadian publishers.
Pro tip: To extend a campaign's reach to the offline world, consider pitching relevant TV and radio stations with geo-themed content that offers new data; traditional news outlets seem to love these stories. We've had multiple geo-focused campaigns featured on national and local news stations simply because they saw the story getting covered by online media.
Include pop culture references to pique audience interest.
Our campaigns with more than 100 pickups were nearly twice as likely to incorporate a pop culture theme as our campaigns with fewer than 20 pickups. Content that ties in pop culture is primed for targeting a niche of dedicated fans who will want to share and discuss it like crazy, while it simultaneously resonates on a surface level for many people. Geek-culture themes, such as comic books and sci-fi movies, tend to attract a lot of attention thanks to rabid fan bases.
New School vs. Old School
Trending pop culture phenomena are best for making your content feel relevant to the current zeitgeist (think: a Walking Dead theme that appeals to fans of the show while also playing up the current cultural obsession with zombies).
On the other hand, old school pop culture references are effective for creating strong feelings of nostalgia (think: everything in BuzzFeed's '90s category). If your audience falls within a certain age bracket, consider what would be nostalgic to them. What did they grow up with, and how can you weave this into your content?
Example: Fictional Power Sources
Fictional Power Sources looked at which iconic weapons, vehicles, and superpowers featured in movies were the most powerful. Rather than focusing on one movie, we featured a handful of popular films (including Star Wars, Back to the Future, and The Matrix), which broadened the campaign's appeal to movie fans.
Example: Sitcom Cribs
Sitcom Cribs looked at the affordability of the living spaces on various TV shows: could the "Friends" characters really afford their trendy Manhattan digs? By featuring a lot of older TV shows, this campaign had a high nostalgia factor for audiences familiar with classic '90s sitcoms. Including newer TV shows kept the campaign relevant to younger audiences too.
Pro tip: To increase the appeal, feature a range of pop culture icons as opposed to just one, such as a list of movies, musicians, or TV shows. This adds to the range of pop culture fans who will connect with the content, rather than limiting the potential audience to one fan base.
Earning high-quality links is just one benefit of creating content that incorporates high emotionality, contrast, broad appeal, or pop culture references. We've also found these characteristics present in our campaigns that perform well in terms of social sharing.
In particular, emotional resonance is a key ingredient, not only for earning links but also for getting your content widely shared. Our campaigns that received more than 20,000 social shares were 8 times more likely to include a strong emotional hook than campaigns that received fewer than 1,000 shares.
How can you ensure these elements are incorporated into your content, thus increasing its linking and sharing potential? In a previous post, I walk through exactly how we create campaigns like the examples I shared above. Check it out for a step-by-step guide to creating engaging, highly shareable content.
What observations have you made about your most successful content? I’d love to hear your thoughts on which content elements attract the most links and shares.
Posted by Dr-Pete
[Estimated read time: 7 minutes]
Four years ago, just weeks before the first Penguin update, the MozCast project started collecting its first real data. Detecting and interpreting Google algorithm updates has been both a far more difficult and far more rewarding challenge than I ever expected, and I’ve learned a lot along the way, but there’s one nagging question that I’ve never been able to answer with any satisfaction. Can we use past Google data to predict future updates?
Before any analysis, I've always been a fan of using my eyes. What does Google algorithm "weather" look like over a long time period? Here's a full year of MozCast temperatures:
Most of us know by now that Google isn’t a quiet machine that hums along until the occasional named update happens a few times a year. The algorithm is changing constantly and, even if it wasn’t, the web is changing constantly around it. Finding the signal in the noise is hard enough, but what does any peak or valley in this graph tell you about when the next peak might arrive? Very little, at first glance.
It’s worse than that, though
Even before we dive into the data, there's a fundamental problem with trying to predict future algorithm updates. To understand it, let's look at a different problem: predicting real-world weather. Predicting the weather in the real world is incredibly difficult and takes a massive amount of data to do well, but we know that weather follows a set of natural laws. Ultimately, no matter how complex the problem is, there is a chain of causality between today's weather and tomorrow's, and a pattern in the chaos.
The Google algorithm is built by people, driven by human motivations and politics, and is only constrained by the rules of what’s technologically possible. Granted, Google won’t replace the entire SERP with a picture of a cheese sandwich tomorrow, but they can update the algorithm at any time, for any reason. There are no natural laws that link tomorrow’s algorithm to today’s. History can tell us about Google’s motivations and we can make reasonable predictions about the algorithm’s future, but those future algorithm updates are not necessarily bound to any pattern or schedule.
What do we actually know?
If we trust Google’s public statements, we know that there are a lot of algorithm updates. The fact that only a handful get named is part of why we built MozCast in the first place. Back in 2011, Eric Schmidt testified before Congress, and his written testimony included the following data:
To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.
I've highlighted one phrase: "516 changes." At a time when we believed Google made maybe a dozen updates per year, Schmidt revealed that it was closer to 10X/week. Now, we don't know how Google defines "changes," and many of these changes were undoubtedly small, but it's clear that Google is constantly changing.
Google’s How Search Works page reveals that, in 2012, they made 665 “improvements” or “launches” based on an incredible 118,812 precision evaluations. In August of 2014, Amit Singhal stated on Google+ that they had made “more than 890 improvements to Google Search last year alone.” It’s unclear whether that referred to the preceding 12 months or calendar year 2013.
We don’t have a public number for the past couple of years, but it is incredibly unlikely that the rate of change has slowed. Google is making changes to search on the order of 2X/day.
Of course, anyone who has experience in software development realizes that Google didn’t evenly divide 890 improvements over the year and release one every 9 hours and 51 minutes. That would be impractical for many reasons. It’s very likely that releases are rolled out in chunks and are tied to some kind of internal process or schedule. That process or schedule may be irregular, but humans at Google have to approve, release, and audit every change.
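The back-of-envelope arithmetic behind that "every 9 hours and 51 minutes" figure checks out; a quick sanity check in Python:

```python
# 890 improvements spread perfectly evenly over a 365-day year:
minutes_per_change = 365 * 24 * 60 / 890  # ~590.6 minutes between changes
hours, minutes = divmod(round(minutes_per_change), 60)
print(f"one change every {hours}h {minutes}m")  # matches the figure in the text
```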
In March of 2012, Google released a video of their weekly Search Quality meeting, which, at the time, they said occurred “almost every Thursday”. This video and other statements since reveal a systematic process within Google by which updates are reviewed and approved. It doesn’t take very advanced math to see that there are many more updates per year than there are weekly meetings.
Is there a weekly pattern?
Maybe we can’t predict the exact date of the next update, but is there any regularity to the pattern at all? Admittedly, it’s a bit hard to tell from the graph at the beginning of this post. Analyzing an irregular time series (where both the period between spikes and intensity of those spikes changes) takes some very hairy math, so I decided to start a little simpler.
I started by assuming that a regular pattern was present and looking for a way to remove some of the noise based on that assumption. The simplest analysis that yielded results involved taking a 3-day moving average and calculating the mean squared error (MSE). In other words, for every temperature (each temperature is a single day), take the mean of that day and the day on either side of it (a 3-day window) and square the difference between that day's temperature and the 3-day mean. This exaggerates stand-alone peaks and smooths some of the noisier sequences, resulting in the following graph:
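That 3-day-window calculation is simple to express in code. A minimal Python sketch (the temperature list is invented illustrative data, not actual MozCast readings):

```python
def mse_series(temps):
    """For each interior day, square the difference between that day's
    temperature and the mean of its 3-day window (day before, day, day after).
    Stand-alone spikes are exaggerated; steady noise is smoothed."""
    out = []
    for i in range(1, len(temps) - 1):
        window_mean = (temps[i - 1] + temps[i] + temps[i + 1]) / 3.0
        out.append((temps[i] - window_mean) ** 2)
    return out

# A lone 90-degree spike among quiet 60s dominates the resulting series:
temps = [60, 61, 90, 60, 62, 61]
print(mse_series(temps))
```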
This post was inspired in part by February 2016, which showed an unusually high signal-to-noise ratio. So, let’s zoom in on just the last 90 days of the graph:
See peaks 2–6 (starting on January 21)? The space between them, respectively, is 6 days, 7 days, 7 days, and 8 days. Then there's a 2-week gap to the next, smaller spike (March 3) and another 8 days to the one after that. While this is hardly proof of a clear regular pattern, it's hard to believe the weekly pacing is entirely a coincidence, given what we know about the algorithm update approval process.
This pattern is less clear in other months, and I’m not suggesting that a weekly update cycle is the whole picture. We know Google also does large data refreshes (including Penguin) and sometimes rolls updates out over multiple days (or even weeks). There’s a similar, although noisier, pattern in April 2015 (the first part of the 12-month MSE graph). It’s also interesting to note the activity levels around Christmas 2015:
Despite all of our conspiracy theories, there really did seem to be a 2015 Christmas lull in Google activity, lasting approximately 4 weeks, followed by a fairly large spike that may reflect some catch-up updates. Engineers go on vacation, too. Notice that that first January spike is followed by a roughly 2-week gap and then two 1-week gaps.
The most frequent day of the week for these spikes seems to be Wednesday, which is odd, if we believe there’s some connection to Google’s Thursday meetings. It’s possible that these approximately weekly cycles are related to naturally occurring mid-week search patterns, although we’d generally expect less pronounced peaks if change were related to something like mid-week traffic spikes or news volume.
Did we win Google yet?
I've written at length about why I think algorithm updates still matter, but, tactically speaking, I don't believe we should try to plan our efforts around weekly updates. Many updates are very small, and even some that are large on average may not affect our employers or clients.
I view the Google weather as a bit like the unemployment rate. It’s interesting to know whether that rate is, say, 5% or 7%, but ultimately what matters to you is whether or not you have a job. Low or high unemployment is a useful economic indicator and may help you decide whether to risk finding a new job, but it doesn’t determine your fate. Likewise, measuring the temperature of the algorithm can teach us something about the system as a whole, but the temperature on any given day doesn’t decide your success or failure.
Ultimately, instead of trying to predict when an algorithm update will happen, we should focus on the motivations behind those updates and what they signal about Google’s intent. We don’t know exactly when the hammer will fall, but we can get out of the way in plenty of time if we’re paying attention.
Posted by larry.kim
[Estimated read time: 13 minutes]
Does organic click-through rate (CTR) data impact page rankings? This has been a huge topic of debate for years within the search industry.
Some people think the influence of CTR on rankings is nothing more than a persistent myth. Like the one where humans and dinosaurs lived at the same time, you know, like in that reality series "The Flintstones"?
Some other people are convinced that Google must look at end user data. Because how in the world would Google know which pages to rank without it?
Google (OK, at least one Google engineer who spoke at SMX) seems to indicate the latter is indeed the case:
— Rand Fishkin (@randfish) March 17, 2016
I also highly encourage you to check out Rand Fishkin’s Whiteboard Friday discussing clicks and click-through rate. In short, the key point is this: If a page is ranking in position 3, but gets a higher than expected CTR, Google may decide to rank that page higher because tons of people are obviously interested in that result.
Seems kind of obvious, right?
And if true, we ought to be able to measure it! In this post, I'm going to try to show that RankBrain may just be the missing link between CTR and rankings.
Untangling meaning from Google RankBrain confusion
Let’s be honest: Suddenly, everyone is claiming to be a RankBrain expert. RankBrain-shaming is quickly becoming an industry epidemic.
Please ask yourself: Do most of these people (especially those who aren't employed by Google, but even some of the most helpful and well-intentioned spokespeople who actually work for Google) thoroughly know what they're talking about? I've seen a lot of confusing and conflicting statements floating around.
Here’s the wildest one. At SMX West, Google’s Paul Haahr said Google doesn’t really understand what RankBrain is doing.
If this really smart guy who works at Google doesn’t know what RankBrain does, how in the heck does some random self-proclaimed SEO guru definitively know all the secrets of RankBrain? They must be one of those SEOs who “knew” RankBrain was coming, even before Google announced it publicly on October 26, but just didn’t want to spoil the surprise.
Now let’s go to two of the most public Google figures: Gary Illyes and John Mueller.
Illyes seemed to shoot down the idea that RankBrain could become the most important ranking factor (something which I strongly believe is inevitable). Google’s Greg Corrado publicly stated that RankBrain is “the third-most important signal contributing to the result of a search query.”
Illyes also said on Twitter that: "Rankbrain lets us understand queries better. No affect on crawling nor indexing or replace anything in ranking." But then later clarified: "...it does change ranking."
I don’t disagree at all. It hasn’t. (Not yet, anyway.)
Links still matter. Content still matters. Hundreds of other signals still matter.
It’s just that RankBrain had to displace something as a ranking signal. Whatever used to be Google’s third most important signal is no longer the third most important signal. RankBrain couldn’t be the third most important signal before it existed!
Now let’s go to Mueller. He believes machine learning will gain more prominence in search results, noting Bing and Yandex do a lot of this already. He noted that machine learning needs to be tested over time, but there are a lot of interesting cases where Google’s algorithm needs a system to react to searches it hasn’t seen before.
Bottom line: RankBrain, like other new Google changes, is starting out as a relatively small part of the Google equation today. RankBrain won’t replace other signals any time soon (think of it simply like this: Google is adding a new ingredient to your favorite dish to make it even tastier). But if RankBrain delivers great metrics and keeps users happy, then surely it will be given more weight and expanded in the future.
If you want to nerd out on RankBrain, neural networks, semantic theory, word vectors, and patents, then you should read:
- Getting Your Head Around Google's RankBrain by David Harry
- RankBrain: What Do We Know About Google's Machine-Learning System? by Virginia Nussey (concentrate on Marcus Tober's SMX presentation recap in the section "Machine Learning Ranks Relevance")
- How Google Works: A Google Ranking Engineer’s Story by Kristi Kellogg
To be clear: my goal with this post isn’t to discuss tweets from Googlers, patents, research, or speculative theories.
Rather, I'm just going to ignore EVERYBODY and look at actual click data.
Searching for RankBrain
Rand conducted one of the most popular tests of the influence of CTR on Google’s search results. He asked people to do a specific search and click on the link to his blog (which was in 7th position). This impacted the rankings for a short period of time, moving the post up to 1st position.
But these are all transient changes. The changes don't persist.
It's like how you can't increase your AdWords Quality Scores simply by clicking on your own ads a few times. This is the oldest trick in the book, and it doesn't work.
The results of another experiment appeared on Search Engine Land last August and concluded that CTR isn't a ranking factor. But this test had a pretty significant flaw: it relied on bots artificially inflating CTRs and search volume (and this test was only for a single two-word keyword: "negative SEO"). So essentially, this test was the organic search equivalent of click fraud. Google AdWords has been fighting click fraud for 15 years, and they can easily apply these learnings to organic search. What did I just say about old tricks?
Before we look at the data, a final "disclaimer": I don't know if what this data reveals is definitively RankBrain or another CTR-based ranking signal that's part of the core Google algorithm. Regardless, there's something here, and I can most certainly say with confidence that CTR is impacting rank. For simplicity, I'll be referring to this as RankBrain.
Google has said that RankBrain is being tested on long-tail terms, which makes sense. Google wants to start testing its machine-learning system with searches they have little to no data on, and 99.9 percent of pages have zero external links pointing to them.
So how is Google able to tell which pages should rank in these cases? By examining engagement and relevance. CTR is one of the best indicators of both.
Head terms, as far as we know, aren’t being exposed to RankBrain right now. So by observing the differences between the organic search CTRs of long-tail terms versus head terms, we should be able to spot the difference:
We used 1,000 keywords in the same keyword niche (to isolate external factors like Google shopping and other SERP features that can alter CTR characteristics). The keywords are all from my own website: Wordstream.com.
I compared CTR versus rank for 1–2 word search terms, and did the same thing for long-tail keywords (4–10 word search terms).
Notice how the long-tail terms get much higher average CTRs for a given position. For example, in this data set, the head term in position 1 got an average CTR of 17.5 percent, whereas the long-tail term in position 1 had a remarkably high CTR, at an average of 33 percent.
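Splitting a keyword set into head and long-tail buckets and averaging CTR per position is straightforward. A minimal sketch, where the `avg_ctr_by_position` helper and the sample rows are hypothetical stand-ins for a real Search Console export:

```python
from collections import defaultdict

def avg_ctr_by_position(rows, min_words=4):
    """Bucket (keyword, position, ctr) rows into head terms (1-2 words)
    and long-tail terms (min_words+ words), then average CTR by position.
    Keywords of in-between length fall into neither bucket."""
    buckets = {"head": defaultdict(list), "long_tail": defaultdict(list)}
    for keyword, position, ctr in rows:
        n = len(keyword.split())
        if n <= 2:
            buckets["head"][position].append(ctr)
        elif n >= min_words:
            buckets["long_tail"][position].append(ctr)
    return {seg: {pos: sum(v) / len(v) for pos, v in d.items()}
            for seg, d in buckets.items()}

# Illustrative rows only; real data would come from your own analytics.
rows = [
    ("ppc", 1, 0.18),
    ("adwords", 1, 0.17),
    ("how to lower adwords cpc", 1, 0.33),
    ("email subjects that get opened", 1, 0.52),
]
print(avg_ctr_by_position(rows))
```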
You're probably thinking: "Well, that makes sense. You'd expect long-tail terms to have stronger query intent, thus higher CTRs." That's true, actually.
But why is it that long-tail terms with high CTRs are so much more likely to be in top positions versus bottom-of-page organic positions? That's a little weird, right?
OK, let's do an analysis of paid search queries in the same niche. I use organic search to come up with paid search keyword ideas and vice versa, so we're looking at the same keywords in many cases.
Long-tail terms in this same vertical get higher CTRs than head terms. However, the difference between long-tail and head term CTR is very small in positions 1–2, and becomes huge as you go out to lower positions.
So in summary, something unusual is happening:
- In paid search, long-tail and head terms get roughly the same CTR in high ad spots (1–2) and show huge differences in CTR for lower spots (3–7).
- But in organic search, the long-tail and head terms in top spots (1–2) have huge differences in CTR and very little difference as you go down the page.
Why are the same keywords behaving so differently in organic versus paid?
The difference (we think) is that RankBrain is boosting the search rankings of pages that have higher organic click-through rates.
Not convinced yet?
Which came first: the CTR or the ranking?
CTR and ranking are codependent variables. There's obviously a relationship between the two, but which is causing what? In order to get to the bottom of this "chicken versus egg" situation, we're going to have to do a bit more analysis.
The following graph takes the difference between an observed organic search CTR and the expected CTR, to figure out if your page is beating, or being beaten by, the expected average CTR for a given organic position.
By only looking at the extent to which a keyword beats or is beaten by the predicted CTR, you are essentially isolating the natural relationship between CTR and ranking in order to get a better picture of what's going on.
We found, on average, that if you beat the expected CTR, then you're far more likely to rank in more prominent positions. Failing to beat the expected CTR makes it more likely you'll appear in positions 6–10.
So, based on our example of long-tail search terms for this niche, if a page:
- Beats the expected CTR for a given position by 20 percent, it's likely to appear in position 1.
- Beats the expected CTR for a given position by 12 percent, it's likely to appear in position 2.
- Falls below the expected CTR for a given position by 6 percent, it's likely to appear in position 10.
And so on.
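In code, the observed-minus-expected comparison described above boils down to a simple delta. The expected-CTR curve below is an invented illustration, not a published benchmark:

```python
# Expected organic CTR by position (illustrative values only).
EXPECTED_CTR = {1: 0.30, 2: 0.22, 3: 0.16, 4: 0.12, 5: 0.09,
                6: 0.07, 7: 0.05, 8: 0.04, 9: 0.03, 10: 0.02}

def ctr_delta(observed_ctr, position):
    """Percentage points by which a page's observed CTR beats (positive)
    or trails (negative) the expected CTR for its current position."""
    return (observed_ctr - EXPECTED_CTR[position]) * 100

# A page in position 3 pulling a 36% CTR beats expectations by ~20 points,
# the kind of gap the list above associates with position-1 performers.
print(ctr_delta(0.36, 3))
```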
Here’s a greatly simplified rule of thumb:
The more your pages beat the expected organic CTR for a given position, the more likely you are to appear in prominent organic positions.
If your pages fall below the expected organic search CTR, then you’ll find your pages in lower organic positions on the SERP.
Want to move up by one position in Google’s rankings? Increase your CTR by 3 percent. Want to move up another spot? Increase your CTR by another 3 percent.
If you can't beat the expected click-through rate for a given position, you're unlikely to appear in positions 1–5.
Essentially, you can think of all of this as though Google is giving bonus points to pages that have high click-through rates. The fact that it looks punitive is just a natural side effect.
If Google gives “high CTR bonus points” to other websites, then your relative performance will decline. It’s not that you got penalized; it’s just you’re the only one who didn’t get the rewards.
A simple example: The Long-tail Query That Could
Here's one quick example from our 1,000-keyword data set. For the query "email subjects that get opened," this page has a ridiculously high organic CTR of 52.17%, which beats the expected CTR for the top spot in this vertical by over 60%. It also generates insanely great engagement rates, including a time on page of over 24 minutes.
We believe that these two strong engagement metrics send a clear signal to Google that the page matches the query's intent, despite not having an exact keyword match in the content.
What does Google want?
A lot of factors go into ranking. We know links, content, and RankBrain are the top 3 search ranking factors in Google’s algorithm. But there are hundreds of additional signals Google looks at.
So let’s make this simple. Your website is a house.
This is a terrible website. It was built a long time ago and has received no SEO love in a long time (terrible structure, markup, navigation, content, etc). It ranks terribly. Nobody visits it. And those poor souls who do stumble across it wish they never had and quickly leave, wondering why it even exists.
This website is pretty good. It’s designed well. It’s obviously well-maintained. It addresses all the SEO essentials. Everything is optimized. It ranks reasonably well. A good amount of people visit and hang out a while since, hey, it has everything you’d expect in a website nowadays.
Now we get to the ultimate house. It has everything you could want in a website â beautifully designed, great content, and optimized in every way possible. It owns tons of prominent search positions and everyone goes here to visit (the parties are AMAZING) again and again because of the amazing experience â and they’re very likely to tell their friends about it after they leave.
People love this house. Google goes where the people are. So Google rewards it.
This is the website you need to look like to Google.
No fair, right? The big house gets all the advantages!
A bunch of articles say that there's absolutely nothing you can or should do to optimize your site for RankBrain today, or for any future updates. I couldn't disagree more.
If you want to rank better, you need to get more people to YOUR party. This is where CTR comes in.
It appears that Google RankBrain has been “inspired by” AdWords and many other technologies that look at user engagement signals to determine page quality and relevance. And RankBrain is learning how to assign ratings to pages that may have insufficient link or historical page data, but are relevant to a searcher’s query.
So how do you raise your CTRs? You should focus your efforts in four key areas:
- Optimize pages with low "organic Quality Scores." Download all of your query data from Google Search Console. Sort your data, figure out which of your pages have below-average CTRs, and prioritize those; it's far less risky to focus on fixing your losers, because they have the most potential upside. None of these pages will get any love from RankBrain!
- Combine your SEO keywords with emotional triggers to create irresistible headlines. Emotions like anger, disgust, affirmation, and fear are proven to increase click-through rates and conversion rates. If everyone who you want to beat already has crafted optimized title tags, then packing an emotional wallop will give you the edge you need and make your listing stand out.
- Increase other user engagement rates. Like click-through rate, we believe you need to have higher-than-expected engagement metrics (e.g. time on site, bounce rate; more on this in a future article). This is a critical relevance signal! Google knows the expected conversion and engagement rates based on a variety of factors (e.g. industry, query, location, time of day, device type). So create 10X content!
- Use social media ads and remarketing to increase search volume and CTR. Paid social ads and remarketing display ads can generate serious awareness and exposure for a reasonable cost (no more than a day). If people aren’t familiar with your brand, bombard your target audience with Facebook and Twitter ads. People who are familiar with your brand are 2x more likely to click through and to convert.
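For the first item in the list above, the triage of a Search Console export can be sketched like this. The `underperformers` helper, the expected-CTR table, and the sample rows are all hypothetical illustrations:

```python
def underperformers(rows, expected_ctr):
    """Return queries whose observed CTR falls below the expected CTR
    for their position, sorted worst-first (biggest shortfall at the top).
    rows: (query, position, impressions, clicks) tuples, as you might
    assemble from a Google Search Console export (values below are made up)."""
    flagged = []
    for query, pos, impressions, clicks in rows:
        observed = clicks / impressions
        gap = observed - expected_ctr.get(pos, 0.0)
        if gap < 0:
            flagged.append((gap, query))
    return [q for gap, q in sorted(flagged)]

expected = {1: 0.30, 2: 0.22, 3: 0.16}  # illustrative expected CTR by position
rows = [
    ("ppc tips", 1, 1000, 350),          # 35% observed vs 30% expected: fine
    ("adwords bid guide", 2, 500, 60),   # 12% observed vs 22% expected
    ("quality score help", 3, 400, 50),  # 12.5% observed vs 16% expected
]
print(underperformers(rows, expected))
```

Sorting worst-first mirrors the advice to fix your biggest losers before polishing pages that already beat expectations.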
Whether or not RankBrain becomes the most important ranking signal (and I believe it will be someday), it’s smart to ensure your pages get as many organic search clicks as possible. It means more people are visiting your site and it sends important signals to Google that your page is relevant and awesome.
Our research also shows that achieving above-expected user engagement metrics results in better organic rankings, which results in even more clicks to your site.
Don't settle for average CTRs. Be a unicorn among a sea of donkeys! Raise your organic CTRs and engagement rates! Get optimizing now!