Moz the Monster: Anatomy of An (Averted) Brand Crisis

Posted by Dr-Pete

On the morning of Friday, November 10, we woke up to the news that John Lewis had launched an ad campaign called “Moz the Monster”. If you’re from the UK, John Lewis needs no introduction, but for our American audience, they’re a high-end retail chain that’s gained a reputation for a decade of amazing Christmas ads.

It’s estimated that John Lewis spent upwards of £7m on this campaign (roughly $9.4M). It quickly became clear that they had organized a multi-channel effort, including a #mozthemonster Twitter campaign.

From a consumer perspective, Moz was just a lovable blue monster. From the perspective of a company that has spent years building a brand, John Lewis was potentially going to rewrite what “Moz” meant to the broader world. From a search perspective, we were facing a rare possibility of competing for our own brand on Google results if this campaign went viral (and John Lewis has a solid history of viral campaigns).

Step #1: Don’t panic

At the speed of social media, it can be hard to stop and take a breath, but you have to remember that speed cuts both ways. If you’re too quick to respond and make a mistake, that mistake travels at the same speed and can turn into a self-fulfilling prophecy, creating exactly the disaster you feared.

The first step is to get multiple perspectives quickly. I took to Slack in the morning (I’m two hours ahead of the Seattle team) to find out who was awake. Two of our UK team (Jo and Eli) were quick to respond, which had the added benefit of getting us the local perspective.

Collectively, we decided that, in the spirit of our TAGFEE philosophy, a friendly monster deserved a friendly response. Even if we chose to look at it purely from a pragmatic, tactical standpoint, John Lewis wasn’t a competitor, and going in metaphorical guns-blazing against a furry blue monster and the little boy he befriended could’ve been step one toward a reputation nightmare.

Step #2: Respond (carefully)

In some cases, you may choose not to respond, but in this case we felt that friendly engagement was our best approach. Since the Seattle team was finishing their first cup of coffee, I decided to test the waters with a tweet from my personal account:

I’ve got a smaller audience than the main Moz account, and a personal tweet sent while the west coast was still getting in gear meant less exposure. The initial response was positive, and we even got a little bit of feedback, such as suggestions to monitor UK Google SERPs (see “Step #3”).

Our community team (thanks, Tyler!) quickly followed up with an official tweet:

While we didn’t get direct engagement from John Lewis, the general community response was positive. Roger Mozbot and Moz the Monster could live in peace, at least for now.

Step #3: Measure

There was a longer-term fear – would engagement with the Moz the Monster campaign alter Google SERPs for Moz-related keywords? Google has become an incredibly dynamic engine, and the meaning of any given phrase can rewrite itself based on how searchers engage with that phrase. I decided to track “moz” itself across both the US and UK.

In that first day of the official campaign launch, searches for “moz” were already showing news (“Top Stories”) results in the US and UK, with the text-only version in the US:

…and the richer Top Stories carousel in the UK:

The Guardian article that announced the campaign launch was also ranking organically, near the bottom of page one. So, even on day one, we were seeing some brand encroachment and knew we had to keep track of the situation on a daily basis.
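A simple way to make that daily tracking concrete is to count how many page-one spots your own domain holds in each snapshot. Here’s a minimal sketch (not our actual tooling; the function name and test URLs are illustrative):

```python
# A minimal sketch (not Moz's actual tooling) for measuring how much of a
# page-one SERP your own domain holds in each daily snapshot.
from urllib.parse import urlparse

def share_of_voice(result_urls, own_domain):
    """Return (own_count, own_ranks) for one ordered page-one snapshot."""
    own_ranks = []
    for rank, url in enumerate(result_urls, start=1):
        host = urlparse(url).netloc.lower()
        # Match the bare domain and any subdomain (e.g. www.)
        if host == own_domain or host.endswith("." + own_domain):
            own_ranks.append(rank)
    return len(own_ranks), own_ranks
```

Feeding it the ordered organic URLs collected each day produces a simple share-of-voice trend line that makes brand encroachment visible at a glance.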

Just two days later (November 12), Moz the Monster had captured four page-one organic results for “moz” in the UK (at the bottom of the page):

While it still wasn’t time to panic, John Lewis’ campaign was clearly having an impact on Google SERPs.

Step #4: Surprises

On November 13, it looked like the SERPs might be returning to normal. The Moz Blog had regained the Top Stories block in both US and UK results:

We weren’t in the clear yet, though. A couple of days later, a plagiarism scandal broke, and it was dominating the UK news for “moz” by November 18:

This story also migrated into organic SERPs after The Guardian published an op-ed piece. Fortunately for John Lewis, the follow-up story didn’t last very long. It’s an important reminder, though, that you can’t take your eyes off of the ball just because it seems to be rolling in the right direction.

Step #5: Results

It’s one thing to see changes in the SERPs, but how was all of this impacting search trends and our actual traffic? Here’s the data from Google Trends for a 4-week period around the Moz the Monster launch (2 weeks on either side):

The top graph is US trends data, and the bottom graph is UK. The large spike in the middle of the UK graph is November 10, where you can see that interest in the search “moz” increased dramatically. However, this spike fell off fairly quickly and US interest was relatively unaffected.

Let’s look at the same time period for Google Search Console impression and click data. First, the US data (isolated to just the keyword “moz”):

There was almost no change in impressions or clicks in the US market. Now, the UK data:

Here, the launch spike in impressions is very clear, and closely mirrors the Google Trends data. However, clicks to Moz.com were, like the US market, unaffected. Hindsight is 20/20, and we were trying to make decisions on the fly, but the short-term shift in Google SERPs had very little impact on clicks to our site. People looking for Moz the Monster and people looking for Moz the search marketing tool are, not shockingly, two very different groups.
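If you want to flag launch spikes like this automatically, a simple baseline test over the pre-launch window works: flag any day whose value exceeds the baseline mean by a couple of standard deviations. A rough sketch, assuming you’ve exported daily impression and click counts from Search Console:

```python
from statistics import mean, stdev

def spike_days(series, baseline_days=14, threshold=2.0):
    """Return indices of days that spike above the pre-launch baseline.

    The baseline is the first `baseline_days` points (two weeks before
    launch, matching the window above); a day is flagged when its value
    exceeds the baseline mean plus `threshold` standard deviations.
    """
    base = series[:baseline_days]
    mu, sigma = mean(base), stdev(base)
    return [i for i, v in enumerate(series) if v > mu + threshold * sigma]
```

Running it separately on the impression and click series makes the finding concrete: impressions flag the launch day, while clicks flag nothing.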

Ultimately, the impact of this campaign was short-lived, but it is interesting to see how quickly a SERP can rewrite itself based on the changing world, especially with an injection of ad dollars. At one point (in UK results), Moz the Monster had replaced Moz.com in over half (5 of 8) page-one organic spots and Top Stories – an impressive and somewhat alarming feat.

By December 2, Moz the Monster had completely disappeared from US and UK SERPs for the phrase “moz”. New, short-term signals can rewrite search results, but when those signals fade, results often return to normal. So, remember not to panic and track real, bottom-line results.

Your crisis plan

So, how can we generalize this to other brand crises? What happens when someone else’s campaign treads on your brand’s hard-fought territory? Let’s restate our 5-step process:

(1) Remember not to panic

The very word “crisis” almost demands panic, but remember that you can make any problem worse. I realize that’s not very comforting, but unless your office is actually on fire, there’s time to stop and assess the situation. Get multiple perspectives and make sure you’re not overreacting.

(2) Be cautiously proactive

Unless there’s a very good reason not to (such as a legal reason), it’s almost always best to be proactive and respond to the situation on your own terms. At least acknowledge the situation, preferably with a touch of humor. These brand intrusions are, by their nature, high profile, and if you pretend it’s not happening, you’ll just look clueless.

(3) Track the impact

As soon as possible, start collecting data. These situations move quickly, and search rankings can change overnight in 2017. Find out what impact the event is really having as quickly as possible, even if you have to track some of it by hand. Don’t wait for the perfect metrics or tracking tools.
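Even hand-collected observations benefit from a consistent format you can chart later. A minimal sketch of a CSV logger for daily rank checks (the file name and columns are just suggestions):

```python
import csv
import datetime
import pathlib

def log_observation(path, query, market, rank, note=""):
    """Append one hand-collected ranking observation to a CSV file."""
    path = pathlib.Path(path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            # Write the header row the first time the file is created
            writer.writerow(["date", "query", "market", "rank", "note"])
        writer.writerow(
            [datetime.date.today().isoformat(), query, market, rank, note]
        )
```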

(4) Don’t get complacent

Search results are volatile and social media is fickle – don’t assume that a lull or short-term change means you can stop and rest. Keep tracking, at least for a few days and preferably for a couple of weeks (depending on the severity of the crisis).

(5) Measure bottom-line results

As the days go by, you’ll be able to more clearly see the impact. Track as deeply as you can – long-term rankings, traffic, even sales/conversions where necessary. This is the data that tells you if the short-term impact in (3) is really doing damage or is just superficial.

The real John Lewis

Finally, I’d like to give a shout-out to someone who has felt a much longer-term impact of John Lewis’ successful holiday campaigns. Twitter user and computer science teacher @johnlewis has weathered his own brand crisis year after year with grace and humor:

So, a hat-tip to John Lewis, and, on behalf of Moz, a very happy holidays to Moz the Monster!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

December 13, 2017 · Posted in: SEO / Traffic / Marketing

Keyword Research Beats Nate Silver’s 2016 Presidential Election Prediction

Posted by BritneyMuller

100% of statisticians would say this is a terrible method for predicting elections. However, in the case of 2016’s presidential election, analyzing the geographic search volume of a few telling keywords “predicted” the outcome more accurately than Nate Silver himself.

The 2016 US Presidential Election was a nail-biter, and many of us followed along with the famed statistician’s predictions in real time on FiveThirtyEight.com. Silver’s predictions, though more accurate than many, were still disrupted by the election results.

In an effort to better understand our country (and current political chaos), I dove into keyword research state-by-state searching for insights. Keywords can be powerful indicators of intent, thought, and behavior. What keyword searches might indicate a personal political opinion? Might there be a common denominator search among people with the same political beliefs?

It’s generally agreed that Fox News leans to the right and CNN leans to the left. And if we’ve learned anything this past year, it’s that the news you consume can have a strong impact on what you believe, in addition to the confirmation bias already present in seeking out particular sources of information.

My crazy idea: What if Republican states showed more “fox news” searches than “cnn”? What if those searches revealed a bias and an intent that exit polling seemed to obscure?

The limitations to this research were pretty obvious. Watching Fox News or CNN doesn’t necessarily correlate with voter behavior, but could it be a better indicator than the polls? My research says yes. I researched other media outlets as well, but the top two ideologically opposed news sources — in any of the 50 states — were consistently Fox News and CNN.

Using Google Keyword Planner (connected to a high-paying AdWords account to view the most accurate, non-bucketed data), I evaluated each state’s search volume for “fox news” and “cnn.”

Eight states showed the exact same search volumes for both. Excluding those from my initial test, my results accurately predicted 42/42 of the 2016 presidential state outcomes including North Carolina and Wisconsin (which Silver mis-predicted). Interestingly, “cnn” even mirrored Hillary Clinton, similarly winning the popular vote (25,633,333 vs. 23,675,000 average monthly search volume for the United States).

In contrast, Nate Silver accurately predicted 45/50 states using a statistical methodology based on polling results.
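The heuristic itself reduces to a one-line comparison. A sketch of the rule as described above (the tie case covers the eight excluded states):

```python
def predict_state(fox_volume, cnn_volume):
    """Predict a state's winner from average monthly search volumes.

    Higher "fox news" volume -> Republican; higher "cnn" -> Democrat;
    an exact tie (the eight excluded states) returns None.
    """
    if fox_volume == cnn_volume:
        return None
    return "Republican" if fox_volume > cnn_volume else "Democrat"
```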


This gets even more interesting:

The eight states showing the same average monthly search volume for both “cnn” and “fox news” are Arizona, Florida, Michigan, Nevada, New Mexico, Ohio, Pennsylvania, and Texas.

However, diving deeper via the GrepWords API (a keyword research tool that powers Keyword Explorer’s data), I discovered that Arizona, Nevada, New Mexico, Pennsylvania, and Ohio each show slightly different “cnn” vs. “fox news” search averages over the previous 12-month period. Those search volume averages are:

| State | “fox news” avg monthly search volume | “cnn” avg monthly search volume | KWR Prediction | 2016 Vote |
| --- | --- | --- | --- | --- |
| Arizona | 566,333 | 518,583 | Trump | Trump |
| Nevada | 213,833 | 214,583 | Hillary | Hillary |
| New Mexico | 138,833 | 142,916 | Hillary | Hillary |
| Ohio | 845,833 | 781,083 | Trump | Trump |
| Pennsylvania | 1,030,500 | 1,063,583 | Hillary | Trump |

Four out of five isn’t bad! This brought my new prediction up to 46/47.

Silver and I each got Pennsylvania wrong. The GrepWords API shows the average monthly search volume for “cnn” was ~33,083 searches higher than “fox news” (to put that in perspective, that’s ~0.26% of the state’s population). That razor-thin keyword margin is mirrored in Trump’s 48.2% win over Clinton’s 47.5%.
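The back-of-the-envelope math behind that ~0.26% figure, using an assumed Pennsylvania population of roughly 12.8 million:

```python
# "cnn" minus "fox news" average monthly searches in Pennsylvania
gap = 1_063_583 - 1_030_500            # = 33,083
pa_population = 12_800_000             # assumed ballpark figure for 2016
share_pct = 100 * gap / pa_population  # roughly 0.26 (percent)
```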

Nate Silver and I have very different day jobs, and he wouldn’t make many of these hasty generalizations. Any prediction method can be right a couple times. However, it got me thinking about the power of keyword research: how it can reveal searcher intent, predict behavior, and sometimes even defy the logic of things like statistics.

It’s also easy to predict the past. What happens when we apply this model to today’s Senate race?

Can we apply this theory to Alabama’s special election in the US Senate?

After completing the above research on a whim, I realized that we’re on the cusp of yet another hotly contested, extremely close election: the upcoming Alabama senate race, between controversy-laden Republican Roy Moore and Democratic challenger Doug Jones, fighting for a Senate seat that hasn’t been held by a Democrat since 1992.

I researched each Alabama county — 67 in total — for good measure. There are obviously a ton of variables at play. However, my theory correctly “predicts” the 2016 presidential vote in 52 of the 67 counties (77.6%).

Even when giving the Democratic nominee the edge in the very low search volume counties (19 counties showed a search volume difference of less than 500), my numbers lean pretty far to the right (48/67 Republican counties):

It should be noted that my theory incorrectly guessed two of the five largest Alabama counties, Montgomery and Jefferson, which both voted Democrat in 2016.

Greene and Macon Counties should both vote Democrat; their slight edge in “cnn” over “fox news” search volume is consistent with their previous presidential election results.

I realize state elections are not won county by county but by popular vote, and the state of Alabama searches for “fox news” 204,000 more times a month than “cnn” (to put that in perspective, that’s around 4.27% of Alabama’s population).
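The county-level adjustment can be sketched the same way, awarding any county where the two volumes differ by less than 500 to the Democrat (the threshold is my reading of the adjustment described above, not an exact replication of the analysis):

```python
def predict_county(fox_volume, cnn_volume, close_margin=500):
    """County-level version of the heuristic, tilting close races Democrat.

    Any county where the two volumes differ by less than `close_margin`
    is awarded to the Democrat; otherwise the larger volume wins.
    """
    if abs(fox_volume - cnn_volume) < close_margin:
        return "Democrat"
    return "Republican" if fox_volume > cnn_volume else "Democrat"
```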

All things aside and regardless of outcome, this was an interesting exploration into how keyword research can offer us a glimpse into popular opinion, future behavior, and search intent. What do you think? Any other predictions we could make to test this theory? What other keywords or factors would you look at? Let us know in the comments.




December 12, 2017 · Posted in: SEO / Traffic / Marketing

Not-Actually-the-Best Local SEO Practices

Posted by MiriamEllis

It’s never fun being the bearer of bad news.

You’re on the phone with an amazing prospect. Let’s say it’s a growing appliance sales and repair provider with 75 locations in the western US. Your agency would absolutely love to onboard this client, and the contact is telling you, with some pride, that they’re already ranking pretty well for about half of their locations.

With the right strategy, getting them the rest of the way there should be no problem at all.

But then you notice something, and your end of the phone conversation falls a little quiet as you click through from one of their Google My Business listings in Visalia to Streetview and see… not a commercial building, but a house. Uh-oh. In answer to your delicately worded question, you find out that 45 of this brand’s listings have been built around the private homes of their repairmen — an egregious violation of Google’s guidelines.

“I hate to tell you this…,” you clear your throat, and then you deliver the bad news.


If you do in-house Local SEO, do it for clients, or even just answer questions in a forum, you’ve surely had the unenviable (yet vital) task of telling someone they’re “doing it wrong,” frequently after they’ve invested considerable resources in creating a marketing structure that threatens to topple due to a crack in its foundation. Sometimes you can patch the crack, but sometimes, whole edifices of bad marketing have to be demolished before safe and secure new buildings can be erected.

Here are 5 of the commonest foundational marketing mistakes I’ve encountered over the years as a Local SEO consultant and forum participant. If you run into these in your own work, you’ll be doing someone a big favor by delivering “the bad news” as quickly as possible:

1. Creating GMB listings at ineligible addresses

What you’ll hear:

“We need to rank for these other towns, because we want customers there. Well, no, we don’t really have offices there. We have P.O. Boxes/virtual offices/our employees’ houses.”

Why it’s a problem:

Google’s guidelines state:

  • Make sure that your page is created at your actual, real-world location
  • PO Boxes or mailboxes located at remote locations are not acceptable.
  • Service-area businesses—businesses that serve customers at their locations—should have one page for the central office or location and designate a service area from that point.

All of this adds up to Google saying you shouldn’t create a listing for anything other than a real-world location, but it’s extremely common to see a) spammers simply creating tons of listings for non-existent locations, b) people of good will not knowing the guidelines and doing the same thing, and c) service area businesses (SABs) feeling they have to create fake-location listings because Google won’t rank them for their service cities otherwise.

In all three scenarios, the brand puts itself at risk for detection and listing removal. Google can catch them, competitors and consumers can catch them, and marketers can catch them. Once caught, any effort that was put into ranking and building reputation around a fake-location listing is wasted. Better to have devoted resources to risk-free marketing efforts that will add up to something real.

What to do about it:

Advise the SAB owner to self-report the problem to Google. I know this sounds risky, but Google My Business forum Top Contributor Joy Hawkins let me know that she’s never seen a case in which Google has punished a business that self-reported accidental spam. The owner will likely need to un-verify the spam listings (see how to do that here) and then Google will likely remove the ineligible listings, leaving only the eligible ones intact.

What about dyed-in-the-wool spammers who know the guidelines and are violating them regardless, turning local pack results into useless junk? Get to the spam listing in Google Maps, click the “Suggest an edit” link, toggle the toggle to “Yes,” and choose the radio button for spam. Google may or may not act on your suggestion. If not, and the spam is misleading to consumers, I think it’s always a good idea to report it to the Google My Business forum in hopes that a volunteer Top Contributor may escalate an egregious case to a Google staffer.

2. Sharing phone numbers between multiple entities

What you’ll hear:

“I run both my dog walking service and my karate classes out of my house, but I don’t want to have to pay for two different phone lines.”

-or-

“Our restaurant has 3 locations in the city now, but we want all the calls to go through one number for reservation purposes. It’s just easier.”

-or-

“There are seven doctors at our practice. Front desk handles all calls. We can’t expect the doctors to answer their calls personally.”

Why it’s a problem:

There are actually multiple issues at hand on this one. First of all, Google’s guidelines state:

  • Provide a phone number that connects to your individual business location as directly as possible, and provide one website that represents your individual business location.
  • Use a local phone number instead of a central, call center helpline number whenever possible.
  • The phone number must be under the direct control of the business.

This rules out having the phone number of a single location representing multiple locations.

Confusing to Google

Google has also been known in the past to phone businesses for verification purposes. Should a business answer “Jim’s Dog Walking” when a Google rep is calling to verify that the phone number is associated with “Jim’s Karate Lessons,” we’re in trouble. Shared phone numbers have also been suspected in the past of causing accidental merging of Google listings, though I’ve not seen a case of this in a couple of years.

Confusing for businesses

As for the multi-practitioner scenario, the reality is that some business models simply don’t allow for practitioners to answer their own phones. Calls for doctors, dentists, attorneys, etc. are traditionally routed through a front desk. This reality calls into question whether forward-facing listings should be built for these individuals at all. We’ll dive deeper into this topic below, in the section on multi-practitioner listings.

Confusing for the ecosystem

Beyond Google-related concerns, Moz Local’s awesome engineers have taught me some rather amazing things about the problems shared phone numbers can create for citation-building campaigns in the greater ecosystem. Many local business data platforms are highly dependent on unique phone numbers as a signal of entity uniqueness (the “P” in NAP is powerful!). So, for example, if you submit both Jim’s Dog Walking and Jim’s Bookkeeping to Infogroup with the same number, Infogroup may publish both listings, but leave the phone number fields blank! And without a phone number, a local business listing is pretty worthless.

It’s because of realities like these that a unique phone number for each entity is a requirement of the Moz Local product, and should be a prerequisite for any citation building campaign.
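As a quick illustration of why shared numbers trip up these platforms, here’s a hypothetical pre-submission check that flags any phone number attached to more than one entity (this is not Moz Local’s actual logic, just a sketch of the uniqueness signal described above):

```python
from collections import Counter

def check_unique_phones(listings):
    """Return phone numbers shared by more than one listing.

    `listings` is a list of dicts with "name" and "phone" keys; any
    number returned here risks merged or phone-less citations downstream.
    """
    counts = Counter(listing["phone"] for listing in listings)
    return {phone for phone, n in counts.items() if n > 1}
```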

What to do about it:

Let the business owner know that a unique phone number for each business entity, each business location, and each forward-facing practitioner who wants to be listed is a necessary business expense (and, hey, likely tax deductible, too!). Once the investment has been made in the unique numbers, the work ahead involves editing all existing citations to reflect them. The free tool Moz Check Listing can help you instantly locate existing citations for the purpose of creating a spreadsheet that details the bad data, allowing you to start correcting it manually. Or, to save time, the business owner may wish to invest in a paid, automated citation correction product like Moz Local.

Pro tip: Apart from removing local business listing stumbling blocks, unique phone numbers have an added bonus in that they enable the benefits of associating KPIs like clicks-to-call to a given entity, and existing numbers can be ported into call tracking numbers for even further analysis of traffic and conversions. You just can’t enjoy these benefits if you lump multiple entities together under a single, shared number.

3. Keyword stuffing GMB listing names

What you’ll hear:

“I have 5 locations in Dallas. How are my customers supposed to find the right one unless I add the neighborhood name to the business name on the listings?”

-or-

“We want customers to know we do both acupuncture and massage, so we put both in the listing name.”

-or-

“Well, no, the business name doesn’t actually have a city name in it, but my competitors are adding city names to their GMB listings and they’re outranking me!”

Why it’s a problem:

Long story short, it’s a blatant violation of Google’s guidelines to put extraneous keywords in the business name field of a GMB listing. Google states:

  • Your name should reflect your business’ real-world name, as used consistently on your storefront, website, stationery, and as known to customers.
  • Including unnecessary information in your business name is not permitted, and could result in your listing being suspended.

What to do about it:

I consider this a genuine Local SEO toughie. On the one hand, Google’s lack of enforcement of these guidelines, and apparent lack of concern about the whole thing, makes it difficult to adequately alarm business owners about the risk of suspension. I’ve successfully reported keyword stuffing violations to Google and have had them act on my reports within 24 hours… only to have the spammy names reappear hours or days afterwards. If there’s a suspension of some kind going on here, I don’t see it.

Simultaneously, Google’s local algo apparently continues to be influenced by exact keyword matches. When a business owner sees competitors outranking him via outlawed practices which Google appears to ignore, the Local SEO may feel slightly idiotic urging guideline-compliance from his patch of shaky ground.

But, do it anyway. For two reasons:

  1. If you’re not teaching business owners about the importance of brand building at this point, you’re not really teaching marketing. Ask the owner, “Are you into building a lasting brand, or are you hoping to get by on tricks?” Smart owners (and their marketers) will see that it’s a more legitimate strategy to build a future based on earning permanent local brand recognition for Lincoln & Herndon, than for Springfield Car Accident Slip and Fall Personal Injury Lawyers Attorneys.
  2. I find it interesting that, in all of Google’s guidelines, the word “suspended” is used only a few times, and one of these rare instances relates to spamming the business title field. In other words, Google is using the strongest possible language to warn against this practice, and that makes me quite nervous about tying large chunks of reputation and rankings to a tactic against which Google has forewarned. I remember that companies were doing all kinds of risky things on the eve of the Panda and Penguin updates and they woke up to a changed webscape in which they were no longer winners. Because of this, I advocate alerting any business owner who is risking his livelihood to chancy shortcuts. Better to build things for real, for the long haul.

Fortunately, it only takes a few seconds to sign into a GMB account and remove extraneous keywords from a business name. If it needs to be done at scale for large multi-location enterprises across the major aggregators, Moz Local can get the job done. Will removing spammy keywords from the GMB listing title cause the business to move down in Google’s local rankings? It’s possible that they will, but at least they’ll be able to go forward building real stuff, with the moral authority to report rule-breaking competitors and keep at it until Google acts.

And tell owners not to worry about Google not being able to sort out a downtown location from an uptown one for consumers. Google’s ability to parse user proximity is getting better every day. Mobile-local packs prove this out. If one location is wrongly outranking another, chances are good the business needs to do an audit to discover weaknesses that are holding the more appropriate listing back. That’s real strategy – no tricks!

4. Creating a multi-site morass

What you’ll hear:

“So, to cover all 3 of our locations, we have greengrocerysandiego.com, greengrocerymonterey.com and greengrocerymendocino.com… but the problem is, the content on the three sites is kind of all the same. What should we do to make the sites different?”

-or-

“So, to cover all of our services, we have jimsappliancerepair.com, jimswashingmachinerepair.com, jimsdryerrepair.com, jimshotwaterheaterrepair.com, jimsrefrigeratorrepair.com. We’re about to buy jimsvacuumrepair.com … but the problem is, there’s not much content on any of these sites. It feels like management is getting out of hand.”

Why it’s a problem:

Definitely a frequent topic in SEO forums, the practice of relying on exact match domains (EMDs) proliferates because of Google’s historic bias in their favor. The ranking influence of EMDs has been the subject of a Google update and has lessened over time. I wouldn’t want to try to rank for competitive terms with creditcards.com or insurance.com these days.

But if you believe EMDs no longer work in the local-organic world, read this post in which a fellow’s surname/domain name gets mixed up with a distant city name and he ends up ranking in the local packs for it! Chances are, you see weak EMDs ranking all the time for your local searches — more’s the pity. And, no doubt, this ranking boost is the driving force behind local business models continuing to purchase multiple keyword-oriented domains to represent branches of their company or the variety of services they offer. This approach is problematic for 3 chief reasons:

  1. It’s impractical. The majority of the forum threads I’ve encountered in which small-to-medium local businesses have ended up with two, or five, or ten domains invariably lead to the discovery that the websites are made up of either thin or duplicate content. Larger enterprises are often guilty of the same. What seemed like a great idea at first, buying up all those EMDs, turns into an unmanageable morass of web properties that no one has the time to keep updated, to write for, or to market.
  2. Specific to the multi-service business, it’s not a smart move to put single-location NAP on multiple websites. In other words, if your construction firm is located at 123 Main Street in Funky Town, but consumers and Google are finding that same physical address associated with fences.com, bathroomremodeling.com, decks.com, and kitchenremodeling.com, you are sowing confusion in the ecosystem. Which is the authoritative business associated with that address? Some business owners further compound problems by assuming they can then build separate sets of local business listings for each of these different service-oriented domains, violating Google’s guidelines, which state:

    Do not create more than one page for each location of your business.

    The whole thing can become a giant mess, instead of the clean, manageable simplicity of a single brand, tied to a single domain, with a single NAP signal.

  3. With rare-to-nonexistent exceptions, I consider EMDs to be missed opportunities for brand building. Imagine if, instead of being Whole Foods at WholeFoods.com, the natural foods giant had decided they needed to try to squeeze a ranking boost out of buying 400+ domains to represent the eventual number of locations they now operate. WholeFoodsDallas.com, WholeFoodsMississauga.com, etc? Such an approach would get out of hand very fast.

Even the smallest businesses should take cues from big commerce. Your brand is the magic password you want on every consumer’s lips, associated with every service you offer, in every location you open. As I recently suggested to a Moz community member, be proud to domain your flower shop as rossirovetti.com instead of hoping FloralDelivery24hoursSanFrancisco.com will boost your rankings. It’s authentic, easy to remember, looks trustworthy in the SERPs, and is ripe for memorable brand building.

What to do about it:

While I can’t speak to the minutiae of every single scenario, I’ve yet to be part of a discussion about multi-sites in the Local SEO community in which I didn’t advise consolidation. Basically, the business should choose a single, proud domain and, in most cases, 301 redirect the old sites to the main one, then work to get as many external links that pointed to the multi-sites to point to the chosen main site. This oldie but goodie from the Moz blog provides a further technical checklist from a company that saw a 40% increase in traffic after consolidating domains. I’d recommend that any business that is nervous about handling the tech aspects of consolidation in-house should hire a qualified SEO to help them through the process.
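As an illustration of what the consolidation step looks like at the server level, here’s a small sketch that generates one Apache `Redirect 301` rule per retired domain (the domain names are from the hypothetical examples above; a real migration should also map old URLs without a direct equivalent to their closest counterparts on the main site):

```python
def redirect_rules(old_domains, main_domain):
    """Generate one Apache 'Redirect 301' line per retired domain.

    Each line goes in the *old* domain's vhost or .htaccess. The rule
    preserves paths, so oldsite.com/about lands on main.com/about; old
    pages with no direct equivalent still need manual mapping.
    """
    return [
        f"# place in the config for {old}\n"
        f"Redirect 301 / https://{main_domain}/"
        for old in old_domains
    ]
```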

5. Creating ill-considered practitioner listings

What you’ll hear:

“We have 5 dentists at the practice, but one moved/retired last month and we don’t know what to do with the GMB listing for him.”

-or-

“Dr. Green is outranking the practice in the local results for some reason, and it’s really annoying.”

Why it’s a problem:

I’ve saved the most complex for last! Multi-practitioner listings can be a blessing, but they’re so often a bane that my position on creating them has evolved to a point where I only recommend building them in specific cases.

When Google first enabled practitioner listings (listings that represent each doctor, lawyer, dentist, or agent within a business), I saw them as a golden opportunity for a given practice to dominate local search results with its presence. However, several things have since caused me to change my opinion of their benefits: Google’s unwillingness to simply remove practitioner duplicates, the rollout of the Possum update which filters out listings that share a category and a similar location, and the number of instances I’ve seen in which practitioner listings end up outranking brand listings. I should also add that the business title field on practitioner listings is a hotbed of Google guideline violations — few business owners have ever read Google’s nitty-gritty rules about how to name these types of listings.

In a nutshell, practitioner listings gone awry can result in a bunch of wrongly-named listings often clouded by duplicates that Google won’t remove, all competing for the same keywords. Not good!

What to do about it:

You’ll have multiple scenarios to address when offering advice about this topic.

1.) If the business is brand new, and there is no record of it on the Internet as of yet, then I would only recommend creating practitioner listings if it is necessary to point out an area of specialization. So, for example, if a medical practice has 5 MDs, the listing for the practice covers that, with no added listings needed. But if a medical practice has 5 MDs and an Otolaryngologist, it may be good marketing to give the specialist his own listing, because it has its own GMB category and won’t be competing with the practice for rankings. However, read on to understand the challenges being undertaken any time a multi-practitioner listing is created.

2.) If the multi-practitioner business is not new, chances are very good that there are listings out there for present, past, and even deceased practitioners.

  • If a partner is current, be sure you point his listing at a landing page on the practice’s website, instead of at the homepage, see if you can differentiate categories, and do your utmost to optimize the practice’s own listing — the point here is to prevent practitioners from outranking the practice. What do I mean by optimization? Be sure the practice’s GMB listing is fully filled out, you’ve got amazing photos, you’re actively earning and responding to reviews, you’re publishing a Google Post at least once a week, and your citations across the web are consistent. These things should all strengthen the listing for the practice.
  • If a partner is no longer with the practice, it’s ideal to unverify the listing and ask Google to mark it as moved to the practice — not to the practitioner’s new location. Sound goofy? Read Joy Hawkins’ smart explanation of this convoluted issue.
  • If, sadly, a practitioner has passed away, contact Google to show them an obituary so that the listing can be removed.
  • If a listing represents what is actually a solo practitioner (instead of a partner in a multi-practitioner business model) and his GMB listing is now competing with the listing for his business, you can ask Google to merge the two listings.

3.) If a business wants to create practitioner listings, and they feel up to the task of handling any ranking or situational management concerns, there is one final proviso I’d add. Google’s guidelines state that practitioners should be “directly contactable at the verified location during stated hours” in order to qualify for a GMB listing. I’ve always found this requirement rather vague. Contactable by phone? Contactable in person? Google doesn’t specify. Presumably, a real estate agent in a multi-practitioner agency might be directly contactable, but as my graphic above illustrates, we wouldn’t really expect the same public availability of a surgeon, right? Point being, it may only make marketing sense to create a practitioner listing for someone who needs to be directly available to the consumer public for the business to function. I consider this a genuine grey area in the guidelines, so think it through carefully before acting.

Giving good help

It’s genuinely an honor to advise owners and marketers who are strategizing for the success of local businesses. In our own small way, local SEO consultants live in the neighborhood Mister Rogers envisioned in which you could look for the helpers when confronted with trouble. Given the livelihoods dependent on local commerce, rescuing a company from a foundational marketing mistake is satisfying work for people who like to be “helpers,” and it carries a weight of responsibility.

I’ve worked in 3 different SEO forums over the past 10+ years, and I’d like to close with some things I’ve learned about helping:

  1. Learn to ask the right questions. Small nuances in business models and scenarios can necessitate completely different advice. Don’t be scared to come back with second and third rounds of follow-up queries if someone hasn’t provided sufficient detail for you to advise them well. Read all details thoroughly before replying.
  2. Always, always consult Google’s guidelines, and link to them in your answers. It’s absolutely amazing how few owners and marketers have ever encountered them. Local SEOs are volunteer liaisons between Google and businesses. That’s just the way things have worked out.
  3. Don’t say you’re sure unless you’re really sure. If a forum or client question necessitates a full audit to surface a useful answer, say so. Giving pat answers to complicated queries helps no one, and can actually hurt businesses by leaving them in limbo, losing money, for an even longer time.
  4. Network with colleagues when weird things come up. Ranking drops can be attributed to new Google updates, or bugs, or other factors you haven’t yet noticed but that a trusted peer may have encountered.
  5. Practice humility. 90% of what I know about Local SEO, I’ve learned from people coming to me with problems for which, at some point, I had to discover answers. Over time, the work put in builds up our store of ready knowledge, but we will never know it all, and that’s humbling in a very good way. Community members and clients are our teachers. Let’s be grateful for them, and treat them with respect.
  6. Finally, don’t stress about delivering “the bad news” when you see someone who is asking for help making a marketing mistake. In the long run, your honesty will be the best gift you could possibly have given.

Happy helping!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

December 11, 2017  Posted in: SEO / Traffic / Marketing

What Do Google’s New, Longer Snippets Mean for SEO? – Whiteboard Friday

Posted by randfish

Snippets and meta descriptions have brand-new character limits, and it’s a big change for Google and SEOs alike. Learn about what’s new, when it changed, and what it all means for SEO in this edition of Whiteboard Friday.

What do Google's new, longer snippets mean for SEO?

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about Google’s big change to the snippet length.

This is the display length of the snippet for any given result in the search results that Google provides, on both mobile and desktop. It also impacts the meta description, which is where many snippets come from: they’re taken from the meta description tag of the web page. Google essentially said just last week, “Hey, we have officially increased the length, the recommended length, and the display length of what we will show in the text snippet of standard organic results.”

So I’m illustrating that for you here. I did a search for “net neutrality bill,” something that’s on the minds of a lot of Americans right now. You can see here that this article from The Hill, which is a recent article — it was two days ago — has a much longer text snippet than what we would normally expect to find. In fact, I went ahead and counted this one and then showed it here.

So basically, at the old 165-character limit, which is what you would have seen prior to the middle of November on most every search result, occasionally Google would have a longer one for very specific kinds of search results. But according to data from SISTRIX, which put out a great report that I’ll link to here, more than 90% of search snippets were 165 characters or less prior to the middle of November. Then Google added basically a few more lines.

So now, on mobile and desktop, instead of an average of two or three lines, we’re talking three, four, five, sometimes even six lines of text. So this snippet here is 266 characters that Google is displaying. The next result, from Save the Internet, is 273 characters. Again, this might be because Google sort of realized, “Hey, we almost got all of this in here. Let’s just carry it through to the end rather than showing the ellipsis.” But you can see that 165 characters would cut off right here. This one actually does a good job of displaying things.

So imagine a searcher is querying for something in your field and they’re just looking for a basic understanding of what it is. So they’ve never heard of net neutrality. They’re not sure what it is. So they can read here, “Net neutrality is the basic principle that prohibits internet service providers like AT&T, Comcast, and Verizon from speeding up, slowing down, or blocking any . . .” And that’s where it would cut off. Or that’s where it would have cut off in November.

Now, if I got a snippet like that, I need to visit the site. I’ve got to click through in order to learn more. That doesn’t tell me enough to give me the data to go through. Now, Google has tackled this before with things, like a featured snippet, that sit at the top of the search results, that are a more expansive short answer. But in this case, I can get the rest of it because now, as of mid-November, Google has lengthened this. So now I can get, “Any content, applications, or websites you want to use. Net neutrality is the way that the Internet has always worked.”

Now, you might quibble and say this is not a full, thorough understanding of what net neutrality is, and I agree. But for a lot of searchers, this is good enough. They don’t need to click any more. This extension from 165 to 275 or 273, in this case, has really done the trick.
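If you want to eyeball how your own descriptions fare under the old and new limits, a character-count approximation is easy to script. This is a rough sketch: as noted above, Google truncates by pixel width, so character counts are only a proxy:

```python
def truncate_snippet(text, limit=300):
    """Rough preview of how a snippet truncates at a character limit.
    Google's actual truncation is pixel-width based, so treat this as
    an approximation for eyeballing descriptions, not an exact rule."""
    if len(text) <= limit:
        return text
    cut = text.rfind(" ", 0, limit + 1)  # break at the last word boundary
    if cut == -1:
        cut = limit  # no space found; fall back to a hard cut
    return text[:cut] + " ..."
```

Running the net neutrality description through this at limit=165 versus limit=300 reproduces the difference discussed above: the old limit cuts off mid-definition, while the new one fits the whole answer.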

What changed?

So this brings a bunch of changes for SEO too. The change that happened here is that Google updated basically two things. One, they updated the snippet length, and two, they updated their guidelines around it.

So Google’s had historic guidelines that said, well, you want to keep your meta description tag between about 160 and 180 characters. I think that was the number. They’ve updated that to say there’s no official recommended meta description length. But on Twitter, Danny Sullivan said that he would probably not make it greater than 320 characters. In fact, we and other data providers that collect a lot of search results didn’t find many that extended beyond 300. So I think that’s a reasonable thing.

When?

When did this happen? It was starting at about mid-November. November 22nd is when SISTRIX’s dataset starts to notice the increase, and it was over 50%. Now it’s sitting at about 51% of search results that have these longer snippets in at least 1 of the top 10 as of December 2nd.

Here’s the amazing thing, though — 51% of search results have at least one. Many of those, because they’re still pulling old meta descriptions or meta descriptions that SEO has optimized for the 165-character limit, are still very short. So if you’re the person in your search results, especially it’s holiday time right now, lots of ecommerce action, if you’re the person to go update your important pages right now, you might be able to get more real estate in the search results than any of your competitors in the SERPs because they’re not updating theirs.

How will this affect SEO?

So how is this going to really change SEO? Well, three things:

A. It changes how marketers should write and optimize the meta description.

We’re going to be writing a little bit differently because we have more space. We’re going to be trying to entice people to click, but we’re going to be very conscientious that we want to try and answer a lot of this in the search result itself, because if we can, there’s a good chance that Google will rank us higher, even if we’re actually sort of sacrificing clicks by helping the searcher get the answer they need in the search result.

B. It may impact click-through rate.

We’ll be looking at Jumpshot data over the next few months and the year ahead. We think there are two likely effects: probably fewer clicks on less complex queries, but conversely, possibly more clicks on some more complex queries, because people are more enticed by the longer description. Fingers crossed, that’s kind of what you want to do as a marketer.

C. It may lead to lower click-through rate further down in the search results.

If you think about the fact that two results now take up the real estate that three results took up a month ago, well, maybe people won’t scroll as far down. Maybe the ones that are higher up will in fact draw more of the clicks, and thus being further down on page one will have less value than it used to.

What should SEOs do?

What are things that you should do right now? Number one, make a priority list — you should probably already have this — of your most important landing pages by search traffic, the ones that receive the most search traffic on your website, organic search. Then I would go and reoptimize those meta descriptions for the longer limits.

Now, you can judge as you will. My advice would be go to the SERPs that are sending you the most traffic, that you’re ranking for the most. Go check out the limits. They’re probably between about 250 and 300, and you can optimize somewhere in there.

The second thing I would do is if you have internal processes or your CMS has rules around how long you can make a meta description tag, you’re going to have to update those probably from the old limit of somewhere in the 160 to 180 range to the new 230 to 320 range. It doesn’t look like many are smaller than 230 now, at least limit-wise, and it doesn’t look like anything is particularly longer than 320. So somewhere in there is where you’re going to want to stay.
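If your CMS exposes page data, a quick audit along these lines is easy to script. The 230 and 320 bounds below are taken from the ranges just discussed, not official limits:

```python
def audit_meta_descriptions(pages, lo=230, hi=320):
    """Flag meta descriptions outside a target character range.
    `pages` maps URL -> meta description text; the 230-320 defaults
    reflect the post-update ranges discussed above, not hard limits."""
    issues = {}
    for url, desc in pages.items():
        if len(desc) < lo:
            issues[url] = "too short ({} chars): room to expand".format(len(desc))
        elif len(desc) > hi:
            issues[url] = "too long ({} chars): likely to be cut off".format(len(desc))
    return issues
```

Run it against your priority list first, so the pages earning the most organic traffic get rewritten before the long tail.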

Good luck with your new meta descriptions and with your new snippet optimization. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog

December 8, 2017  Posted in: SEO / Traffic / Marketing

Don’t Be Fooled by Data: 4 Data Analysis Pitfalls & How to Avoid Them

Posted by Tom.Capper

Digital marketing is a proudly data-driven field. Yet, as SEOs especially, we often have such incomplete or questionable data to work with, that we end up jumping to the wrong conclusions in our attempts to substantiate our arguments or quantify our issues and opportunities.

In this post, I’m going to outline 4 data analysis pitfalls that are endemic in our industry, and how to avoid them.

1. Jumping to conclusions

Earlier this year, I conducted a ranking factor study around brand awareness, and I posted this caveat:

“…the fact that Domain Authority (or branded search volume, or anything else) is positively correlated with rankings could indicate that any or all of the following is likely:

  • Links cause sites to rank well
  • Ranking well causes sites to get links
  • Some third factor (e.g. reputation or age of site) causes sites to get both links and rankings”
    ~ Me

However, I want to go into this in a bit more depth and give you a framework for analyzing these yourself, because it still comes up a lot. Take, for example, this recent study by Stone Temple, which you may have seen in the Moz Top 10 or Rand’s tweets, or this excellent article discussing SEMRush’s recent direct traffic findings. To be absolutely clear, I’m not criticizing either of the studies, but I do want to draw attention to how we might interpret them.

Firstly, we do tend to suffer a little confirmation bias — we’re all too eager to call out the cliché “correlation vs. causation” distinction when we see successful sites that are keyword-stuffed, but all too approving when we see studies doing the same with something we think is or was effective, like links.

Secondly, we fail to critically analyze the potential mechanisms. The options aren’t just causation or coincidence.

Before you jump to a conclusion based on a correlation, you’re obliged to consider various possibilities:

  • Complete coincidence
  • Reverse causation
  • Joint causation
  • Linearity
  • Broad applicability

If those don’t make any sense, then that’s fair enough — they’re jargon. Let’s go through an example:

Before I warn you not to eat cheese because you may die in your bedsheets, I’m obliged to check that it isn’t any of the following:

  • Complete coincidence - Is it possible that so many datasets were compared, that some were bound to be similar? Why, that’s exactly what Tyler Vigen did! Yes, this is possible.
  • Reverse causation - Is it possible that we have this the wrong way around? For example, perhaps your relatives, in mourning for your bedsheet-related death, eat cheese in large quantities to comfort themselves? This seems pretty unlikely, so let’s give it a pass. No, this is very unlikely.
  • Joint causation - Is it possible that some third factor is behind both of these? Maybe increasing affluence makes you healthier (so you don’t die of things like malnutrition), and also causes you to eat more cheese? This seems very plausible. Yes, this is possible.
  • Linearity - Are we comparing two linear trends? A linear trend is a steady rate of growth or decline. Any two statistics which are both roughly linear over time will be very well correlated. In the graph above, both our statistics are trending linearly upwards. If the graph was drawn with different scales, they might look completely unrelated, like this, but because they both have a steady rate, they’d still be very well correlated. Yes, this looks likely.
  • Broad applicability - Is it possible that this relationship only exists in certain niche scenarios, or, at least, not in my niche scenario? Perhaps, for example, cheese does this to some people, and that’s been enough to create this correlation, because there are so few bedsheet-tangling fatalities otherwise? Yes, this seems possible.

So we have 4 “Yes” answers and one “No” answer from those 5 checks.

If your example doesn’t get 5 “No” answers from those 5 checks, it’s a fail, and you don’t get to say that the study has established either a ranking factor or a fatal side effect of cheese consumption.
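The linearity check deserves a demonstration, because it’s the least intuitive of the five. A few lines of Python show that two series which merely both trend upward correlate almost perfectly, no causation required (the numbers below are invented for illustration, in the spirit of Tyler Vigen’s spurious correlations):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two unrelated, made-up series over ten "years":
cheese = [29.8 + 0.25 * i for i in range(10)]               # steady linear rise
deaths = [327 + 25 * i + (-1) ** i * 5 for i in range(10)]  # rise plus noise
r = pearson(cheese, deaths)  # comes out above 0.99 despite no causal link
```

Any two roughly linear trends will score like this, which is why a high correlation between two time series, on its own, establishes nothing.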

A similar process should apply to case studies, which are another form of correlation — the correlation between you making a change, and something good (or bad!) happening. For example, ask:

  • Have I ruled out other factors (e.g. external demand, seasonality, competitors making mistakes)?
  • Did I increase traffic by doing the thing I tried to do, or did I accidentally improve some other factor at the same time?
  • Did this work because of the unique circumstance of the particular client/project?

This is particularly challenging for SEOs, because we rarely have data of this quality, but I’d suggest an additional pair of questions to help you navigate this minefield:

  • If I were Google, would I do this?
  • If I were Google, could I do this?

Direct traffic as a ranking factor passes the “could” test, but only barely — Google could use data from Chrome, Android, or ISPs, but it’d be sketchy. It doesn’t really pass the “would” test, though — it’d be far easier for Google to use branded search traffic, which would answer the same questions you might try to answer by comparing direct traffic levels (e.g. how popular is this website?).

2. Missing the context

If I told you that my traffic was up 20% week on week today, what would you say? Congratulations?

What if it was up 20% this time last year?

What if I told you it had been up 20% year on year, up until recently?

It’s funny how a little context can completely change this. This is another problem with case studies and their evil inverted twin, traffic drop analyses.

If we really want to understand whether to be surprised at something, positively or negatively, we need to compare it to our expectations, and then figure out what deviation from our expectations is “normal.” If this is starting to sound like statistics, that’s because it is statistics — indeed, I wrote about a statistical approach to measuring change way back in 2015.

If you want to be lazy, though, a good rule of thumb is to zoom out, and add in those previous years. And if someone shows you data that is suspiciously zoomed in, you might want to take it with a pinch of salt.
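If you’d rather not be lazy, the compare-against-expectations approach can be sketched in a few lines: treat the same period in previous years as a baseline and flag only deviations beyond normal variation. This is a simplification of a proper forecast, and the threshold of 2 standard deviations is an arbitrary but common choice:

```python
from statistics import mean, stdev

def is_surprising(current, same_period_history, z_threshold=2.0):
    """Flag `current` only if it deviates from the same period in previous
    years by more than `z_threshold` standard deviations of that history."""
    mu = mean(same_period_history)
    sigma = stdev(same_period_history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold
```

A 20% jump that recurs every year at this time won’t clear the threshold; a jump well outside the historical spread will.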

3. Trusting our tools

Would you make a multi-million dollar business decision based on a number that your competitor could manipulate at will? Well, chances are you do, and the number can be found in Google Analytics. I’ve covered this extensively in other places, but there are some major problems with most analytics platforms around:

  • How easy they are to manipulate externally
  • How arbitrarily they group hits into sessions
  • How vulnerable they are to ad blockers
  • How they perform under sampling, and how obvious they make this

For example, did you know that the Google Analytics API v3 can heavily sample data whilst telling you that the data is unsampled, above a certain amount of traffic (~500,000 within date range)? Neither did I, until we ran into it whilst building Distilled ODN.

Similar problems exist with many “Search Analytics” tools. My colleague Sam Nemzer has written a bunch about this — did you know that most rank tracking platforms report completely different rankings? Or how about the fact that the keywords grouped by Google (and thus tools like SEMRush and STAT, too) are not equivalent, and don’t necessarily have the volumes quoted?

It’s important to understand the strengths and weaknesses of tools that we use, so that we can at least know when they’re directionally accurate (as in, their insights guide you in the right direction), even if not perfectly accurate. All I can really recommend here is that skilling up in SEO (or any other digital channel) necessarily means understanding the mechanics behind your measurement platforms — which is why all new starts at Distilled end up learning how to do analytics audits.

One of the most common solutions to the root problem is combining multiple data sources, but…

4. Combining data sources

There are numerous platforms out there that will “defeat (not provided)” by bringing together data from two or more of:

  • Analytics
  • Search Console
  • AdWords
  • Rank tracking

The problems here are that, firstly, these platforms do not have equivalent definitions, and secondly, ironically, (not provided) tends to break them.

Let’s deal with definitions first, with an example — let’s look at how traffic to a landing page from a given channel gets counted:

  • In Search Console, these are reported as clicks, and can be vulnerable to heavy, invisible sampling when multiple dimensions (e.g. keyword and page) or filters are combined.
  • In Google Analytics, these are reported using last non-direct click, meaning that your organic traffic includes a bunch of direct sessions, time-outs that resumed mid-session, etc. That’s without getting into dark traffic, ad blockers, etc.
  • In AdWords, most reporting uses last AdWords click, and conversions may be defined differently. In addition, keyword volumes are bundled, as referenced above.
  • Rank tracking is location specific, and inconsistent, as referenced above.

Fine, though — it may not be precise, but you can at least get to some directionally useful data given these limitations. However, about that “(not provided)”…

Most of your landing pages get traffic from more than one keyword. It’s very likely that some of these keywords convert better than others, particularly if they are branded, meaning that even the most thorough click-through rate model isn’t going to help you. So how do you know which keywords are valuable?

The best answer is to generalize from AdWords data for those keywords, but it’s very unlikely that you have analytics data for all those combinations of keyword and landing page. Essentially, the tools that report on this make the very bold assumption that a given page converts identically for all keywords. Some are more transparent about this than others.

Again, this isn’t to say that those tools aren’t valuable — they just need to be understood carefully. The only way you could reliably fill in these blanks created by “not provided” would be to spend a ton on paid search to get decent volume, conversion rate, and bounce rate estimates for all your keywords, and even then, you’ve not fixed the inconsistent definitions issues.

Bonus peeve: Average rank

I still see this way too often. Three questions:

  1. Do you care more about losing rankings for ten very low volume queries (10 searches a month or less) than for one high volume query (millions plus)? If the answer isn’t “yes, I absolutely care more about the ten low-volume queries”, then this metric isn’t for you, and you should consider a visibility metric based on click through rate estimates.
  2. When you start ranking at 100 for a keyword you didn’t rank for before, does this make you unhappy? If the answer isn’t “yes, I hate ranking for new keywords,” then this metric isn’t for you — because that will lower your average rank. You could of course treat all non-ranking keywords as position 100, as some tools allow, but is a drop of 2 average rank positions really the best way to express that 1/50 of your landing pages have been de-indexed? Again, use a visibility metric, please.
  3. Do you like comparing your performance with your competitors? If the answer isn’t “no, of course not,” then this metric isn’t for you — your competitors may have more or fewer branded keywords or long-tail rankings, and these will skew the comparison. Again, use a visibility metric.
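To make “visibility metric” concrete, here’s a minimal sketch: weight each keyword’s search volume by an estimated click-through rate for its position. The CTR curve below is invented for illustration; real curves come from published click studies and vary by query type.

```python
# Hypothetical CTR-by-position curve (illustrative values only).
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def visibility(rankings):
    """Estimated clicks per month: search volume weighted by estimated CTR
    at the ranking position. `rankings` maps keyword -> (position, volume).
    Positions outside the curve contribute ~0, so gaining a new position-100
    ranking can't drag the score down the way it drags down average rank."""
    return sum(volume * CTR_BY_POSITION.get(position, 0.0)
               for position, volume in rankings.values())
```

Under this metric, one high-volume term at position 1 properly outweighs a pile of low-volume rankings, fixing the first complaint about average rank above.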

Conclusion

Hopefully, you’ve found this useful. To summarize the main takeaways:

  • Critically analyse correlations & case studies by seeing if you can explain them as coincidences, as reverse causation, as joint causation, through reference to a third mutually relevant factor, or through niche applicability.
  • Don’t look at changes in traffic without looking at the context — what would you have forecasted for this period, and with what margin of error?
  • Remember that the tools we use have limitations, and do your research on how that impacts the numbers they show. “How has this number been produced?” is an important component in “What does this number mean?”
  • If you end up combining data from multiple tools, remember to work out the relationship between them — treat this information as directional rather than precise.

Let me know what data analysis fallacies bug you, in the comments below.



Moz Blog

December 7, 2017  Posted in: SEO / Traffic / Marketing


