It’s Time to Treat Content as Part of the User Experience

Posted by wrttnwrd

Forget content marketing, SEO content, and whatever else as you know them. We need to fundamentally change our approach to content.
It’s not an add-on or a separate thing. It’s an inseparable part of the user experience. Let’s act that way.

Content: the silent epidemic

Your site’s infested.

Most organizations treat content like some kind of horrific disease. They try to shove it as far away as possible from the “real” web site, like a bad case of body lice.

Where do they put it? The blog, of course:

[Image: content-its-not-just-for-blogs.png]

Don’t worry, this isn’t another put-the-blog-on-the-site-dammit rant. Hopefully, you already understand that blog.site.com isn’t as good as site.com/blog.

They also incorrectly define “content.” Content isn’t “stuff we write to rank higher” or “infographics” or “longform articles.”

Content is anything that communicates a message to the audience.
Anything.

Product descriptions? Content.

The company story? Content.

Images? Content.

That video of your company picnic that someone posted to your site three years ago and shows everyone dressed as Muppets? Content.

If it says something, shows something, or otherwise communicates, it’s content.

Change your approach

We all need to change our entire approach to content. Treat it as part of the user experience, instead of a nasty skin disease:

  1. Integrate content that can enhance the user experience
  2. Optimize what you already have

Integrate content that can enhance the user experience

Interlink and integrate related information. That includes connecting promotional content to informational content, and showing related visuals and text on promotional pages.

“Promotional” means product descriptions or anything else that “sells” an idea or makes a call to action to the visitor.

Companies are terrified of this. They believe it’ll send customers away. But it doesn’t happen.

I have never seen revenue drop because of interlinking or other integration. I have seen it generate long-term customer relationships, increase referrals, and increase near-term conversions.

Link to the blog

If nothing else, link to relevant blog posts. People intent on making a purchase aren’t going to click away, never to return. Check out how Surly Bikes does it:

[Image: moz_content_4.jpg]

(By the way, that bike’s a steal at ,700, if anyone’s trying to figure out what to get me for Hanukkah this year.)

Linking to a relevant post allows really interested visitors to drill down an additional layer of detail. They can get impressions, learn why one product might be better for them than another, and maybe even (gasp) realize that the folks behind the product are just like them.

Embed related social content

Urban Outfitters does so much right. They have an amazing Instagram account.

But, for some reason, they don’t link to it from product pages.

It’s OK. I’m not cool enough for their stuff anyway. But why hide all those attractive people using their products? That’ll encourage all sorts of purchasers.

Also, link to related social content right from your product pages. Ideally, you want to embed examples right in the page. At the very least, link prominently to the relevant account (but seriously, embed the examples).
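If your platform doesn’t give you a native embed widget, even a small script can get the job done. Here’s a minimal sketch, assuming a hypothetical /api/social-feed.json endpoint that returns your latest posts as JSON — the official embed codes from Instagram, Twitter, and friends are the more robust route:

function embedSocialContent(containerId) {
   // Hypothetical endpoint returning recent social posts as JSON:
   // [{ permalink: "...", thumbnailUrl: "...", caption: "..." }, ...]
   var xhr = new XMLHttpRequest();
   xhr.open('GET', '/api/social-feed.json', true);
   xhr.onload = function () {
      var container = document.getElementById(containerId);
      var posts = JSON.parse(xhr.responseText).slice(0, 3);
      posts.forEach(function (post) {
         var link = document.createElement('a');
         link.href = post.permalink;
         var img = document.createElement('img');
         img.src = post.thumbnailUrl;
         img.alt = post.caption;
         link.appendChild(img);
         container.appendChild(link);
      });
   };
   xhr.send();
}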

Here’s another example. I’m definitely a Democrat, but I have to offer a tip to the other side of the aisle here: If you have someone with decent YouTube videos, include ‘em. Representative DeSantis has an entire YouTube channel. Why not show a few videos here?

[Image: moz_content_9.jpg]

If you want to see someone do it right, have a look at top10.com. They’re pulling Instagram images straight into their hotel information.

You can do this with any social platform that lets you: Facebook, Twitter, Instagram, Pinterest, Vine, etc. So what’s stopping you?

Optimize what you already have

Your site is already stuffed with content.

You might deny it. But it’s true.

So why not optimize what you’ve got?

Write decent descriptions

Whatever you’re selling/promoting, write a decent description. That includes category pages. I’m not sure what to say about the following top-of-category page “description,” so I’ll go with hysterical, bitter laughter:

[Image: moz_content_12.png]

By the way, for those who think this kind of content is a great SEO tactic, this site’s on page 2 for “jeans.”

I’m not thrilled with this one, as it’s buried at the bottom of the category page and a little keyword-stuffed, but compared to the previous example, it’s a shining light in the darkness:

[Image: moz_content_13.png]

That site ranks #3 for “jeans.”

Even if you care only about rankings, better descriptions are a better strategy.

At this time, the #1 site for “jeans” has a description buried at the bottom of their category page that’s so awful I cried. I’ll dig into that another time, but I doubt that travesty is helping them much, and more importantly, it sure doesn’t make me want to buy anything.

Don’t be ashamed

Your content is not a zit. Show it proudly. I like the way Juicy Couture does it. I can actually read the product description:

[Image: moz_content_10.jpg]

This, on the other hand, makes me think I need bifocals.

[Image: moz_content_11.png]

That’s actual size, by the way.

Follow the same rules of typography you would anywhere else. Make sure your type is high-contrast and readable. Put it somewhere that I’ll actually see it. At the very least, don’t hide it, for heaven’s sake.

Guide me when I’m lost

Please don’t redirect me to a category page without any explanation. I’m not bashing a piñata.

Blindfolding me, spinning me around 8 times and then sending me on my way is not entertaining. It’s annoying as hell.

If I search for a product you no longer sell, and click the description:

[Image: moz_content_5.png]

  1. Show me the product page with a “Sorry, this product is no longer available. But you might like…” and send me along
  2. Or show me a note explaining what just happened

Urban Outfitters does it right:

[Image: moz_content_7.jpg]

Nice!

You might be thinking, “Hey, that’s not content!”

Yeah, it is. When content disappears, send me to stuff you’ve got. Content UX 101.
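If you have any server-side control at all, this pattern is only a few lines of routing logic. A rough sketch in Node/Express, with hypothetical catalog helpers standing in for your product lookup:

var express = require('express');
var app = express();

app.get('/products/:sku', function (req, res) {
   var product = catalog.find(req.params.sku);   // hypothetical lookup
   if (product && product.available) {
      return res.render('product', { product: product });
   }
   // The product is gone: say so, and suggest alternatives, instead of
   // silently dumping the visitor onto a category page.
   res.status(404).render('product-discontinued', {
      message: 'Sorry, this product is no longer available. But you might like…',
      related: catalog.related(req.params.sku)   // hypothetical helper
   });
});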

Oh, and that technology thing…

One last step: You need to enable all of this through technology. You have to be able to do all the stuff I listed above. That requires the right tools.

This is the source of teeth-grinding frustration for many content folks. If you can’t edit the site, you can’t do any of this stuff, right? Weellll, yes and no. Here are the things I’ve tried, and the results:

  1. Screaming. Generally a turn-off. Never gets the desired result.
  2. Demanding. See screaming.
  3. Asking, with a justification. Ask for the features you need, explaining why and how they might help. If you can, show competitors who are doing the same thing. This can take…. a….. long……. time. But it works.
  4. Getting small wins. Can’t add a new page? Edit a product description. Can’t add a new chunk of content to a product page? Add a little bit to the existing description, or edit it as desired. This one works pretty well, but keep asking for the other features, or you’ll never make progress.
  5. Moving off the site. You can set up a separate blog, social media account, whatever. I usually punch myself in the spleen right about then, but this can get results, especially for a big brand. Record the results and use that to advocate for more. Best if used in tandem with #3. Runs directly counter to half this article, but what’re you gonna do?

I’m sorry I don’t have an easier solution here. Just remember you’re not the only person asking the IT team for stuff, or telling your boss you’re being prevented from doing a good job, and proceed accordingly.

If you are the boss or IT team, and you’re reading this, please: Don’t sacrifice content or shove it off the site. Listen to your marketers. They want to succeed. “Helped triple revenue” looks a lot better on a resume than “Proposed worthless ideas.” So they’ve got significant incentive.

OK, but is this legit?

I have to admit, I don’t have data on all of this. Know what? Not all marketing is data-driven. But look at some real-life examples of user experience optimization through content:

In the “real world,” the environment is the content:

  • Starbucks doesn’t just operate a bunch of walk-in, walk-out coffee shops. They provide music, comfy chairs and nice people. An experience. Not a transaction.
  • New car dealers have completely transformed from big lots with cheesy pitches to mini-museums.
  • Airlines attempt to sell an experience. Some do it better than others. And it’s not about money. “Low fare” airlines like Southwest have been particularly successful.

Online, features and… well, content are the content.

  • Amazon feels like a purely transactional site at first. But in-depth reviews, editors’ comments, lists of recently-viewed items and other gadgetry transform the site.
  • Woot.com lives and breathes cool content. It’s their brand, and it’s an intimate part of the user experience.
  • And check out Surly, as I said above.

These brands all do pretty well, yes? Good content UX sure doesn’t hurt.

Another example: We worked with a major fashion brand. We got them thinking about the content user experience. They integrated, and optimized their product descriptions. Our technical recommendations had to wait for release cycles. It didn’t matter. They immediately hit number one for the most competitive phrases in their industry. Coincidence? I think not. So, even if rankings are your only goal, content UX is a powerful tool.

Get to work

Practice user experience optimization through content. By “optimization,” I don’t mean “stuffing in keywords until readers want to puke.” I mean “optimal combination of promotional and informational content.”

Content optimization drives interest, engagement and yes, rankings. It also takes visitors from transactional to loyal.



Moz Blog | October 29, 2014 | Posted in: SEO / Traffic / Marketing

Semantic Analytics: How to Track Performance and ROI of Structured Data

Posted by Mike_Arnesen

If you’re interested in tracking the ROI of adding semantic markup to your website, while simultaneously improving your web analytics, this post is for you! Join me, friend.

Semantic markup and structured data: Can I get a heck yes?!

If you haven’t heard of semantic markup and the SEO implications of applying said markup, you may have been living in a dark cave with no WiFi for the past few years. Or perhaps you’re new to this whole search marketing thing. In the latter case, I won’t fault you, but you should really check this stuff out, because it’s the future.

That said, I’d wager most people reading this post are well acquainted with semantic markup and the idea of structured data. More than likely, you have some of this markup on your site already and you probably have some really awesome rich snippets showing up in search.

Rich snippets are why most SEOs are implementing semantic markup. I don’t think we need to debate that. Everyone wants to get those beautiful, attractive, CTR-boosting rich snippets and, in some cases, you’re at a competitive disadvantage simply by not having them.

If you’re like me, you love seeing your sites earn rich snippets in Google’s search results. I loved it so much that I let myself believe that this was the end goal of semantic markup: landing the rich snippet. When I implemented markup for various entities on the sites I worked on, I’d get the markup added to the site’s code, verify that it was successfully crawled, watch the rich snippet show up, and then call it a victory! Hooray!

Tracking the ROI of semantic markup

Well, I’ve come to the realization that this simply can’t be the measure of success for your semantic SEO strategy! What difference does that rich snippet really make? C’mon, be honest. Do you know what the real impact was? Can you speak to your boss or your client about how pages with a specific type of markup are performing compared to their non-marked up counterparts? Another question to ask: Are you leveraging that semantic data for as much value as you can?

Is there a way to more effectively track the ROI of semantic markup implementation while simultaneously giving us a deeper level of insight regarding how our site is performing?

The answer is yes! How? It’s (relatively) easy, because we’ve already done the hard work. Through applying semantic markup to our site, we’ve embedded an incredibly rich layer of meaningful data in our code. Too often, SEOs like us forget that the idea of the semantic web extends far beyond search engines. It’s easy to add schema.org entity markup to our pages and think that it ends when search engines pick up on it. But that can’t be the end of the story! Don’t let the search engines have all the fun; we can use that data, too.

By looking at the semantic markup on any given page, we can see what type of “entity” we’re looking at (be it an “Event,” “Person,” “Product,” “Article,” or anything else) and we can also see what attributes or properties that entity has. If we could gather that information and pump it into an analytics platform, we’d really have something great. So let’s do that!

Using Google Tag Manager to record structured data

Google Tag Manager was the game changer I didn’t know I needed. There are a few great posts that provide nice overviews of GTM, so I won’t get too deep into that here, but the key capability of Google Tag Manager that is going to allow us to do amazing things is its inherent ability to be awesome.

Okay, let me explain.

The value of any tag management platform lies in its ability to fire off tags dynamically based on Rules and Macros. This is incredible for anyone doing advanced analytics tracking because you can attach granular tracking elements to various sections of your site without (theoretically) ever having to touch your code. Need to track a click on an image banner in your sidebar? Just set up a Tag in Google Tag Manager that fires based on a Rule that uses a Macro to identify that image banner in the code of your site!

So what I’m ultimately trying to share with you through this post is a methodology for using GTM to bring your semantic markup into your analytics platform so you can not just track the ROI of adding semantic markup to your site, but also leverage that markup for a deeper level of insight into your data. I’ve taken to calling this “semantic analytics.”

Tags, rules, and macros

Before we get into the nuts and bolts of how this all works, let’s go over Tags, Rules, and Macros in Google Tag Manager.

  • Tags: In the context of analytics, a Tag is any piece of tracking code that is going to send information back to Google Analytics (or your analytics platform of choice). Nearly every site on the web has a basic pageview tracking Tag on every page; every time you load a page, that Tag fires and sends information about that pageview to an analytics platform (e.g., Google Analytics). But we can get even better intelligence by having additional tags send other information, like “event” tags, which report things that happen on the site (clicks, scrolling, non-click interactions, video plays, etc.). Google Tag Manager lets you configure any Tag you want, which will fire based on a Rule.
  • Rules: A Rule in Google Tag Manager tells a Tag when to fire. Without a Rule attached to a Tag, it will never fire (i.e., send info to Google Analytics), so the most basic Rule is one that is triggered on every page. However, you could set up a thank-you page conversion event tag for AdWords, for example, that only fires on a page with a URL matching /contact-form/thank-you/.
  • Macros: Macros are by far the most powerful features in Google Tag Manager. Their power seems almost limitless, but the key thing we’ll be looking at here is the ability to create a JavaScript Macro that looks in the DOM (Document Object Model) for specific elements. This allows you to look for specific elements in the HTML and fire events based on what you find.

What we’ll want to do in Google Tag Manager is create a Macro that looks for semantic markup in the code of a page. We can then use a Rule to fire a Tag every time someone views a page that has semantic markup on it and include event labels that record what type of entity that person looked at. Ultimately, this will let us drill down into analytics and view reports to see how marked up pages perform against their non-marked up counterparts. We can even pull out granular properties of entities and analyze based on those (for example, pull the “performer” item property out of all “Event” entities and see which “performers” got more traffic and/or led to more conversion events).

Setting up semantic analytics

So let’s walk through the whole semantic analytics process using a website that lists industry events as an example. Since I’m familiar with it, let’s use SwellPath.com as our example since we list all the events we present at in our Resources section.

For each industry event on our site, we have semantic markup that specifies the Event schema.org itemtype and defines various associated itemprops, including the speaker (itemprop=”performer”), venue (itemprop=”eventVenue”), event name (itemprop=”name”), and time (itemprop=”startTime”). At the most basic level, I want to be able to track all the pages that have Event markup. If I wanted to get ambitious (which I do!), I want to pull the speaker name, event name, and venue name, too.
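To make that concrete, here’s a stripped-down sketch of what such Event markup might look like (illustrative values, not our actual code):

<div itemscope itemtype="http://schema.org/Event">
   <a itemprop="name" href="/resources/semantic-analytics">Semantic Analytics 101</a>
   with <span itemprop="performer">Mike Arnesen</span> at
   <span itemprop="eventVenue">SearchFest</span>,
   <time itemprop="startTime" datetime="2015-02-26T10:00">February 26, 2015</time>
</div>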

To do this, we’ll want to set up a Macro, which is the condition for a Rule, which then fires a Tag. However, we’re going to dive into that progression in reverse order. Yeah, we’re going full Tarantino.

Setting up the Tag

The Tag we want to set up in Google Tag Manager will be configured as follows:

The category for all of our semantic events will be “Semantic Markup,” so we can use it to group together any page with markup on it. The event action will be “Semantic – Event Markup On-Page” (even though it’s not much of an “action,” per se). Finally, we’ll want to make the label pretty specific to the individual item we’re talking about, so we’ll pull in the speaker’s name and combine it with the event name so we have plenty of context. We’ll use a Macro for that, but more on that below.
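Under the hood, that Tag is just sending a standard Google Analytics event. If you were writing it by hand in analytics.js, it would look roughly like this (in GTM itself, the label field would reference the properties Macro described below instead of a variable):

// Roughly the analytics.js equivalent of the GTM Tag described above.
var label = 'Mike Arnesen at SearchFest (Semantic Event)';   // example value from the Macro

ga('send', 'event',
   'Semantic Markup',                   // event category
   'Semantic - Event Markup On-Page',   // event action
   label                                // event label
);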

Configuring the Rule

Without a Rule, though, our Tag won’t ever fire. We can’t just set it up to fire on every page, though; we need to have a Rule that says “only fire this tag if semantic markup is on the page.” Our Rule will include two conditions.

  1. The first condition looks for an event that is equal to “gtm.dom”. This is an event that Google Tag Manager can pick up out-of-the-box, and it means that the Document Object Model (DOM) has finished loading (in simple terms, the page finished downloading). The reason we need this is that we need to tell Google Tag Manager to look in our code to find semantic markup; it doesn’t make sense to do that before the page has finished loading.
  2. The second condition for our Rule is a Macro that’s going to look for specific markup on the page.

Building the Macro

The Macro is the really cool part! To get it set up, we’ll create a Macro that uses “Custom JavaScript.” Inside the Macro, we essentially want to create a function that looks for our schema.org itemtype on the page and returns either “true” or “false”. Here’s the text of the Macro so you can cut and paste:

function () {
   // True if the page contains any element whose itemtype mentions
   // "Event" (e.g. itemtype="http://schema.org/Event"), false otherwise.
   return document.querySelectorAll('[itemtype*="Event"]').length > 0;
}

Keep in mind that this first Macro uses the browser’s native document.querySelectorAll, so it doesn’t need jQuery. The label Macro below does use jQuery, though, so make sure that whatever site you implement this on has jQuery installed, or that Macro won’t work.

While we’re here, we’ll also create a Macro to pull out specific itemprops that we want to use later – specifically, the event name and the performer name. We can then combine those two variables in our Macro function to form a sentence that we’ll use as an event label later on. I also added an if statement so that it returns “No semantic data” if any important properties are missing.

function () {
   // Pull the event name (the "name" itemprop inside the Event markup)
   // and the performer (speaker) name out of the page.
   var eventName = $('[itemtype*="Event"] [itemprop*="name"]').first().text();
   var performer = $('[itemtype*="Event"] [itemprop*="performer"]').text();

   // If the markup we need is missing, fall back to a recognizable label.
   if (eventName.length === 0) {
      return "No semantic data";
   }

   return performer + " at " + eventName + " (Semantic Event)";
}

Putting it all together

To actually set this up in Google Tag Manager, you’ll set up all the elements we just discussed in reverse order (do you get my previous Tarantino joke now?). First, create your Macros in GTM. Then create your Rule using the Macro you just created as one of the criteria. Finally, create your Tag that fires based on the Rule.

From there, you can push the new version of your GTM Container Tag live. If you’re smart, though, you’ll run it in Debug Mode first and make sure that you have it set up correctly.

Naming conventions

What good is a standardized vocabulary for your web data if you don’t have a standardized naming convention for your Google Tag Manager and Google Analytics setup? Here’s what I use, but feel free to use what works for you:

  • Macros: Semantic – {Item Type} Markup Detection
  • Macros: Semantic – {Item Type} Markup Properties
  • Rule: Semantic – Has {Item Type} Markup Rule
  • Tag: Semantic – {Item Type} Markup Analytics Event

Making it even easier

Thanks to Google Tag Manager’s amazing new API and Import/Export feature, you can speed up this whole process by importing a GTM Container Tag to your existing account. That way, you don’t have to set up any of the above; you can just import it.

All you have to do is download this JSON file called “Semantic Analytics Headstart” (Dropbox link) and then use the Import option in your Google Tag Manager account.

Within GTM, just select the Semantic Analytics Headstart JSON file you saved as your file to import, select Merge, and choose Overwrite. The only thing this Container Tag has in it is the Semantic Macros, Rules, and Tags, so Merge and Overwrite will simply add these special features to your existing configuration. Just note that the Semantic Tags reference a Macro that contains your Universal Analytics tracking ID (i.e., make sure to edit the Macro called “Universal Analytics UA-ID” and put in your own tracking ID).

Semantic data in Google Analytics

Congratulations! You now have all the pieces in place to start receiving semantic data in Google Analytics. Go ahead, go check your Real Time Events report. I’ll hang here.

Okay, seriously, how cool was that?

There’s something incredibly special about giving your data meaning. Whether you get that by having an intimate relationship with your data platform, having super-advanced tagging in place, or making your analytics truly semantic by applying the principles of the semantic web to your data collection, you’re doing something amazing. Now that you have semantic data in your analytics, you can drill down into specific categories and get some really cool information.

Another path

I feel like passing in semantic data as Events in Google Analytics is fairly straightforward, and the step-by-step process makes it fairly easy to grasp, but there’s another (perhaps even better) way to add semantic data to your analytics. In analytics speak, a “dimension” is a descriptive attribute of a data object. Sounds pretty similar to itemprops on the semantic web, eh? So, why not set up Custom Dimensions in Google Analytics and use those to enhance our semantic analytics? Let’s do it!

Fortunately, we’ve already put a lot of the pieces in place to access our semantic data, so we just have to create the Custom Dimension in Google Analytics and shoot data to it by adding a field in GTM. First, go to the Admin panel in your Google Analytics account and go to “Custom Definitions” > “Custom Dimensions”. From there, you’ll want to create a new Custom Dimension called “Semantic Markup” with the “Scope” of “Hit” and set it to be active.

Make a mental note of what the index is; you’ll need to specify it in Google Tag Manager. With the Semantic Event tag that we set up in GTM, we created an entirely new tag that would fire on pages with semantic markup. For Custom Dimensions, we’ll want to add something to our general analytics.js tag (the basic pageview tracking for Google Analytics). Once you find your main analytics tracking code in the list of tags, open it up and scroll down to Custom Dimensions (under More Settings). Click the button to “Add Custom Dimension,” use the same index that you made a note of, and, for the Dimension field, use the same Macro we used for our Event label: Semantic – Event Markup Properties.
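For reference, here’s roughly what that modified pageview tag ends up sending in plain analytics.js terms (assuming your Custom Dimension landed at index 1 – use whatever index Google Analytics actually assigned):

// A normal pageview with the semantic label attached as a custom
// dimension. "dimension1" is an assumption; the number must match
// the index from your Google Analytics admin panel.
ga('send', 'pageview', {
   'dimension1': 'Mike Arnesen at SearchFest (Semantic Event)'   // example label
});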

Once you have this set up, you’ll be able to bring in a “Semantic Markup” dimension to almost any Google Analytics report. Here’s an example All Pages report that now displays Semantic Markup in addition to the Page URL.

I introduced this Custom Dimension approach as “another path,” but really, I like to use it as a supplement and work both angles. Having both semantic events and semantic dimensions set up in Google Tag Manager won’t cause any issues; it will just give you more meaningful data. Who doesn’t love that?

Looking forward with semantic analytics

What can you accomplish by applying semantic values to your data? That’s what I’m most excited to find out.

I’m working on getting this up and running on sites that publish tons of content (Article markup), process thousands of eCommerce transactions (Product markup), and have lists of experts (Person markup). I’d love to see what semantic analytics could do for local business directories (Yelp), movie sites (IMDB), car dealerships, and recipe sites (my buddy Sam Edwards is already looking to implement this idea for Duncan Hines).

One of the biggest “mind blown” moments of my career was when I discovered that there was a whole semantic web community out there that wasn’t just concerned with marking up content to get better-looking snippets in the SERPs; they wanted to use semantic markup to make data more accessible and meaningful and to make the web a better place to be. I’m hoping that amazing folks like Aaron Bradley and Jarno van Driel will be able to help evolve this concept and inspire widespread adoption of semantic analytics.

If you have any questions, ideas for how this could be applied, or ways to extend this concept, let me know in the comments! Happy optimizing.



Moz Blog | October 28, 2014 | Posted in: SEO / Traffic / Marketing

Introducing Followerwonk Profile Pages

Posted by petebray

Followerwonk has always been primarily about social graph analysis and exploration: tracking follower growth, comparing relationships, and so on.

Followerwonk now adds content analysis and user profiling, too

In the Analyze tab, you’ll find a new option to examine any Twitter user’s tweets. (Note that this is a Pro-only feature, so you’ll need to be a subscriber to use it.)

You can also access these profile pages by simply clicking on a Twitter username anywhere else in Followerwonk.

For us, this feature is really exciting, because we let you analyze not just yourself, but other people too. In fact, Pro users can analyze as many other Twitter accounts as they want!

Now, you’ll doubtlessly learn lots by analyzing your own tweets. But you already probably have a pretty good sense of what content works well for you (and who you engage with frequently).

We feel that Profile Pages really move the needle by letting you surface the relationships and content strategies of competitors, customers, and prospects.

Let’s take a closer look.

Find the people any Twitter user engages with most frequently

Yep, just plug in a Twitter name and we’ll analyze their most recent 2,000 tweets. We’ll extract all of the mentions and determine which folks they talk to the most.
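The core of that mention analysis is simple enough to sketch in a few lines of JavaScript (my own illustration, not Followerwonk’s actual code):

// Pull every @mention out of a batch of tweet texts and rank the
// mentioned handles by frequency.
function topMentions(tweets) {
   var counts = {};
   tweets.forEach(function (tweet) {
      (tweet.match(/@(\w+)/g) || []).forEach(function (handle) {
         handle = handle.toLowerCase();
         counts[handle] = (counts[handle] || 0) + 1;
      });
   });
   // Handles sorted from most to least frequently mentioned.
   return Object.keys(counts).sort(function (a, b) {
      return counts[b] - counts[a];
   });
}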

Here, we see that @dr_pete talks most frequently with (or about) Moz, Rand, Elisa, and Melissa. In fact, close to 10% of his tweets are talking to these four! (Note the percentage above each listed name.)

This analysis is helpful as it lets you quickly get a sense for the relationships that are important for this person. That provides possible inroads to that person in terms of engagement strategies.

Chart when and what conversations happen with an analyzed user’s most important relationships

We don’t just stop there. By clicking on the little “see engagement” link below each listed user, you can see the history of the relationship.

Here, we can see when the engagements happened in the little chart. And we actually show you the underlying tweets, too.

This is a great way to quickly understand the context of that relationship: is it a friendly back and forth, a heated exchange, or the last gasp of a bad customer experience? Perhaps the tweets from a competitor to one of his top customers occurred weeks back? Maybe there’s a chance for you to make inroads to that customer?

There’s all sorts of productive tea-reading that can happen with this feature. And, by the way, don’t forget that you already have the ability to track all the relationships a competitor forms (or breaks), too.

Rank any Twitter user’s tweets by importance to surface their best content

This is my favorite feature—by far—in Followerwonk.

Sure, there are other tools that tell you your most popular tweets, but there are few that let you turn that feature around and examine other Twitter users. This is important because (let’s face it) few of us have the volume of RTs and favorites to make self-analysis that useful. But when we examine top Twitter accounts, we come away with hints about what content strategies they’re using that work well.

Here we see that Obama’s top tweets include a tribute, an irreverent bit of humor, and an image that creatively criticizes a recent Supreme Court ruling. What lessons might you draw from the content that works best for Obama? What content works best for other people? Their image tweets? Tweets with humor? Shorter tweets? Tweets with links? Go do some analyzing!

Uncover the top source domains of any Twitter user

Yep, we dissect all the URLs for any analyzed user to assemble a list of their top domains.

This feature offers a great way to quickly snapshot the types of content and sources that users draw material from. Moreover, we can click on “see mentions” to see a timeline of when those mentions occurred for each domain, as well as what particular tweets accounted for them.

In sum…

These features offer exciting ways to quickly profile users. Such analysis should be at the heart of any engagement strategy: understand who your target most frequently engages with, what content makes them successful, and what domains they pull from.

At the same time, this approach reveals content strategies—what, precisely, works well for you, but also for other thought leaders in your category. Not only can you draw inspiration from this approach, but you can find content that might deserve a retweet (or reformulation in your own words).

I don’t want to go too Freudian on you, but consider this: what’s the value of self-analysis? My point is that unless you have a lot of data, any analytics product isn’t going to be totally useful. That’s why this addition to Followerwonk is so powerful. Now you can analyze others, including thought leaders in your particular industry, to find the secrets of their social success.

Start analyzing!

Finally, this is a bittersweet blog post for me. It’s my last one as a Mozzer. I’m off to try my hand at another bootstrapping startup: this time, software that lets you build feature tours and elicit visitor insights. I’m leaving Followerwonk in great hands, and I look forward to seeing awesome new features down the line. Of course, you can always stay in touch with me on Twitter. Keep on wonkin’!



Moz Blog | October 27, 2014 | Posted in: SEO / Traffic / Marketing

Why is a Responsive Website So Important?

Why is a Responsive Website So Important? is a post by SEO expert SEO.com. For information about our SEO services or more great SEO tips and tricks, visit the SEO.com blog.

We are obsessed with instant response. In a world full of devices that can access the Internet at the touch of a button or the flick of a finger, we’ve become more reliant on fast connections and responsive websites. If a website doesn’t load properly on our smartphone or tablet, you better believe we’re already on our way to a website that will.

A responsive website has a different connotation than you might think. When we think responsive, we usually think only about whether the site works right. There are many websites that work right on our smartphones and tablets, but for some reason, they don’t fit the smaller-sized screen. It can quickly get annoying when a user has to scroll back and forth and up and down to get the full content of the site.

A webpage is considered responsive when it automatically fits to the screen, regardless of what device you are using to view it. This means that you won’t have to worry about developing different sites to work for all the different devices, which is always a huge plus.

Google’s Thoughts

You may have heard that Google is a pretty big deal these days. Big surprise, right? But sometimes it’s important to remind yourself that, according to comScore, the search engine still has a 67.5 percent market share, as of March 2014. So as much as we may dislike being led around by the whims of a single company, the simple fact is that they know what they’re talking about when it comes to user experience. And if they say they prefer one type of website over another, we need to listen. And they have clearly stated that they prefer responsive websites over multiple versions of the website.

To get a little more technical, Google refers to this type of design as “a setup where the server always sends the same HTML code to all devices and CSS is used to alter the rendering of the page on the device using media queries.” In other words, Google doesn’t want to have to index two sites when it could just focus on one.
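In practice, that boils down to serving one document at one URL and letting the stylesheet adapt it. A bare-bones sketch of such a media query:

/* Desktop-first layout: the same HTML is served to every device. */
.main-content { width: 960px; margin: 0 auto; }

/* On narrow viewports, the same markup reflows to fit the screen. */
@media (max-width: 600px) {
   .main-content { width: 100%; }
}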

Google prefers this setup over others because it makes it easier for their bots to crawl the sites and index and organize everything that is found online. This is because your site will have one URL and the same HTML code across the board. If you have both desktop and mobile versions of your website, there will be different URLs and HTML, which translates to more work for the Googlebots. (And the last thing you want to do when you want to impress someone is make them work harder.)

Google and SEO

A responsive design for your website is important for your SEO as well. Google has recently placed a larger emphasis on user experience as a ranking factor for your site. On top of that, a single URL helps users share your site more easily through social media channels. If you’re not using responsive design, a mobile user could share a link only to have a desktop user open it and find a stripped-down mobile version, which creates an unpleasant user experience. The opposite also applies, of course, with a desktop user sharing something that looks great at home but is unmanageable on a smaller screen.

Your SEO rankings can improve by creating a better experience for your users without having to consider what devices they are using.

Conversion

What’s the point of having people browse your sites if none of them are converting and buying your products? It’s one thing to provide a good user experience to the customers, but it’s another thing completely to make sure your site has been built to guide them toward conversion. If a user is having problems navigating through your site structure, they’re less likely to buy anything.

One study has recently shown that 69% of tablet users have shopped online. This is indicative of the increasing number of mobile users putting their devices to work on everyday tasks. If it’s harder for them to actually buy something, do you think they will still convert? It’s a lot less likely. Responsive design can help with higher conversion rates because it creates an easier browsing environment for your customers no matter where they are.

Easy to Manage

We’ve talked a lot about creating a great experience for your customers, but what about you? There is a certain level of thought that needs to be put into making things easier for you to run the site, keep it updated, and make sure you’re not falling behind the trends.

If you have two different URLs for a mobile and desktop version of your site, you will need to have two separate SEO campaigns. Having responsive-designed web pages will create less work for your developing team. It also means more cost effectiveness in the long run.

Doing What’s Best

In the end, every decision you make should have some positive effect on you, your team, and your customers. The facts are all laid out above. Mobile usage is increasing, people are happier with faster loading sites, and Google says responsive design is their recommended configuration.

A responsive design can create a real difference in your website’s performance, and with some work, you can make something good come out of it.




SEO.com » Blog | October 27, 2014 | Posted in: SEO / Traffic / Marketing

How Big Was Penguin 3.0?

Posted by Dr-Pete

Sometime in the last week, the first Penguin update in over a year began to roll out (Penguin 2.1 hit around October 4, 2013). After a year, emotions were high, and expectations were higher. So, naturally, people were confused when MozCast showed the following data:

The purple bar is Friday, October 17th, the day Google originally said Penguin 3.0 rolled out. Keep in mind that MozCast is tuned to an average temperature of roughly 70°F. Friday’s temperature was slightly above average (73.6°), but nothing in the last few days indicates a change on the scale of the original Penguin update. For reference, Penguin 1.0 measured a scorching 93°F.

So, what happened? I’m going to attempt to answer that question as honestly as possible. Fair warning – this post is going to dive very deep into the MozCast data. I’m going to start with the broad strokes, and paint the finer details as I go, so that anyone with a casual interest in Penguin can quit when they’ve seen enough of the picture.

What’s in a name?

We think that naming something gives us power over it, but I suspect the enchantment works both ways – the name imbues the update with a certain power. When Google or the community names an algorithm update, we naturally assume that update is a large one. What I’ve seen across many updates, such as the 27 named Panda iterations to date, is that this simply isn’t the case. Panda and Penguin are classifiers, not indicators of scope. Some updates are large, and some are small – updates that share a name share a common ideology and code-base, but they aren’t all equal.

Versioning complicates things even more – if Barry Schwartz or Danny Sullivan name the latest update “3.0”, it’s mostly a reflection that we’ve waited a year and we all assume this is a major update. That feels reasonable to most of us. That doesn’t necessarily mean that this is an entirely new version of the algorithm. When a software company creates a new version, they know exactly what changed. When Google refreshes Panda or Penguin, we can only guess at how the code changed. Collectively, we do our best, but we shouldn’t read too much into the name.

Was this Penguin just small?

Another problem with Penguin 3.0 is that our expectations are incredibly high. We assume that, after waiting more than a year, the latest Penguin update will hit hard and will include both a data refresh and an algorithm update. That’s just an assumption, though. I firmly believe that Penguin 1.0 had a much broader, and possibly much more negative, impact on SERPs than Google believed it would, and I think they’ve genuinely struggled to fix and update the Penguin algorithm effectively.

My beliefs aside, Pierre Far tried to clarify Penguin 3.0’s impact on Oct 21, saying that it affected less than 1% of US/English queries, and that it is a “slow, worldwide rollout”. Interpreting Google’s definition of “percent of queries” is tough, but the original Penguin (1.0) was clocked by Google as impacting 3.1% of US/English queries. Pierre also implied that Penguin 3.0 was a data “refresh”, and possibly not an algorithm change, but, as always, his precise meaning is open to interpretation.

So, it’s possible that the graph above is correct, and either the impact was relatively small, or that impact has been spread out across many days (we’ll discuss that later). Of course, many reputable people and agencies are reporting Penguin hits and recoveries, which raises the question – why doesn’t their data match ours?

Is the data just too noisy?

MozCast has shown me with alarming clarity exactly how messy search results can be, and how dynamic they are even without major algorithm updates. Separating the signal from the noise can be extremely difficult – many SERPs change every day, sometimes multiple times per day.

More and more, we see algorithm updates where a small set of sites are hit hard, but the impact over a larger data set is tough to detect. Consider the following two hypothetical situations:

The data points on the left have an average temperature of 70°, with one data point skyrocketing to 110°. The data points on the right have an average temperature of 80°, and all of them vary between about 75-85°. So, which one is the update? A tool like MozCast looks at the aggregate data, and would say it’s the one on the right. On average, the temperature was hotter. It’s possible, though, that the graph on the left represents a legitimate update that impacted just a few sites, but hit those sites hard.

Your truth is your truth. If you were the red bar on the left, then that change to you is more real than any number I can put on a graph. If the unemployment rate drops from 6% to 5%, the reality for you is still either that you have a job or don’t have a job. Averages are useful for understanding the big picture, but they break down when you try to apply them to any one individual case.

The purpose of a tool like MozCast, in my opinion, is to answer the question “Was it just me?” We’re not trying to tell you if you were hit by an update – we’re trying to help you determine if, when you are hit, you’re the exception or the rule.

Is the slow rollout adding noise?

MozCast is built around a 24-hour cycle – it is designed to detect day-over-day changes. What if an algorithm update rolls out over a couple of days, though, or even a week? Is it possible that a relatively large change could be spread thin enough to be undetectable? Yes, it’s definitely possible, and we believe Google is doing this more often. To be fair, I don’t believe their primary goal is to obfuscate updates – I suspect that gradual rollouts are just safer and allow more time to address problems if and when things go wrong.

While MozCast measures in 24-hour increments, the reality is that there’s nothing about the system limiting it to that time period. We can just as easily look at the rate of change over a multi-day window. First, let’s stretch the MozCast temperature graph from the beginning of this post out to 60 days:

For reference, the average temperature for this time period was 68.5°. Please note that I’ve artificially constrained the temperature axis from 50-100° – this will help with comparisons over the next couple of graphs. Now, let’s measure the “daily” temperature again, but this time we’ll do it over a 48-hour (2-day) period. The red line shows the 48-hour flux:

It’s important to note that 48-hour flux is naturally higher than 24-hour flux – the average of the 48-hour flux for these 60 days is 80.3°. In general, though, you’ll see that the pattern of flux is similar. A longer window tends to create a smoothing effect, but the peaks and valleys are roughly similar for the two lines. So, let’s look at 72-hour (3-day) flux:

The average 72-hour flux is 87.7° over the 60 days. Again, except for some smoothing, there’s not a huge difference in the peaks and valleys – at least nothing that would clearly indicate the past week has been dramatically different from the past 60 days. So, let’s take this all the way and look at a full 7-day flux calculation:

I had to bump the Y-axis up to 120°, and you’ll see that smoothing is in full force – making the window any larger is probably going to risk over-smoothing. While the peaks and valleys start to time-shift a bit here, we’re still not seeing any obvious climb during the presumed Penguin 3.0 timeline.

Could Penguin 3.0 be spread out over weeks or a month? Theoretically, it’s possible, but I think it’s unlikely given what we know from past Google updates. Practically, this would make anything but a massive update very difficult to detect. Too much can change in 30 days, and that base rate of change, plus whatever smaller updates Google launched, would probably dwarf Penguin.

What if our keywords are wrong?

Is it possible that we’re not seeing Penguin in action because of sampling error? In other words, what if we’re just tracking the wrong keywords? This is a surprisingly tough question to answer, because we don’t know what the population of all searches looks like. We know what the population of Earth looks like – we can’t ask seven billion people to take our survey or participate in our experiment, but we at least know the group that we’re sampling. With queries, only Google has that data.

The original MozCast was publicly launched with a fixed set of 1,000 keywords sampled from Google AdWords data. We felt that a fixed data set would help reduce day-over-day change (unlike using customer keywords, which could be added and deleted), and we tried to select a range of phrases by volume and length. Ultimately, that data set did skew a bit toward commercial terms and tended to contain more head and mid-tail terms than very long-tail terms.

Since then, MozCast has grown to what is essentially 11 weather stations of 1,000 different keywords each, split into two sets for analysis of 1K and 10K keywords. The 10K set is further split in half, with 5K keywords targeted to the US (delocalized) and 5K targeted to 5 cities. While the public temperature still usually comes from the 1K set, we use the 10K set to power the Feature Graph and as a consistency check and analysis tool. So, at any given time, we have multiple samples to compare.

So, how did the 10K data set (actually, 5K delocalized keywords, since local searches tend to have more flux) compare to the 1K data set? Here’s the 60-day graph:

While there are some differences in the two data sets, you can see that they generally move together, share most of the same peaks and valleys, and vary within roughly the same range. Neither set shows clear signs of large-scale flux during the Penguin 3.0 timeline.

Naturally, there are going to be individual SEOs and agencies that are more likely to track clients impacted by Penguin (who are more likely to seek SEO help, presumably). Even self-service SEO tools have a certain degree of self-selection – people with SEO needs and issues are more likely to use them and to select problem keywords for tracking. So, it’s entirely possible that someone else’s data set could show a more pronounced Penguin impact. Are they wrong or are we? I think it’s fair to say that these are just multiple points of view. We do our best to make our sample somewhat random, but it’s still a sample and it is a small and imperfect representation of the entire world of Google.

Did Penguin 3.0 target a niche?

In the sense that every algorithm update targets only a select set of sites, pages, or queries, then yes – every update is a “niche” update. The only question we can pose to our data is whether Penguin 3.0 targeted a specific industry category/vertical. The 10K MozCast data set is split evenly into 20 industry categories. Here’s the data from October 17th, the supposed date of the main rollout:

Keep in mind that, split 20 ways, the category data for any given day is a pretty small set. Also, categories naturally stray a bit from the overall average. All of the 20 categories recorded temperatures between 61.7-78.2°. The “Internet & Telecom” category, at the top of the one-day readings, usually runs a bit above average, so it’s tough to say, given the small data set, if this temperature is meaningful. My gut feeling is that we’re not seeing a clear, single-industry focus for the latest Penguin update. That’s not to say that the impact didn’t ultimately hit some industries harder than others.

What if our metrics are wrong?

If the sample is fundamentally flawed, then the way we measure our data may not matter that much, but let’s assume that our sample is at least a reasonable window into Google’s world. Even with a representative sample, there are many, many ways to measure flux, and all of them have pros and cons.

MozCast still operates on a relatively simple metric, which essentially looks at how much the top 10 rankings on any given day change compared to the previous day. This metric is position- and direction-agnostic, which is to say that a move from #1 to #3 is the same as a move from #9 to #7 (they’re both +2). Any keyword that drops off the rankings is a +10 (regardless of position), and any given keyword can score a change from 0-100. This metric, which I call “Delta100”, is roughly linearly transformed by taking the square root, resulting in a metric called “Delta10”. That value is then multiplied by a constant based on an average temperature of 70°. The transformations involve a little more math, but the core metric is pretty simplistic.
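For the curious, here’s a rough reconstruction of that core calculation from the description above – my own sketch, not MozCast’s actual code:

// Each argument is the ordered array of top-10 ranking URLs for one
// keyword on consecutive days.
function delta100(yesterdayTop10, todayTop10) {
   var total = 0;
   yesterdayTop10.forEach(function (url, oldIndex) {
      var newIndex = todayTop10.indexOf(url);
      // Dropping out of the top 10 scores +10; otherwise, score the
      // absolute position change (direction-agnostic).
      total += (newIndex === -1) ? 10 : Math.abs(oldIndex - newIndex);
   });
   return total;   // 0-100 per keyword
}

function temperature(allKeywordDelta100s) {
   // Delta10 is the square root of Delta100; average it across all
   // keywords, then scale toward the ~70 degree baseline.
   var sum = allKeywordDelta100s.reduce(function (acc, d) {
      return acc + Math.sqrt(d);
   }, 0);
   var CALIBRATION = 28;   // hypothetical constant; the real one is tuned
   return (sum / allKeywordDelta100s.length) * CALIBRATION;
}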

This simplicity may lead people to believe that we haven’t developed more sophisticated approaches. The reality is that we’ve tried many metrics, and they tend to all produce similar temperature patterns over time. So, in the end, we’ve kept it simple.

For the sake of this analysis, though, I’m going to dig into a couple of those other metrics. One metric that we calculate across the 10K keyword set uses a scoring system based on a simple CTR curve. A change from, say #1 to #3 has a much higher impact than a change lower in the top 10, and, similarly, a drop from the top of page one has a higher impact than a drop from the bottom. This metric (which I call “DeltaX”) goes a step farther, though…


If you’re still riding this train and you have any math phobia at all, this may be the time to disembark. We’ll pause to make a brief stop at the station to let you off. Grab your luggage, and we’ll even give you a couple of drink vouchers – no hard feelings.


If you’re still on board, here’s where the ride gets bumpy. So far, all of our metrics are based on taking the average (mean) temperature across the set of SERPs in question (whether 1K or 10K). The problem is that, as familiar as we all are with averages, they generally rely on certain assumptions, including data that is roughly normally distributed.

Core flux, for lack of a better word, is not remotely normally distributed. Our main Delta100 metric falls roughly on an exponential curve. Here’s the 1K data for October 21st:

The 10K data looks smoother, and the DeltaX data is smoother yet, but the shape is the same. A few SERPs/keywords show high flux, they quickly drop into mid-range flux, and then it all levels out. So, how do we take an average of this? Put simply, we cheat. We tested a number of transformations and found that the square root of this value helped create something a bit closer to a normal distribution. That value (Delta10) looks like this:

If you have any idea what a normal distribution is supposed to look like, you’re getting pretty itchy right about now. As I said, it’s a cheat. It’s the best cheat we’ve found without resorting to some really hairy math or entirely redefining the mean based on an exponential function. This cheat is based on an established methodology – Box-Cox transformations – but the outcome is admittedly not ideal. We use it because, all else being equal, it works about as well as other, more complicated solutions. The square root also handily reduces our data to a range of 0-10, which nicely matches a 10-result SERP (let’s not talk about 7-result SERPs… I SAID I DON’T WANT TO TALK ABOUT IT!).
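For reference, the Box-Cox family of power transformations is

$$y^{(\lambda)} = \begin{cases} \dfrac{x^{\lambda} - 1}{\lambda}, & \lambda \neq 0 \\ \ln x, & \lambda = 0 \end{cases}$$

and the square root is, up to shifting and scaling, just the λ = 1/2 case.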

What about the variance? Could we see how the standard deviation changes from day-to-day instead? This gets a little strange, because we’re essentially looking for the variance of the variance. Also, noting the transformed curve above, the standard deviation is pretty unreliable for our methodology – the variance on any given day is very high. Still, let’s look at it, transformed to the same temperature scale as the mean/average (on the 1K data set):

While the variance definitely moves along a different pattern than the mean, it moves within a much smaller range. This pattern doesn’t seem to match the pattern of known updates well. In theory, I think tracking the variance could be interesting. In practice, we need a measure of variance that’s based on an exponential function and not our transformed data. Unfortunately, such a metric is computationally expensive and would be very hard to explain to people.

Do we have to use mean-based statistics at all? When I experimented with different approaches to DeltaX, I tried using a median-based approach. It turns out that the median flux for any given day is occasionally zero, so that didn’t work very well, but there’s no reason – at least in theory – that the median has to be measured at the 50th percentile.

This is where you’re probably thinking “No, that’s *exactly* what the median has to measure – that’s the very definition of the median!” Ok, you got me, but this definition only matters if you’re measuring central tendency. We don’t actually care what the middle value is for any given day. What we want is a metric that will allow us to best distinguish differences across days. So, I experimented with measuring a modified median at the 75th percentile (I call it “M75” – you’ve probably noticed I enjoy codenames) across the more sophisticated DeltaX metric.

That probably didn’t make a lot of sense. Even in my head, it’s a bit fuzzy. So, let’s look at the full DeltaX data for October 21st:

The larger data set and more sophisticated metric makes for a smoother curve, and a much clearer exponential function. Since you probably can’t see the 1,250th data point from the left, I’ve labelled the M75. This is a fairly arbitrary point, but we’re looking for a place where the curve isn’t too steep or too shallow, as a marker to potentially tell this curve apart from the curves measured on other days.
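Computing an M75 over a day’s flux values is trivial, for what it’s worth – a quick sketch (mine, not MozCast’s):

// Sort the day's per-keyword flux values and read off the 75th
// percentile instead of the usual 50th (the median).
function m75(fluxValues) {
   var sorted = fluxValues.slice().sort(function (a, b) { return a - b; });
   return sorted[Math.floor(0.75 * (sorted.length - 1))];
}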

So, if we take all of the DeltaX-based M75’s from the 10K data set over the last 60 days, what does that look like, and how does it compare to the mean/average of Delta10s for that same time period?

Perhaps now you feel my pain. All of that glorious math and even a few trips to the edge of sanity and back, and my wonderfully complicated metric looks just about the same as the average of the simple metric. Some of the peaks are a bit peakier and some a bit less peakish, but the pattern is very similar. There’s still no clear sign of a Penguin 3.0 spike.

Are you still here?

Dear God, why? I mean, seriously, don’t you people have jobs, or at least a hobby? I hope now you understand the complexity of the task. Nothing in our data suggests that Penguin 3.0 was a major update, but our data is just one window on the world. If you were hit by Penguin 3.0 (or if you received good news and recovered) then nothing I can say matters, and it shouldn’t. MozCast is a reference point to use when you’re trying to figure out whether the whole world felt an earthquake or there was just construction outside your window. 



Moz Blog | October 25, 2014 | Posted in: SEO / Traffic / Marketing


