My Favorite SEO Tool

March 24 2015 // SEO // 16 Comments

My favorite SEO tool isn’t an SEO tool at all. Don’t get me wrong, I use and like plenty of great SEO tools. But I realized that I was using this one tool all the time.

Chrome Developer Tools, how I love thee. Let me count the ways.

Chrome Developer Tools

The one tool I use countless times each day is Chrome Developer Tools. You can find this handy tool under the View -> Developer menu in Chrome.

Chrome Developer Tools Menu

Or you can simply right click and select Inspect Element. (I suppose the latter is actually easier.) Here’s what it looks like (on this site) when you open Chrome Developer Tools.

Chrome Developer Tools In Action

There is just an incredible amount of functionality packed into Chrome Developer Tools. Some of it is super technical and I certainly don’t use all of the features. I’m only going to scratch the surface with this post.

But hopefully you’re not overwhelmed by it all because there are some simple features that are really helpful on a day-to-day basis.

Check Status Codes

One of the simplest things to do is to use the Network tab to check on the status code of a page. For instance, how does a site handle domain-level canonicalization?

Chrome Developer Tools Network Tab

With the Network tab open I go directly to the non-www version of this site and I can see how it redirects to the www version. In this case it’s doing exactly what it’s supposed to do.

If I want more information I can click on any of these line items and see the header information.

Chrome Developer Tools Network Detail

You can catch some pretty interesting things by looking at what comes through the Network tab. For instance, soon after a client transitioned from HTTP to HTTPS I noted the following response code chain.

An HTTPS request for a non-www URL returned a 301 to the www HTTP version (domain-level canonicalization) and then did another 301 to the www HTTPS version of that URL.

The double 301 and routing from HTTPS to HTTP and back again can (and should) be avoided by doing the domain-level canonicalization and HTTPS redirect at the same time. So that’s what we did … in the span of an hour!
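
If you want to spot check a redirect chain outside the browser, a short script will surface every hop for you. Here’s a minimal sketch using Python’s requests library; the URL is a placeholder, and your own chain will obviously differ.

```python
import requests

# Placeholder URL: swap in the non-www or HTTP variant you want to test.
response = requests.get("https://example.com/", allow_redirects=True, timeout=10)

# response.history holds every intermediate response in order, so a double 301
# (or an HTTPS -> HTTP -> HTTPS bounce) is easy to spot.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

print(response.status_code, response.url)  # final destination
```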

I won’t get into the specifics of what you can tease out of the headers here because it would get way too dense. But suffice it to say they can be a treasure trove of information.

Of course there are times I fire up something more detailed like Charles or Live HTTP Headers, but I’m doing so less frequently given the advancements in Chrome Developer Tools.

Check Mobile

There was a time when checking to see how a site would look on mobile was a real pain in the ass. But not with Chrome Developer Tools!

Chrome Developer Tools Viewport Rendering

The little icon that looks like a mobile phone is … awesome. Click it!

Chrome Developer Tools Select Mobile Device

Now you can select a Device and reload the page to see how it looks on that device. Here’s what this site looks like on mobile.

Chrome Developer Tools Mobile Test

The cool thing is you can even click around and navigate on mobile in this interface to get a sense of what the experience is really like for mobile users without firing up your own phone.

A little bonus tip here is that you can clear the device by clicking the icon to the left and then use the UA field to do specific User Agent (UA) testing.

Chrome Developer Tools Override

For instance, without a Device selected, what happens when Googlebot Smartphone hits my site? All I have to do is use the UA override and put in the Googlebot Smartphone User Agent.

Chrome Developer Tools UA Override Example

Sure enough it looks like Googlebot Smartphone will see the page correctly. This is increasingly important as we get closer to the 4/21/15 mopocalypse.

You can copy and paste from the Google Crawlers list or use one of a number of User Agent extensions (like this one) to do this. However, if you use one of the User Agent extensions you won’t see the UA show up in the UA field. But you can confirm it’s working via the headers in the Network tab.
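
You can do the same sort of spot check outside the browser too. Here’s a rough sketch using Python’s requests library; the user agent is deliberately left as a placeholder you’d paste in from the Google Crawlers list, since the exact Googlebot Smartphone string changes over time.

```python
import requests

# Paste the current Googlebot Smartphone string from the Google Crawlers list here.
GOOGLEBOT_SMARTPHONE_UA = "<googlebot smartphone user agent>"

response = requests.get(
    "https://example.com/",  # placeholder URL
    headers={"User-Agent": GOOGLEBOT_SMARTPHONE_UA},
    allow_redirects=True,
    timeout=10,
)

# Check whether the crawler gets redirected (e.g. to an m-dot version) and
# whether the response looks like the mobile template.
print(response.status_code, response.url)
print("viewport meta present:", 'name="viewport"' in response.text)
```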

Show Don’t Tell

The last thing I’ll share is how I use Chrome Developer Tools to show instead of tell clients about design and readability issues.

If you go back to some of my older posts you’ll find that they’re not as readable. I had to figure this stuff out as I went along.

Show Don't Tell Irony

This is a rather good post about Five Foot Web Design, which pretty much violates a number of the principles described in the piece. I often see design and readability issues and it can be difficult for a client to get that feedback, particularly if I’m just pointing out the flaws and bitching about it.

So instead I give them a type of side-by-side comparison by editing the HTML in Chrome Developer Tools and then taking a screen capture of the optimized version I’ve created.

You do this by using the Elements tab (1) and then using the Inspect tool (2) to find the area of the code you want to edit.

Chrome Developer Tools Elements Tab

The Inspect tool is the magnifying glass, in case you’re confused, and it lets you zero in on an area of the page. It will highlight the section on the page and then show where that section resides in the code below.

Now, the next step can be a bit scary because you’re just wading into the HTML to tweak what the page looks like.

Chrome Developer Tools Edit HTML

A few things to remember here. You’re not actually changing the code on that site or page. You can’t hurt that site by playing with the code here. Trust me, I screw this up all the time because I know just enough HTML and CSS to be dangerous.

In addition, if you reload this page after you’ve edited it using Chrome Developer Tools all of your changes will vanish. It’s sort of like an Etch-A-Sketch. You doodle on it and then you shake it and it disappears.

So the more HTML you know the more you can do in this interface. I generally just play with stuff until I get it to look how I want it to look.

Chrome Developer Tools HTML Edit Result

Here I’ve added a header of sorts and changed the font size and line height. I do this sort of thing for a number of clients so I can show them what I’m talking about. A concrete example helps them understand and also gives them something to pass on to designers and developers.

TL;DR

Chrome Developer Tools is a powerful suite of tools that any SEO should be using to make their lives easier and more productive.

Non-Linking URLs Seen As Links

March 20 2015 // SEO // 26 Comments

(This post has been updated so make sure you read all the way to the bottom.)

Are non-linking URLs (pasted URLs) seen as links by Google? There’s long been chatter and rumor among various members of the SEO community that they are. I found something the other day that seems to confirm this.

Google Webmaster Tools Crawl Errors

I keep a close eye on the Crawl Errors report in Google Webmaster Tools with a particular focus on ‘Not found’ errors. I look to see if they’re legitimate and whether they’re linked internally (which is very bad) or externally.

The place to look for this information is in the ‘Linked from’ tab of a specific error.

Linked From Tab on 404 Error

Now, all too often the internal links presented here are woefully out-of-date (and that’s being generous.) You click through, search for the link in the code and don’t find it. Again and again and again. Such was the case here. This is extremely annoying but is a topic for another blog post.

Instead let’s focus on that one external link. Because I figured this was the reason Google continued to return the page as an error even though 1stdibs had stopped linking to it ages ago.

Pasted URL Seen As Link?

That’s not a link! It’s a pasted URL but it’s not a link. (Ignore the retargeted ad.) Looking at the code there’s no <a> tag. Maybe it was there and then removed but that … doesn’t seem likely. In addition, I’ve seen a few more examples of this behavior but didn’t capture them at the time and have since marked those errors as fixed. #kickingmyself

Google (or a tool Google provides) is telling me that the page in question links to this 404 page.

Non-Linking URLs Treated As Links?

This Is Not A Link

It’s not a stretch to think that Google would be able to recognize the pattern of a URL in text and, thus, treat it as a link. And there are good reasons why they might want to since many unsophisticated users botch the HTML.

By treating pasted URLs as links Google can recover those citations, acknowledge the real intent and pass authority appropriately. (Though it doesn’t look like they’re passing authority; instead they seem to use these URLs solely for discovery.)
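
As a rough illustration of how simple that pattern recognition can be, here’s a toy sketch that pulls URL-looking strings out of plain text with a regular expression. It’s nowhere near whatever Google actually runs, just the general idea.

```python
import re

# A deliberately simple pattern: scheme, host and everything up to whitespace.
URL_PATTERN = re.compile(r"https?://[^\s<>\"']+")

text = "Found this on their blog: https://example.com/some-page (not an actual <a> tag)."

# Every match is a pasted, non-linking URL that a crawler could still choose to fetch.
for match in URL_PATTERN.findall(text):
    print(match)
```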

All of this is interesting from an academic perspective but doesn’t really change a whole lot in the scheme of things. Hopefully you’re not suddenly thinking that you should go out and try to secure non-linking URLs. (Seriously, don’t!)

What’s your take? Is this the smoking gun proof that Google treats non-linking URLs as links?

[Update]

Apparently John Mueller confirmed this in a Google+ Hangout back in September of 2013. So while seeing it in Google Webmaster Tools might be new(ish), Google clearly acknowledges and crawls non-linked URLs. Thanks to Glenn Gabe for pointing me to this information.

In addition, Dan Petrovic did a study to determine if non-linking URLs influenced rankings and found it likely that they did not. This makes a bit of sense, since you wouldn’t be able to nofollow these pasted URLs, which would open the door to abuse via blog comments.

Aggregating Intent

March 13 2015 // SEO // 14 Comments

Successful search engine optimization strategies must aggregate intent. This is something I touched on in my What Is SEO post and also demonstrated in my Rich Snippets Algorithm piece. But I want to talk about it in depth because it’s that important.

Aggregating Intent

Many of Google’s Knowledge Cards aggregate intent. Here’s the Knowledge Card displayed when I search for ‘va de vi’.

Knowledge Card Aggregates Intent

Google knows that Va de Vi is a restaurant. But they don’t quite know what my intent is behind such a broad query. Before Knowledge Cards, Google would rely on providing a mixture of results to satisfy different intents. This was effective but inefficient and incomplete. Knowledge Cards make aggregating intent a breeze.

What type of restaurant is it? Is it expensive? Where is it? How do I get there? What’s their phone number? Can I make a reservation? What’s on the menu? Is the food good? Is it open now? What alternatives are nearby?

Just look at that! In one snapshot this Knowledge Card satisfies a multitude of intents and does so quickly.

It’s not just restaurants either. Here’s a Knowledge Card result for ‘astronautalis’.

Aggregating Intent in Google Knowledge Cards

Once again you can see a variety of intents addressed by this Knowledge Card. Who is Astronautalis? Can I listen to some of his music? Where is he playing next? What are some of his popular songs? How can I connect with him? What albums has he released?

Google uses Knowledge Cards to quickly aggregate multiple intents and essentially cover all their bases when it comes to entity based results. If it’s good enough for Google shouldn’t it be good enough for you?

Active and Passive Intent

Aggregating Intent

So how does this translate into the search strategies you and I can implement? The easiest way to think about this is to understand that each query comes with active and passive intent.

Active intent is the intent that is explicitly described by the query syntax. A search for ‘bike trails in walnut creek’ is explicitly looking for a list of bike trails in walnut creek. (Thank you captain obvious.)

You must satisfy active intent immediately.

If a user doesn’t immediately see that their active intent has been satisfied they’re going to head back to search results. Trust me, you don’t want that. Google doesn’t like pogosticking. This means that at a glance users must see the answer to their active intent.

One of the mistakes I see many making is addressing active and passive intent equally. Or simply not paying attention to query syntax and decoding intent properly. More than ever, your job as an SEO is to extract intents from query syntax.

Passive intent is the intent that is implicitly described by the query syntax. A search for ‘bike trails in walnut creek’ is implicitly looking for trail maps, trail photos, trail reviews and attributes about those trails such as difficulty and length to name a few.

You create value by satisfying passive intent.

When you satisfy passive intent you’ll see page views per session and time on site increase. You’re ensuring that your site generates long clicks, which is incredibly important from a search engine perspective. It also happens to be the way you build your brand, convert users and wean yourself from being overly dependent on search engine traffic.

I think one of the best ways to think about passive intent is to ask yourself what the user would search for next … over and over again.

Intent Hierarchy

First You Looked Here, Then Here

It’s essential to understand the hierarchy of intent so you can deliver the right experience. This is where content and design collide with “traditional” search. (I use the quotes here because I’ve never really subscribed to search being treated as a niche tactic.)

SEO is a user centric activity in this context. The content must satisfy active and passive intent appropriately. Usually this means that there is ample content to address active intent and units or snippets to satisfy passive intent.

The design must prominently feature active intent content while providing visual cues or a trail of sorts to show that passive intent can also be satisfied. These things are important to SEO.

We can look at Google’s Knowledge Cards to see how they prioritize intent. Sometimes it’s the order in which the content is presented. For instance, usually the ‘people also search for’ is at the bottom of the card. These alternatives always represent passive intent.

For location based entities the map and directions are clearly given more priority by being at the top (and having a strong call to action). While the reviews section is often presented later on, it takes up a material amount of real estate, signaling higher (and potentially active) intent. Those with more passive intent (address, phone, hours etc.) are still available but are not given as high a weight visually.

For an artist (such as Astronautalis) you’ll see that listening options are presented first. Yes, it’s an ad-based unit but it also makes sense that this would be an active intent around these queries.

It’s up to us to work with content and design teams to ensure the hierarchy of intent is optimized. Simply putting everything on the page at once or with equal weight will distract or overwhelm the user and chase them back to search results or a competitor.

Decoding Intent

Decoding Intent

While the days of having one page for every variant of query syntax are behind us, we’re still not at the point where one page can address every query syntax and the intents behind them.

If I search for ‘head like a hole lyrics’ the page I reach should satisfy my active intent and deliver the lyrics to this epic NIN song. To serve passive intent I’d want to see a crosslink unit to other songs from Pretty Hate Machine as well as other NIN albums. Maybe there’s another section with links to songs with similar themes.

But if I search for ‘pretty hate machine lyrics’ the page I reach should have a list of songs from that album with each song linking to a page with its lyrics. The crosslink unit on this page would be to other NIN albums and potentially similar artists’ albums.

By understanding the query syntax (and in this case query classes) you can construct different page types that address the right hierarchy of intent.
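
To make that concrete, here’s a toy sketch of routing a query to a page type based on its query class. The entity lists and page type names are made up for illustration; a real system would lean on a proper entity index.

```python
# Hypothetical entity lists standing in for a real entity index.
SONGS = {"head like a hole", "terrible lie"}
ALBUMS = {"pretty hate machine", "the downward spiral"}

def page_type_for(query):
    """Map a '[something] lyrics' query to the page type that serves its active intent."""
    query = query.lower().strip()
    if query.endswith(" lyrics"):
        root = query[: -len(" lyrics")]
        if root in SONGS:
            return "song_lyrics_page"      # lyrics up top, crosslinks to the album
        if root in ALBUMS:
            return "album_tracklist_page"  # song list up top, crosslinks to other albums
    return "generic_search_page"

print(page_type_for("head like a hole lyrics"))     # song_lyrics_page
print(page_type_for("pretty hate machine lyrics"))  # album_tracklist_page
```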

Target the keyword, optimize the intent.

TL;DR

Aggregating intent and understanding how to decode, identify and present active and passive intent from query syntax is vital to success in search and beyond.

We Want To Believe

January 20 2015 // Marketing + SEO + Social Media // 5 Comments

Fake news and images are flourishing and even Snopes can’t hold back the tide of belief. Marketers should be taking notes, not to create their own fake campaigns but to understand the psychology that makes it possible and connect that to digital marketing trends.

We Want To Believe

I Want To Believe Poster

Agent Mulder, of the X-Files, was famous for his desire to believe in aliens and all sorts of other phenomena. The truth is, we all want to believe. Maybe not in aliens but a host of other things. It’s not that we’re gullible, per se, but that we are inherently biased and seek what feels like the truth.

One of the pieces that fooled some of my colleagues was the story about a busy restaurant that had commissioned research on why its service ratings had declined over time.

Restaurant Service Cellphones Fake Research

This was a post on Craigslist in the rants & raves section. Think about that for a moment. This is not a bastion of authenticity. But the post detailed patrons’ obsession with their phones and the inordinate amount of time they took texting and taking pictures of their food.

This self-absorbed, technology-obsessed customer was the real problem. Many reported this ‘research’ as fact because the findings were ones that people wanted to believe. Too many of us have witnessed something similar. We have experience that creates a bias to believe.

We wanted the story to be true because it felt right and matched our preconceptions and beliefs.

The Subversive Jimmy Kimmel

While Jimmy Fallon may be the most affable of the late night hosts, Jimmy Kimmel has been doing what I think is some groundbreaking work. His most popular pranks have exposed our desire to believe.

Twerking was jumping the shark and this video tapped into our collective eye-roll at the practice. But less than a week later Kimmel revealed that it was all a hoax.

He didn’t stop there though. The next time he enlisted Olympian Kate Hansen to post a video that purportedly showed a wolf wandering the halls at the Sochi Olympics.

Once again, Kimmel revealed that while it was a wolf, it wasn’t anywhere near Russia. I’m not sure people give Kimmel enough credit. He made people believe there were wolves roaming the halls at the Olympics!

Now, why did we believe? We believed because the narrative was already set. Journalists were complaining about the conditions at Sochi. So when the wolf video appeared and it was endorsed by an Olympic athlete no less, well, we fell for it. It matched our expectations.

It’s not about the truth, it’s about it making sense.

Experience, Belief and Marketing

Adventure Time Demon Cat

So how does our desire to believe connect to marketing? Marketers should be figuring out how to create a narrative and set expectations.

Content marketing is popular right now because it provides us the opportunity to shape expectations.

I’ve written quite a bit about how to maximize attention. If you only remember one thing from that piece it’s that we’re constantly rewriting our memory.

Every interaction we have with a site or brand will cause us to edit that entry in our head, if even just a little. Each time this happens marketers have an opportunity to change the narrative and reset expectations.

For a restaurant this means that a bad meal, even after having numerous good ones in the past, can have a serious impact on that patron’s perception and propensity to return. I used to love eating at Havana, a nearby Cuban restaurant. My wife and I had many great meals (and Mojitos) there. But about a year ago we had a sub-par dinner.

Because we’d had so many good meals before we wrote it off as an aberration. This is an important thing to understand. Because what it really means is that we felt like our experience didn’t match our expectation. But instead of changing our expectation we threw away that experience. You should get a raise if you’re able to pull this off as a marketer.

We returned a few months later and it was another clunker. This time we came to the conclusion that the food quality had simply taken a nose dive. We haven’t been back since. Our perception and expectation changed in the span of two bad experiences.

Content, in any form, follows the same rules. Consistently delivering content that reinforces or rewrites a positive brand expectation is vital to success.

Know The Landscape

Beached Whale Revealed In Painting

Our experiences create context and a marketer needs to understand that context, the landscape, before constructing a content strategy. Because it’s not about the truth. It’s about what people are willing to believe.

All too often I find companies that struggle with this concept. They have the best product or service out there but people are beating a path to their competitor(s) instead. It’s incomprehensible. They’re indignant. Their response is usually to double down on the ‘but we’re the best’ meme.

Nearly ten years ago I was working at Alibris, a used, rare and out-of-print book site. Within the bookselling community the Alibris name was mud. The reason could be traced back to when Alibris entered the market. The Alibris CEO was blunt, telling booksellers that they would be out of business if they didn’t jump on the bandwagon.

He was right. But the way the message was delivered, among other things, led to a general negative perception of the brand among booksellers, a notoriously independent bunch. #middlefingersraised

How could I change this negative brand equity? Did I just tell sellers that we were awesome? No. Instead I figured out the landscape and used content and influencer marketing to slowly change the perception of the brand.

Our largest competitor was Abebooks. So I signed up as a seller there, which also gave me access to their community forum. It was here that I carefully read seller complaints about the industry and about Abebooks itself. What I came to realize was that many of their complaints were actually areas where Alibris excelled. Sellers just weren’t willing to see it because of their perception (or expectation) of the brand.

So every month in our seller newsletter I would talk about an Alibris feature that I knew would hit a nerve. I knew that it was a pain point for the industry or an Abebooks pet peeve. Inevitably, these newsletter items were talked about in the forums. At first the response went a little like this. “Alibris is still evil, but at least they’re doing something about this one thing.”

At the same time I identified vocal detractors of our brand and called them on the phone. I wanted them to vent and asked them what it would take for them to give Alibris a try. My secret goal was to change their perception of the brand, to humanize it, and neutralize their contribution to the negative narrative in the community.

It didn’t happen overnight but over the course of a year the narrative did change. Booksellers saw us as a brand trying to do right by them, perhaps ‘seeing the error of our ways’ and forging a new path. They gave us the benefit of the doubt. They grudgingly told stories about how sales on Alibris were similar to those on Abebooks.

I’d changed the narrative about the brand.

I didn’t do this through cheerleading. Instead, I led the community to content that slowly rewrote their expectations of Alibris. I never told them Alibris was better, I simply presented content that made them re-evaluate their perception of ‘Abebooks vs. Alibris’.

Influencer Marketing

Why do some of these fake stories take hold so quickly? The Sochi wolf had a respected Olympic athlete in on the gag. She was a trusted person, an influencer, with no real reason to lie.

Fake NASA Weightless Tweet

People wouldn’t have believed this false weightless claim if it hadn’t been delivered as a (spoofed) Tweet from NASA’s official Twitter account. Our eyes told us that someone in authority, the ultimate authority in this case, said it was true. That and we wanted to believe. Maybe this time in something amazing. Not aliens exactly but close.

So when we talk about influencer marketing we’re talking about enlisting others who can reinforce the narrative of your brand. These people can act as a cementing agent. It’s not so much about their reach (though that’s always nice) but the fact that it suddenly makes sense for us to believe because someone else, someone we trust or respect, agrees.

At that point we’re more willing to become evangelizers of the brand. That’s the true value of influencer marketing. People will actively start passing along that positive narrative to their friends, family and colleagues. If you’re familiar with the Net Promoter concept you can think of influencer marketing as a way to get people from passives (7-8) to promoters (9-10).

Influencer marketing converts customers into evangelizers who actively spread your brand narrative.

Justin Timberlake Is A Nice Guy?

Dick In a Box Screenshot

Take my opinion (and probably yours) of Justin Timberlake. He seems like a really nice guy, right? But I don’t know Justin. I’ve never met him and odds are neither have you. For all we know, he could be a raging asshole. But I think he isn’t because of a constant drip of content that has shaped my opinion of him.

He’s the guy who is willing to do crazy stuff and poke fun at himself on SNL. He goes on prom dates. He’s the sensitive guy who encourages a musician in a MasterCard commercial. He celebrates at Taco Bell. I don’t even like his music but I like him.

The next thing I want to say is that it probably helps that he really is a nice guy. But I honestly don’t know that! I want to believe that but I’m also sure he has a very savvy PR team.

Uber Is Evil?

Skepticism Intensifies

Uber is a great example of what happens when you lose control of the narrative. A darling of the ‘sharing economy’, Uber might torpedo that movement because they’re suddenly seen as an uber-villain. (Sorry, I couldn’t help it.)

Once again, it’s about consistency. It’s about rewriting that perception. So taking a brand down doesn’t happen, generally, with just one gaffe. You have to step in it over and over again.

Uber’s done that. From placing fake orders and other dirty tricks against competitors, to threatening journalists, to violating user privacy, to surge pricing, to sexual assault, to verbal abuse of a cancer patient.

Suddenly, every Uber story fits a new narrative and expectation. Uber is evil. Is that the truth? Not really. Is it what we want to believe? Yup.

Uber screwed up numerous times but their negative brand equity is partly due to the landscape. There are enough people (me included) who aren’t keen on the sharing economy and who took Uber’s missteps as an opportunity to float an alternate narrative, attacking the sharing economy by proxy.

Either way, it became increasingly easy to get stories published that met this new expectation and increasingly difficult for positive experiences to see the light of day. This is explained incredibly well in a case study on Internet celebrity brought to my attention by Rand Fishkin.

The video is 19 minutes long, which is usually an eternity in my book. But this video is worth it. Every marketer should watch it all the way through.

A Content Marketing Framework

I realize that I use a number of terms almost interchangeably throughout this piece. In truth, there are wrinkles and nuance to these ideas. If they weren’t confusing then everyone would be a marketing god. But I want to provide a strawman framework for you to remember and try out.

Why Content Marketing Works

Our experience with content creates context or bias that changes our belief or perception of that brand, resulting in a new expectation when we encounter the brand again.

At any point in this journey a person can be exposed to a competitor’s content which can change context and bias. In addition, influencer marketing and social proof can help reinforce context and cement belief.

I’d love to hear your feedback on this framework and whether it helps you to better focus your marketing efforts.

TL;DR

The lesson marketers should be taking from the proliferation of fake news and images isn’t to create our own fake stories or products. Instead we should be deciphering why people believe and using that knowledge to construct more effective digital marketing campaigns.

Google Autocomplete Query Personalization

January 14 2015 // SEO // 22 Comments

The other day a friend emailed me asking if I’d ever seen behavior where Google’s autocomplete suggestions would change based on a prior query.

Lucifer from Battlestar Galactica

I’ve seen search results change based on prior queries but I couldn’t recall the autocomplete suggestions changing in the way he detailed. So I decided to poke around and see what was going on. Here’s what I found.

Query Dependent Autocomplete Example

Here’s the example I was sent. The individual was cleaning up an old computer and didn’t quite know the purpose of a specific program named ‘WineBottler’.

Search Result for WineBottler

Quickly understanding that he didn’t need this program anymore, he began to search for ‘uninstall winebottler’ but found that Google’s autocomplete had beaten him to it.

Query Dependent Google Autocomplete

There it was, already listed as an autocomplete suggestion. This is very different from doing the uninstall query in a fresh session.

Normal Autocomplete Suggestions

I was intrigued. So I started to try other programs in hopes of replicating the query dependent functionality displayed. I tried ‘SnagIt’ and ‘Photoshop’ but each time I did I got the same generic autocomplete suggestions.

Query Class Volume

Coincidentally I was also chatting with Barbara Starr about an old research paper (pdf) that Bill Slawski had brought to my attention. The subject of the paper was identifying what I call query classes, a template of sorts expressed as a root term plus a modifier. Easy examples might be ‘[song] lyrics’ or ‘[restaurant] menu’.

So what does this have to do with autocomplete suggestions? Well, my instinct told me that there might be a query class of ‘uninstall [program]’. I clicked over to Ubersuggest to see if I just hadn’t hit on the popular ones but the service was down. Instead I landed on SERPs Suggest which was handy since it also brought in query volume for those autocomplete suggestions.

I searched for ‘uninstall’ and scrolled to where the results were making the most sense to me.

SERPs Suggests Keyword Tool

Quite obviously there is a query class around ‘uninstall [program]’. Now it was time to see if those with high volume (aka intent) would trigger the query class based autocomplete suggestions.

Query Class Based Autocomplete Suggestions

The scourge of the pop-under world, MacKeeper, jumped out at me so I gave that one a try.

MacKeeper Search Result

Google Autocomplete for Uninstall after MacKeeper query

Sure enough, the first autocomplete suggestion is uninstall mackeeper. It’s also interesting to note that the prior query is retained as a reference in the URL. This isn’t new. It’s been like that for quite some time but it makes this type of scenario far easier to explain.

At random I tried another one from my target list.

Parallels Search Results

Uninstall Autocomplete after Parallels Query

Yup. Same thing.
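
If you want to poke at this more systematically, you can pull suggestions programmatically. Here’s a rough sketch against the unofficial suggest endpoint that many keyword tools appear to use; it’s undocumented, so treat the URL and the response format as assumptions that could change (or get rate limited) at any time.

```python
import json
import urllib.parse
import urllib.request

def suggestions(prefix):
    """Fetch autocomplete suggestions for a prefix from the unofficial suggest endpoint."""
    url = (
        "https://suggestqueries.google.com/complete/search?client=firefox&q="
        + urllib.parse.quote(prefix)
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.loads(resp.read().decode("utf-8", errors="replace"))
    return payload[1]  # response shape: [query, [suggestion, suggestion, ...]]

# Explore the 'uninstall [program]' query class.
for suggestion in suggestions("uninstall "):
    print(suggestion)
```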

Classes or Attributes?

It got me thinking, though, whether this was about query classes or just attributes of an entity. So I poked around a bit more and was able to find examples in the health field. (Sorry to be a Debbie Downer.) Here’s a search for lymphoma.

Lymphoma Search Results

Followed by a search for treatment.

Autocomplete for Treatment after Lymphoma Query

This differs from a clean search for ‘treat’.

Treat Autocomplete Suggestions

Treatment is an attribute of the entity Lymphoma. Then again ‘treatment of [ailment]’ is also a fairly well-defined query class. So perhaps I’m splitting hairs in trying to pry apart classes from attributes.

It Doesn’t Always Work

I figured I could find more of these quickly and selected a field that I thought had many query classes: music. I’d search for a band, then search for something like ‘tour dates’ or ‘tickets’ and see if I could get the query dependent autocomplete suggestions to fire.

I tried Kasabian.

Kasabian Search Results

And then tour dates.

Tour Dates Autocomplete Suggestions

Nothing about Kasabian at all. Just generic tour dates autocomplete suggestions. I tried this for many other artists including the ubiquitous Taylor Swift and got the same results, or lack thereof.

I had a few theories of why music might be exempted but it would all just be conjecture. But it did put a bit of a dent into my next leap in logic, which would have been to conversational search.

Not Conversational Search

One of the bigger components of Hummingbird was the ability to perform conversational search that, often, wouldn’t require the user to reference the specific noun again. The classic example being ‘How tall is the Eiffel Tower?’ ‘Who built it?’

Now in the scheme of things conversational search is, in part, built upon identifying query classes and how people string them together in a query session. So it wouldn’t be a shock if this started showing up in Google’s autocomplete suggestions. Yet that’s not what appears to be happening.

Because you can do a voice search using Google Now for ‘Kasabian’ and then follow up with ‘tickets for them’ and get a very different and relevant set of results. They figure out the pronoun reference and substitute appropriately to generate the right query: ‘Kasabian Tickets’.

What Does Google Say?

Of course it pays to see what Google says about their autocomplete suggestions, er, predictions.

About Google Autocomplete Predictions

I find it interesting that they call them predictions and not suggestions. It’s far more scientific. More Googly. But I’m not changing my references throughout this piece!

But here we can see a probable mash-up of “search activity of users” (aka query classes) and “relevant searches you’ve done in the past” (aka query history). Previously, the query history portion was more about ensuring that my autocomplete for ‘smx’ might start with ‘smx east’.

Personalized Autocomplete

While the autocomplete for someone unaffiliated with search wouldn’t get that suggestion.

Nonpersonalized Autocomplete

So I’m left to think that this session-based autocomplete personalization is relatively new, though it may have been going on for quite some time without many people noticing.

There’s a lot more research that could be done here so please let me know if and when you’ve noticed this feature as well as any other examples you might have of this behavior.

For Google the reason for doing this is easy. It’s just one more way that they can reduce the time to long click.

TL;DR

Google is personalizing autocomplete suggestions based on a prior query when it matches a defined query class or entity attribute.

Image Blind

December 16 2014 // Analytics + SEO // 15 Comments

Images are an increasingly important part of the Internet landscape. Yet marketers are provided very little in the way of reliable metrics to allow us to understand their power and optimize accordingly. This is doubly strange given the huge amount of research going on regarding images within search engine giants such as Google.

Image Tracking In Google Analytics

There is none. Or at least there is no image search tracking in Google Analytics unless you create filters based on referrers. I wrote about how to track image search in Google Analytics in March of 2013 and updated that post in April of 2014.

The problem with this method is that it is decreasing in usefulness. I still use it and recommend it because some visibility is better than none. But when Chrome removed the referrer completely from these clicks earlier this year it really hurt the accuracy of the filter.
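
For what it’s worth, the filter approach boils down to pattern matching on the referrer. Here’s a rough sketch of that classification; the patterns are my own approximation of the image search referrers I’ve seen, not an official or exhaustive list, and (as noted above) they miss clicks where the referrer is stripped entirely.

```python
import re

# Approximate image search referrer patterns; incomplete by design.
IMAGE_SEARCH_REFERRER = re.compile(r"(google\.[^/]+/imgres|images\.google\.)", re.IGNORECASE)

def traffic_source(referrer):
    """Very rough classification of a hit based on its referrer."""
    if not referrer:
        return "direct_or_stripped"  # Chrome image clicks with no referrer land here
    if IMAGE_SEARCH_REFERRER.search(referrer):
        return "google_image_search"
    if "google." in referrer:
        return "google_web_search"
    return "other"

print(traffic_source("https://www.google.com/imgres?imgurl=..."))  # google_image_search
print(traffic_source(""))                                          # direct_or_stripped
```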

Who cares, you might be asking? I care because image search intent and the resulting user behavior are often wildly different from web search.

Google Image Search Traffic Behavior

The users coming to the site above via web search have vastly different behavior metrics than those coming from image search. I’ve highlighted the dramatic pages per visit and time on site metrics. Shouldn’t we be building user stories and personas around this type of user?

For a while I explained away the reasons for not providing image search tracking in Google Analytics under the umbrella of privacy. I understand that Google was pretty much forced to move to ‘not provided’ because of lawsuits, Gaos v. Google Inc. in particular. I get it.

But I’m with Chris Messina. Privacy shouldn’t be a four letter word. And the one company who has the best chance of changing the conversation about it is Google. But let’s not go down the privacy rabbit hole. Because we don’t have to.

Right now Google Analytics provides other data on how people search. They break things down by mobile or tablet. We can even get down to the device level.

Google Analytics by Device

Are we really saying that knowing the user came in via image search is more identifiable than knowing what device they were using? Both simply describe different metadata about how a user searched.

Furthermore, on both web and image search I can still drill down and see what page they landed on. In both instances I can make some inferences on what term was used to get them to that page.

There is no inherent additional data being revealed by providing image search as a source.

Image Clicks in Google Webmaster Tools

I wouldn’t be as frothed up about this if it was just Google Analytics. Because I actually like Google Analytics a lot and like the people behind it even more.

But then we’ve got to deal with Google Webmaster Tools data on top and that’s an even bigger mess. First let’s talk about the dark pattern where, when you look at your search queries data, the Web filter is automatically applied. #notcool

Default Web Filter for Search Queries in GWT

I’m sure there’s an argument that it’s prominent enough and might even draw the user’s attention. I could be persuaded. But defaults are dangerous. I’d hazard there are plenty of folks who don’t even know that you can see this data with other filters.

And a funny thing happens with sites that have a lot of images (think eCommerce) when you look at this data. It doesn’t make an ounce of sense.

What happens if I take a month’s worth of image filtered data and a month’s worth of web filtered data and then compare that to the actual data reported in Google Analytics?

Here’s the web filtered data which is actually from November 16 to December 14. It shows 369,661 Clicks.

GWT Web Filter Example

Now here’s the image filtered data from the same time frame. It shows 965,455 Clicks.

GWT Image Filter Traffic Graph

Now here’s what Google Analytics reports for the same timeframe.

Google Analytics Traffic Comparison

For those of you slow on the uptake, the image click data from Google Webmaster Tools is more than the entire organic search traffic reported! Not just Google but organic search in total. Put web and image together and we’re looking at 1.3 million clicks according to Google Webmaster Tools.

I’m not even going to get into the ratio of image clicks versus web clicks and how they don’t have any connection to reality when looking at the ratio in Google Analytics. Even taking the inaccuracy of the Google Analytics filters into account it points to one very clear truth.

The image click data in Google Webmaster Tools is wonky.
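
The sanity check itself is trivial to script if you want to repeat it month over month. A minimal sketch; the Google Analytics figure is a placeholder since that number only appears in the screenshot above.

```python
# Clicks reported by Google Webmaster Tools for the same date range.
gwt_web_clicks = 369_661
gwt_image_clicks = 965_455
gwt_total = gwt_web_clicks + gwt_image_clicks  # 1,335,116 -- the '1.3 million'

# Placeholder: plug in the total organic sessions Google Analytics reports
# for the same date range.
ga_organic_sessions = 0

print(f"GWT web + image clicks: {gwt_total:,}")
if ga_organic_sessions and gwt_image_clicks > ga_organic_sessions:
    print("Image clicks alone exceed all reported organic traffic. Wonky.")
```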

So that raises the question: what exactly is an image click? It doesn’t seem to be limited to clicks from image search to that domain. So what does it include?

This blog is currently number three for the term ‘cosmic cat’ in image search (#proud) so I’ll use that as an example.

What Is an Image Click?

Do image clicks include clicks directly to the image, which are generally not on that domain and not counted in most traffic packages including Google Analytics? Maybe. But that would mean a lot of people were clicking on a fairly small button. Not impossible but I’d put it in the improbable bucket.

Or do image clicks include any time a user clicks to expand that image result? This makes more sense given what I’m seeing.

But that’s lunacy. That’s comparing apples to oranges. How does that help a marketer? How can we trust the data in Google Webmaster Tools when we encounter such inconsistencies?

Every webmaster should be inquiring about the definition of an image click.

The definition (of sorts) provided by Google in their support documentation doesn’t help.

GWT Search Queries FAQ

The first line is incorrect and reflects that this document hasn’t been updated for some time. (You know, I hear care and attention to detail might be a quality signal these days.) There’s a line under devices that might explain the image click bloat, but it’s not contained in the image section where you’d expect it and is instead attributed to devices.

Long story short, the documentation Google Webmaster Tools provides on this point isn’t helpful. (As an aside, I’d be very interested in hearing from others who have made the comparison of image filter and web filter clicks to Google Analytics traffic.)

Images During HTTPS Conversion

These problems came to a head during a recent HTTP to HTTPS conversion. Soon after the conversion the client involved saw a decent decline in search traffic. Alarm bells went off and we all scrambled to figure out what was going on.

This particular client has a material amount of images so I took the chart data from both HTTP and HTTPS for web and image clicks and graphed them together.

Exasperated Picard

In doing so, the culprit in the post-conversion decline was clearly image traffic! Now, some of you might be thinking that this shows the Google Webmaster Tools data is just fine. You’d be wrong! The data there is still incorrect. It’s just wrong consistently enough for me to track fluctuations. I’m glad I can do it but relying on consistently bad data isn’t something I’m cheering about.

The conclusion here seems to be that it takes a long time to identify HTTPS images and match them to their new HTTPS pages. We’re seeing traffic starting to return but it’s slower than anyone would like. If Google wants sites to convert to HTTPS (which they do) then fixing this image search bottleneck should be a priority.
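
If you want to replicate that graph, here’s a minimal sketch with pandas. The CSV filenames and the Date and Clicks column names are placeholders for whatever you export from each Google Webmaster Tools property (HTTP and HTTPS, web and image filters).

```python
import matplotlib.pyplot as plt
import pandas as pd

def load_clicks(path):
    """Load one exported GWT search queries chart; 'Date' and 'Clicks' columns are assumed."""
    df = pd.read_csv(path, parse_dates=["Date"])
    return df.set_index("Date")["Clicks"]

# One export per property and filter combination; the filenames are placeholders.
web = load_clicks("http_web.csv").add(load_clicks("https_web.csv"), fill_value=0)
image = load_clicks("http_image.csv").add(load_clicks("https_image.csv"), fill_value=0)

pd.DataFrame({"web": web, "image": image}).plot(
    title="Web vs image clicks across the HTTPS conversion"
)
plt.show()
```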

Image Blind?

I'm Mad as Hell And ...

The real problem here is that I was blindsided due to my lack of visibility into image search. Figuring out what was going on took a fair amount of man hours because the metrics that would have told us what was going on weren’t readily available.

Yet in another part of the Googleplex they’re spending crazy amounts of time on image research.

Google Image Advancements

I mean, holy smokes Batman, that’s some seriously cool work going on. But then I can’t tell image search traffic from web search traffic in Google Analytics and the Google Webmaster Tools data often shows more ‘image clicks’ to a site than total organic traffic to the site in the same time period. #wtf

Even as Google is appropriately moving towards the viewable impressions metric for advertisers (pdf), we marketers can’t make heads or tails of images, one of the most important elements on the web. This needs to change.

Marketers need data that they can both rely on and trust to make fact-based decisions.

TL;DR

Great research is being done by Google on images but they are failing marketers when it comes to image search metrics. The complete lack of visibility in Google Analytics, coupled with ill-defined image click data in Google Webmaster Tools, leaves marketers in the dark about an increasingly important type of Internet content.

Sitelinks Search Box

September 19 2014 // SEO // 53 Comments

Google’s new sitelinks search box threatens to take your hard won branded traffic and hand it over to competitors unless you implement the specified markup.

Here’s what’s happening and why you need to bump the sitelinks search box markup implementation to the top of your priorities.

Sitelinks Search Box

Sitelinks Search Box YouTube Mobile

On September 5th Google announced the launch of an improved search box in sitelinks for branded queries.

When users search for a company by name—for example, [Megadodo Publications] or [Dunder Mifflin]—they may actually be looking for something specific on that website. In the past, when our algorithms recognized this, they’d display a larger set of sitelinks and an additional search box below that search result, which let users do site: searches over the site straight from the results, for example [site:example.com hitchhiker guides].

Now I’d argue that the prevalence of any search box in sitelinks was minimal at best. I hardly ever saw them. That’s going to be important to this story later on.

Sitelinks Search Box Email

On September 15th several of my clients received a ‘Make your site ready for the new sitelinks search box’ email informing them that their site was eligible for this feature.

Sitelinks Search Box Email

Soon after that email went out I began to see sitelink search boxes appearing on branded queries. That was fast!

Sitelinks Search Boxes UX

What if you haven’t implemented the markup yet? If you don’t implement the markup and connect it to your own search engine, Google will simply perform a site: search using your domain and the user’s search query.

The problem? Competitors might suck away that traffic through paid ads on those site: queries. Here’s what it looks like.

Google Sitelinks Search Box

Cool Hunting isn’t a client by the way, just an example I happened to find. Now let’s search for ‘electric cars’ using the Cool Hunting sitelinks search box.

Sitelinks Search Box Results

The user’s intent was to find Cool Hunting but if they use the sitelinks search box to find Cool Hunting content they are presented with a raft of ads for other sites. Not for every query obviously but if you’re running any decently sized brand and getting the sitelinks search box treatment there’s a good possibility for click attrition.

The sitelinks search box could steal your branded traffic.

What really burns here is that the original intent was on a branded term. You’d won mindshare and loyalty. The query intent was clear. Yet that search box might deliver them to unbranded results.

It would be nice to think that users would seek out the branded organic results but monkey clicks happen and the additional friction and options turn a slam dunk into a three point attempt.

Sitelinks Search Box Markup

Google You Got Some Splainin To Do

The experience doesn’t have to be like this though.

If you implement the markup on your site, users will have the ability to jump directly from the sitelinks search box to your site’s search results page. If we don’t find any markup, we’ll show them a Google search results page for the corresponding site: query, as we’ve done until now.

That’s clear enough. Yet the way in which this was rolled out is unsettling to say the least.

The prevalence of presenting the sitelink search box was very low before and even when shown wasn’t very prominent. So to go from that reality to one in which it’s shown far more often and more prominently with 10 days notice seems … uncharitable. That’s not even a standard two week sprint cycle!

I understand the value Google is trying to deliver here. And in some ways I think Google believes that a site: query might be better than many site’s own internal search engines. They’d be right on that account too in many instances.

But if that’s really what Google’s trying to do then searches through the sitelinks search box should only return content for that site. Shouldn’t the ads be suppressed at that point? Otherwise it seems rather self-serving from an advertising perspective. And it doesn’t honor intent.

Of course, by having it behave this way (and having someone like me ring the alarm bell) you might find adoption of the markup increase dramatically. That seems like an awfully big stick though. Not to mention the sites that may never understand what’s going on or have the technical chops and determination to fix it through markup.
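
For reference, the markup in question is a schema.org WebSite/SearchAction block of JSON-LD that points at your internal search URL. Here’s a minimal sketch that generates it in Python, just to keep the examples in one language; the example.com URLs and the q= parameter are placeholders for your own site search.

```python
import json

# Placeholder URLs: point the target at your own internal search results page.
markup = {
    "@context": "http://schema.org",
    "@type": "WebSite",
    "url": "https://www.example.com/",
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://www.example.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

# Drop the output into the <head> of your home page.
print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2))
print("</script>")
```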

TL;DR

The improved sitelinks search box threatens to divert branded traffic to competitors unless you implement the specified markup. Sadly, the current user experience doesn’t seem to match the user’s intent nor Google’s aim to serve the user.

Image Sitemap Indexation

September 09 2014 // SEO // 4 Comments

This post is a bit of penance for yours truly. Read on to make sure you don’t fall into this trap.

Image Sitemap Indexation

For many months I’d open Google Webmaster Tools and stare at poor indexation rates for images across a number of client accounts. Not just one or two but several clients with crappy image indexation rates.

Google Webmaster Tools Indexation Rate for Images

This didn’t make much sense to me since image traffic reported in Google Webmaster Tools was healthy.

Google Webmaster Tools Image Traffic

In addition, image traffic reported using Google Analytics filters was looking good too. Mind you, the image clicks reported in Google Webmaster Tools and those captured in Google Analytics don’t match up, even when I add back in lost referrer data from Chrome and other browsers.

But that’s a story for another day.

Lazy Investigation

Cat on Couch with Beer and TV Remote

Image search is largely ignored and underappreciated. It’s tough to sell folks on it when the data is murky (at best) and engagement and conversion are usually very poor in comparison to web search. I think a proper attribution model would tell a different story. But I digress.

This is my way of rationalizing why I didn’t push harder on investigating poor image indexation rates. It’s an excuse. Of course I opened up the sitemap files and made sure that the images being passed were valid.

They were.

But I stopped there and chalked it up to a Google Webmaster Tools bug. This wasn’t out of the question and the engineering teams I was working with were top notch. I then reached out to other colleagues and asked if they had similar issues. Sure enough, a number said they too were encountering this problem.

So it wasn’t my fault. It was Google’s problem!

False Accusation

Harrison Ford in The Fugitive

I took it upon myself to message a number of Googlers asking them to investigate. Don’t get me wrong, I was nice about it. But my approach was to provide examples that I felt sure would expose this bug.

To my chagrin what I got back was a nice but pointed response that explained that I (and my clients) had screwed up. The image URLs in those sitemap files might have been valid but they weren’t the ones currently residing on the location URL provided in the sitemap.

Loosely translated: you’re stupid and wrong.

The Devil Is In The Details

I hate being wrong and I hate wasting the time of Googlers. Talk about tossing any good will I’d earned into a roaring bonfire!

It turned out that in every single instance where the indexation rate for images was low there was a problem with matching the image URL with the location URL.

Image Sitemap Example

More often than not it was that the image had been placed on a subdomain of the cookieless domain serving images and the sitemap file hadn’t been updated to reflect that optimization.

For instance, on example.com the image might have started at exampleimg.com but was now being served from shard3.exampleimg.com instead.

If the latter is what is found by Googlebot Image on that location URL but you’re referencing the former in the sitemap then it won’t show up as an indexed image via Google Webmaster Tools.
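
This is the kind of mismatch a small audit script can catch before Google does. A rough sketch: it pulls an image sitemap, fetches each location URL and flags any image URL from the sitemap that no longer appears in that page’s HTML. The sitemap URL is a placeholder and the substring check is a crude stand-in for real parsing.

```python
import urllib.request
import xml.etree.ElementTree as ET

NS = {
    "sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
    "image": "http://www.google.com/schemas/sitemap-image/1.1",
}

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

# Placeholder sitemap URL.
root = ET.fromstring(fetch("https://www.example.com/sitemap-images.xml"))

for url_node in root.findall("sm:url", NS):
    page_url = url_node.findtext("sm:loc", namespaces=NS)
    html = fetch(page_url).decode("utf-8", errors="replace")
    for image_loc in url_node.findall("image:image/image:loc", NS):
        if image_loc.text not in html:
            # e.g. the sitemap says exampleimg.com but the page now serves shard3.exampleimg.com
            print(f"Mismatch on {page_url}: {image_loc.text}")
```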

Does It Matter?

Disaster Girl

If you’re reading closely you probably realize that this is a reporting error and the image itself is most likely indexed. But the indexation count in Google Webmaster Tools is looking at whether the images you’re passing in relation to that location are indexed.

Some of you might decide it’s not worth paying attention to at that point. But I’d argue that you want those indexation rates to reflect reality so you can measure, optimize and react to any changes that might impact your business.

You can’t improve what you can’t measure.

Not only that, but by doing the due diligence for each client I uncovered issues with how images were being rendered and optimized. Remember, these are smart engineering teams. I’m not blaming them. Images are a bear for sites who are consumed with reducing load times and improving speed.

Do The Work

It’s embarrassing to find that you’ve overlooked something so … obvious. At Share14 I got a chance to sync up with Adam Audette. One of the things we talked about was the benefits of working in the trenches and how it becomes more difficult as you expand and grow.

Yet this is where a good SEO can make such a difference. By digging into the details and figuring out what’s going on you can tease out a problem that might have gone undiagnosed for months on end.

Since making the changes indexation rates are rising for all clients. I can’t tell you that the changes have increased image search traffic by 134.7%. This isn’t a redemption feel good story. This is a reminder to do the work and get it right.

TL;DR

If you’re seeing low image sitemap indexation in Google Webmaster Tools you need to carefully inspect the sitemaps to ensure that the image URL(s) being passed exist on the location URL referenced. Beyond the specific image sitemap issue, this is a reminder to not assume or get sloppy with your due diligence.

The Rich Snippets Algorithm

August 20 2014 // SEO // 69 Comments

There’s been a tremendous amount of chatter recently about rich snippets vanishing from Google search results, whether it’s Amazon losing their review aggregate snippets or a wholesale reduction in video snippets.

What we’re really talking about are changes to the rich snippets algorithm.

Inception Leo Squinting

That’s right, we need to go deeper. There’s an algorithm within the algorithm.

Here’s what I know about the rich snippets algorithm based on observation and conjecture as well as statements from Google representatives. I’ll also sketch out some theories on how Google might be replacing many rich snippets with Knowledge Graph panels and carousels.

Rich Snippets History

Wayback Machine Cartoon

Let’s start at the beginning. Rich snippets were first introduced by Google on May 12, 2009. The strange thing is Google was the last search engine to embrace rich snippets.

For a long time Google didn’t want to employ a feature that would be naturally biased toward sites with greater development resources. In short, Google wanted to keep a level playing field. You still see some of this mentality in the Data Highlighter feature in Google Webmaster Tools.

But once they started down the rich snippets road, Google went all-in, launching Schema.org on June 2, 2011. Sure, it’s a joint venture between search engines but let’s be real, the main author here is Google.

Not Your Ordinary Result

Rich snippets are fancy results or results on steroids. They usually contain a visual element such as stars or a thumbnail image.

Ferncer Ferst!

Whether they’re stars, additional links,  thumbnail images or video captures, these results stand out from the crowd. As such, they draw both the eye and clicks.

Whitelist Days Of Yore

In the old days (circa 2010) I was working with PowerReviews and, by proxy, a number of eCommerce companies who were chomping at the bit to get the review aggregate snippet on their results.

Those stars were extremely powerful in those early days. Anything shiny and new will have that initial heightened response. The review rich snippet is still valuable but less so now that the novelty has worn off and there are multiple review rich snippets per result.

At the time, it was all about interfacing with the ‘rich snippets team’ and getting them to ‘turn on’ your snippets. As rich snippets grew in popularity and expanded to new types this non-algorithmic approach was untenable and simply … un-Googly.

Rich Snippets Algorithm

It shouldn’t be a surprise that there’s a rich snippets algorithm. Google states it clearly in their rich snippets guidelines.

Rich Snippets Algorithm

For a long time this algorithm was rather basic and disconnected from other search quality signals. It wasn’t until the release of Panda 4.0 that Google integrated search quality signals with the rich snippets algorithm.

That’s not entirely true. Prior to that they’d done something because the review aggregate snippets for one of my clients just up and vanished one day.

I scratched my head and for months in early 2014 had the team tweak the code and fix every stray microdata error or potential conflict that could be responsible for what I assumed was some markup confusion. But nothing worked. In frustration I gave up, cursed Google, and put it on the back burner.

When Panda 4.0 was released this client’s review aggregate snippets magically returned along with a huge boost in rank. At the same time, I had another client hit by Panda 4.0 who lost their snippets and saw the Panda-typical decline in rank and traffic. So it became crystal clear.

Site quality is now part of the rich snippets algorithm.

From Google’s perspective it makes perfect sense. If the search quality team believes the site isn’t very good then why would Google render a rich snippet that would draw more attention and clicks to results from that site?

What that means is Panda Jail produces a double whammy of rank reduction and rich snippet suppression.

Validating Rich Snippet Suppression

You can validate the rich snippets suppression by using the site: operator for a query that should be showing rich snippets but isn’t. Here is a search for ‘dr waldo frankenstein’.

SERP for Dr Waldo Frankenstein

The vitals.com result does not have a rich snippet. Using my structured data testing tool bookmarklet I can tell the page does have the review aggregate markup in place. So then we just perform the same query with a site:vitals.com prefix.

Vitals Site Query for Dr Waldo Frankenstein

That’s the same page but this time the review aggregate rich snippet shows up. This is a clear case where Google is intentionally suppressing the rich snippet in normal search results.
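For reference, the review aggregate markup I’m talking about looks something like this in schema.org microdata. It’s a minimal sketch with made-up values and a guessed-at item type, not vitals.com’s actual implementation:

<div itemscope itemtype="http://schema.org/Physician">
  <span itemprop="name">Dr. Waldo Frankenstein</span>
  <!-- The aggregate rating that powers the stars in the rich snippet. -->
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.5</span> out of
    <span itemprop="bestRating">5</span> based on
    <span itemprop="reviewCount">27</span> reviews
  </div>
</div>

When markup like this validates in the testing tool but the stars only show up behind a site: query, you’re looking at suppression, not a markup problem.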

Rich Snippets Relevance and Expertise

All of this doesn’t quite explain the big reduction in video snippets though, does it? Many of the sites that lost video snippets weren’t Panda victims, nor would you think they’d fall into some sort of non-authoritative bucket.

Video Snippets Require Video Expertise

Casey Henry nails it in seeing the pattern. Those sites that are dedicated to video continue to get the video snippet. The algorithm seems to be looking for ‘topical’ expertise when rendering snippets. I don’t think Google wanted any ol’ site ‘hacking’ search results with a video result. (Yes, there was a cottage industry of folks doing this.)

I’ve seen this same ‘expertise’ issue occur on larger general interest sites. They may have received a recipe snippet before, but the new rich snippets algorithm decides not to render it because the site doesn’t have a focus or an expertise in recipes.

This expertise signal is a bit tough to pin down since there are other factors, such as overall site quality, involved. But it seems logical that Google is moving toward rendering snippets only when that site and snippet deliver relevance and expertise.

NASCAR SERPs?

Too Many Logos

The number of rich snippets per query might be a factor as well. Or if it isn’t, I think it will be soon. However, it is super dependent on the query.

For instance, search for ‘funny cat videos‘ and you get 8 video rich snippets, 7 of them from YouTube and one of them from Animal Planet. This makes a bit of sense since the query syntax makes it clear they’re looking for videos.

Sadly, a search for ‘funny cat‘ actually yields 10 video snippets, all from YouTube. I’ll give Google a pass with the query ‘funny cat’ since my guess is the overwhelming modifier is, in fact, ‘videos’.

So let’s try the difference between ‘ombre hair’ and ‘ombre hair video’.

Ombre Hair Google Search Result

Ombre Hair Video Google Search Result

Sure enough you get just one video snippet with ‘ombre hair’ and a full 10 video snippets with ‘ombre hair video’. The only problem? They’re all from YouTube. In fact the first 15 are YouTube video snippets.

Look for a tweak to the rich snippets algorithm to dial back the YouTube host crowding issue. Even if YouTube is the most popular video destination it’s a public relations disaster to have it dominate to such an extent.

Similarly when you use ‘recipe’ in the query you get more recipe rich snippets. I’ve noticed that Google regularly removes the universal image result when you append the modifier ‘recipe’ to any ‘dish’ query.

Chicken Saltimbocca Google Search Result

Chicken Saltimbocca Recipe Google Search Result

This makes sense. When you use the term ‘recipe’ in your query you’re looking for, well, recipes.

Query syntax and intent have an increasing influence on search results design and configuration.

But there are times when site quality and relevance aren’t in question and the only reason the rich snippet isn’t rendering seems to be that there are already a number of rich snippets in the results.

The problem is I can’t locate a good example of this signal at the time of writing. I had some examples but now they’re not working as advertised. So am I just reaching here? Maybe, but I don’t think so.

Too Much Of A Good Thing

Too Many Donuts

The concept makes sense and there are recent precedents to support it. Google tweaked the number of authorship images showing up prior to removing them completely. No one wanted to see face after face in their results and certainly not the same face multiple times.

User experience consistency was the reason given for the elimination of authorship images. There’s a prominent mention of cleaning up ‘the visual design of search results’.

In a Google Webmaster Central hangout on Google+ shortly afterwards, John Mueller (Webmaster Trends Analyst but really so much more than that) seemed to go a bit further and speak to the user experience decisions Google makes with regard to rich snippets.

So for example, if we were to show the authorship photo for all search results, then maybe that would be too much for the majority of the users, even if we had that information. So that’s something where, in the beginning when only very few sites implemented authorship, maybe it made sense to show them all. Maybe now that a lot of sites are implementing authorship, maybe it makes sense to reduce that, or maybe to switch over to the text-based annotation.

Now John is talking about authorship snippets specifically but it seems like this would apply to any visual element in search results. And this isn’t the first time Google’s dialed back images based on user testing and research. The first social annotations Google applied (those small faces under results) weren’t well received by users (pdf) and quickly disappeared.

When everything screams to be looked at, you look at nothing.

Mark Traphagen does a bang-up job teasing it out in his authorship post on Moz and was extremely helpful in pointing me at specific comments. Prior to full removal Google developed a sort of tiered class system for authorship snippets detailed by Mark in his Great Google Authorship Kidnapping piece on Stone Temple.

The first class received the thumbnail image and the second class only got a byline. This might have simply been about site quality and not about the total number of snippets in a result.

Yet coupled with the comments from John after the fact, it makes me wonder if it also served to test visual snippet density.

Choosing Favorites Is Hard

Tough Choices

I think Google is concerned about making the results too cluttered. From the start Google has maintained a type of less is more approach. Just look at their home page.

So at some point it makes sense to me that only a certain number of snippets would render per query, varying by the topic and query syntax. But which results get the rich snippet would make a humongous difference and become a bone of contention.

Do the rich just get richer? Or does the one site that has more topical expertise get the snippet over a larger national brand?

Google hasn’t had to deal with this problem in large part because the adoption of markup has been slow. But as more sites add structured data, how does Google deal with search results with multiple visual elements?

Maybe they don’t.

Knowledge Panels Eat Snippets

As I investigated this topic and went down the rabbit hole I came across an interesting 2010 paper titled How Google is using Linked Data Today and Vision For Tomorrow (pdf). The focus of the paper was on using linked data in rich snippets.

First they looked at how much structured data was currently being used.

Structured Data Usage 2010

The result was a paltry 4.3% using any type of structured data and only 0.7% being used to generate rich snippets. A 2014 report from Searchmetrics indicates that the adoption hasn’t grown much in the intervening time.

But what’s more intriguing are the proposed ‘extended’ rich snippet examples.

Proposed Extended Event Rich Snippet

Proposed Extended Video Rich Snippet

What I can’t help thinking when looking at these is how closely they map to the new Knowledge Graph panels. It’s a bit like that old Reese’s Peanut Butter commercial. But here’s the thing. If this were a video from someone other than YouTube and it included links to another site’s content, I think the first site’s head would explode.

“How dare Google put links to other sites in my result!”

You can’t think that someone like, say, Last.fm would be keen to have links to Wikipedia or ticketing sites in their result for an artist query. So moving all of that to a centralized location like the Knowledge Panel is almost a necessity.

Google Killed The Radio Star

I’m using Last.fm as an example because, from what I can tell, Google has eliminated the music rich snippet. I can’t even get one to render using a site: operator, which leads me to believe it’s been deprecated. If you can get one to render, please let me know.
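For context, the music markup behind these snippets looks roughly like this in schema.org microdata. Again, a bare-bones sketch with placeholder track names and URLs; the exact types and properties a site like Last.fm uses will differ:

<div itemscope itemtype="http://schema.org/MusicGroup">
  <h1 itemprop="name">Leonard Cohen</h1>
  <!-- Each track is its own nested item. -->
  <div itemprop="track" itemscope itemtype="http://schema.org/MusicRecording">
    <a itemprop="url" href="http://www.example.com/music/leonard-cohen/hallelujah">
      <span itemprop="name">Hallelujah</span>
    </a>
  </div>
  <div itemprop="track" itemscope itemtype="http://schema.org/MusicRecording">
    <a itemprop="url" href="http://www.example.com/music/leonard-cohen/suzanne">
      <span itemprop="name">Suzanne</span>
    </a>
  </div>
</div>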

I’m going to use the music snippet example Google provides on their About rich snippets and structured data page.

Google's Rich Snippets Examples

The music snippet here is for Leonard Cohen and from the bold sections of the result I’m assuming the query used to produce it was ‘Leonard Cohen’. Here’s what the Last.fm result looks like for that query today.

Last.fm Result for Leonard Cohen without a Music Rich Snippet

It’s the same URL but maybe Last.fm just screwed up their markup. I mean, it happens. So let’s run it through the structured data testing tool using my handy bookmarklet. (Seriously, it’ll shave hours of copy and paste work from your life!)

Structured Testing Tool Result for Last.fm

The markup is there. Google just chooses not to render it. Hey, those are the rules. And you can see why if you look at the Knowledge Panel in this search result.

Leonard Cohen Knowledge Panel

The Knowledge Panel has a ‘Songs’ section and a new ad unit to listen to music on multiple platforms. Click on any one of those songs and you get a full blown ‘songs’ carousel result.

Songs Knowledge Carousel

It’s pretty hard not to think this helps line Google’s pockets. It probably does.

The problem here is that sites don’t want competitive links in their Google search results and Google doesn’t want a long line of competing offers like some blinking-neon Las Vegas strip version of search results. Aggregating the various offers into one area of the page is a better user experience.

Knowledge Panels de-dupe, curate and aggregate intent for a better user experience.

The question then is how long until other types of rich snippets go the way of the Knowledge Panel?

Rich Snippets Ticket To Ride

Remember nearly 2,000 words ago when I mentioned that Amazon had lost their review aggregate snippet? I took a screengrab of a specific instance about a week ago for my upcoming presentation on rich snippets.

No Reviews Snippet for Amazon

Instead of focusing on Amazon, look at the two other rich snippets on the page from Goodreads and Barnes & Noble and how they also appear in the Knowledge Panel. Now let’s see how this same search looks today (August 19, 2014).

Rich Snippets Gets You Into The Knowledge Panel

Goodreads lost their rich snippet and, with it, their link in the Knowledge Panel. The Goodreads result changed to one that doesn’t have the review aggregate markup. That’s a kick in the pants!

The review aggregate rich snippet gets you access to the Knowledge Panel unit. At least for the book vertical. And if you didn’t realize, that link to Barnes & Noble is … a link to Barnes & Noble. External, folks!

Google doesn’t play favorites in ordering. The order in the Knowledge Panel is dictated by the order they appear in search results.

Confederacy of Dunces Google Knowledge Panel

Blind Assassin Google Knowledge Panel

Accelerando Google Knowledge Panel Result

I included the last one here to show that other sites do qualify if they get their review aggregate rich snippet on the first page. ManyBooks is 8th on the ‘accelerando’ result.

But looking further down the road might Google simply remove all the rich snippets and aggregate them in the Knowledge Panel unit? Or maybe they’d only do that if the query was more specific and contained the word ‘review’. On a lark I tried ‘blind assassin reviews’.

Blind Assassin Reviews Google Onebox

Will you look at that! Now both Goodreads and Barnes & Noble have a starred result front and center. The rich snippets still show up in the individual results but it’s almost immaterial given this presentation. How about another?

The Eyre Affair Review Google Knowledge Panel Result

All three sites that have review aggregate rich snippets on page one also get this monstrous book reviews unit. I don’t know about you but it certainly feels like change is coming.

It’s easy with books because there is one representation of this ‘work’. The connection between the entity represented in the snippet and the Knowledge Panel is straightforward.

But there is not just one funny cat video! However, could you decide that there is one representation for a ‘dish’? Might a new recipe Knowledge Panel include one big image and links to individual recipes from sites using the recipe rich snippet?

It doesn’t seem so far-fetched to me.

Rich Snippets Redux

Pulling myself out of the rabbit hole here’s what I’ve learned.

The Rich Snippets Algorithm Got Smarter

The new rich snippets algorithm clearly draws on site quality signals and may also be looking for topical expertise. Sites impacted by Panda will see both a reduction in rank and a suppression of any rich snippets.

Query Syntax Changes Search UX

Google is adopting new user interfaces for query syntax that indicate specific intent. The number of rich snippets and other visual elements change based on certain modifiers. Knowledge Panels in particular serve to de-dupe, curate and aggregate user intent.

Rich Snippets Are Linked To Knowledge Panels

In some instances rich snippets are being deprecated in favor of Knowledge Panels (music, for example), while at other times rich snippets provide access to prime Knowledge Panel real estate.

So while the landscape continues to shift beneath our feet, I believe implementing structured data is one of the smartest moves you can make given Google’s clear and continuing efforts around entities, the Knowledge Graph and Knowledge Panels.

You Won’t Remember That Infographic

June 25 2014 // Marketing + SEO // 45 Comments

Infographics are (still) popular. Clients ask me about them all the time. I ask them to tell me about the last three infographics they remembered.

The response is generally full of stammering as they grope for an answer. Rarely do I get specifics. Even when I do they say things like ‘that infographic about craft beer’. When I ask where the infographic came from? Crickets.

Can you name the brands associated with infographics? The brands that come up most often are Mint and OK Cupid. Everyone else is an also-ran. And that’s the thing. For all of their popularity, you won’t remember that infographic.

Or, at least, you won’t remember it the right way.

Triangle Of Memory

To understand why infographics are so problematic we need to look at how we remember content.

Triangle of Memory

The triangle of memory is a variant of the project management triangle that includes better, faster and cheaper attributes, of which you can only have two at any given time. You can have a project fast and cheap but it won’t be better. You can have a project fast and better but it will cost you an arm and a leg.

In terms of memory, we don’t have a massive tag based annotation system in our brains. (That’s what Delicious is for.) Instead, we remember content at a very basic level: site, author and topic. This is why I tell clients to make their content cocktail party ready.

Because you remember ‘that post on Moz about Hummingbird‘ or ‘Danny Sullivan’s analysis of New York Times subscription costs‘.

It’s site and topic, but not the author. It’s author and topic but not site. Rarely it is author and site but not topic. Examples of this might be ‘the latest column by Krugman in the New York Times’ or ‘last week’s episode of John Oliver on HBO’.

I’m not saying you never get all three. You hit the three cherries jackpot once in a while. But it’s rare. Counting on it is like counting on winning at the casino.

The Infographics Monster

Infographics Monster

The problem with infographics is that they destroy the triangle of memory. The word ‘infographic’ itself gobbles up one of the memory slots, leaving you with only one attribute left to use. It’s always ‘that infographic’. And like it or not, the attribute most people select is the topic, resulting in the phrase ‘that infographic about …’.

That means your site or brand disappears! And no, no one remembers (and may not even see) the logo you’ve slapped on there.

‘That infographic about AdWords conversion rates’ is done by who exactly? Where do I find it again? Ah, never mind. Or worse yet they search for it and they find something or someone else instead.

If users don’t remember that it’s your brand or site, have you really succeeded?

Wasted Attention

Chocolate Covered Donut

Not only are infographics often costly (both in time and money) but you’ve wasted that sliver of attention you’ve worked so hard to earn.

Here you’ve got the eyeballs of a user and they leave without remembering who you are or where they saw it. Heck, they might even attribute it to the platform where they discovered it, such as Facebook, Pinterest or Google+.

Winning the attention auction isn’t easy and when you do win it you better ensure you’re using that attention wisely. I’d argue an infographic is wasted attention. It’s attention without any lasting value. It’s empty (branding) calories.

When Infographics Work

LOL Cat vs OMG Cat

By and large I steer clients away from infographics and prefer to have them work on other content initiatives where they’ll build brand equity. But that’s not to say that infographics can’t work. They can. But it takes a serious commitment and attention to execution.

Doing one or two infographics is like flushing a fist full of hundred dollar bills down the toilet. If you’re going to do infographics, do infographics. Commit to producing one every month for 18 months.

Consistent, engaging infographics are what make your brand stick. It’s why Mint and OK Cupid succeeded where so many others failed.

I’d also argue that infographics must make users LOL or OMG. If they don’t provoke one of those two reactions then you’re not going to gain traction or attention.

The other way to go is to leverage the infographic into other channels and make it repeatable. Search Engine Land’s Periodic Table of SEO Success Factors (a bit of a mouthful) was printed and handed out at SMX Advanced and has been updated three (?) times now.

It’s an iconic piece pushed through multiple marketing channels to reinforce the site and brand. That’s how you do it.

Don’t Talk To Me About Links

I know some of you are flexing your fingers, about to type out a comment about how your infographic obtained 12 links with an average DA of 49.

Velma Says You Stop That!

Links aren’t the goal of your infographic campaign. Your customers don’t care if you’re on some cheesoid infographic aggregator site. Instead I want to know if that infographic won the brand more true fans. Did it increase the brand’s visibility? Because those things will lead to long-term authority and, by the way, downstream links.

If you’re in such desperate need of links there are far better and cheaper ways to earn them than the branding black hole known as the infographic.

Visibility

Zero Visibility

Another argument for infographics is that they provide you with more visibility. If I see an infographic and then I see a Slideshare deck and then I search and I find a blog post over the course of weeks or months, then perhaps the brand or site begins to sink in.

In principle, I agree. But that only works if I associate that infographic with the other pieces of content, and if those other pieces of content exist and all support my site or brand.

In other words, you better have a comprehensive content strategy (including promotion) that doesn’t rely on just one tactic or medium. I like Jason Miller’s idea around Big Rock Content, though I think the missing ingredient is being memorable.

TL;DR

Infographics are a poor way to build your brand and earn true fans because they destroy the triangle of memory. A successful infographic campaign must be part of a larger content strategy, focusing on repeatable efforts that make people LOL or OMG and can be pushed through multiple marketing channels to reinforce the site or brand.