Is Click Through Rate A Ranking Signal?

June 24 2015 // SEO // 46 Comments

Signs Point To Yes!

Are click-through rates on search results a ranking signal? The idea is that if the third result on a page is clicked more often than the first, it will, over time, rise to the second or first position.

I remember this question being asked numerous times when I was just starting out in the industry. Google representatives employed a potent combination of tap dancing and hand waving when asked directly. They were so good at it that we stopped hounding them, and over the last few years I’ve rarely heard people talking about, let alone asking, this question.

Perhaps it’s because more and more people aren’t focused on the algorithm itself and are instead focused on developing sites, content and experiences that will be rewarded by the algorithm. That’s actually the right strategy. Yet I still believe it’s important to understand the algorithm and how it might impact your search efforts.

Following is an exploration of why I believe click-through rate is a ranking signal.

Occam’s Razor

Though the original principle wasn’t as clear-cut, today’s interpretation of Occam’s Razor is that the simplest answer is usually the correct one. So what’s more plausible? That Google uses click-through rate as a signal, or that the most data-driven company in the world ignores direct measurement from its own product?

It just seems like common sense, doesn’t it? Of course, we humans are often wired to make poor assumptions. And don’t get me started on jumping to conclusions based on correlations.

The argument against is that even Google would have a devil of a time using click-through rate as a signal across the millions of results for a wide variety of queries. Their resources are finite and perhaps it’s just too hard to harness this valuable but noisy data.

The Horse’s Mouth

It gets more difficult to make the case against Google using click-through rate as a signal when you get confirmation right from the horse’s mouth.

That seems pretty close to a smoking gun, doesn’t it?

Now, perhaps Google wants to play a game of semantics. Click-through rate isn’t a ranking signal. It’s a feedback signal. It just happens to be a feedback signal that influences rank!

Call it what you want, at the end of the day it sure sounds like click-through rate can impact rank.

[Updated 7/22/15]

Want more? I couldn’t find this quote the first time around but here’s Marissa Mayer in the FTC staff report (pdf) on antitrust allegations.

According to Marissa Mayer, Google did not use click-through rates to determine the position of the Universal Search properties because it would take too long to move up on the SERP on the basis of user click-through rate.

In other words, they ignored click data to ensure Google properties were slotted in the first position.

Then there’s former Google engineer Edmond Lau in an answer on Quora.

It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. Infrequently clicked results should drop toward the bottom because they’re less relevant, and frequently clicked results bubble toward the top. Building a feedback loop is a fairly obvious step forward in quality for both search and recommendations systems, and a smart search engine would incorporate the data.

So is Google a reasonable and smart search engine?    

The Old Days

There are other indications that Google has the ability to monitor click activity on a query-by-query basis, and that they’ve had that capability for dog years.

Here’s an excerpt from a 2007 interview with Marissa Mayer, then VP of Search Products, on the implementation of the OneBox.

We hold them to a very high click through rate expectation and if they don’t meet that click through rate, the OneBox gets turned off on that particular query. We have an automated system that looks at click through rates per OneBox presentation per query. So it might be that news is performing really well on Bush today but it’s not performing very well on another term, it ultimately gets turned off due to lack of click through rates. We are authorizing it in a way that’s scalable and does a pretty good job enforcing relevance.

So way back in 2007 (eight years ago, folks!) Google was able to create a scalable solution that used click-through rate per query to determine the display of a OneBox.

That seems to poke holes in the idea that Google doesn’t have the horsepower to use click-through rate as a signal.

The Bing Argument

Others might argue that if Bing is using click-through rate as a signal then Google surely must be as well. Here’s what Duane Forrester, Senior Product Manager for Bing Webmaster Outreach (or something like that), said to Eric Enge in 2011.

We are looking to see if we show your result in a #1, does it get a click and does the user come back to us within a reasonable timeframe or do they come back almost instantly?

Do they come back and click on #2, and what’s their action with #2? Did they seem to be more pleased with #2 based on a number of factors or was it the same scenario as #1? Then, did they click on anything else?

We are watching the user’s behavior to understand which result we showed them seemed to be the most relevant in their opinion, and their opinion is voiced by their actions.

This and other conversations I’ve had make me confident that click-through rate is used as a ranking signal by Bing. The argument against is that Google is so far ahead of Bing that they may have tested and discarded click-through rate as a signal.

Yet as other evidence piles up, perhaps Google didn’t discard click-through rate but simply uses it more effectively.

Pogosticking and Long Clicks

Duane’s remarks also tease out a little bit more about how click-through rate would be used and applied. It’s not a metric used in isolation but one measured in terms of time spent on the clicked result, whether the user returned to the SERP and whether they then refined their search or clicked on another result.

When you really think about it, if pogosticking and long clicks are real measures then click-through rate must be part of the equation. You can’t calculate the former metrics without having the click-through rate data.
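
To make that concrete, here’s a minimal sketch, with invented field names and thresholds, of how long clicks and pogosticking might be computed from raw click data. Nothing here is Google’s actual implementation; the point is that both metrics are built on top of click records.

```python
from dataclasses import dataclass

@dataclass
class Click:
    query: str            # the search query
    url: str              # the result that was clicked
    position: int         # rank of the result on the SERP
    dwell_seconds: float  # time until the user returned to the SERP

# Hypothetical threshold; a real system would tune this per query class.
LONG_CLICK_SECONDS = 60

def is_long_click(click: Click) -> bool:
    """A 'long click': the user stayed long enough to suggest satisfaction."""
    return click.dwell_seconds >= LONG_CLICK_SECONDS

def is_pogostick(session: list[Click]) -> bool:
    """Pogosticking: bouncing between results for one query, no long click."""
    return len(session) >= 2 and not any(is_long_click(c) for c in session)
```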

And when you dig deeper Google does talk about ‘click data’ and ‘click signals’ quite a bit. So once again perhaps it’s all a game of semantics, the equivalent of Bill Clinton clarifying the meaning of ‘is’.

Seeing Is Believing

A handful of prominent SEOs have tested whether click-through rate influences rank. Rand Fishkin has been leading that charge for a number of years.

Back in May of 2014 he performed a test with some interesting results. But it was a long-tail term and other factors might have explained the behavior.

But just the other day he ran another version of the same test.

However, critics will point out that the result in question is once again at #4, indicating that click-through rate isn’t a ranking signal.

But clearly the burst of searches and clicks had some sort of effect, even if it was temporary, right? So might Google have developed mechanisms to combat this type of ‘bombing’ of click-through rate? Or perhaps the system identifies bursts in query and clicks and reacts to meet a real time or ‘fresh’ need?

Either way it shows that click-through behavior is monitored. Combined with the admission from Udi Manber, it seems like the click-through rate distribution has to be consistently off the baseline for a material amount of time to impact rank.

In other words, all the testing in the world by a small band of SEOs is a drop in the ocean of the total click stream. So even if we can move the needle for a small time, the data self-corrects.

But Rand isn’t the only one testing this stuff. Darren Shaw has also experimented with this within the local SEO landscape.

[Presentation: User Behavior and Local Search – State of Search 2014]

Darren’s results aren’t foolproof either. You could argue that Google representatives within local might not be the most knowledgeable about these things. But it certainly adds to a drumbeat of evidence that clicks matter.

But wait, there’s more. Much more.

Show Me The Patents

For quite a while I was conflicted about this topic because of one major stumbling block. You wouldn’t be able to develop a click-through rate model based on all the various types of displays on a result.

The result that has a review rich snippet gets a higher click-through rate because the eye gravitates to it. Google wouldn’t want to reward that result from a click-through rate perspective just because of the display.

Or what happens when the result has an image result or an answer box or a video result or any number of different elements? There seemed to be too many variations to create a workable model.

But then I got hold of two Google patents titled Modifying search result ranking based on implicit user feedback and Modifying search result ranking based on implicit user feedback and a model of presentation bias.

The second patent seems to build from the first with the inventor in common being Hyung-Jin Kim.

Both of these are rather dense patents and it reminds me that we should all thank Bill Slawski for his tireless work in reading and rendering patents more accessible to the community.

I’ll be quoting from both patents (there’s a tremendous amount of overlap) but here’s the initial bit that encouraged me to put the headphones on and focus on decoding the patent syntax.

The basic rationale embodied by this approach is that, if a result is expected to have a higher click rate due to presentation bias, this result’s click evidence should be discounted; and if the result is expected to have a lower click rate due to presentation bias, this result’s click evidence should be over-counted.

Very soon after this the patent goes on to detail a number of different types of presentation bias. So this essentially means that Google saw the same problem but figured out how to deal with presentation bias so that it could rely on ‘click evidence’.
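
As a toy illustration of that discounting logic, with entirely made-up numbers (the patent describes the principle, not a formula):

```python
def debiased_click_evidence(observed_ctr: float,
                            expected_ctr_for_display: float,
                            baseline_ctr: float) -> float:
    """Discount or over-count click evidence for presentation bias.

    expected_ctr_for_display: CTR predicted from display features alone
    (rich snippet, thumbnail, position), independent of relevance.
    baseline_ctr: CTR of a plain result in the same position.
    """
    bias = expected_ctr_for_display / baseline_ctr
    return observed_ctr / bias

# A result with a review rich snippet expected to earn 0.30 CTR on display
# alone (vs. a 0.15 plain baseline) gets its 0.30 observed CTR discounted
# back down to 0.15: no reward just for the stars.
print(debiased_click_evidence(0.30, 0.30, 0.15))  # 0.15
```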

Then there’s this rather nicely summarized 10,000 foot view of the issue.

In general, a wide range of information can be collected and used to modify or tune the click signal from the user to make the signal, and the future search results provided, a better fit for the user’s needs. Thus, user interactions with the rankings presented to the users of the information retrieval system can be used to improve future rankings.

Again, no one is saying that click-through rate can be used in isolation. But it clearly seems to be one way that Google has thought about re-ranking results.

But it gets better as you go further into these patents.

The information gathered for each click can include: (1) the query (Q) the user entered, (2) the document result (D) the user clicked on, (3) the time (T) on the document, (4) the interface language (L) (which can be given by the user), (5) the country (C) of the user (which can be identified by the host that they use, such as www.google.co.uk to indicate the United Kingdom), and (6) additional aspects of the user and session. The time (T) can be measured as the time between the initial click through to the document result until the time the user comes back to the main page and clicks on another document result. Moreover, an assessment can be made about the time (T) regarding whether this time indicates a longer view of the document result or a shorter view of the document result, since longer views are generally indicative of quality for the clicked through result. This assessment about the time (T) can further be made in conjunction with various weighting techniques.

Here we see clear references to how to measure long clicks and later on they even begin to use the ‘long clicks’ terminology. (In fact, there’s mention of long, medium and short clicks.)
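
As a rough sketch, the (Q, D, T, L, C) tuple and the long/medium/short click buckets might translate into code like this. The bucket boundaries are my invention, not the patent’s.

```python
from dataclasses import dataclass

@dataclass
class ClickRecord:
    query: str      # (Q) the query the user entered
    document: str   # (D) the result the user clicked on
    seconds: float  # (T) time on the document before returning to the SERP
    language: str   # (L) interface language
    country: str    # (C) country of the user

def click_credit(record: ClickRecord) -> float:
    """Translate dwell time into click credit. Thresholds are hypothetical."""
    if record.seconds >= 100:   # long click: strong indicator of quality
        return 1.0
    if record.seconds >= 30:    # medium click: weak indicator
        return 0.5
    return 0.0                  # short click: no credit
```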

But does it take into account different classes of queries? Sure does.

Traditional clustering techniques can also be used to identify the query categories. This can involve using generalized clustering algorithms to analyze historic queries based on features such as the broad nature of the query (e.g., informational or navigational), length of the query, and mean document staytime for the query. These types of features can be measured for historical queries, and the threshold(s) can be adjusted accordingly. For example, K means clustering can be performed on the average duration times for the observed queries, and the threshold(s) can be adjusted based on the resulting clusters.

This shows that Google may adjust what they view as a good click based on the type of query.
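
Here’s what that clustering step might look like using scikit-learn’s K-means on mean dwell times. The sample data and the use of cluster centers as thresholds are assumptions on my part.

```python
import numpy as np
from sklearn.cluster import KMeans

# Mean document staytime (seconds) for a set of historical queries.
# Navigational queries tend toward short stays; research queries run long.
mean_staytimes = np.array([[8], [12], [15], [45], [60], [240], [300]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(mean_staytimes)

# One 'good click' threshold per cluster, here simply the cluster center.
thresholds = {i: float(center[0])
              for i, center in enumerate(kmeans.cluster_centers_)}

def long_click_threshold(query_mean_staytime: float) -> float:
    """Pick the dwell-time threshold appropriate to this query's cluster."""
    cluster = int(kmeans.predict([[query_mean_staytime]])[0])
    return thresholds[cluster]
```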

But what about types of users? That’s when it all goes to hell in a handbasket, right? Nope. Google figured that out.

Moreover, the weighting can be adjusted based on the determined type of the user both in terms of how click duration is translated into good clicks versus not-so-good clicks, and in terms of how much weight to give to the good clicks from a particular user group versus another user group. Some users’ implicit feedback may be more valuable than other users’ due to the details of a user’s review process. For example, a user that almost always clicks on the highest ranked result can have his good clicks assigned lower weights than a user who more often clicks results lower in the ranking first (since the second user is likely more discriminating in his assessment of what constitutes a good result).

Users are not created equal and Google may weight the click data it receives accordingly.
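
In code, that weighting might look something like this. The formula is entirely invented; the patent only states the principle.

```python
def user_click_weight(clicks_on_top_result: int, total_clicks: int) -> float:
    """Down-weight clicks from users who reflexively click position 1."""
    if total_clicks == 0:
        return 0.0
    top_bias = clicks_on_top_result / total_clicks
    return 1.0 - 0.8 * top_bias  # a habitual top-clicker keeps 20% weight

print(user_click_weight(95, 100))  # ~0.24: reflexive top-clicker
print(user_click_weight(40, 100))  # ~0.68: more discriminating user
```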

But they’re missing the boat on topical expertise, right? Not so fast!

In addition, a user can be classified based on his or her query stream. Users that issue many queries on (or related to) a given topic (e.g., queries related to law) can be presumed to have a high degree of expertise with respect to the given topic, and their click data can be weighted accordingly for other queries by them on (or related to) the given topic.

Google may identify topical experts based on queries and weight their click data more heavily.
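
A tiny hypothetical sketch of that expert weighting, with the query threshold and the boost factor being mine rather than the patent’s:

```python
from collections import Counter

def expertise_multiplier(user_query_topics: list[str], topic: str,
                         min_queries: int = 50) -> float:
    """Boost click data from users who query a topic heavily."""
    topic_queries = Counter(user_query_topics)[topic]
    return 1.5 if topic_queries >= min_queries else 1.0
```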

Frankly, it’s pretty amazing to read this stuff and see just how far Google has teased this out. In fact, they built in safeguards for the type of tests the industry conducts.

Note that safeguards against spammers (users who generate fraudulent clicks in an attempt to boost certain search results) can be taken to help ensure that the user selection data is meaningful, even when very little data is available for a given (rare) query. These safeguards can include employing a user model that describes how a user should behave over time, and if a user doesn’t conform to this model, their click data can be disregarded. The safeguards can be designed to accomplish two main objectives: (1) ensure democracy in the votes (e.g., one single vote per cookie and/or IP for a given query-URL pair), and (2) entirely remove the information coming from cookies or IP addresses that do not look natural in their browsing behavior (e.g., abnormal distribution of click positions, click durations, clicks_per_minute/hour/day, etc.). Suspicious clicks can be removed, and the click signals for queries that appear to be spammed need not be used (e.g., queries for which the clicks feature a distribution of user agents, cookie ages, etc. that do not look normal).

As I mentioned, I’m guessing the short-lived results of our tests are indicative of Google identifying and then ‘disregarding’ that click data. Not only that, they might decide that the cohort of users who engage in this behavior won’t be used (or their impact will be weighted less) in the future.
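
That passage reads almost like pseudocode already. A simplified sketch of the two safeguards, with the field names and the abnormality test invented for illustration:

```python
def filter_suspicious_clicks(clicks: list[dict]) -> list[dict]:
    """(1) One vote per cookie for a given query-URL pair;
    (2) drop sources whose behavior doesn't look natural."""
    seen_votes = set()
    kept = []
    for click in clicks:
        vote = (click["cookie"], click["query"], click["url"])
        if vote in seen_votes:
            continue  # democracy in the votes: one vote per cookie
        if click["clicks_per_minute"] > 10:  # hypothetical 'unnatural' test
            continue  # remove sources that don't browse like humans
        seen_votes.add(vote)
        kept.append(click)
    return kept
```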

What this all leads up to is a rank modifier engine that uses implicit feedback (click data) to change search results.

How Google Uses Click Data To Modify Rank

Here’s a fairly clear description from the patent.

A ranking sub-system can include a rank modifier engine that uses implicit user feedback to cause re-ranking of search results in order to improve the final ranking presented to a user of an information retrieval system.

It tracks and logs … everything and uses that to build a rank modifier engine that is then fed back into the ranking engine proper.
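
Conceptually, that rank modifier engine could be as simple as the sketch below. How the signals are actually combined isn’t public, so the multiplicative blend here is an assumption.

```python
def modified_ranking(results: list[tuple[str, float]],
                     click_multiplier: dict[str, float]) -> list[tuple[str, float]]:
    """Adjust base relevance scores by an implicit-feedback factor, re-sort.

    results: list of (url, base_relevance_score)
    click_multiplier: url -> debiased click-evidence factor (1.0 = neutral)
    """
    rescored = [(url, base * click_multiplier.get(url, 1.0))
                for url, base in results]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

serp = [("a.com", 0.90), ("b.com", 0.85), ("c.com", 0.80)]
feedback = {"a.com": 0.9, "c.com": 1.25}  # c.com earns the long clicks
print(modified_ranking(serp, feedback))
# c.com (0.80 * 1.25 = 1.0) now outranks b.com (0.85) and a.com (0.81)
```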

But, But, But

Of course this type of system would get tougher as more of the results were personalized. Yet, the way the data is collected seems to indicate that they could overcome this problem.

Google seems to know the inherent quality and relevance of a document, in fact of all documents returned on a SERP. As such they can apply and mitigate the individual user and presentation bias inherent in personalization.

And it’s personalization where Google admits click data is used. But they still deny that it’s used as a ranking signal.

Perhaps it’s a semantics game and if we asked if some combination of ‘click data’ was used to modify results they’d say yes. Or maybe the patent work never made it into production. That’s a possibility.

But looking at it all together and applying Occam’s Razor, I tend to think that click-through rate is used as a ranking signal. I don’t think it’s a strong signal but it’s a signal nonetheless.

Why Does It Matter?

You might be asking, so freaking what? Even if you believe click-through rate is a ranking signal, I’ve demonstrated that manipulating it may be a fool’s errand.

The reason click-through rate matters is that you can influence it with changes to your title tag and meta description. Maybe it’s not enough to tip the scales, but trying is better than not trying, isn’t it?

Those ‘old school’ SEO fundamentals are still important.

Or you could go the opposite direction and build your brand equity through other channels to the point where users would seek out your brand in search results irrespective of position.

Over time, that type of behavior could lead to better search rankings.


The evidence suggests that Google does use click-through rate as a ranking signal. Or, more specifically, Google uses click data as an implicit form of feedback to re-rank and improve search results.

Despite their denials, common sense, Google testimony and interviews, industry testing and patents all lend credence to this conclusion.

Do You Even Algorithm, Google?

June 19 2015 // SEO // 18 Comments

It has been 267 days since the last Panda update. That’s 8 months and 25 days.

Where’s My Panda Update?

Obviously I’m a bit annoyed that there hasn’t been a Panda update in so long because I have a handful of clients who might (fingers crossed) benefit from having it deployed. They were hit and they’ve done a great deal of work cleaning up their sites so that they might get back into Google’s good graces.

I’m not whining about it (much). That’s the way the cookie crumbles and that’s what you get when you rely on Google for a material amount of your traffic.

Google shouldn’t be concerned about specific sites caught in limbo based on their updates. The truth, hard as it is to admit, is that very few sites are irreplaceable.

You could argue that Panda is punitive and that not providing an avenue to recovery is cruel and unusual punishment. But if you can’t do the time, don’t do the crime.

Do You Even Algorithm, Google?

Why haven’t we seen a Panda update in so long? It seemed to be one of Google’s critical components in ensuring quality search results, launched in reaction to a rising tide of complaints from high-profile (though often biased) individuals.

Nine months is a long time. I’m certain there are sites in Panda jail right now that shouldn’t be, and other sites, whether brand new or having risen dramatically in that time, that deserve to be Pandalized.

In an age of agile development and the two-week sprint cycle, nine months is an eternity. Heck, we’ve minted brand spanking new humans in that span of time!

Fewer Panda updates equal lower quality search results.

Google should want to roll out Panda updates because without them search results get worse. Bad actors creep into the results and reformed sites that could improve results continue to be demoted.

The Panda Problem

Does the lack of Panda updates point to a problem with Panda itself? Yes and no.

My impression is that Panda continues to be a very resource intensive update. I have always maintained that Panda aggregates individual document scores on a site.

[Diagram: Panda document scores]

The aggregate score determines whether you are below or above the Panda cut line.

As Panda has evolved I believe the cut line has become dynamic based on the vertical and authority of a site. This would ensure that sites that might look thin to Google but are actually liked by users avoid Panda jail. This is akin to ensuring the content equivalent of McDonald’s is still represented in search results.
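
To be clear, this is my mental model, not anything Google has published. In code, the speculation looks something like this, with every number invented:

```python
def panda_site_score(document_scores: list[float]) -> float:
    """Aggregate per-document quality scores into one site-level score.
    Averaging is an assumption; any aggregate would fit the model."""
    return sum(document_scores) / len(document_scores)

def panda_cut_line(vertical: str, site_authority: float) -> float:
    """A dynamic cut line per vertical, relaxed for high-authority sites
    that users demonstrably like (the McDonald's exemption)."""
    base = {"recipes": 0.55, "lyrics": 0.60, "news": 0.50}.get(vertical, 0.55)
    return base - 0.10 * site_authority  # authority buys a little slack

pages = [0.8, 0.3, 0.4, 0.6, 0.4]  # a few thin pages drag the site down
in_panda_jail = panda_site_score(pages) < panda_cut_line("recipes", 0.2)
print(in_panda_jail)  # True: 0.50 falls below the 0.53 cut line
```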

But think about what that implies. Google would need to crawl, score and compute every site across the entire web index. That’s no small task. In May John Mueller related that Google was working to make these updates faster. But he said something very similar about Penguin back in September of 2014.

I get that it’s a big task. But this is Google we’re talking about.

Search Quality Priorities

I don’t doubt that Google is working on making Panda and Penguin faster. But it’s clearly not a priority. If it was, well … we’d have seen an update by now.

Because we’ve seen other updates. There’s been Mobilegeddon (the Y2K of updates), a Doorway Page Update, the Quality Update and, just the other day, the Colossus Update. And there’s a drumbeat of advancements and work to leverage entities for both algorithmic ranking and search display.

The funny thing is, the one person who might have helped boost Panda as a priority is no longer there. That’s right, Matt Cutts no longer attends the weekly search quality meeting.

[Screenshot: Google search quality meeting]

As the industry’s punching bag, Matt was able to bring our collective ire and pain to the Googleplex.

Now, I’m certain John Mueller and Gary Illyes both get an earful and are excellent ambassadors. But do they have the pull that Matt had internally? No way.

Eating Cake

We keep hearing that these updates are coming soon. That they’ll be here in a month or a few weeks. There are only so many times you can hear this before you start to roll your eyes and silently say ‘I’ll believe it when I see it.’

What’s more, if Panda still improves search quality then the lack of an update means search quality is declining. Other updates may have helped stem the tide but search quality isn’t optimized.

You can quickly find a SERP that has a thin content site ranking well. (In fact, I encourage you to find and post links to those results in the comments.)

Perhaps Google wants to move away from Panda and instead develop other search quality signals that better handle this type of content. That would be fine, yet it’s obvious that Panda is still in effect. So logically that means other signals aren’t strong enough yet.

At the end of the day it’s not about my own personal angst or yours. It’s not about personal stories of Panda woe as heartbreaking as some of them may be. This is about search quality and putting your money (resources) where your mouth is.

You can’t have your cake and eat it too.


It’s been nearly nine months since the last Panda update. If Panda improves search quality then the prolonged delay means search quality is declining.

Why Growth Hacking Works

April 12 2015 // Marketing // 3 Comments

The reason growth hacking works has nothing to do with growth hacking and everything to do with blowing up organizational silos.

What Is Growth Hacking?

Marketers love to market. We’re forever repackaging and rebranding, even when it comes to our own profession. Can you blame us really? We’re frequently the last ones to get the credit and the first ones to get the blame and corresponding pink slip.

Nevertheless these redefinitions often induce a sigh and eye-roll from yours truly. Do we really have to go through all this again? Perhaps I’m just cranky and getting old.

Sean Ellis was the first to bring the term growth hacking to the mainstream in his 2010 post.

A growth hacker is a person whose true north is growth.  Everything they do is scrutinized by its potential impact on scalable growth.

They must have the creativity to figure out unique ways of driving growth in addition to testing/evolving the techniques proven by other companies.

An effective growth hacker also needs to be disciplined to follow a process of prioritizing ideas (their own and others in the company), testing the ideas, and being analytical enough to know which tested growth drivers to keep and which ones to cut.

His piece compares this to a rather bloated and generic job description for a marketing hire. Perhaps those descriptions exist but this would point toward a larger problem of simply not understanding online marketing.

Andrew Chen followed up with a post that emphasized this division between a ‘traditional marketer’ and a growth hacker.

Let’s be honest, a traditional marketer would not even be close to imagining the integration above – there’s too many technical details needed for it to happen. As a result, it could only have come out of the mind of an engineer tasked with the problem of acquiring more users from Craigslist.

Who is this traditional marketer? I worked as a marketer through both Web 1.0 and Web 2.0, hustling for ‘growth’ wherever I could find it.

Maybe I had an advantage because I came from a direct marketing background. I believe in data. And I love digging into technology.

No OLAP tool? Teach myself SQL. Want to change something on my blog? Learn PHP, HTML and CSS. Need a handy bookmarklet? Learn a bit of JavaScript.

I was (and still am) looking at emerging platforms and tactics to get eyeballs on a brand and productive clicks to a site. And you better be measuring the right way. Get the measurement wrong and you might not achieve real growth.

There are plenty of marketers figuring this stuff out. And plenty of marketers who aren’t.

The Lazy Marketer

I’m frequently hard on marketers as a group because a number of them seem more consumed by expensing dinners and covering their asses, pointing at the well-regarded vendors they hired to do the work, than by actually understanding and doing the work themselves.

Lazy marketers piss me off because they give all marketers a bad reputation. So I understand why folks like Sean and Andrew might want to create an artificial construct that excludes lazy marketers.

But the truth is that marketers have been growth hacking for decades. You don’t think sophisticated RFM (recency, frequency, monetary value) campaigns are a form of growth hacking? I could tell you a thing or two about the strange value of the R dimension.

Brand marketers use data to understand aided and unaided recall. And I remember being shocked as a young account coordinator at an advertising agency at the calculations used to determine the value of sponsorships based on the seconds of TV exposure it generated.

Growth hacking is really just a rejection of lazy marketing.

Because … Growth

I see little distinction between talented online marketers who use technology and data to secure gains and the newly minted growth hackers. They’re drawing on the same skills and mindset.

I’ve been lucky to get a peek into a decent number of organizations over the last few years. What I’ve come to realize is growth hacking works or … can work. But it has everything to do with how an organization integrates growth.

The secret to growth hacking success is the ability to go anywhere in the organization to achieve growth.

A good growth hacker can push for traditional SEO changes, then hop over to the email team and tweak life cycle campaigns, then go to design and push for conversion rate optimization tests, then engage engineering and demand that the site get faster and then approach product with ideas to improve gradual engagement.

When that growth hacker gets pushback from any of these teams they can simply fallback on the central mantra. Why should we do X, Y and Z? Because … growth!

Organize For Growth

As much as I hate to admit it, the term growth hacker often provides a once constrained marketer with greater opportunity to effect change in an organization. A growth hacker with the same skills but a marketing title would be rebuffed or seen as over-stepping their responsibilities.

“Stay in your lane.”

That’s what many talented marketers are told. You’re in marketing so don’t go mucking around in things you don’t understand. It can be wickedly frustrating, particularly when many of the other teams aren’t relying as heavily on data to guide their decisions.

The beauty of the term ‘growth hacker’ is that it doesn’t really fit anywhere in a traditional sense. They’re automatically orbiting the hairball. But the organization must support ventures into all areas of the company for that individual (or team) to succeed.

Simply hiring a growth hacker to work in marketing won’t have the desired impact. I see many companies doing this. They want the results growth hacking can deliver but they aren’t willing to make the organizational change to allow it to happen.

Growth Hackers

Hopefully I’ve convinced you that organizations need to change for growth hacking to be successful. But what about the growth hackers themselves?

The job requires a solid rooting in data and technology with an equal amount of curiosity and creativity to boot. Where the rubber really hits the road is in communication and entrepreneurial backbone.

A good growth hacker needs a fair amount of soft skills so they can effectively communicate and work with other teams. Because even if the organization supports cross-functional growth, those teams aren’t always pooping rainbows when the growth hacker knocks on their proverbial door.

Amid these grumbles, growth hackers are often under a bit of a microscope. As the cliche goes, with great power comes great responsibility. So the growth hacker better be ready to show results.

That doesn’t always mean that what they try works. Failure or ‘accelerated data-informed learning’ is a valuable part of growth hacking. You just better be able to manage the ebb and flow of wins and not lose the confidence of teams when you hit a losing streak.

Frankly, good growth hackers are very hard to find.


Growth hacking skills are nothing new but simply a rebranding exercise for tech-savvy marketers sick of being marginalized. But growth hacking only works when an organization blows up functional silos and allows these individuals to seek growth anywhere in the company.

My Favorite SEO Tool

March 24 2015 // SEO // 32 Comments

My favorite SEO tool isn’t an SEO tool at all. Don’t get me wrong, I use and like plenty of great SEO tools. But I realized that I was using this one tool all the time.

Chrome Developer Tools how I love thee, let me count the ways.

Chrome Developer Tools

The one tool I use countless times each day is Chrome Developer Tools. You can find this handy tool under the View -> Developer menu in Chrome.


Or you can simply right-click and select Inspect Element. (I suppose the latter is actually easier.) Here’s what it looks like (on this site) when you open Chrome Developer Tools.

[Screenshot: Chrome Developer Tools in action]

There is just an incredible amount of functionality packed into Chrome Developer Tools. Some of it is super technical and I certainly don’t use all of the features. I’m only going to scratch the surface with this post.

But hopefully you’re not overwhelmed by it all because there are some simple features that are really helpful on a day-to-day basis.

Check Status Codes

One of the simplest things to do is to use the Network tab to check the status code of a page. For instance, how does a site handle domain-level canonicalization?

[Screenshot: the Network tab]

With the Network tab open I go directly to the non-www version of this site and I can see how it redirects to the www version. In this case it’s doing exactly what it’s supposed to do.

If I want more information I can click on any of these line items and see the header information.

[Screenshot: header detail in the Network tab]

You can catch some pretty interesting things by looking at what comes through the Network tab. For instance, soon after a client transitioned from http to https I noted the following response code chain.

An https request for a non-www URL returned a 301 to the http www version (domain-level canonicalization), which then returned another 301 to the https www version of that URL.

The double 301 and the routing from https to http and back again can (and should) be avoided by doing the domain-level canonicalization and the https redirect at the same time. So that’s what we did … in the span of an hour!
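
If you’d rather audit a redirect chain outside the browser, a few lines of Python will do it. The requests library records every intermediate hop in response.history; the URL here is a placeholder.

```python
import requests

def show_redirect_chain(url: str) -> None:
    """Print each hop (status, URL, Location header) on the way to the end."""
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(response.status_code, response.url)

# A double 301 (https non-www -> http www -> https www) shows up as two
# history entries; the goal is a single 301 straight to the final URL.
show_redirect_chain("https://example.com/some-page")
```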

I won’t get into the specifics of what you can tease out of the headers here because it would get way too dense. But suffice to say it can be a treasure of information.

Of course there are times I fire up something more detailed like Charles or Live HTTP Headers, but I’m doing so less frequently given the advancements in Chrome Developer Tools.

Check Mobile

There was a time when checking to see how a site would look on mobile was a real pain in the ass. But not with Chrome Developer Tools!

[Screenshot: the device mode icon for viewport rendering]

The little icon that looks like a mobile phone is … awesome. Click it!

[Screenshot: selecting a mobile device]

Now you can select a Device and reload the page to see how it looks on that device. Here’s what this site looks like on mobile.

[Screenshot: this site rendered on mobile]

The cool thing is you can even click around and navigate on mobile in this interface to get a sense of what the experience is really like for mobile users without firing up your own phone.

A little bonus tip here is that you can clear the device by clicking the icon to the left and then use the UA field to do specific User Agent (UA) testing.

[Screenshot: clearing the device and using the UA field]

For instance, without a Device selected, what happens when Googlebot Smartphone hits my site? All I have to do is use the UA override and put in the Googlebot Smartphone User Agent.

[Screenshot: UA override with the Googlebot Smartphone string]

Sure enough it looks like Googlebot Smartphone will see the page correctly. This is increasingly important as we get closer to the 4/21/15 mopocalypse.

You can copy and paste from the Google Crawlers list or use one of a number of User Agent extensions (like this one) to do this. However, if you use one of the User Agent extensions you won’t see the UA show up in the UA field. But you can confirm it’s working via the headers in the Network tab.
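
You can script the same check too. Here’s a sketch using the requests library; the UA string below is one published Googlebot Smartphone string, so copy the current one from the Google Crawlers list rather than trusting mine.

```python
import requests

# One published Googlebot Smartphone string (circa 2015); check Google's
# Crawlers list for the current version before relying on it.
GOOGLEBOT_SMARTPHONE = (
    "Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) "
    "AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 "
    "Mobile/12F70 Safari/600.1.4 "
    "(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

response = requests.get("https://example.com/",  # placeholder URL
                        headers={"User-Agent": GOOGLEBOT_SMARTPHONE},
                        timeout=10)
print(response.status_code)
# Diff response.text against a normal browser fetch to spot UA-based serving.
```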

Show Don’t Tell

The last thing I’ll share is how I use Chrome Developer Tools to show instead of tell clients about design and readability issues.

If you go back to some of my older posts you’ll find that they’re not as readable. I had to figure this stuff out as I went along.

This is a rather good post about Five Foot Web Design, which pretty much violates a number of the principles described in the piece. I often see design and readability issues and it can be difficult for a client to get that feedback, particularly if I’m just pointing out the flaws and bitching about it.

So instead I give them a type of side-by-side comparison by editing the HTML in Chrome Developer Tools and then taking a screen capture of the optimized version I’ve created.

You do this by using the Elements tab (1) and then using the Inspect tool (2) to find the area of the code you want to edit.

[Screenshot: the Elements tab (1) and the Inspect tool (2)]

The Inspect tool is the magnifying glass, in case you’re confused, and it lets you zero in on an area of the page. It will highlight the section on the page and then show where that section resides in the code below.

Now, the next step can be a bit scary because you’re just wading into the HTML to tweak what the page looks like.

[Screenshot: editing HTML in the Elements tab]

A few things to remember here. You’re not actually changing the code on that site or page. You can’t hurt that site by playing with the code here. Trust me, I screw this up all the time because I know just enough HTML and CSS to be dangerous.

In addition, if you reload this page after you’ve edited it using Chrome Developer Tools all of your changes will vanish. It’s sort of like an Etch-A-Sketch. You doodle on it and then you shake it and it disappears.

So the more HTML you know the more you can do in this interface. I generally just play with stuff until I get it to look how I want it to look.

[Screenshot: the edited page with a new header, font size and line height]

Here I’ve added a header of sorts and changed the font size and line height. I do this sort of thing for a number of clients so I can show them what I’m talking about. A concrete example helps them understand and also gives them something to pass on to designers and developers.


Chrome Developer Tools is a powerful suite of tools that any SEO should be using to make their lives easier and more productive.

Non-Linking URLs Seen As Links

March 20 2015 // SEO // 27 Comments

(This post has been updated so make sure you read all the way to the bottom.)

Are non-linking URLs (pasted URLs) seen as links by Google? There’s long been chatter and rumor among various members of the SEO community that they are. I found something the other day that seems to confirm this.

Google Webmaster Tools Crawl Errors

I keep a close eye on the Crawl Errors report in Google Webmaster Tools with a particular focus on ‘Not found’ errors. I look to see if they’re legitimate and whether they’re linked internally (which is very bad) or externally.

The place to look for this information is in the ‘Linked from’ tab of a specific error.

[Screenshot: the ‘Linked from’ tab on a 404 error]

Now, all too often the internal links presented here are woefully out-of-date (and that’s being generous). You click through, search for the link in the code and don’t find it. Again and again and again. Such was the case here. This is extremely annoying but it’s a topic for another blog post.

Instead let’s focus on that one external link. Because I figured this was the reason Google continued to return the page as an error even though 1stdibs had stopped linking to it ages ago.

Pasted URL Seen As Link?

That’s not a link! It’s a pasted URL but it’s not a link. (Ignore the retargeted ad.) Looking at the code there’s no <a> tag. Maybe it was there and then removed but that … doesn’t seem likely. In addition, I’ve seen a few more examples of this behavior but didn’t capture them at the time and have since marked those errors as fixed. #kickingmyself

Google (or a tool Google provides) is telling me that the page in question links to this 404 page.

Non-Linking URLs Treated As Links?

It’s not a stretch to think that Google would be able to recognize the pattern of a URL in text and, thus, treat it as a link. And there are good reasons why they might want to since many unsophisticated users botch the HTML.

By treating pasted URLs as links Google can recover those citations, acknowledge the real intent and pass authority appropriately. (Though it doesn’t look like they’re doing that but instead using it solely for discovery.)
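
For what it’s worth, recognizing a pasted URL is trivial for a crawler. Here’s a simplified sketch of the pattern matching involved; a production system would be far stricter about what counts as a URL.

```python
import re

html = """
<p>Great story over at http://example.com/some-page about this.</p>
<p>Also see <a href="http://example.com/linked">this link</a>.</p>
"""

# Strip real anchors first so only unlinked (pasted) URLs remain.
without_anchors = re.sub(r"<a\b[^>]*>.*?</a>", " ", html,
                         flags=re.IGNORECASE | re.DOTALL)

# Simplified URL pattern for illustration only.
pasted_urls = re.findall(r"https?://[^\s<>\"']+", without_anchors)
print(pasted_urls)  # ['http://example.com/some-page']
```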

All of this is interesting from an academic perspective but doesn’t really change a whole lot in the scheme of things. Hopefully you’re not suddenly thinking that you should go out and try to secure non-linking URLs. (Seriously, don’t!)

What’s your take? Is this the smoking gun proof that Google treats non-linking URLs as links?


Apparently John Mueller confirmed this in a Google+ Hangout back in September of 2013. So while seeing it in Google Webmaster Tools might be new(ish), Google clearly acknowledges and crawls non-linked URLs. Thanks to Glenn Gabe for pointing me to this information.

In addition, Dan Petrovic did a study to determine if non-linking URLs influenced rankings and found it likely that they did not. This makes a bit of sense since you wouldn’t be able to nofollow these pasted URLs, opening the door to abuse via blog comments.

Aggregating Intent

March 13 2015 // SEO // 14 Comments

Successful search engine optimization strategies must aggregate intent. This is something I touched on in my What Is SEO post and also demonstrated in my Rich Snippets Algorithm piece. But I want to talk about it in depth because it’s that important.

Aggregating Intent

Many of Google’s Knowledge Cards aggregate intent. Here’s the Knowledge Card displayed when I search for ‘va de vi’.

[Screenshot: Knowledge Card for ‘va de vi’]

Google knows that Va de Vi is a restaurant. But they don’t quite know what my intent is behind such a broad query. Before Knowledge Cards, Google would rely on providing a mixture of results to satisfy different intents. This was effective but inefficient and incomplete. Knowledge Cards make aggregating intent a breeze.

What type of restaurant is it? Is it expensive? Where is it? How do I get there? What’s their phone number? Can I make a reservation? What’s on the menu? Is the food good? Is it open now? What alternatives are nearby?

Just look at that! In one snapshot this Knowledge Card satisfies a multitude of intents and does so quickly.

It’s not just restaurants either. Here’s a Knowledge Card result for ‘astronautalis’.

[Screenshot: Knowledge Card for ‘astronautalis’]

Once again you can see a variety of intents addressed by this Knowledge Card. Who is Astronautalis? Can I listen to some of his music? Where is he playing next? What are some of his popular songs? How can I connect with him? What albums has he released?

Google uses Knowledge Cards to quickly aggregate multiple intents and essentially cover all their bases when it comes to entity based results. If it’s good enough for Google shouldn’t it be good enough for you?

Active and Passive Intent

So how does this translate into the search strategies you and I can implement? The easiest way to think about this is to understand that each query comes with active and passive intent.

Active intent is the intent that is explicitly described by the query syntax. A search for ‘bike trails in walnut creek’ is explicitly looking for a list of bike trails in Walnut Creek. (Thank you, Captain Obvious.)

You must satisfy active intent immediately.

If a user doesn’t immediately see that their active intent has been satisfied, they’re going to head back to search results. Trust me, you don’t want that. Google doesn’t like pogosticking. This means that at a glance users must see the answer to their active intent.

One of the mistakes I see many making is addressing active and passive intent equally. Or simply not paying attention to query syntax and decoding intent properly. More than ever, your job as an SEO is to extract intents from query syntax.

Passive intent is the intent that is implicitly described by the query syntax. A search for ‘bike trails in walnut creek’ is implicitly looking for trail maps, trail photos, trail reviews and attributes about those trails such as difficulty and length, to name a few.

You create value by satisfying passive intent.

When you satisfy passive intent you’ll see page views per session and time on site increase. You’re ensuring that your site generates long clicks, which is incredibly important from a search engine perspective. It also happens to be the way you build your brand, convert users and wean yourself from being overly dependent on search engine traffic.

I think one of the best ways to think about passive intent is to ask yourself what the user would search for next … over and over again.

Intent Hierarchy

It’s essential to understand the hierarchy of intent so you can deliver the right experience. This is where content and design collide with “traditional” search. (I use the quotes here because I’ve never really subscribed to search being treated as a niche tactic.)

SEO is a user-centric activity in this context. The content must satisfy active and passive intent appropriately. Usually this means that there is ample content to address active intent and units or snippets to satisfy passive intent.

The design must prominently feature active intent content while providing visual cues or a trail of sorts to show that passive intent can also be satisfied. These things are important to SEO.

We can look at Google’s Knowledge Cards to see how they prioritize intent. Sometimes it’s the order in which the content is presented. For instance, the ‘people also search for’ unit is usually at the bottom of the card. These alternatives always represent passive intent.

For location-based entities the map and directions are clearly given more priority by being at the top (and having a strong call to action). While the reviews section is often presented later on, it takes up a material amount of real estate, signaling higher (and potentially active) intent. Elements serving more passive intent (address, phone, hours, etc.) are still available but are not given as much visual weight.

For an artist (such as Astronautalis) you’ll see that listening options are presented first. Yes, it’s an ad-based unit but it also makes sense that this would be an active intent around these queries.

It’s up to us to work with content and design teams to ensure the hierarchy of intent is optimized. Simply putting everything on the page at once or with equal weight will distract or overwhelm the user and chase them back to search results or a competitor.

Decoding Intent

While the days of having one page for every variant of query syntax are behind us, we’re still not at the point where one page can address every query syntax and the intents behind them.

If I search for ‘head like a hole lyrics’ the page I reach should satisfy my active intent and deliver the lyrics to this epic NIN song. To serve passive intent I’d want to see a crosslink unit to other songs from Pretty Hate Machine as well as other NIN albums. Maybe there’s another section with links to songs with similar themes.

But if I search for ‘pretty hate machine lyrics’ the page I reach should have a list of songs from that album, with each song linking to a page with its lyrics. The crosslink unit on this page would be to other NIN albums and potentially to similar artists’ albums.

By understanding the query syntax (and in this case query classes) you can construct different page types that address the right hierarchy of intent.

Target the keyword, optimize the intent.


Aggregating intent and understanding how to decode, identify and present active and passive intent from query syntax is vital to success in search and beyond.

Roundup Posts

February 26 2015 // Marketing + Rant // 35 Comments

I’m increasingly conflicted about roundup posts. You know, the kind where 23 experts answer one burning question and their answers are all put together in one long blog post. Instant content! I don’t produce roundup posts, rarely read them and infrequently contribute to them.

Roundup Dynamics

The dynamics of a roundup post are pretty clear. The person aggregating the answers gets what is essentially free content for their site. Yes, I know you had to email people and potentially format the responses but the level of effort isn’t particularly high.

In exchange, the person providing the answers gets more exposure and gains some authority by being labeled an expert. Even better if your name is associated with other luminaries in the field. It’s an interesting and insidious form of social proof.

Flattery Will Get You Everywhere

It feels good to be asked to participate in roundup posts. At least at first. You’ve been selected as an expert. Talk about an ego boost!

The beauty of it is that there will always be people who want that recognition. So even if some tire of participating there is a deep reservoir of ego out there ready to be tapped. No matter what I think or write I’m certain we’ll continue to see roundup posts.

I still prefer individual opinion and thought pieces. I like when people step out on the ledge and take a stand one way or the other. Even if I disagree with you, I recognize the effort invested and bravery displayed.

Saturation Marketing Works

I’m a marketer with an advertising background. I know saturation marketing works. So participating in roundup posts seems like a smart strategy. People see your name frequently and you’re always being portrayed in a positive light.

No matter where people turn they’re running into your name and face and you’re being hailed as an expert. Whoo-hoo! What’s wrong with that?

What’s The Frequency Kenneth?

How good is the content in these roundup posts? How much effort are these experts expending? I’m sure some spend a good deal of time on their contribution, if for no other reason than the desire to have the most insightful, provocative or humorous entry. I can’t be alone in thinking this way.

But at some point, as the number of requests rises (and they will since success begets success), you may realize that it’s just about the contribution. Showing up is 90% of the game. It’s not that the responses are bad, but they’re more like off-the-cuff answers than well thought out responses.

Remember Sammy Jankis

Of course, I’m always thinking about how these contributions are being remembered. In a large roundup post is my name and contribution going to be remembered? I somehow doubt it. At least not the specifics.

So the only thing I really gain is installing (yes I do think of the brain like software) the idea of expertise and authority in a larger group of people. Because if you see my name enough times you’ll make those connections.

That’s powerful. No doubt about it.

Why So Serious?

I ask myself why I bristle at roundup posts. Why am I increasingly reticent to contribute given my understanding of the marketing value? Am I somehow sabotaging my own success?

All too often I feel like roundup posts don’t deliver enough value to users. The content is uneven and often repetitive from expert to expert, exacerbating scanning behavior. It’s content that makes me go ‘meh’.

I might be dead wrong and could be committing the cardinal sin of marketing by relying on myself as the target market. Yet I don’t think I’m alone. I’ve spoken to others who skip these posts or, worse, have a dim view of those contributing.

Bud Light or Ruination IPA

The top-selling beer in the US last year was Bud Light. For many, achieving Bud Light status is the pinnacle of success. The thing is … I don’t want to be Bud Light. Or more to the point, I don’t provide services that match the Bud Light audience.

Let’s see if I can express this next part without sounding like a douchebag.

I don’t run a large agency. I’m not in the volume business. Many of my clients are dubious of the public discourse taking place on digital marketing. They rely on their professional networks to connect them to someone who can make sense of it all and sort fact from fiction. Because, and here’s the hard truth, they don’t really believe all those people are experts.

My clients are those who crave a deliciously bitter Ruination IPA. And the way to find and appeal to those people is different. Budweiser spent gobs on Super Bowl advertising. Stone Brewing? Not so much.

So, I’m left thinking about the true meaning of authority and expertise. It’s subjective. Obviously a lot of people dig Bud Light. That’s cool. But that’s not my audience. I’m seeking authority from a different audience.

Roundup Posts

I’ll still participate in roundup posts from time to time, though I may have just shot myself in the foot with this piece. I’m inclined to contribute to posts that cover a topic I might not normally write about or to a site that has a different audience.

My goal is to ensure I maintain some visibility, without going overboard, while securing authority with new audiences that match my business goals. Your business goals might be different, so contributing to lots and lots of roundup posts might be right up your alley.


There’s nothing inherently wrong with roundup posts as a part of your content marketing strategy. But you should understand whether this tactic reaches your target market and aligns with your business goals.

We Want To Believe

January 20 2015 // Marketing + SEO + Social Media // 5 Comments

Fake news and images are flourishing and even Snopes can’t hold back the tide of belief. Marketers should be taking notes, not to create their own fake campaigns but to understand the psychology that makes it possible and connect that to digital marketing trends.

We Want To Believe

Agent Mulder, of the X-Files, was famous for his desire to believe in aliens and all sorts of other phenomena. The truth is, we all want to believe. Maybe not in aliens but a host of other things. It’s not that we’re gullible, per se, but that we are inherently biased and seek what feels like the truth.

One of the pieces that fooled some of my colleagues was the story about a busy restaurant that had commissioned research on why its service ratings had declined over time.

[Screenshot: the Craigslist post presenting the restaurant ‘research’]

This was a post on Craigslist in the rants & raves section. Think about that for a moment. This is not a bastion of authenticity. But the post detailed patrons’ obsession with their phones and the inordinate amount of time they took texting and taking pictures of their food.

This self-absorbed, technology-obsessed customer was the real problem. Many reported this ‘research’ as fact because the findings were ones that people wanted to believe. Too many of us have witnessed something similar. We have experience that creates a bias to believe.

We wanted the story to be true because it felt right and matched our preconceptions and beliefs.

The Subversive Jimmy Kimmel

While Jimmy Fallon may be the more affable of the late-night hosts, Jimmy Kimmel has been doing what I think is some ground-breaking work. His most popular pranks have exposed our desire to believe.

Twerking was jumping the shark and Kimmel’s viral twerk-fail video tapped into our collective eye-roll at the practice. But less than a week later Kimmel revealed that it was all a hoax.

He didn’t stop there though. The next time he enlisted Olympian Kate Hansen to post a video that purportedly showed a wolf wandering the halls at the Sochi Olympics.

Once again, Kimmel revealed that while it was a wolf, it wasn’t anywhere near Russia. I’m not sure people give Kimmel enough credit. He made people believe there were wolves roaming the halls at the Olympics!

Now, why did we believe? We believed because the narrative was already set. Journalists were complaining about the conditions at Sochi. So when the wolf video appeared and it was endorsed by an Olympic athlete no less, well, we fell for it. It matched our expectations.

It’s not about the truth, it’s about it making sense.

Experience, Belief and Marketing

So how does our desire to believe connect to marketing? Marketers should be figuring out how to create a narrative and set expectations.

Content marketing is popular right now because it provides us the opportunity to shape expectations.

I’ve written quite a bit about how to maximize attention. If you only remember one thing from that piece it’s that we’re constantly rewriting our memory.

Every interaction we have with a site or brand will cause us to edit that entry in our head, if even just a little. Each time this happens marketers have an opportunity to change the narrative and reset expectations.

For a restaurant this means that a bad meal, even after having numerous good ones in the past, can have a serious impact on that patron’s perception and propensity to return. I used to love eating at Havana, a nearby Cuban restaurant. My wife and I had many great meals (and Mojitos) there. But about a year ago we had a sub-par dinner.

Because we’d had so many good meals before we wrote it off as an aberration. This is an important thing to understand. Because what it really means is that we felt like our experience didn’t match our expectation. But instead of changing our expectation we threw away that experience. You should get a raise if you’re able to pull this off as a marketer.

We returned a few months later and it was another clunker. This time we came to the conclusion that the food quality had simply taken a nose dive. We haven’t been back since. Our perception and expectation changed in the span of two bad experiences.

Content, in any form, follows the same rules. Consistently delivering content that reinforces or rewrites a positive brand expectation is vital to success.

Know The Landscape

Beached Whale Revealed In Painting

Our experiences create context and a marketer needs to understand that context, the landscape, before constructing a content strategy. Because it’s not about the truth. It’s about what people are willing to believe.

All too often I find companies that struggle with this concept. They have the best product or service out there but people are beating a path to their competitor(s) instead. It’s incomprehensible. They’re indignant. Their response is usually to double down on the ‘but we’re the best’ meme.

Nearly ten years ago I was working at Alibris, a used, rare and out-of-print book site. Within the bookselling community the Alibris name was mud. The reason could be traced back to when Alibris entered the market. The Alibris CEO was blunt, telling booksellers that they would be out of business if they didn’t jump on the bandwagon.

He was right. But the way the message was delivered, among other things, led to a general negative perception of the brand among booksellers, a notoriously independent bunch. #middlefingersraised

How could I change this negative brand equity? Did I just tell sellers that we were awesome? No. Instead I figured out the landscape and used content and influencer marketing to slowly change the perception of the brand.

Our largest competitor was Abebooks. So I signed up as a seller there, which also gave me access to their community forum. It was here that I carefully read seller complaints about the industry and about Abebooks itself. What I came to realize was that many of their complaints were actually areas where Alibris excelled. Sellers just weren’t willing to see it because of their perception (or expectation) of the brand.

So every month in our seller newsletter I would talk about an Alibris feature that I knew would hit a nerve. I knew that it was a pain point for the industry or an Abebooks pet peeve. Inevitably, these newsletter items were talked about in the forums. At first the response went a little like this. “Alibris is still evil, but at least they’re doing something about this one thing.”

At the same time I identified vocal detractors of our brand and called them on the phone. I wanted them to vent and asked them what it would take for them to give Alibris a try. My secret goal was to change their perception of the brand, to humanize it, and neutralize their contribution to the negative narrative in the community.

It didn’t happen overnight but over the course of a year the narrative did change. Booksellers saw us as a brand trying to do right by them, perhaps ‘seeing the error of our ways’ and forging a new path. They gave us the benefit of the doubt. They grudgingly told stories about how sales on Alibris were similar to those on Abebooks.

I’d changed the narrative about the brand.

I didn’t do this through cheerleading. Instead, I led the community to content that slowly rewrote their expectations of Alibris. I never told them Alibris was better, I simply presented content that made them re-evaluate their perception of ‘Abebooks vs. Alibris’.

Influencer Marketing

Why do some of these fake stories take hold so quickly? The Sochi wolf had a respected Olympic athlete in on the gag. She was a trusted person, an influencer, with no real reason to lie.

Fake NASA Weightless Tweet

People wouldn’t have believed this false weightless claim if it hadn’t been delivered as a (spoofed) Tweet from NASA’s official Twitter account. Our eyes told us that someone in authority, the ultimate authority in this case, said it was true. That and we wanted to believe. Maybe this time in something amazing. Not aliens exactly but close.

So when we talk about influencer marketing we’re talking about enlisting others who can reinforce the narrative of your brand. These people can act as a cementing agent. It’s not so much about their reach (though that’s always nice) but the fact that it suddenly makes sense for us to believe because someone else, someone we trust or respect, agrees.

At that point we’re more willing to become evangelizers of the brand. That’s the true value of influencer marketing. People will actively start passing along that positive narrative to their friends, family and colleagues. If you’re familiar with the Net Promoter concept you can think of influencer marketing as a way to get people from passives (7-8) to promoters (9-10).
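If you want to put a number on that movement, the Net Promoter arithmetic itself is dead simple: the percentage of promoters minus the percentage of detractors. Here’s a minimal sketch in Python (the survey scores are made up for illustration):

    # Minimal sketch of the standard Net Promoter Score calculation.
    # Scores 0-6 are detractors, 7-8 are passives, 9-10 are promoters.
    def net_promoter_score(scores):
        total = len(scores)
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / total

    # Example: nudging two passives (8) up to promoters (9) lifts the score.
    before = [10, 9, 8, 8, 7, 6, 4]
    after = [10, 9, 9, 9, 7, 6, 4]
    print(net_promoter_score(before))  # 0.0
    print(net_promoter_score(after))   # ~28.6

Notice that passives don’t subtract from the score, which is exactly why moving them up one notch is such a high-leverage play.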

Influencer marketing converts customers into evangelizers who actively spread your brand narrative.

Justin Timberlake Is A Nice Guy?

Dick In a Box Sceenshot

Take my opinion (and probably yours) of Justin Timberlake. He seems like a really nice guy, right? But I don’t know Justin. I’ve never met him and odds are neither have you. For all we know, he could be a raging asshole. But I think he isn’t because of a constant drip of content that has shaped my opinion of him.

He’s the guy who is willing to do crazy stuff and poke fun at himself on SNL. He goes on prom dates. He’s the sensitive guy who encourages a musician in a MasterCard commercial. He celebrates at Taco Bell. I don’t even like his music but I like him.

The next thing I want to say is that it probably helps that he really is a nice guy. But I honestly don’t know that! I want to believe that but I’m also sure he has a very savvy PR team.

Uber Is Evil?

Skepticism Intensifies

Uber is a great example of what happens when you lose control of the narrative. A darling of the ‘sharing economy’, Uber might torpedo that movement because they’re suddenly seen as an uber-villain. (Sorry, I couldn’t help it.)

Once again, it’s about consistency. It’s about rewriting that perception. Taking a brand down generally doesn’t happen with just one gaffe. You have to step in it over and over again.

Uber’s done that. From placing fake orders and other dirty tricks against competitors, to threatening journalists, to violating user privacy, to surge pricing, to sexual assault, to verbal abuse of a cancer patient.

Suddenly, every Uber story fits a new narrative and expectation. Uber is evil. Is that the truth? Not really. Is it what we want to believe? Yup.

Uber screwed up numerous times, but its negative brand equity is partly due to the landscape. Enough people (me included) who aren’t keen on the sharing economy took Uber’s missteps as an opportunity to float an alternate narrative, attacking the sharing economy by proxy.

Either way, it became increasingly easy to get stories published that met this new expectation and increasingly difficult for positive experiences to see the light of day. This is explained incredibly well in a case study on Internet celebrity brought to my attention by Rand Fishkin.

The video is 19 minutes long, which is usually an eternity in my book. But this video is worth it. Every marketer should watch it all the way through.

A Content Marketing Framework

I realize that I use a number of terms almost interchangeably throughout this piece. In truth, there are wrinkles and nuance to these ideas. If they weren’t confusing then everyone would be a marketing god. But I want to provide a strawman framework for you to remember and try out.

Why Content Marketing Works

Our experience with content creates context or bias that changes our belief or perception of a brand, resulting in a new expectation when we encounter that brand again.

At any point in this journey a person can be exposed to a competitor’s content which can change context and bias. In addition, influencer marketing and social proof can help reinforce context and cement belief.

I’d love to hear your feedback on this framework and whether it helps you to better focus your marketing efforts.


The lesson marketers should take from the proliferation of fake news and images isn’t to create our own fake stories or products. Instead we should be deciphering why people believe and using that knowledge to construct more effective digital marketing campaigns.

Google Autocomplete Query Personalization

January 14 2015 // SEO // 22 Comments

The other day a friend emailed me asking if I’d ever seen behavior where Google’s autocomplete suggestions would change based on a prior query.

Lucifer from Battlestar Galactica

I’ve seen search results change based on prior queries but I couldn’t recall the autocomplete suggestions changing in the way he detailed. So I decided to poke around and see what was going on. Here’s what I found.

Query Dependent Autocomplete Example

Here’s the example I was sent. The individual was cleaning up an old computer and didn’t quite know the purpose of a specific program named ‘WineBottler’.

Search Result for WineBottler

Quickly understanding that he didn’t need this program anymore, he began to search for ‘uninstall winebottler’ but found that Google’s autocomplete had beaten him to it.

Query Dependent Google Autocomplete

There it was already listed as an autocomplete suggestion. This is very different from doing the uninstall query on a fresh session.

Normal Autocomplete Suggestions

I was intrigued. So I started to try other programs in hopes of replicating the query dependent functionality displayed. I tried ‘SnagIt’ and ‘Photoshop’ but each time I did I got the same generic autocomplete suggestions.

Query Class Volume

Coincidentally I was also chatting with Barbara Starr about an old research paper (pdf) that Bill Slawski had brought to my attention. The subject of the paper was identifying what I call query classes, or templates of a sort, expressed as a root term plus a modifier. Easy examples might be ‘[song] lyrics’ or ‘[restaurant] menu’.

So what does this have to do with autocomplete suggestions? Well, my instinct told me that there might be a query class of ‘uninstall [program]’. I clicked over to Ubersuggest to see if I just hadn’t hit on the popular ones, but the service was down. Instead I landed on SERPs Suggest, which was handy since it also brought in query volume for those autocomplete suggestions.

I searched for ‘uninstall’ and scrolled to where the results were making the most sense to me.

SERPs Suggests Keyword Tool

Quite obviously there is a query class around ‘uninstall [program]’. Now it was time to see if those with high volume (aka intent) would trigger the query class based autocomplete suggestions.
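If you want to replicate this kind of query class hunting yourself, most of these keyword tools sit on top of Google’s unofficial suggest endpoint. Here’s a minimal Python sketch; note that the endpoint and its JSON response format are undocumented and could change or disappear at any time:

    # Sketch: pull autocomplete suggestions for a seed term from Google's
    # unofficial suggest endpoint (undocumented; treat as fragile).
    import json
    import urllib.parse
    import urllib.request

    def suggestions(seed):
        url = ('https://suggestqueries.google.com/complete/search'
               '?client=firefox&q=' + urllib.parse.quote(seed))
        with urllib.request.urlopen(url) as response:
            # Response looks like: ["uninstall ", ["uninstall mackeeper", ...]]
            return json.loads(response.read().decode('utf-8'))[1]

    for suggestion in suggestions('uninstall '):
        print(suggestion)

This won’t give you query volume the way SERPs Suggest does, but it’s a quick way to eyeball whether a root term spawns a query class.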

Query Class Based Autocomplete Suggestions

The scourge of the pop-under world, MacKeeper, jumped out at me so I gave that one a try.

MacKeeper Search Result

Google Autocomplete for Uninstall after MacKeeper query

Sure enough, the first autocomplete suggestion is ‘uninstall mackeeper’. It’s also interesting to note that the prior query is kept as a reference in the URL. This isn’t new. It’s been like that for quite some time, but it makes this type of scenario far easier to explain.

At random I tried another one from my target list.

Parallels Search Results

Uninstall Autocomplete after Parallels Query

Yup. Same thing.

Classes or Attributes?

It got me thinking, though, about whether this was driven by query classes or just by attributes of an entity. So I poked around a bit more and was able to find examples in the health field. (Sorry to be a Debbie Downer.) Here’s a search for lymphoma.

Lymphoma Search Results

Followed by a search for treatment.

Autocomplete for Treatment after Lymphoma Query

This differs from a clean search for ‘treat’.

Treat Autocomplete Suggestions

Treatment is an attribute of the entity Lymphoma. Then again ‘treatment of [ailment]’ is also a fairly well-defined query class. So perhaps I’m splitting hairs in trying to pry apart classes from attributes.
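To make the distinction concrete, here’s a purely speculative sketch of what session-based suggestion boosting might look like, whether you frame it as query classes or entity attributes. To be clear, this is not Google’s implementation; the templates and the matching logic are invented for illustration:

    # Purely speculative sketch of session-aware autocomplete boosting.
    # The query class templates below are invented for illustration.
    QUERY_CLASSES = {
        'uninstall': 'uninstall {program}',
        'treatment': 'treatment of {ailment}',
    }

    def personalized_suggestions(prefix, prior_query, generic):
        """Boost a query class match for the prior query in this session."""
        template = QUERY_CLASSES.get(prefix.strip().lower())
        if template:
            # Fill the slot with the prior query, e.g. 'uninstall mackeeper'.
            candidate = template.split(' {')[0] + ' ' + prior_query.lower()
            return [candidate] + generic
        return generic

    print(personalized_suggestions('uninstall', 'MacKeeper',
                                   ['uninstall skype', 'uninstall avast']))
    # ['uninstall mackeeper', 'uninstall skype', 'uninstall avast']

Under a model like this, music might fail simply because ‘tour dates’ and ‘tickets’ templates aren’t enabled, or because the thresholds for them are set higher.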

It Doesn’t Always Work

I figured I could find more of these quickly and selected a field that I thought had many query classes: music. Search for a band, then search for something like ‘tour dates’ or ‘tickets’ and see if I could get the query dependent autocomplete suggestions to fire.

I tried Kasabian.

Kasabian Search Results

And then tour dates.

Tour Dates Autocomplete Suggestions

Nothing about Kasabian at all. Just generic tour dates autocomplete suggestions. I tried this for many other artists including the ubiquitous Taylor Swift and got the same results, or lack thereof.

I had a few theories of why music might be exempted but it would all just be conjecture. But it did put a bit of a dent into my next leap in logic, which would have been to conversational search.

Not Conversational Search

One of the bigger components of Hummingbird was the ability to perform conversational search that, often, wouldn’t require the user to reference the specific noun again. The classic example: ‘How tall is the Eiffel Tower?’ followed by ‘Who built it?’

Now in the scheme of things conversational search is, in part, built upon identifying query classes and how people string them together in a query session. So it wouldn’t be a shock if this started showing up in Google’s autocomplete suggestions. Yet that’s not what appears to be happening.

That’s because you can do a voice search using Google Now for ‘Kasabian’ and then follow up with ‘tickets for them’ and get a very different and relevant set of results. Google figures out the pronoun reference and substitutes appropriately to generate the right query: ‘Kasabian tickets’.
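That substitution step is easy to sketch in toy form. Again, this is an illustration of the idea, not Google’s pipeline:

    # Toy illustration of conversational pronoun substitution.
    # Real coreference resolution is far more sophisticated than this.
    PRONOUNS = {'it', 'them', 'they', 'him', 'her'}

    def rewrite(query, last_entity):
        """Swap any pronoun for the entity from the prior query."""
        words = [last_entity if word.lower() in PRONOUNS else word
                 for word in query.split()]
        return ' '.join(words)

    print(rewrite('tickets for them', 'Kasabian'))
    # 'tickets for Kasabian' -- effectively the query 'Kasabian tickets'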

What Does Google Say?

Of course it pays to see what Google says about their autocomplete suggestions, er, predictions.

About Google Autocomplete Predictions

I find it interesting that they call them predictions and not suggestions. It’s far more scientific. More Googly. But I’m not changing my references throughout this piece!

But here we can see a probable mash-up of “search activity of users” (aka query classes) and “relevant searches you’ve done in the past” (aka query history). Previously, the query history portion was more about ensuring that my autocomplete for ‘smx’ might start with ‘smx east’.

Personalized Autocomplete

While the autocomplete for someone unaffiliated with search wouldn’t get that suggestion.

Nonpersonalized Autocomplete

So I’m left to think that this session-based autocomplete personalization is relatively new but may have been going on for quite some time without many people noticing.

There’s a lot more research that could be done here so please let me know if and when you’ve noticed this feature as well as any other examples you might have of this behavior.

For Google the reason for doing this is easy. It’s just one more way that they can reduce the time to long click.


Google is personalizing autocomplete suggestions based on a prior query when it matches a defined query class or entity attribute.

Image Blind

December 16 2014 // Analytics + SEO // 15 Comments

Images are an increasingly important part of the Internet landscape. Yet marketers are provided very little in the way of reliable metrics to allow us to understand their power and optimize accordingly. This is doubly strange given the huge amount of research going on regarding images within search engine giants such as Google.

Image Tracking In Google Analytics

There is none. Or at least there is no image search tracking in Google Analytics unless you create filters based on referrers. I wrote about how to track image search in Google Analytics in March of 2013 and updated that post in April of 2014.

The problem with this method is that it is decreasing in usefulness. I still use it and recommend it because some visibility is better than none. But when Chrome removed the referrer completely from these clicks earlier this year, it really hurt the accuracy of the filter.
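For those who haven’t set that filter up, it boils down to pattern matching the referrer. A rough sketch of the idea (the exact referrer strings vary by country-specific Google domain and have changed over the years, so treat the pattern as an example rather than gospel):

    # Rough sketch: classify a referrer as Google image search traffic.
    # Referrer formats vary by country TLD and have changed over time;
    # Chrome often strips the referrer entirely, so this undercounts.
    import re

    IMAGE_SEARCH = re.compile(
        r'^https?://(www\.)?google\.[a-z.]+/imgres', re.IGNORECASE)

    def is_image_search(referrer):
        return bool(referrer and IMAGE_SEARCH.search(referrer))

    print(is_image_search('http://www.google.com/imgres?imgurl=example'))  # True
    print(is_image_search('https://www.google.com/'))                      # False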

Who cares, you might be asking. I care because image search intent and the resulting user behavior are often wildly different from web search.

Google Image Search Traffic Behavior

The users coming to the site above via web search have vastly different behavior metrics than those coming from image search. I’ve highlighted the dramatic pages per visit and time on site metrics. Shouldn’t we be building user stories and personas around this type of user?

For a while I explained away the lack of image search tracking in Google Analytics under the umbrella of privacy. I understand that Google was pretty much forced to move to ‘not provided’ because of lawsuits, Gaos v. Google Inc. in particular. I get it.

But I’m with Chris Messina. Privacy shouldn’t be a four letter word. And the one company who has the best chance of changing the conversation about it is Google. But let’s not go down the privacy rabbit hole. Because we don’t have to.

Right now Google Analytics provides other data on how people search. They break things down by mobile or tablet. We can even get down to the device level.

Google Analytics by Device

Are we really saying that knowing a user came in via image search is more personally identifiable than knowing what device they were using? Both are simply different pieces of metadata about how a user searched.

Furthermore, on both web and image search I can still drill down and see what page they landed on. In both instances I can make some inferences on what term was used to get them to that page.

There is no inherent additional data being revealed by providing image search as a source.

Image Clicks in Google Webmaster Tools

I wouldn’t be as frothed up about this if it was just Google Analytics. Because I actually like Google Analytics a lot and like the people behind it even more.

But then we’ve got to deal with Google Webmaster Tools data on top of that, and it’s an even bigger mess. First let’s talk about the dark pattern where, when you look at your search queries data, the Web filter is automatically applied. #notcool

Default Web Filter for Search Queries in GWT

I’m sure there’s an argument that it’s prominent enough and might even draw the user’s attention. I could be persuaded. But defaults are dangerous. I’d hazard there are plenty of folks who don’t even know that you can see this data with other filters.

And a funny thing happens with sites that have a lot of images (think eCommerce) when you look at this data. It doesn’t make an ounce of sense.

What happens if I take a month’s worth of image filtered data and a month’s worth of web filtered data and then compare that to the actual data reported in Google Analytics?

Here’s the web filtered data, which actually covers November 16 to December 14. It shows 369,661 clicks.

GWT Web Filter Example

Now here’s the image filtered data from the same time frame. It shows 965,455 clicks.

GWT Image Filter Traffic Graph

Now here’s what Google Analytics reports for the same timeframe.

Google Analytics Traffic Comparison

For those of you slow on the uptake, the image click data from Google Webmaster Tools exceeds the entire organic search traffic reported! Not just Google but organic search in total. Put web and image together and we’re looking at 1.3 million clicks according to Google Webmaster Tools.
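Anyone can run this sanity check in a few lines once you’ve exported the web filtered and image filtered query data from Google Webmaster Tools and the organic traffic from Google Analytics. The file and column names below are placeholders for whatever your exports use:

    # Sketch of the sanity check: compare GWT click totals against GA
    # organic sessions for the same date range. File names are placeholders.
    import csv

    def total(path, column):
        with open(path, newline='') as f:
            return sum(int(row[column].replace(',', ''))
                       for row in csv.DictReader(f))

    web_clicks = total('gwt_web.csv', 'Clicks')
    image_clicks = total('gwt_image.csv', 'Clicks')
    ga_organic = total('ga_organic.csv', 'Sessions')

    print('GWT web + image clicks:', web_clicks + image_clicks)
    print('GA organic sessions:   ', ga_organic)
    # If image clicks alone exceed all organic traffic reported in GA,
    # something is off with the definition of an image click.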

I’m not even going to get into the ratio of image clicks versus web clicks and how it bears no connection to the ratio seen in Google Analytics. Even taking the inaccuracy of the Google Analytics filters into account, it points to one very clear truth.

The image click data in Google Webmaster Tools is wonky.

So that raises the question: what exactly is an image click? It doesn’t seem to be limited to clicks from image search through to that domain. So what does it include?

This blog is currently number three for the term ‘cosmic cat’ in image search (#proud) so I’ll use that as an example.

What Is an Image Click?

Do image clicks include clicks directly to the image, which are generally not on that domain and not counted in most traffic packages including Google Analytics? Maybe. But that would mean a lot of people were clicking on a fairly small button. Not impossible but I’d put it in the improbable bucket.

Or do image clicks include any time a user clicks to expand that image result? This makes more sense given what I’m seeing.

But that’s lunacy. That’s comparing apples to oranges. How does that help a marketer? How can we trust the data in Google Webmaster Tools when we encounter such inconsistencies?

Every webmaster should be inquiring about the definition of an image click.

The definition (of sorts) provided by Google in their support documentation doesn’t help.

GWT Search Queries FAQ

The first line is incorrect and reflects that this document hasn’t been updated for some time. (You know, I hear care and attention to detail might be a quality signal these days.) There is a line that might explain the image click bloat, but it’s tucked under the devices section rather than anywhere near the discussion of image clicks.

Long story short, the documentation Google Webmaster Tools provides on this point isn’t helpful. (As an aside, I’d be very interested in hearing from others who have made the comparison of image filter and web filter clicks to Google Analytics traffic.)

Images During HTTPS Conversion

These problems came to a head during a recent HTTP to HTTPS conversion. Soon after the conversion the client involved saw a noticeable decline in search traffic. Alarm bells went off and we all scrambled to figure out what was going on.

This particular client has a material amount of images so I took the chart data from both HTTP and HTTPS for web and image clicks and graphed them together.
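The graphing itself is trivial once the series are exported. A sketch with fabricated data, just to show the shape of the exercise:

    # Sketch: graph web vs. image clicks across an HTTP -> HTTPS migration.
    # The numbers below are fabricated; real series come from GWT exports.
    import matplotlib.pyplot as plt

    days = list(range(30))
    series = {
        'web (http + https)': [12000 - 50 * d for d in days],
        'image (http + https)': [30000 if d <= 10 else 30000 - 800 * (d - 10)
                                 for d in days],
    }
    for label, clicks in series.items():
        plt.plot(days, clicks, label=label)
    plt.legend()
    plt.xlabel('Days around HTTPS migration')
    plt.ylabel('Clicks')
    plt.show()

Graphing HTTP and HTTPS together is the important part; looking at either property alone hides the migration effect.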

Exasperated Picard

In doing so, the culprit in the post-conversion decline was clearly image traffic! Now, some of you might be thinking that this shows the Google Webmaster Tools data is just fine. You’d be wrong! The data there is still incorrect. It’s just wrong consistently enough for me to track fluctuations. I’m glad I can do it, but relying on consistently bad data isn’t something I’m cheering about.

The conclusion here seems to be that it takes a long time to identify HTTPS images and match them to their new HTTPS pages. We’re seeing traffic starting to return but it’s slower than anyone would like. If Google wants sites to convert to HTTPS (which they do) then fixing this image search bottleneck should be a priority.

Image Blind?

I'm Mad as Hell And ...

The real problem here is that I was blindsided due to my lack of visibility into image search. Figuring out what was going on took a fair number of man-hours because the metrics that would have told us weren’t readily available.

Yet in another part of the Googleplex they’re spending crazy amounts of time on image research.

Google Image Advancements

I mean, holy smokes Batman, that’s some seriously cool work going on. But then I can’t tell image search traffic from web search traffic in Google Analytics and the Google Webmaster Tools data often shows more ‘image clicks’ to a site than total organic traffic to the site in the same time period. #wtf

Even as Google is appropriately moving towards the viewable impressions metric for advertisers (pdf), we marketers can’t make heads or tails of images, one of the most important elements on the web. This needs to change.

Marketers need data that they can both rely on and trust in to make fact based decisions.


Great research is being done by Google on images, but they are failing marketers when it comes to image search metrics. The complete lack of visibility in Google Analytics, coupled with ill-defined image click data in Google Webmaster Tools, leaves marketers in the dark about an increasingly important type of Internet content.