2012 Internet, SEO and Technology Predictions

December 27 2011 // Analytics + SEO + Technology // 8 Comments

It's time again to gaze into my crystal ball and make some predictions for 2012.

Crystal Ball Technology Predictions

2012 Predictions

For reference, here are my predictions for 2011, 2010 and 2009. I was a bit too safe last year so I'm making some bold predictions this time around.

Chrome Becomes Top Browser

Having already surpassed Firefox this year, Chrome will see accelerated adoption, surpassing Internet Explorer as the top desktop browser in the closing weeks of 2012.

DuckDuckGo Cracks Mainstream

Gabriel Weinberg puts new funding to work and capitalizes on the 'search is about answers' meme. DuckDuckGo leapfrogs over AOL and Ask in 2012, securing itself as the fourth largest search engine.

Google Implements AuthorRank

Google spent 2011 building an identity platform, launching and aggressively promoting authorship while building an internal influence metric. In 2012 they'll put this all together and use AuthorRank (referred to in patents as Agent Rank) as a search signal. It will have a more profound impact on search than all Panda updates combined.

Image Search Gets Serious

Pinterest. Instagram. mlkshk. We Heart It. Flickr. Meme Generator. The Internet runs on images. Look for a new image search engine, as well as image search analytics. Hopefully this will cause Google to improve (which is a kind word) image search tracking within Google Analytics.

SEO Tool Funding

VCs have been sniffing around SEO tool providers for a number of years. In 2012 one of the major SEO tool providers (SEOmoz or Raven) will receive a serious round of funding. I actually think this is a terrible idea but ... there it is.

Frictionless Check-Ins

For location based services to really take off and reach the mainstream they'll need a near frictionless check-in process. Throughout 2012 you'll see Facebook, Foursquare and Google one-up each other in providing better ways to check-in. These will start with prompts and evolve into check-out (see Google Wallet) integrations.

Google+ Plateaus

As much as I like Google+ I think it will plateau in mid-2012 and remain a solid second fiddle to Facebook. That's not a knock of Google+ or the value it brings to both users and Google. There are simply too many choices and no compelling case for mass migration.

HTML5 (Finally) Becomes Important

After a few years of hype HTML5 becomes important, delivering rich experiences that users will come to expect. As both site adoption and browser compatibility rise, search engines will begin to use new HTML5 tags to better understand and analyze pages.

Schema.org Stalls

Structured mark-up will continue to be important but Schema.org adoption will stall. Instead, Google will continue to be an omnivore, happy to digest any type of structured mark-up, while other entities like Facebook will continue to promote their own proprietary mark-up.

Mobile Search Skyrockets

Only 40% of U.S. mobile users have smartphones. That's going to change in a big way in 2012 as both Apple and Google fight to secure these mobile users. Mobile search will be the place for growth as desktop search growth falls to single digits.

Yahoo! Buys Tumblr

Doubling down on content, Yahoo! will buy Tumblr, hoping to extend their contributor network and overlay a sophisticated, targeted display advertising network. In doing so, they'll quickly shutter all porn related Tumblr blogs.

Google Acquires Topsy

Topsy, the last real-time search engine, is acquired by Google who quickly shuts down the Topsy API and applies the talent to their own initiatives on both desktop and mobile platforms.

Delicious Turns Sour

December 19 2011 // Rant + Technology + Web Design // 8 Comments

In April, the Internet breathed a sigh of relief when Delicious was sold to AVOS instead of being shut down by Yahoo. In spite of Yahoo's years of neglect, Delicious maintained a powerful place in the Internet ecosystem and remained a popular service.

Users were eager to see Delicious improve under new management. Unfortunately the direction and actions taken by Delicious over the last 8 months make me pine for the days when it was the toy thrown in the corner by Yahoo!

Where Did Delicious Go Wrong?

Delicious Dilapidated Icon

I know new management means well and have likely poured a lot of time and effort into this enterprise. But I see problems in strategy, tactics and execution that have completely undermined user trust and loyalty.


The one mission-critical feature that fuels the entire enterprise has fallen into disrepair. Seriously? This is unacceptable. The bookmarklets that allow users to bookmark and tag links were broken for long stretches of time and continue to be rickety and unreliable. This lack of support borders on disrespect for Delicious users.


Here’s how they work. Select some related links, plug them into a stack and watch the magic happen. You can customize your stack by choosing images to feature, and by adding a title, description and comment for each link. Then publish the stack to share it with the world. If you come across another stack you like, follow it to easily find it again and catch any updates.

Instead of the nearly frictionless interaction we've grown accustomed to, we're now asked to perform additional and duplicative work. I've already created 'stacks' by bookmarking links with appropriate tags. Want to see a stack of links about SEO? Look at my bookmarks tagged SEO. It doesn't get much simpler than that.

Not only have they introduced complexity into a simple process, they've perverted the reason for bookmarking links. The beauty of Delicious was that you were 'curating' without trying. You simply saved links by tags and then one day you figured out that you had a deep reservoir of knowledge on a number of topics.

Stacks does the opposite and invites you to think about curation. I'd argue this creates substantial bias, invites spam and is more aligned with the dreck produced by Squidoo.

Here's another sign that you've introduced unneeded complexity into a product.

Delicious Describes Stacks

In just one sentence they reference stacks, links, playlists and topics. They haven't even mentioned tags! Am I creating stacks or playlists? If I'm a complete novice do I understand what 'stack links' even means?

Even if I do understand this, why do I want to do extra work that Delicious should be doing for me?


Design over Substance

The visual makeover doesn't add anything to the platform. Do pretty pictures and flashy interactions really help me discover content? Were Delicious users saying they would use the service more if only it looked prettier? I can't believe that's true. Delicious had the same UI for years and yet continued to be a popular service.

Delicious is a utilitarian product. It's about saving, retrieving and finding information.

Sure, Flipboard is really cool but just because a current design pattern is in vogue doesn't mean it should be applied to every site.


There are a number of UX issues that bother me, but I'll highlight the three that have produced the most ire. The drop down is poorly aligned, causing unnecessary frustration.

Delicious Dropdown Alignment

More than a few times I've moved across to click on one of the drop down links only to have it disappear before I could finish the interaction.

The iconography is non-intuitive and doesn't even have appropriate hover text to describe the action.

Delicious Gray Icons

Delicious Icons are Confusing

Does the + sign mean bookmark that link? What's the arrow? Is that a pencil?

Now, I actually get the iconography. But that's the problem! I'm an Internet savvy user, yet the new design seems targeted at a more mainstream user. Imagine if Pinterest didn't have the word 'repin' next to their double thumbtack icon?

Finally, the current bookmarklet supports the tag complete function. You begin typing in a tag and you can simply select from a list of prior tags. This is a great timesaver. It even creates a handy space at the end so you can start your next tag. Or does it?

Delicious Tag Problems

WTF!? Why is my tag all muddled together?

Delicious improved tagging by allowing spaces in tags. That means all tags now have to be separated by commas. I get that, and it's not the worst idea either. But the tag complete feature should support this new structure. Instead it only looks like it functions correctly, inserting a space after the completed tag when what's needed is a comma. Am I supposed to use the tag complete feature and then backspace to add a comma?

It's not the best idea to make your users feel stupid.


Delicious Unavailable Page

The service has been unstable lately, as poor as Twitter was at the height of its fail whale problem. I've seen that empty loft way too much.

What Should Delicious Do Instead?

It's easy to bitch but what could Delicious have done instead? Here's what I think they should have (and still could) do.


An easy first step to improve Delicious would be to provide a better way to filter bookmarks. The only real way to do so right now is by adding additional tags. It would have been easy to introduce time (date) and popularity (number of times bookmarked) facets.

They could have gone an extra step and offered the ability to group bookmarks by source. This would let me see the number of bookmarks I have by site by tag. How many times have I bookmarked a Search Engine Land article about SEO? Not only would this be interesting, it maps to how we think and remember. You'll hear people say something like: "It was that piece on management I read on Harvard Business Review."
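Grouping by source is little more than counting bookmark domains per tag. Here's a minimal Python sketch of the idea; the bookmark data and URLs are invented for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample bookmarks: (url, tags) pairs.
bookmarks = [
    ("http://searchengineland.com/title-tags", {"seo"}),
    ("http://searchengineland.com/authorship", {"seo", "google"}),
    ("http://www.seomoz.org/blog/link-building", {"seo"}),
]

def bookmarks_by_source(bookmarks, tag):
    """Count bookmarks per domain for a given tag, most-bookmarked first."""
    counts = Counter(
        urlparse(url).netloc
        for url, tags in bookmarks
        if tag in tags
    )
    return counts.most_common()

# bookmarks_by_source(bookmarks, "seo")
# → [("searchengineland.com", 2), ("www.seomoz.org", 1)]
```

That's the whole facet: a domain count per tag, sorted. The date facet would be just as cheap since every bookmark already carries a timestamp.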

There are a tremendous number of ways that the new team could have simply enhanced the current functionality to deliver added value to users.


Recommendation LOLcat

Delicious could create recommendations based on current bookmark behavior and tag interest. The data is there. It just needs to be unlocked.

It would be relatively straightforward to create a 'people who bookmarked this also bookmarked' feature. Even better if it only displayed those I haven't already bookmarked. That's content discovery.

This could be extended to natural browse by tag behavior. A list of popular bookmarks with that tag but not in my bookmarks would be pretty handy.

Delicious could also alert you when it saw a new bookmark from a popular tag within your bookmarks. This would give me a quick way to see what was 'hot' for topics I cared about.

Recommendations would put Delicious in competition with services like Summify, KnowAboutIt, XYDO and Percolate. It's a crowded space but Delicious is sitting on a huge advantage with the massive amount of data at their disposal.

Automated Stacks

Instead of introducing unnecessary friction, Delicious could create stacks algorithmically using tags. This could be personal (your own curated topics) or span the entire platform. Again, why Delicious is asking me to do something it can and should do for me is a mystery.

Also, the argument that people could select from multiple tags to create more robust stacks doesn't hold much water. Delicious knows which tags appear together most often and on what bookmarks. Automated stacks could pull from multiple tags.

The algorithm that creates these stacks would also constantly evolve. They would be dynamic and not prone to decay. New bookmarks would be added and bookmarks that weren't useful (based on age, lack of clicks or additional bookmarks) would be dropped.
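One way to sketch such a dynamic stack: score each of a tag's bookmarks by save count, decayed by age, so stale links drop out on their own. The half-life and data shapes below are arbitrary placeholders, not anything Delicious actually does:

```python
import time

def stack_for_tag(bookmarks, tag, now=None, half_life_days=90, top_n=10):
    """Rank a tag's bookmarks by save count, exponentially decayed by age.

    Each bookmark is a (url, tags, save_count, saved_at_epoch) tuple."""
    now = now if now is not None else time.time()
    scored = []
    for url, tags, saves, saved_at in bookmarks:
        if tag not in tags:
            continue
        age_days = (now - saved_at) / 86400
        scored.append((saves * 0.5 ** (age_days / half_life_days), url))
    return [url for _, url in sorted(scored, reverse=True)[:top_n]]
```

Rerun the scoring on a schedule and the stack maintains itself: new popular bookmarks surface, old ones fade, and no one had to curate anything by hand.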

Delicious already solved the difficult human element of curation. It just never applied appropriate algorithms to harness that incredible asset.

Social Graph Data

Delicious could help order bookmarks and augment recommendations by adding social graph data. The easiest thing to do would be to determine the number of Likes, Tweets and +1s each bookmark received. This might simply mirror bookmark popularity though. So you would next look at who saved the bookmarks and map their social profiles to determine authority and influence. Now you could order bookmarks that were saved by thought leaders in any vertical.
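A toy version of that ordering might blend raw share counts with the authority of the savers. The weighting below is invented for illustration, not a real formula:

```python
def social_score(bookmark, savers, authority):
    """Blend raw share counts with the authority of who saved the link.

    `bookmark` holds per-network share counts; `authority` maps a user to an
    influence weight (default 1.0). The 5.0 multiplier is an arbitrary
    placeholder showing that a thought leader's save outweighs raw shares."""
    shares = bookmark["likes"] + bookmark["tweets"] + bookmark["plus_ones"]
    saver_weight = sum(authority.get(user, 1.0) for user in savers)
    return shares + 5.0 * saver_weight

# social_score({"likes": 2, "tweets": 3, "plus_ones": 1},
#              ["alice"], {"alice": 2.0}) → 16.0
```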

A step further, Delicious could look at the comments on a bookmarked piece of content. This could be used as a signal in itself based on the number of comments, could be mined to determine sentiment or could provide another vector for social data.

Trunk.ly was closing in on this since they already aggregated links via social profiles. Give them your Twitter account and they collect and save what you Tweet. This frictionless mechanism had some drawbacks but it showed a lot of promise. Unfortunately Trunk.ly was recently purchased by Delicious. Maybe some of the promise will show up on Delicious but the philosophy behind stacks seems to be in direct conflict with how Trunk.ly functioned.


Delicious could have provided analytics to individuals as to the number of times their bookmarks were viewed, clicked or re-bookmarked. The latter two metrics could also be used to construct an internal influence metric. If I bookmark something because I saw your bookmark, that's essentially on par with a retweet.

For businesses, Delicious could aggregate all the bookmarks for that domain (or domains), providing statistics on the most bookmarked pieces as well as when they are viewed and clicked. A notification service when your content is bookmarked would also be low-hanging fruit.


Delicious already has search and many use it extensively to find hidden gems from both the past and present. But search could be made far better. In the end Delicious could have made a play for being the largest and best curated search engine. I might be biased because of my interest in search but this just seems like a no-brainer.


Building a PPC platform seems like a good fit if you decide to make search a primary feature of the site. It could even work (to a lesser extent) if you don't feature search. Advertisers could pay per keyword search or tag search. I doubt this would disrupt user behavior since users are used to this design pattern thanks to Google.

Delicious could even implement something similar to StumbleUpon, allowing advertisers to buy 'bookmark recommendations'. This type of targeted exposure would be highly valuable (to users and advertisers) and the number of bookmarks could provide long-term traffic and benefits. Success might be measured in a new bookmarks per impression metric.


The new Delicious is a step backward, abandoning simplicity and neglecting mechanisms that build replenishing value. Instead management has introduced complexity and friction while concentrating on cosmetics. The end result is far worse than the neglect Delicious suffered at the hands of Yahoo.

Author Stats

December 15 2011 // SEO // 3 Comments

Yesterday Google launched a new Authorship home page and Author stats within Google Webmaster Tools. The continuing emphasis on Authorship is a clear signal of the importance of this feature within Google.

Before reading up on Author stats, take a moment to learn how easy it is to implement Google authorship on your site or blog.

Author Stats

Author stats is available directly from the home page of Google Webmaster Tools under Labs.

How To View Google Author Statistics

Click on Author stats and you'll see statistics for pages for which you are the verified author.

Google+ Posts in Author Stats

I'm showing you page 2 of my own Author stats in part because it makes it easier to demonstrate that Google is assigning authorship to Google+ posts. Not only that, but they're showing you that these Google+ posts are being presented in search, gathering both impressions and clicks.

I vaguely knew this was happening but it makes it a lot more real when you see the numbers and real impact.

Stats by Profile not by Site

A bit giddy with this new source of information, I wanted to see what this looked like for one of my clients who has multiple authors.

Google Author Statistics In a Site Profile

There again is the Author stats link under Labs. But when I clicked on it, I got the same pages from my own personal site. I followed up with Javier Tordable, Google Software Engineer, who confirmed that Author stats are by profile and are not aggregated by site.

The Author Stats feature is independent of the site (that is the reason it appears in Home, before selecting a site). It also appears in the Labs menu for a site, but that's only for ease of use, rather than because it depends on the site.

That makes sense though I am putting in a request now for an aggregated view of all authors by site. That would make it easier to see the impact and more compelling for sites to implement authorship.

Specific Author Statistics

The statistics shown under Author stats are Impressions, Clicks, CTR and Average Position, with the percentage change for each over the given timeframe. These are nice basic numbers.

However, it's clear based on the average position number (very high) that a wide variety of terms and platforms (specifically image search) are being included here. While you can filter by platform you still don't have the ability to see the average position by query term.

In addition, the big metric everyone is looking for is the impact an Author result has on CTR, similar to what Google attempts to do with the +1 Metrics search impact report.

Google +1 Metrics Search Impact

My posts haven't reached a level of statistical significance, but I appreciate what Google is trying to provide here. I'm not sure Author stats search impact would work the same way, since that would mean Authorship would need to be turned off for a substantial set of users. I can think of a few ways they might quantify the impact but it may expose too much data to users.

Don't get me wrong, this is a great start and Google seems committed to improving Author stats.

This is an experimental feature so we’re continuing to iterate and improve, but we wanted to get early feedback from you. You can e-mail us at authorship-pilot@google.com if you run into any issues or have feedback.

I'm happy to see these Author stats and look forward to future improvements.


Author stats are now available in Google Webmaster Tools, showing statistics for pages for which you are the verified author. The continuing emphasis on Authorship shows the importance Google places on the feature and how Authorship might be used to improve search quality.

The Truth Doesn’t Matter

December 14 2011 // SEO // 2 Comments

Matt Cutts says good content is more important than SEO.

Good Content?

There is actually a lot of truth to that. The problem is that too many people don't understand the definition of good content. This goes double if it's content you've produced. Nobody likes to hear that their baby is ugly.

This video set off a number of anti-SEO threads with the most egregious being from ReadWriteWeb. Adam Singer's reaction to this post is at once both hilarious and sad.

But that's the thing. People will take this video (or the writing of pundits who will selectively extract what they want from it) and misconstrue Matt's message, deciding to avoid SEO and instead crank out content. Gobs and gobs of content. Much of that content will be unfocused, poorly formatted and have no sense of what query intent it is supposed to fulfill.

Then these same people will wonder why they're not getting a lot of Google love.

The Truth Doesn't Matter

Jack Nicholson in A Few Good Men

What Matt says in this video is true, but the truth doesn't matter. Because it's how people interpret and execute on this information that will ultimately make the difference. Sadly, most won't do a good enough job. I might not be making many friends with that statement but I call them like I see them.

It's the same reason why I dislike the stern advice people give to 'write for people'. The problem? Most don't really know how to do that effectively. Instead, I tell people to write for search engines. The result? People write better content for people and, by extension, for search engines.


A good SEO serves as a guide to help you to both produce and get the most out of content, ensuring that it is valuable and satisfies query intent.

Query Synonyms

December 12 2011 // SEO // 8 Comments

The fact that Google frequently uses synonyms to boost search quality is nothing new. But Dan Petrovic brought an interesting example to my attention via Google+ which spawned a dialog that included Bill Slawski, Wissam Dandan and Steven Baker, Principal Software Engineer on the Search Ranking team.

It is conversations like these that make search so enjoyable. Hopefully you agree.

The Query

Dan's question revolved around the query 'the dreaming void plot'.

The Dreaming Void Plot Google Search Result

This query returned results for The Temporal Void as well as The Dreaming Void, both books by Peter F. Hamilton. The question was why?

Bold Words

First things first. Bold words in search results usually reflect the query terms. It's one of the strongest signals of relevance that Google can provide to the user. Your eye naturally gravitates to those bolded words and they reinforce the fact that the result(s) matched your query.


However, Google has also been bolding synonyms when they're returned in search results. The easiest way to see this is to combine a synonym operator (~) with a negative operator (-).

Google Synonyms Example

Here it's easy to see that fantasy and sleep are bolded and are thus synonyms to dream according to Google. This makes complete sense.

The Diagnosis

Here's where it gets interesting. The terms dreaming and temporal are not ... regular synonyms. By that I mean that if you try the operator scenario above for dreaming you will not see temporal in bold.

A cursory look at your favorite dictionary will also tell you that these are not 'grammatical' synonyms.

The next thing I did was conduct a search using the root query: The Dreaming Void. This did not return results for The Temporal Void. I then looked at related searches, one of my favorite search features.

Google Related Searches for The Dreaming Void

Lo and behold, the 'first' related search is 'temporal void'. This tells me that Google sees a very strong relationship between these two terms based on query patterns.

The related search for the full 'the dreaming void plot' query does not yield any temporal void terms. That's not entirely unexpected for reasons I won't go into here for the sake of brevity. Finally, I remove the related filter and then test the query using the new verbatim search.

Verbatim Results for The Dreaming Void Plot Query

Poof. All results for 'The Temporal Void' disappear. Though obvious, this confirms that the 'The Temporal Void' results were being returned through synonym or similar-term matching.

Query Synonyms

This is what I refer to as a query synonym. The science behind these is actually incredibly interesting and complex. Because synonyms are not just about simple grammar, they're about language, syntax and context as well.

Wissam Dandan offered this excerpt from a recent Google blog post on search quality changes.

Related query results refinements: Sometimes we fetch results for queries that are similar to the actual search you type. This change makes it less likely that these results will rank highly if the original query had a rare word that was dropped in the alternate query. For example, if you are searching for [rare red widgets], you might not be as interested in a page that only mentions “red widgets.”

Could this be related to Dan's query? It might. The idea behind related queries is similar to synonyms. (Irony, huh?) The example provided by Google is that it will return results for 'floral delivery' when you search for 'flower shops'. The change above will reduce the likelihood of false positives which may allow Google to increase the use of related query results refinements.

In the case of 'the dreaming void plot' there don't seem to be any rare query terms. In fact, most documents in the content corpus contain all of these words and the word 'temporal' as well. There's a high degree of co-occurrence for the terms 'dreaming' and 'temporal' which makes sense since they are part of a series of books.
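Co-occurrence itself is cheap to compute; the hard part is deciding when it implies a synonym relationship. As a toy illustration over a made-up corpus standing in for pages about Hamilton's Void trilogy:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count how often each unordered word pair shares a document."""
    pairs = Counter()
    for doc in docs:
        words = sorted(set(doc.lower().split()))
        pairs.update(combinations(words, 2))
    return pairs

# Invented corpus for illustration.
docs = [
    "the dreaming void is followed by the temporal void",
    "the temporal void continues the dreaming void plot",
    "the evolutionary void ends the trilogy",
]
pairs = cooccurrence(docs)
# pairs[("dreaming", "temporal")] counts documents containing both terms
```

The pair ('dreaming', 'temporal') scores high in this corpus for exactly the reason in the post: the books are a series, so pages about one routinely mention the other.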

But that's the thing, what seems easy and straightforward to us is actually quite difficult for a machine.

The Science of Synonyms

Then the always smart Bill Slawski joined the conversation providing more examples of why synonyms are so difficult.

For instance, while we may often consider the words "auto" and "car" to be synonyms, that's not the case when you set an alarm on "auto." Even within longer phrases, words that we might consider to be synonyms might not be. So, "automobile" and "car" are synonyms when we search for a [ford car], but not when we search for a [railroad car].

Bill went on to reference a number of patents that describe how Google might approach synonyms and related query refinement, five of which list Steven Baker as a co-inventor.

Search queries improved based on query semantic information

Identifying a synonym with N-gram agreement for a query phrase

Determining query term synonyms within query context

Identifying common co-occurring elements in lists

Longest-common-subsequence detection for common synonyms

Document-based synonym generation

Machine Translation for Query Expansion

While Bill and I sought out other science fiction series that might display this same behavior Steven joined the conversation. While he wasn't able to provide much detail he did reference his blog post on synonyms.

An irony of computer science is that tasks humans struggle with can be performed easily by computer programs, but tasks humans can perform effortlessly remain difficult for computers. We can write a computer program to beat the very best human chess players, but we can't write a program to identify objects in a photo or understand a sentence with anywhere near the precision of even a child.

The last statement is an odd sort of synonym for my own SEO philosophy and the name of this blog. The post also answered my question as to whether query synonyms are given the same bold treatment. (They are.)


Google is actively using complex methods to identify synonyms and related queries to improve search results. While this type of query results refinement is usually spot on and unnoticeable it can sometimes be flawed. In those instances, you can remove these results using the verbatim search tool.

The Knuckleball Problem

December 08 2011 // Marketing + Rant + Web Design // 4 Comments

The knuckleball is a very effective pitch if you can throw it well. But not many do. Why am I talking about arcane baseball pitches? Because the Internet has a knuckleball problem.


Image from The Complete Pitcher

The Knuckleball Problem

I define the knuckleball problem as something that can be highly effective but is also extremely difficult. The problem arises when people forget about the latter (difficulty) and focus solely on the former (potential positive outcome).

Individuals, teams and organizations embark on a knuckleball project with naive enthusiasm. They're then baffled when it isn't a rousing success. In baseball terms that means instead of freezing the hitter, chalking up strikeouts and producing wins you're tossing the ball in the dirt, issuing walks and running up your ERA.

If a pitcher can't throw the knuckleball effectively, they don't throw the knuckleball. But in business, the refrain I hear is 'X isn't the problem, it's how X was implemented'.

This might be true, but the hidden meaning behind this turn of phrase is the idea that you should always attempt to throw a knuckleball. In reality you should probably figure out what two or three pitches you can throw to achieve success.

Difficulty and Success

The vast majority of pitchers do not throw the knuckleball because it's tough to throw and produces a very low success rate. Most people 'implement' or 'execute' the pitch incorrectly. Instead pitchers find a mix of pitches that are less difficult and work to perfect them.

Yet online, a tremendous number of people try to throw knuckleballs. They're trying something with a high level of difficulty instead of finding less difficult (perhaps less sexy or trendy) solutions. And there is a phalanx of consultants and bloggers who seem to encourage and cheer this self-destructive behavior.


In general I think mega menus suck. Of course there are exceptions but they are few and far between. The mega menu is a knuckleball. Sure you can attempt it, but the odds are you're going to screw it up. And there are plenty of other ways you can implement navigation that will be as or even more successful.

When something has such a high level of difficulty you can't just point to implementation and execution as the problem. When a UX pattern is widely misapplied is it really that good of a UX pattern?

Personas also seem to be all the rage right now. Done the right way, personas can sometimes deliver insight and guidance to a marketing team. But all too often the personas are not rooted in real customer experiences and devolve into stereotypes that are then used as weapons in cross-functional meetings. "I'm sorry, but I just don't think this feature speaks to Concerned Carl."

Of course implementation and execution matter. But when you consistently see people implementing and executing something incorrectly you have to wonder whether you should be recommending it in the first place.

Pitching coaches aren't pushing the knuckleball on their pitching staffs.

Can You Throw a Knuckleball?

Cat Eats Toy Baseball Players

The problem is most people think they can throw the online equivalent of the knuckleball. And unlike the baseball diamond the feedback mechanism online is far from direct.

Personas are created and used to inform your marketing strategy. There's some initial enthusiasm and a few minor changes, but over time people get tired of hearing about these fictional customers and the whole thing peters out, along with the high consulting fees, which are also conveniently forgotten.

The hard truth is most people can't throw the knuckleball. And that's okay. You can still be a Cy Young Award winner. Tim Lincecum does not throw a knuckleball.

How (and When) To Throw The Knuckleball

This doesn't mean you shouldn't be taking risks or attempt to throw a knuckleball once in a while. Not at all.

However, you shouldn't attempt the knuckler simply because it is difficult or 'more elegant' or the hottest new fad. You can take plenty of risks throwing the slider or curve or changeup, all pitches with a higher chance of success. In business terms the risk to reward ratio is far more attractive.

If you're going to start a knuckleball project you need to be clear about whether you have a team that can pull it off. Do you really have a team of A players or do you have a few utility guys on the team?

Once you clear that bit of soul searching you need to be honest about measuring success. A certain amount of intellectual honesty is necessary so that you can turn to the team and say, you tossed that one in the dirt. Finally, you need a manager who's willing to walk to the mound and tell the pitcher to stop futzing with the knuckleball and start throwing some heat.


The Internet has a knuckleball problem. Too many are attempting the difficult without understanding the high probability of failure while ignoring the less difficult that could lead to success.

Google Changed My Title

December 04 2011 // SEO // 14 Comments

I recently blogged about Google changing my Title tag and using the URL instead. While this particular variant was new to me, I've been tracking how Google changes Titles for quite some time.

Google reserves the right to change your Title and has been experimenting with different Title algorithms for at least eighteen months. Here's a quick primer on when and why Google changes Titles.

The Title Tag

First things first. What is the Title tag? The <title> tag is placed in the <head> to define the title of that document (aka web page).

Title Tag HTML Example
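A minimal example of where the tag sits (the page title here is purely illustrative):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- The Title shown in the browser tab and in search results -->
    <title>Example Page Title | Example Site</title>
  </head>
  <body>
    <p>Page content goes here.</p>
  </body>
</html>
```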

The Title determines what is shown in a browser tab and is prominently displayed in search engine results.

How the Title Tag Shows Up in Google Search Results

The Title shows up as the blue link in search results. Not only is the Title a very strong search engine signal, it's what users see first when scanning search results. Getting your Title right should be near the top of your SEO checklist.

Why Google Changes Titles

The reason Google changes Titles is almost always to better serve the query and aid the user. Sometimes these changes are made for obvious reasons and other times the reasons are more complex.

No Title Tag

Sometimes people screw up (big time) and a page doesn't have a Title. If the content is solid and useful, Google steps in to provide you with a Title.

Thank You Captain Obvious

Duplicate Title Tag

The bane of many an SEO, sometimes each page on a site has the same Title. Once again, Google steps in to provide assistance for this blunder while the SEO curses the developer.

Generic Title Tag

Sometimes Google feels like it knows better and will replace a generic title tag with something it believes is more appropriate. For instance if your Title for the home page is, in fact, 'Home Page' then Google may decide to generate a more specific Title that will be more useful for users.

This is probably how Google began testing their Title algorithms, starting with the least focused Titles and seeing how they could change them to better match queries and increase click-through rates.

Title Tag Append

At times, Google won't completely change your Title but instead will add to it by putting the domain name at the end of your Title. The notion here is that the domain provides some additional and valuable context to users.

This is more important than it looks, in my opinion. It tells me that the URL is not being used by mainstream users. They're simply not seeing the URL most of the time because they're scanning the results, not reading them.

Moving the URL directly below the Title (something Google did recently) means that it is likely more important than the meta description. The domain can be a signal of trust if a user has an affinity for that site through personal experience or other marketing efforts.

The domain append is Google's attempt to help you brand your result.

Specific Title Tag

That finally leaves us with the last and most drastic Title change. Google will actually replace a very specific Title tag with something it believes might be better for the user. This means they're changing a perfectly good Title you probably spent time carefully crafting.

Googlebot Wants To Help You

Specific Title tag changes are most often related to the query. Google is looking to increase the perceived relevance of that result by using the search term in the title, much as PPC professionals understand the need to have keyword terms in their ads.

This practice takes advantage of the natural scanning behavior of users. They're not reading every search result, they're scanning those results and are simply looking for their search term.

If your Title doesn't have the search term (but it is a match for that query based on the content) Google wants to give that result a fighting chance.

Without the search term in the Title, a substantial number of users will simply not see your result. They'll skip over it since it doesn't seem like it's relevant. Remember, users are doing this at breakneck speed and making nearly instantaneous decisions as to whether each result is relevant or not.

Google changes your Title because they think it'll help increase the click-through rate on your result.

Of course, I've also seen Google change Titles even when the keywords were present in the original Title. Most often they replaced a shorter keyword with a keyword phrase. I haven't seen much of this lately so this may have been a test that didn't pan out.

How Google Changes Titles

Google is changing Titles based on a series of on-going algorithmic tests. While I don't know the specifics, I do know that they are first looking for a candidate pool - documents that should be returned for a query based on their content (but aren't) or documents that score well in relevance but have very low click-through rates for specific queries.

These are but a few ideas of how Google might be defining a candidate pool, but the objective is to find under-performing but valuable content and see if a different Title improves user satisfaction. This might be measured for that specific result or for the entire SERP for that query.

Once Google identifies a candidate pool they work on constructing their own Title. Most often this is done by extracting words from the on-page content of that page. This is similar to what Google will do when they write their own meta description.

Of course, we've now seen that Google might also use the URL to construct a Title. Perhaps this is part of Google's on-going Title algorithm experimentation? Creating readable Titles from on-page content isn't easy. So maybe Google's thinking the URL might be a shortcut when it includes the target keyword. A parsed URL might actually conform to natural language better than extracted and combined keywords or keyword phrases.

The research performed for the URL Titles post also shows that Google can dynamically change the Title based on the query. So unless you're really paying attention, Google could be changing your Titles and you wouldn't even know it.

Is Changing Titles Good or Bad?

Should you be outraged or thanking Google for changing Titles? Both probably.

Google is only doing this because it wants to improve search quality and user satisfaction. Not only that but Google can measure the impact of these changes in a very holistic way. It's not just about improving click-through rate. They're looking at the pogosticking behavior and other user satisfaction signals to calculate the real impact of these Title changes.

This means you might get better and more focused traffic to your page because Google is refining and calibrating the Title.

On the other hand, Google is essentially providing help to certain pages within a SERP. So the site that can't figure out how to create proper Titles might wind up getting more traffic because Google took pity on them. (Sure the user is better served but ... cold comfort for you eh?)

You're also trusting that Google does know best. Sometimes they do and sometimes they don't. Unfortunately we don't have transparency as to how or how many times our Titles are changed, for what queries and to what outcome.

This may also drive marketing managers absolutely bananas since they want complete control over their brand. (You know the type.) That lack of control could be troublesome and also send the wrong signal to site owners. The last thing you should come away with is to think Google will simply fix your poorly conceived Titles.


Google changes your Title for a number of reasons when it believes it can improve relevance and user satisfaction. The emphasis on changing the Title, particularly in matching the Title to the query term, reinforces its importance and supports the scanning behavior users employ on search results.

URL Titles

December 02 2011 // SEO // 29 Comments

The other day I noticed something strange happening. Google was using my URL as the Title instead of my own Title tag.

Not Provided Keyword Google Search

Upon seeing this I kind of freaked out and immediately went to check the Title settings on this post. Everything was in order but I was using the original 'Stop Whining About (Not Provided)' Title tag.

At the time I was not the first result for this query. But I changed the Title to 'Not Provided Keyword In Google Analytics' and a day or so later I bounced up to number one for this term. The URL as Title still remains though, which is pretty annoying.

URL Titles

So I started to poke around looking for other examples of this URL as Title behavior. It didn't take me long to find one.

Cut Up Learning Google Search Result

I checked to make sure I hadn't botched the Title and found, again, that everything was in order. The Title I had for the post was 'Is Information Overload Really a Problem?' But here's the thing, I can get that Title to display on a search result.

Information Overload Not a Problem Google Search Result

That's the same post but I used the search term 'information overload not a problem' instead. So what's going on here?

Google Title Match

Google wants to match the Title of a result to the query when it believes the content of that result is relevant to the query. So if someone is actually searching for 'cut up learning' Google has determined that my post is highly relevant. However they replace my Title, which has none of those keywords in it, with my URL which actually does.

Here's another example.

Influence Metric Google Search Result

My Title tag does not include the word 'metric' so Google decides to use my URL for the Title instead. Again, I can get my Title to display using a different query.

Titles Matter

If you haven't figured it out yet, Titles matter ... a lot. So much so that when Google wants to return a result it will change the Title to better match the query. The reason for this is simple. Users scan for and assign higher relevance to Titles that include their query.

Just between you and me, I believe that exact match query Titles are perhaps the most underrated SEO tactic. I've actually got some research to back that up which I'm hoping I might get to share in the future.

Can't Google Parse URLs?

While I appreciate that Google is trying to do me a solid here and get my post in front of the 'right' queries, it would be nice if they could parse the URL and make it readable.

So cut-up-learning would become 'Cut up learning' or 'Cut Up Learning' if they used title casing. This would certainly be a better experience for users who are quickly scanning search results. Playing my own devil's advocate here, the odd URL as Title could actually break the visual flow and create more emphasis but ... I doubt it.
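The transformation is trivial to sketch. This is a hypothetical illustration of the kind of parsing I mean, not Google's actual logic:

```python
def slug_to_title(url_slug: str) -> str:
    """Convert a hyphenated URL slug into a title-cased string."""
    return " ".join(word.capitalize() for word in url_slug.split("-"))

print(slug_to_title("cut-up-learning"))  # Cut Up Learning
```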

How about it Google, can we render the URL as Titles so they're a bit more readable?

Using URL Titles

At this point you might be interested or outraged depending on your perspective, but what can you do with this newly acquired information?

First off, you should look at the keyword clusters for your popular content. What you're looking for are terms that aren't in your Title but might be in your URL. Based on what you find you can then change your Title so that it is capturing a greater breadth of matching queries.

The other interesting idea is to use this as a dual targeting tactic. You can deliberately target one keyword term or modifier in the Title and another in the URL. Then watch to see which one drives more traffic and adjust accordingly (or not if you're happy with things the way they are.)

At the end of the day when you see this URL as Title behavior Google is telling you, clearly, that it wants to return your content for that query. So pretend Google is EF Hutton and listen ... closely.


Google is replacing Titles with the URL when the URL delivers more relevance based on the user query. This URL as Title behavior reveals just how important Titles are to users and, by extension, to SEO.

Not Provided Keyword Not A Problem

November 21 2011 // Analytics + Rant + SEO // 15 Comments

Do I think Google's policy around encrypting searches (except for paid clicks) for logged-in users is fair? No.

Fair Is Where You Get Cotton Candy

But whining about it seems unproductive, particularly since the impact of (not provided) isn't catastrophic. That's right, the sky is not falling. Here's why.

(Not Provided) Keyword

By now I'm sure you've seen the Google Analytics line graph that shows the rise of (not provided) traffic.

Not Provided Keyword Google Analytics Graph

Sure enough, 17% of all organic Google traffic on this blog is now (not provided). That's high in comparison to what I see among my client base but makes sense given the audience of this blog.

Like many others, I find that (not provided) is also my top keyword by a wide margin. I think seeing this scares people but it makes perfect sense. What other keyword is going to show up under every URL?

Instead of staring at that big aggregate number you have to look at the impact (not provided) is having on a URL by URL basis.

Landing Page by Keywords

To look at the impact of (not provided) for a specific URL you need to view your Google organic traffic by Landing Page. Then drill down on a specific URL and use Keyword as your secondary dimension. Here's a sample landing page by keywords report for my bounce rate vs exit rate post.

Landing Page by Keyword Report with Not Provided

In this example, a full 39% of the traffic is (not provided). But a look at the remaining 61% makes it pretty clear what keywords bring traffic to this page. In fact, there are 68 total keywords in this time frame.

Keyword Clustering Example

Clustering these long-tail keywords can provide you with the added insight necessary to be confident in your optimization strategy.
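One crude way to cluster long-tail keywords is to group them under a shared head term. This is a rough sketch with made-up keywords; real clustering would account for stemming, word order and n-gram overlap:

```python
from collections import defaultdict

def cluster_keywords(keywords, head_terms):
    """Group long-tail keywords under the first head term they contain."""
    clusters = defaultdict(list)
    for kw in keywords:
        for head in head_terms:
            if head in kw:
                clusters[head].append(kw)
                break
        else:
            clusters["other"].append(kw)
    return dict(clusters)

keywords = [
    "bounce rate vs exit rate",
    "difference between bounce rate and exit rate",
    "exit rate definition",
    "google analytics bounce rate",
]
print(cluster_keywords(keywords, ["bounce rate", "exit rate"]))
```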

(Not Provided) Keyword Distribution

The distribution of keywords outside of (not provided) gives us insight into the keyword composition of (not provided). In other words, the keywords we do see tell us about the keywords we don't.

Do we really think that the keywords that make up (not provided) are going to be that different from the ones we do see? It's highly improbable that a query like 'moonraker steel teeth' is driving traffic under (not provided) in my example above.

If you want to take things a step further you can apply the distribution of the clustered keywords against the pool of (not provided) traffic. First you reduce the denominator by subtracting the (not provided) traffic from the total. In this instance that's 208 - 88 which is 120.

Even without any clustering you can take the first keyword (bounce rate vs. exit rate) and determine that it comprises 20% of the remaining traffic (24/120). You can then apply that 20% to the (not provided) traffic (88) and conclude that approximately 18 of those (not provided) visits came from that specific keyword.

Is this perfectly accurate? No. Is it good enough? Yes. Keyword clustering will further reduce the variance you might see by specific keyword.
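The arithmetic above generalizes to any keyword or cluster. The visit counts here are the ones from the example in this post:

```python
def estimate_hidden_visits(keyword_visits, not_provided_visits, total_visits):
    """Apply a visible keyword's share of visible traffic to the
    (not provided) pool to estimate its hidden visit count."""
    visible_visits = total_visits - not_provided_visits  # 208 - 88 = 120
    share = keyword_visits / visible_visits              # 24 / 120 = 0.20
    return share * not_provided_visits                   # 0.20 * 88 = 17.6

# 'bounce rate vs exit rate' drove 24 of the 120 visible visits
print(round(estimate_hidden_visits(24, 88, 208)))  # ~18
```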

Performance of (Not Provided) Keywords

The assumption I'm making here is that the keyword behavior of those logged in to Google doesn't differ dramatically from those who are not. I'm not saying there might not be some difference, but I don't see it being large enough to be material.

If you have an established URL with a history of getting a steady stream of traffic you can go back and compare the performance before and after (not provided) was introduced. I've done this a number of times (across client installations) and continue to find little to no difference when using the distribution method above.

Even without this analysis it comes down to whether you believe that query intent changes based on whether a person is logged in or not. Given that many users probably don't even know they're logged in, I'll take 'No' for $800, Alex.

What's even more interesting is that this is information we didn't have previously. If by chance all of your conversions only happen from those logged-in, how would you have made that determination prior to (not provided) being introduced? Yeah ... you couldn't.

While Google has made the keyword private they've actually broadcast usage information.

(Not Provided) Solutions

Keep Calm and SEO On

Don't get me wrong. I'm not happy about the missing data, nor the double standard between paid and organic clicks. Google has a decent privacy model through their Ads Preferences Manager. They could adopt the same process here and allow users to opt-out instead of the blanket opt-in currently in place.

Barring that, I'd like to know how many keywords are included in the (not provided) traffic in a given time period. Even better would be a drill-down feature with traffic against a set of anonymized keywords.

Google Analytics Not Provided Keyword Drill Down

However, I'm not counting on these things coming to fruition so it's my job to figure out how to do keyword research and optimization given the new normal. As I've shown, you can continue to use Google Analytics, particularly if you cluster keywords appropriately.

Of course you should be using other tools to determine user syntax, identify keyword modifiers and define query intent. When keyword performance is truly in doubt you can even resort to running a quick AdWords campaign. While this might irk you and elicit tin foil hat theories you should probably be doing a bit of this anyway.


Google's (not provided) policy might not be fair but is far from the end of the world. Whining about (not provided) isn't going to change anything. Figuring out how to overcome this obstacle is your job and how you'll distance yourself from the competition.

Mozilla Search Showdown

November 15 2011 // SEO + Technology // 5 Comments

Mozilla's search partnership with Google expires at the end of November. What happens next could change search engine and browser market share as well as the future of Mozilla.

The Mozilla Google Search Partnership

Originally entered into in November 2004 and renewed in 2006 (for 2 years) and 2008 (for 3 years), the search partnership delivers a substantial amount of Mozilla's revenue. In fact, in 2010, 98% of the $121 million in revenue came from search-related activity.

The majority of Mozilla's revenue is generated from search functionality included in our Firefox product through all major search partners including Google, Bing, Yahoo, Yandex, Amazon, Ebay and others.

Most of that search revenue comes specifically from Google. The 'Concentrations of Risk' section in Mozilla's 2009 (pdf) and 2010 (pdf) consolidated financial statements put Google's contribution to revenue at 91% in 2008, 86% in 2009 and 84% in 2010.

Using the 2010 numbers, Mozilla stands to 'lose' $3.22 per second if the partnership expires. Mozilla is highly dependent on search and Google in particular. There's just no way around that.

What does Google get for this staggering amount of money?

Firefox Start Page

Google is the default search bar search engine as well as the default home page. This means that Firefox drives search after search to Google instead of their competitors.

Browser Share

Clearly browsers are an important part of the search landscape since they can influence search behavior based on default settings. As Mozilla points out, in 2002 over 90% of the browser market was controlled by Internet Explorer. At the time it made perfect sense for Google to help Mozilla break the browser monopoly.

The rise of Firefox helped Google to solidify search dominance and Mozilla was paid handsomely for this assistance.

However, it doesn't look like Google was comfortable with this lack of control. Soon after the announced renewal of the search partnership in 2008 Google launched their own browser. At the time, I wrote that Chrome was about search and taking share from Internet Explorer.

Browser Market Share 2011

I still think Chrome is about search and the trend seems to indicate that Chrome is taking share (primarily) away from Internet Explorer. In short, Google sought to control its own destiny and speed the demise of Internet Explorer.

Mission accomplished.

Chrome is now poised to overtake Firefox as the number two browser. That's important because three years ago Google had no other way to protect their search share. Chrome's success changes this critical fact.


Toolbars were the first attempt by search engines to break the grip of Internet Explorer. Both Google and Yahoo! used toolbars as a way to direct traffic to their own search engines.

What happened along the way was an amazing amount of user confusion. Which box were you supposed to search in? The location (or address) bar, the search box or the toolbar?

This confusion created searches in the location bar and URL entries in the search bar. Savvy users understood but it never made much sense to most.

Location Bar Search

The result? For those that figured it out there is evidence that people actually enjoyed searching via the location bar.

How many searches are conducted per month via the address bar? MSN wouldn't release those figures, but it did say that about 10 to 15 percent of MSN Search's overall traffic comes from address bar queries.

The company has analyzed the traffic from users who search via the address bar and discovered both that the searches appear intentional in nature, rather than accidental, and that those making use of address bar searching do so frequently.

This data from 2002 indicates that the location bar default might be very valuable. Sure enough, the location bar default is part of the search partnership Mozilla has with Google.

Firefox Location Bar Search Default

This also happens to be the most difficult setting to change. You can change the search bar preference with a click and the home page with two clicks, but the location bar is a different (and convoluted) story.

Firefox About:Config Warning

Most mainstream users aren't going to attempt entering about:config into their location bar, but if they do this first screen will likely scare them off.
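For those who did venture past that warning, the location bar default in Firefox builds of that era was buried in a hidden preference. The preference name and value below are from memory of contemporary versions, so treat them as an assumption rather than gospel:

```
// about:config — location bar search default
keyword.URL = "https://www.google.com/search?q="
```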

I recently had to revisit the location bar default because I took Firefox for Bing for a spin. This add-on, among other things, changes the location bar default to Bing and it remains that way even after the add-on is removed. That's a serious dark pattern.

All of this makes me believe that the location bar might be the most valuable piece of real estate.


Having helped create confusion with their toolbar (now no longer supporting Firefox 5+) and seen the value of location bar searches, Chrome launched the omnibox, a combined location and search bar. The omnibox reduced confusion and asked users to simply type an address or search into one bar. Google would do the rest. Of course, the default for those searches is Google.

The omnibar seems to be a popular feature and why wouldn't it be? Users don't care what field they're typing in, they just want it to work. You know who else thinks this is a good idea? The Firefox UX Team.

Firefox Omnibar

While these mockups are for discussion purposes only, it's pretty clear what the discussion is about. According to CNET, a combined Firefox search-and-location bar is being held up by privacy issues. That was in March and the latest release of Firefox (just last week) still didn't have this functionality.

Back in late 2009 Asa Dotzler had a lot to say about the independence of Firefox and how they serve the user.

Mozilla’s decisions around defaults are driven by what’s best for the largest number of users and not what’s best for revenue.

It’s not about the money. The money’s there and Mozilla isn’t going to turn it down, but it’s not about the money. It’s about providing users with the best possible experience.

Great words but have they been backed up with action? Both users and the Firefox UX Team are lobbying for an omnibox, the Firefox for Bing add-on is a clear dark pattern and the ability to change the default location bar search engine is still overly complicated.

Is this really what's best for users?

Don't Count On Inertia

If Mozilla were to switch horses and cut a search deal with Bing, they'd be counting on inertia to retain users and their current search behavior. The problem is that Firefox was marketed as the solution to browser inertia.

Before Firefox many users didn't even understand they could browse the Internet with anything but Internet Explorer. Those same users are now more likely to switch.

It's sort of like being the other woman right? If he cheats with you, he's also liable to cheat on you.

With a search bar still in place users can easily change that default. Firefox would be counting on location bar searches and the difficulty in changing this default to drive revenue. You might get some traction here but I'm guessing you'd see browser defection, increased search bar usage and more direct traffic to the Google home page.

With an omnibar in place Firefox would be running a very risky proposition. Many mainstream users would likely migrate to another browser (probably Chrome). More advanced Firefox users would simply change the defaults.

You could move to an omnibar and make the default easy to change, but both Firefox and users have made it abundantly clear that they prefer Google. So how much would a Bing search partnership really be worth at that point?

Can Bing Afford It?

Bing is losing money hand over fist so it's unclear whether Bing can actually pony up this type of money anyway. If they did, it could cause browser defection and other behavior that would rob the search partnership of any real value and put Firefox at risk.

Even if Bing pirated half of the searches coming from Firefox, that's not going to translate into a real game changer from a search engine market share perspective.

Mozilla could partner with Bing but I don't think either of them would like the results.

Mozilla in a Pickle

Mozilla In a Pickle

If Google is the choice of users (as Firefox claims) installing a competing default search engine may hasten the conversion to Chrome. This time around Mozilla needs Google far more than Google needs Mozilla. I'm not saying that Google doesn't want the search partnership to continue, but I'm betting they're driving a very hard bargain.

Google no longer has a compelling need to overpay for a search default on a competing browser. I have to believe Mozilla is being offered a substantially lower dollar amount for the search partnership.

I don't pretend to know exactly how the partnership is structured and whether it's volume or performance based, but it really doesn't matter. Google paid Barry Zito-like prices back in 2008 at the height of the economic bubble, but times have changed and Google's got Tim Lincecum (Chrome) mowing down the competition.

Mozilla and Google are playing a high stakes game of chicken. The last renewal took place three months prior to the expiration. We're down to two weeks now.

This time the money might not be there.


The search partnership between Mozilla and Google expires at the end of November. The success of Chrome gives Google little incentive to overpay for a search default on Firefox. This puts Mozilla, who receives more than 80% of their revenue through the Google search partnership, in a poor position with few options.