
Google Now Topics

November 26 2013 // SEO + Technology // 16 Comments

Have you visited your Google Now Topics page? You should if you want to get a peek at how Google is translating queries into topics, which is at the core of the Hummingbird Update.

Google Now Topics

If you are in the United States and have Google Web History turned on you can go to your Google Now Topics page and see your query and click behavior turned into specific topics.

Google Now Topics Example

This is what my Google Now Topics page looked like a few weeks back. It shows specific topics that I’ve researched in the last day, week and month. If you’re unfamiliar with this page, this alone might be eye-opening. But it gets even more interesting when you look at the options under each topic.

Topic Intent

The types of content offered under each topic are different.

Why is this exciting? To me it shows that Google understands the intent behind each topic. So the topic of New York City brings up ‘attractions and photos’ while the topic of Googlebot just brings up ‘articles’. Google clearly understands that Back to the Future is a movie and that I’d want reviews for the Toyota Prius Plug-in Hybrid.

In essence, words map to a topic which in turn tells Google what type of content should most likely be returned. You can see how these topics were likely generated by looking back at Web History.
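The word-to-topic-to-content-type mapping can be sketched as a simple lookup. This is purely my own toy illustration (the topic names and categories are assumptions, not Google’s actual data structures), but it makes the idea concrete: a topic resolves to a category, and the category determines what kinds of content get surfaced.

```python
# Toy sketch (hypothetical, not Google's implementation): a topic maps
# to a category, and the category tells us which content types fit the
# likely intent behind that topic.
TOPIC_CATEGORIES = {
    "New York City": "place",
    "Googlebot": "software",
    "Back to the Future": "film",
    "Toyota Prius Plug-in Hybrid": "product",
}

CONTENT_TYPES_BY_CATEGORY = {
    "place": ["attractions", "photos"],
    "software": ["articles"],
    "film": ["reviews", "showtimes"],
    "product": ["reviews"],
}

def content_for_topic(topic):
    """Look up the category behind a topic, then return the content
    types that intent suggests. Unknown topics fall back to articles."""
    category = TOPIC_CATEGORIES.get(topic)
    return CONTENT_TYPES_BY_CATEGORY.get(category, ["articles"])

print(content_for_topic("New York City"))  # ['attractions', 'photos']
print(content_for_topic("Googlebot"))      # ['articles']
```

The interesting part isn’t the lookup itself but that the category layer exists at all: once a query is a thing rather than a string, intent comes along for free.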

Search of Google Web History for Moto X

This part of my web history likely triggered a Moto X topic. I used the specific term ‘Moto X’ a number of times in a query which made it very easy to identify. (I did wind up getting the Moto X and love it.)

Tripping Google Now Topics

When I first saw this page back in March and then again in June I wanted to start playing around with what combination of queries would produce a Google Now Topic. However, I’ve been so busy with client work that I never got a chance to do that until now.

Here’s what I did: logged into my Google account and, using Chrome, tried the following series of queries (without clicking through on any results) at 1:30pm on November 13th.

the stranger
allentown
downeaster alexa
big shot
pressure
uptown girl
piano man

But nothing ever showed up in Google Now Topics. So I took a similar set of terms but this time engaged with the results at 8:35am on November 16th.

piano man (clicked through on Wikipedia)
uptown girl (clicked through on YouTube)
pressure (no click)
big shot (clicked through on YouTube)
the stranger lyrics (clicked through on atozlyrics, then YouTube)
scenes from an italian restaurant (no click)

Then at 9:20am a new Google Now Topic shows up!

Google Now Topic for Billy Joel Songs

Interestingly, it understands that this is about music but it hasn’t made a direct connection to Billy Joel. I had purposefully not used his name in the queries to see if Google Now Topics would return him as the topic instead of just songs. Maybe Google knows, but I had hoped to see a Billy Joel topic render and think that would be the better result.

YouTube Categories

Engagement certainly seems to count based on my limited tests. But I couldn’t help but notice that every one of the songs in that Google Now Topic was also a YouTube click. Could I get a Google Now Topic to render without a YouTube click?

The next morning I tried again with a series of queries at 7:04am.

shake it up (no click)
my best friend’s girl (lyricsfreak click)
let the good times roll (click on Wikipedia, click to disambiguated song)
hello again (no click)
just what i needed (lastfm click)
tonight she comes (songmeanings click)
shake it up lyrics (azlyrics click)

At 10:04 nothing showed up so I decided to try another search.

let the good times roll (clicked on YouTube)

At 10:59 nothing showed up and I was getting antsy, which was probably not smart. I should have waited! But instead I performed another query.

the cars (clicked on knowledge graph result for Ric Ocasek)

And at 12:04 I get a new Google Now Topic.

Let The Good Times Roll Google Now Topic

I’m guessing that if I’d waited a bit longer after my YouTube click that this would have appeared, regardless of the click on the knowledge graph result. It seems that YouTube is a pretty important part of the equation. It’s not the only way to generate a Google Now Topic but it’s one of the faster ways to do so right now.

Perhaps it’s easier to identify the topic because of the more rigid categorization on YouTube?

The Cars on YouTube

I didn’t have time to do more research here but am hoping others might begin to compile a larger corpus of tests so we can tease out some conclusions.

Topic Stickiness

I got busy again and by the time I was ready to write this piece I found that my topics had changed.

New Google Now Topics

It was fairly easy to deduce why each had been produced, though the Ice Bath result could have been simply from a series of queries. But what was even more interesting was what my Google Now Topics looked like this morning.

My Google Now Topics Today

Some of my previous topics are gone! Both Ice Bath and Let The Good Times Roll are nowhere to be found. This seems to indicate that there’s a depth of interaction and distance from event (time) factor involved in identifying relevant topics.

It would make sense for Google to distinguish intent that is consistent from intent that is ephemeral. I was interested in ice baths because my daughter has some plantar fascia issues. But I’ve never researched it before and likely (fingers crossed) won’t again. So it would make sense to drop it.

There are a number of ways that Google could determine which topics are more important to a user, including frequency of searching, query chains, depth of interaction as well as type and variety of content.
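Those signals could be combined in any number of ways. Here’s a purely hypothetical sketch of how a “topic stickiness” score might weigh engagement type against recency, with the weights and half-life being my own assumptions rather than anything Google has disclosed:

```python
import time

# Hypothetical sketch: each interaction with a topic contributes a
# weight based on its type (deeper engagement counts more), decayed
# exponentially by age. A threshold on the total score would drop
# ephemeral topics (a one-off "ice bath" search) while keeping
# consistent interests alive.
ENGAGEMENT_WEIGHTS = {"query": 1.0, "click": 2.0, "youtube_click": 3.0}
HALF_LIFE_DAYS = 7.0  # assumed decay window; the real one is unknown

def topic_score(interactions, now):
    """interactions: list of (unix_timestamp, kind) tuples."""
    score = 0.0
    for ts, kind in interactions:
        age_days = (now - ts) / 86400.0
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
        score += ENGAGEMENT_WEIGHTS.get(kind, 1.0) * decay
    return score

now = time.time()
day = 86400
recent = [(now - day, "query"), (now - day, "youtube_click")]
stale = [(now - 30 * day, "query"), (now - 30 * day, "click")]
print(topic_score(recent, now) > topic_score(stale, now))  # True
```

Under a model like this, my Let The Good Times Roll topic would decay below the threshold within a couple of weeks of inactivity, which matches what I observed.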

Google Now Topics and Hummingbird

OMG It's Full of Stars Cat

My analysis of the Hummingbird Update focused largely on the ability to improve topic modeling through a combination of traditional text analysis, natural language processing and entity detection.

Google Now Topics looks like a Hummingbird learning lab.

Watching how queries and click behavior turn into topics (there’s that word again) and what types of content are displayed for each topic is a window into Google’s evolving abilities and application of entities into search results.

It may not be the full picture of what’s going on but there’s enough here to put a lot of paint on the canvas.

TL;DR

Google Now Topics provide a glimpse into the Hummingbird Update by showing how Google takes words, queries and behavior and turns them into topics with defined intent.

What Does The Hummingbird Say?

November 07 2013 // SEO + Technology // 29 Comments

What Does The Fox Say Video Screencap

Dog goes woof
Cat goes meow
Bird goes tweet
and mouse goes squeak

Cow goes moo
Frog goes croak
and the elephant goes toot

Ducks say quack
and fish go blub
and the seal goes ow ow ow ow ow

But theres one sound
That no one knows
What does the hummingbird say?

What Does The Hummingbird Say?

For the last month or so the search industry has been trying to figure out Google’s new Hummingbird update. What is it? How does it work? How should you react?

There’s been a handful of good posts on Hummingbird including those by Danny Sullivan, Bill Slawski, Gianluca Fiorelli, Eric Enge (featuring Danny Sullivan), Ammon Johns and Aaron Bradley. I suggest you read all of these if you get the chance.

I share many of the views expressed in the referenced posts but with some variations and additions, which is the genesis of this post.

Entities, Entities, Entities

Are you sick of hearing about entities yet? You probably are but you should get used to it because they’re here to stay in a big way. Entities are at the heart of Hummingbird if you parse statements from Amit Singhal.

We now get that the words in the search box are real world people, places and things, and not just strings to be managed on a web page.

Long story short, Google is beginning to understand the meaning behind words and not just the words themselves. And in August 2013 Google published something specifically on this topic in relation to an open source toolkit called word2vec, which is short for word to vector.

Word2vec uses distributed representations of text to capture similarities among concepts. For example, it understands that Paris and France are related the same way Berlin and Germany are (capital and country), and not the same way Madrid and Italy are. This chart shows how well it can learn the concept of capital cities, just by reading lots of news articles — with no human supervision:

Example of Getting Meaning Behind Words

So that’s pretty cool, isn’t it? It gets even cooler when you think about how these words are actually places that have a tremendous amount of metadata surrounding them.
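The analogy arithmetic behind word2vec can be sketched with toy vectors. In a real model the vectors are learned from massive text corpora; the hand-made three-dimensional vectors below are my own fabrication, chosen only to make the geometry visible — the “country” and “capital” directions are explicit here where a trained model discovers them:

```python
import math

# Toy illustration of word2vec-style analogy arithmetic using tiny
# hand-made vectors (NOT trained embeddings). The shared offset between
# a country and its capital is what makes "paris - france + germany"
# land near "berlin".
vectors = {
    "france":  [1.0, 0.0, 0.0],
    "paris":   [1.0, 0.0, 1.0],
    "germany": [0.0, 1.0, 0.0],
    "berlin":  [0.0, 1.0, 1.0],
    "spain":   [0.0, 0.0, 0.0],
    "madrid":  [0.0, 0.0, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def analogy(a, b, c):
    """Solve a - b + c, e.g. paris - france + germany -> berlin."""
    target = [x - y + z for x, y, z in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = (w for w in vectors if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("paris", "france", "germany"))  # berlin
```

The remarkable part of the Google result is that no human drew those directions — the model learned them just by reading news articles.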

Topic Modeling

It’s my belief that the place where Hummingbird has had the most impact is in the topic modeling of sites and documents. We already know that Google is aggressively parsing documents and extracting entities.

When you type in a search query — perhaps Plato – are you interested in the string of letters you typed? Or the concept or entity represented by that string? But knowing that the string represents something real and meaningful only gets you so far in computational linguistics or information retrieval — you have to know what the string actually refers to. The Knowledge Graph and Freebase are databases of things, not strings, and references to them let you operate in the realm of concepts and entities rather than strings and n-grams.

Reading this I think it becomes clear that once those entities are extracted Google is then performing a lookup on an entity database(s) and learning about what that entity means. In particular Google wants to know what topic/concept/subject to which that entity is connected.

Google seems to be pretty focused on that if you look at the Freebase home page today.

Freebase Topic Count

Tamar Yehoshua, VP of Search, also said as much during the Google Search Turns 15 event.

So the Knowledge Graph is great at letting you explore topics and sets of topics.

One of the examples she used was the search for impressionistic artists. Google returned a list of artists and allowed you to navigate to different genres like cubists. It’s clear that Google is relating specific entities, artists in this case, to a concept or topic like impressionist artists, and further up to a parent topic of art.

Do you think that having those entities on a page might then help Google better understand what the topic of that page is about? You better believe it.

Based on client data I think that the May 2013 Phantom Update was the first application of a combined topic model (aka Hummingbird). Two weeks later it was rolled back and then later reapplied with some adjustments.

Hummingbird refined the topic modeling of sites and pages that are essential to delivering relevant results.

Strings AND Things

Hybrid Car

This doesn’t mean that text based analysis has gone the way of the dodo. First off, Google still needs text to identify entities. Anyone who thinks that keywords (or perhaps it’s easier to call them subjects) in text aren’t meaningful is missing the boat.

In almost all cases you don’t have as much labeled data as you’d really like.

That’s a quote from a great interview with Jeff Dean and while I’m taking the meaning of labeled data out of context I think it makes sense here. Writing properly (using nouns and subjects) will help Google to assign labels to your documents. In other words, make it easy for Google to know what you’re talking about.

Google can still infer a lot about what that page is about and return it for appropriate queries by using natural language processing and machine learning techniques. But now they’ve been able to extract entities, understand the topics to which they refer and then feed that back into the topic model. So in some ways I think Hummingbird allows for a type of recursive topic modeling effort to take place.

If we use the engine metaphor favored by Amit and Danny, Hummingbird is a hybrid engine instead of a combustion or electric only engine.

From Caffeine to Hummingbird

Electrical Outlet with USB and Normal Sockets

One of the head scratching parts of the announcement was the comparison of Hummingbird to Caffeine. The latter was a huge change in the way that Google crawled and indexed data. In large part Caffeine was about the implementation of Percolator (incremental processing), Dremel (ad-hoc query analysis) and Pregel (graph analysis). It was about infrastructure.

So we should be thinking about Hummingbird in the same way. If we believe that Google now wants to use both text and entity based signals to determine quality and relevance they’d need a way to plug both sources of data into the algorithm.

Imagine a hybrid car that didn’t have a way to recharge the battery. You might get some initial value out of that hybrid engine but it would be limited. Because once out of juice you’d have to take the battery out and replace it with a new one. That would suck.

Instead, what you need is a way to continuously recharge the battery so the hybrid engine keeps humming along. So you can think of Hummingbird as the way to deliver new sources of data (fuel!) to the search engine.

Right now that new source of data is entities but, as Danny Sullivan points out, it could also be used to bring social data into the engine. I still don’t think that’s happening right now, but the infrastructure may now be in place to do so.

The algorithms aren’t really changing but the amount of data Google can now process allows for greater precision and insight.

Deep Learning

Mr. Fusion Home Reactor

What we’re really talking about is a field that is being referred to as deep learning, which you can think of as machine learning on steroids.

This is a really fascinating (and often dense) area that looks at the use of labeled and unlabeled data and the use of supervised and unsupervised learning models. These concepts are somewhat related and I’ll try to quickly explain them, though I may mangle the precise definitions. (Scholarly types are encouraged to jump in and provide correction or guidance.)

The vast majority of data is unlabeled, which is a fancy way of saying that it hasn’t been classified or doesn’t have any context. Labeled data has some sort of classification or identification to it from the start.

Unlabeled data would be the tub of old photographs while labeled data might be the same tub of photographs but with ‘Christmas 1982′, ‘Birthday 1983′, ‘Joe and Kelly’ etc. scrawled in black felt tip on the back of each one. (Here’s another good answer to the difference between labeled and unlabeled data.)

Why is this important? Let’s return to Jeff Dean (who is a very important figure in my view) to tell us.

You’re always going to have 100x, 1000x as much unlabeled data as labeled data, so being able to use that is going to be really important.

The difference between supervised learning and unsupervised learning is similar. Supervised learning means that the model is looking to fit things into a pre-conceived classification. Look at these photos and tell me which of them are cats. You already know what you want it to find. Unsupervised learning, on the other hand, lets the model find its own classifications.
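A minimal sketch of that distinction, using my own toy one-dimensional data (nothing to do with Google’s actual systems): the supervised version is handed labels and just fits new points into them, while the unsupervised version is handed raw numbers and discovers the groupings itself.

```python
def nearest_centroid_classify(labeled, point):
    """Supervised: labels are given, so compute each class centroid
    from the labeled examples and assign the new point to the
    closest one."""
    groups = {}
    for value, label in labeled:
        groups.setdefault(label, []).append(value)
    centroids = {lbl: sum(v) / len(v) for lbl, v in groups.items()}
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - point))

def kmeans_1d(values, k=2, iters=20):
    """Unsupervised: no labels, so discover k clusters by alternating
    assignment and centroid update (plain 1-D k-means)."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)),
                      key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

labeled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.4, "dog")]
print(nearest_centroid_classify(labeled, 1.1))  # cat
print(kmeans_1d([1.0, 1.2, 8.0, 8.4], k=2))    # centers near 1.1 and 8.2
```

Notice that the unsupervised version finds the same two groups without ever being told “cat” or “dog” — which is exactly why being able to use the vastly larger pool of unlabeled data matters.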

If I have it right, supervised learning has a training set of labeled data whereas unsupervised learning has no initial training set. All of this is wrapped up in the fascinating idea of neural networks.

The different models for learning via neural nets, and their variations and refinements, are myriad. Moreover, researchers do not always clearly understand why certain techniques work better than others. Still, the models share at least one thing: the more data available for training, the better the methods work.

The emphasis here is mine because I think it’s extremely relevant. Caffeine and Hummingbird allow Google to both use more data and to process that data quickly. Maybe Hummingbird is the ability to deploy additional layers of unsupervised learning across a massive corpus of documents?

And that cat reference isn’t just because I like LOLcats. A team at Google (including Jeff Dean) was able to use unlabeled, unsupervised learning to identify cats (among other things) in YouTube thumbnails (PDF).

So what does this all have to do with Hummingbird? Quite a bit if I’m connecting the dots the right way. Once again I’ll refer back to the Jeff Dean interview (which I seem to get something new out of each time I read it).

We’re also collaborating with a bunch of different groups within Google to see how we can solve their problems, both in the short and medium term, and then also thinking about where we want to be four years, five years down the road. It’s nice to have short-term to medium-term things that we can apply and see real change in our products, but also have longer-term, five to 10 year goals that we’re working toward.

Remember at the end of Back to The Future when Doc shows up and implores Marty to come to the future with him? The flux capacitor used to need plutonium to reach critical mass but this time all it takes is some banana peels and the dregs from some Miller Beer in a Mr. Fusion home reactor.

So not only is Hummingbird a hybrid engine but it’s hooked up to something that can turn relatively little into a whole lot.

Quantum Computing

So let’s take this a little bit further and look at Google’s interest in quantum computing. Back in 2009 Hartmut Neven was talking about the use of quantum algorithms in machine learning.

Over the past three years a team at Google has studied how problems such as recognizing an object in an image or learning to make an optimal decision based on example data can be made amenable to solution by quantum algorithms. The algorithms we employ are the quantum adiabatic algorithms discovered by Edward Farhi and collaborators at MIT. These algorithms promise to find higher quality solutions for optimization problems than obtainable with classical solvers.

This seems to have yielded positive results because in May 2013 Google upped the ante and entered into a quantum computer partnership with NASA. As part of that announcement we got some insight into Google’s use of quantum algorithms.

We’ve already developed some quantum machine learning algorithms. One produces very compact, efficient recognizers — very useful when you’re short on power, as on a mobile device. Another can handle highly polluted training data, where a high percentage of the examples are mislabeled, as they often are in the real world. And we’ve learned some useful principles: e.g., you get the best results not with pure quantum computing, but by mixing quantum and classical computing.

A highly polluted set of training data where many examples are mislabeled? Makes you wonder what that might be doesn’t it? Link graph analysis perhaps?

Are quantum algorithms part of Hummingbird? I can’t be certain. But I believe that Hummingbird lays the groundwork for these types of leaps in optimization.

What About Conversational Search?

Dog Answering The Phone

There’s also a lot of talk about conversational search (pun intended). I think many are conflating Hummingbird with the gains in conversational search. Mind you, the basis of voice and conversational search is still machine learning. But Google’s focus on conversational search is largely a nod to the future.

We believe that voice will be fundamental to building future interactions with the new devices that we are seeing.

And the first area where they’ve made advances is the ability to resolve pronouns in query chains.

Google understood my context. It understood what I was talking about. Just as if I was having a conversation with you and talking about the Eiffel Tower, I wouldn’t have to keep repeating it over and over again.

Does this mean that Google can resolve pronouns within documents? They’re getting better at that (there’s a huge corpus of research, actually) but I doubt it’s to the level we see in this distinct search microcosm.

Conversational search has a different syntax and demands a slightly different language model to better return results. So Google’s betting that conversational search will be the dominant method of searching and is adapting as necessary.

What Does Hummingbird Do?

What's That Mean Far Field Productions

This seems to be the real conundrum when people look at Hummingbird. If it affects 90% of searches worldwide why didn’t we notice the change?

Hummingbird makes results even more useful and relevant, especially when you ask Google long, complex questions.

That’s what Amit says of Hummingbird and I think this makes sense and can map back to the idea of synonyms (which are still quite powerful). But now, instead of looking at a long query and looking at word synonyms Google could also be applying entity synonyms.

Understanding the meaning of the query might be more important than the specific words used in the query. It reminds me a bit of Aardvark which was purchased by Google in February 2010.

Aardvark analyzes questions to determine what they’re about and then matches each question to people with relevant knowledge and interests to give you an answer quickly.

I remember using the service and seeing how it would interpret messy questions and then deliver a ‘scrubbed’ question to potential candidates for answering. There was a good deal of technology at work in the background and I feel like I’m seeing it magnified with Hummingbird.

And it resonates with what Jeff Dean has to say about analyzing sentences.

I think we will have a much better handle on text understanding, as well. You see the very slightest glimmer of that in word vectors, and what we’d like to get to where we have higher level understanding than just words. If we could get to the point where we understand sentences, that will really be quite powerful. So if two sentences mean the same thing but are written very differently, and we are able to tell that, that would be really powerful. Because then you do sort of understand the text at some level because you can paraphrase it.

My take is that 90% of the searches were affected because documents that appear in those results were re-scored or refined through the addition of entity data and the application of machine learning across a larger data set.

It’s not that those results have changed but that they have the potential to change based on the new infrastructure in place.

Hummingbird Response

Le homard et le chat

How should you respond to Hummingbird? Honestly, there’s not a whole lot to do in many ways if you’ve been practicing a certain type of SEO.

Despite the advice to simply write like no one’s watching, you should make sure your writing is tight and uses subjects that can be identified by people and search engines. “It is a beautiful thing” won’t do as well as “Picasso’s Lobster and Cat is a beautiful painting”.

You’ll want to make your content easy to read and remember, link out to relevant and respected sources, build your authority by demonstrating your subject expertise, engage in the type of social outreach that produces true fans and conduct more traditional marketing and brand building efforts.

TL;DR

Hummingbird is an infrastructure change that allows Google to take advantage of additional sources of data, such as entities, as well as leverage new deep learning models that increase the precision of current algorithms. The first application of Hummingbird was the refinement of Google’s document topic modeling, which is vital to delivering relevant search results.

Closing Google Reader Is Dangerous

March 14 2013 // Social Media + Technology // 39 Comments

I’m a dedicated Google Reader user, spending hours each day using it to keep up on any number of topics. So my knee-jerk reaction to the news that Google will close the service as of July 1, 2013 was one of shock and anger.

I immediately Tweeted #savegooglereader and posted on Google+ in hopes of getting it to trend or go hot. These things are silly in the scheme of things. But what else is there to do?

I’ve written previously that the problem with RSS readers is marketing. I still believe that (it’s TiVo for web content people!) but in the end that’s not why closing Google Reader is so dangerous. And it is dangerous.

Google Reader Fuels Social

Google Reader Is The Snowpack of Social

Photo via double-h

The announcement indicates that, while Google Reader has a loyal following, usage has declined. That’s a rather nebulous statement, though I don’t truly expect Google to provide the exact statistics. But it’s who is still using Google Reader that is important, is it not?

Participation inequality, often called the 90-9-1 principle, should be an important factor in analyzing Google Reader usage. Even if you believe that the inequality isn’t as pronounced today, those that are contributing are still a small bunch.

Studies on participation on Twitter have shown this to be true, both from what content is shared and who is sharing it. That means that the majority of the content shared is still from major publications and that we get that information through influencers. But where do they get it?

Google Reader.

RSS readers are the snowpack of social networks.

Organizing Information

Jigsaw Puzzle Pieces

Google’s mission is to organize the world’s information and make it universally accessible and useful. By extension that is what Google Reader lets power-users do. Make no mistake, Google Reader is not a mainstream product. Google (and many others) have screwed up how to market time-shifted online reading.

The result is that those using Google Reader are different. They’re the information consumers. They’re the ones sifting through the content (organizing) and sharing it with their community (accessible) on platforms like Twitter, Facebook and Google+ (useful).

Google Reader allows a specific set of people to help Google fulfill their mission.

Losing Identity

AJ Kohn Cheltenham High School ID

There are replacements for Google Reader, such as Feedly. So you can expect that the people who fuel social networks will find other ways to obtain and digest information so they can filter it for their followers. Problem solved, right? Wrong.

Why exactly does Google want to hand over this important part of the ecosystem to someone else? With Google Reader they know who I am, what feeds I subscribe to, which ones I read and then which ones I wind up sharing on Google+.

Wouldn’t knowing that dynamic, of understanding how people evaluate content and determine what is worthy of sharing, be of interest to Google? It should be. It’s sort of what they want to excel at.

Not only that but because Google Reader has product market fit (see how I got that buzzword in there) with influencers or experts, you’re losing an important piece of the puzzle if you’re thinking about using social sharing and Authorship as search signals.

Data Blind

Data Blind

In the end, I’m surprised because it makes Google data blind. As I look at Unicorn, Facebook’s new inverted-index system, I can’t help but think that Facebook would love to have this information. Mining the connections and activity between these nodes seems messy but important.

What feeds do I subscribe to? That social gesture could be called a Like in some ways. What feeds do I read? That’s a different level of engagement and could even be measured by dwell time. What feeds and specific content do I share? These are the things that I am endorsing and promoting.

By having Google Reader integrated into the Google+ ecosystem, they can tell when I consumed that information and when I then shared it, not just on Google+ but on other platforms if Google is following the public social graph (which we all know they are.)

Without Google Reader, Google loses all of that data and only sees what is ultimately shared publicly. Never mind the idea that Google Reader might be powering dark social which could connect and inform influencers. Gone is that bit of insight too.

Multi-Channel Social

Daft Punk Discovery

As a marketer I’m consumed with attribution and Google Analytics clearly understands the importance of multi-channel modeling. We even see the view-through metric in Google Adwords display campaigns.

The original source and exposure of content is of huge importance. Google might have Ripples but that only tells them how the content finally entered Google+ not how that content was discovered.

I’m certain that users will find alternatives because there is a need for this service. Google just won’t know what new sites influencers might be reading more of or which sites might be waning with subject matter experts. Google will only see the trailing indicators, not the leading ones.

TL;DR

Google Reader allows information consumers – influencers and subject matter experts – to fuel social networks and help fulfill Google’s core mission. Closing Google Reader will put that assistance in the hands of another company or companies and blinds Google to human evaluation data for an important set of users.

Google’s Evil Plan

January 27 2013 // Technology // 80 Comments

Google’s evil plan is simple and not so evil.

Don’t Be Evil

Soon LOLcat

Any successful company is going to draw criticism. Google probably gets more of it than others because of their ‘Don’t Be Evil’ motto. Algorithm changes shuffle branded sites higher and people shout ‘evil!’ Google begins to disintermediate certain verticals and people shout ‘evil!’

Most of the posts about Google’s evil ways focus around these two themes. So much time and energy is spent raging against changes that are simply a reflection of us – the user. When we collectively stop shopping at branded stores over smaller boutiques then we’ll see that reflected in our search results.

And the last time I checked no one was mourning the demise of the milk man or shedding tears over Tower Records or Blockbuster. It sucks if you’re the business getting disintermediated but do you really want to go to another website to get the current weather?

Evil? It’s not Google, it’s you.

Google’s Evil Plan

Instead of talking about all of these natural business moves and conjuring up some nefarious plot, I want to talk about Google’s real strategy. Here’s the truth. Here’s Google’s plan.

Get people to use the Internet more.

That’s it. The more time people spend on the Internet the more time they’ll engage in revenue generating activities such as viewing and clicking display ads and performing searches.

The way Google executes on this strategy is to improve speed and accessibility to the Internet. Google wants to shorten the distance between any activity and the Internet. Let’s look at some of Google’s initiatives with this in mind.

Chrome

Speed Racer Car #5

Firefox was doing a bang-up job of breaking Internet Explorer’s browser monopoly. Chrome certainly hastened IE’s decline and helped secure more search volume. Yet Chrome developers have long said that their goal isn’t market share but to make the browsing experience faster.

In a very nearsighted way, making browsers faster is the goal. Yet, the faster the web experience, the more page views people rack up and the more searches they’ll perform.

Chrome is about reducing the friction of browsing the Internet.

SPDY

60s Spiderman Flying Car

Google can only do so much with Chrome to speed up the web. Enter SPDY, an open networking protocol, which looks to be the basis for HTTP 2.0.

Its goal is to reduce the latency of web pages.

That’s technical speak for making the web faster. This is what users want. This is what makes users happy. Milliseconds matter when it comes to user satisfaction. And satisfying the user is great for business.

Android

Android Robot

Similar to Chrome, Google saw that users would increasingly access the Internet via phones. They learned from their web browser experience and decided to jump into the vertical early and it’s paid off. Google now commands nearly 54% of the smartphone market.

Android doesn’t have to make money directly. It provides unfettered access to revenue generating activities and allows Google to push the industry forward in terms of speed.

Motorola Mobility

Motorola Mobility

Not content to simply push the envelope with software, Google decided to grab Motorola Mobility and improve on hardware too. The rumors around the Google X phone are increasing.

Long battery life and wireless charging are two of the more tantalizing possibilities. These are clearly features that would greatly benefit users but … they also ensure that you'll nearly always be able to connect to the Internet. See how that works?

Google Now

Psychic Search?

Not using the Internet enough? Google Now can help change that by automagically serving up useful cards based on your search history and behavior. Don’t get me wrong. I like Google Now and find it to be more and more valuable as they add more functionality.

But it’s no mystery that predictive search is also about stimulating more Internet activity.

Google Fiber

Google Fiber

Many seem to think Google is crazy to pursue fiber. It’s massive. It’s expensive. But it’s also exactly in line with their goal of increasing Internet usage. In fact, they’re pretty clear in the messaging on the Google Fiber page.

Google Fiber starts with a connection speed 100 times faster than today’s broadband. Instant downloads. Crystal clear high definition TV. And endless possibilities. It’s not cable. And it’s not just Internet. It’s Google Fiber.

It’s not that Google would control the transmission (though that’s a nice side benefit), it’s that the friction to using the Internet would be nearly zero.

Google WiFi

WiFi Logo

Google already provides free WiFi in Mountain View and wanted to do the same in San Francisco as far back as 2005, until the effort was torpedoed by politics and paranoia. Now Google provides free WiFi in the Chelsea neighborhood of New York. In addition, Google has been futzing with white space spectrum and a super-dense LTE network.

Can it be any more clear? Google wants ubiquitous Internet access.

Google Drive

cloud

I often see people argue that the cloud is Google’s big picture strategy. I think that’s still missing the point. The cloud is a means to an end.

Giving people the ability to access files from anywhere simply keeps them online longer. You're not off in a desktop application working on your document; instead you're online, editing and saving your document. You're searching for those documents.

You’re just a browser tab away from areas of the Internet where Google makes money. In short, Google Drive shortens the distance between work and activities that produce revenue for Google.

Chromebook

Chromebook

Taken to the extreme, Chromebook is essentially a computer that runs off the Internet and cloud. Everything is done online.

A new type of computer designed to help you get things done faster and easier.

Faster. There’s that word again. And easier is just a friendly way of saying ‘reduce friction’. At $199 and $249 Google is hoping that this new type of computer will start to find a market. This strikes me as the ultimate lock-in.

Google+

Aldous Huxley

So what about Google+? At first blush, it doesn’t seem to fit.

I still believe a substantial reason for building Google+ was to develop better social signals and increase search personalization. However, I think the time spent in places where Google couldn’t reach (aka Facebook) was troubling.

Google needed to break the stranglehold Facebook had on social attention. They’ve certainly made inroads there and that’s really all they needed to do to ensure attention didn’t pool and persist in a Google dead zone.

Self Driving Cars

Google Self Driving Car

I’m shocked that people don’t see the brilliance of a self-driving car. The average commute time in the US is 25 minutes (pdf). So that’s nearly an hour each day that people can’t be actively on the Internet. Yet, they obviously want to be.

If you play Ingress (like I do) you can see where XM (roughly phone usage) is highest. It’s super high in parks and doctor’s offices and movie theaters. But it’s also concentrated at intersections. A red light and we’re diving for our phones.

Now imagine a self-driving car and how much more time you’d have to … be on the Internet. I’m just talking about commuting which is less than 20% of the driving done in this country!

A self driving car unlocks a vast amount of time that could be spent on the Internet.

Google Glass

Google Glass Skydive

I know the latest big thing is Sergey on the Subway but to me his skydive was more transformative. The message? Even if you’re falling out of the sky you can still use the Internet.

Google Glass could be the ultimate way to keep you connected to the Internet.

Perhaps we’ll reach a point where much of our consciousness is actually online. Why waste your time remembering useless things when you can simply retrieve them from your personal cloud? Sometimes the future in Charles Stross’ Accelerando seems almost inevitable.

Mind you, at times I feel the urge to live in a cabin in the woods but it’s usually quickly followed with a caveat of ‘with good satellite coverage or Internet access.’

Google TV

Google TV Logo

I think YouTube was initially thought to be the future of TV. The problem is that we’re very entrenched in traditional TV and inertia (and a lack of proper execution by Google TV) has allowed traditional TV to catch up.

This is the one place where Google is behind. Maybe Google TV picks up steam, or Google Fiber is the wedge into homes or Google acquires someone big like TiVo or Netflix.

Twitter is also both a major rival and potential acquisition target because of their position as the glue between screens.

Share of Time

Salvador Dali Dripping Clocks

I'm surprised that no one has compared Google's strategy to Coke's now abandoned 'share of stomach' strategy. Google wants people to spend more of their time on the Internet. Think about that.

Once again it comes down to the ‘Don’t Be Evil’ motto. Coke didn’t care if they were creating a health epidemic as they rang up profits. Google, on the other hand, believes their services can improve our lives.

That kind of belief is what the tin foil hat conspiracy folks should really be worried about. It’s not any small tactical gaffe that could be chalked up in the evil column. It’s that Google believes they’re doing good. I sort of think so too.

TL;DR

Google’s strategy is to get people to use the Internet more. The more time people spend on the Internet the more time they’ll engage in revenue generating activities. As such, nearly every Google effort is focused on increasing Internet speed and access with the goal to shorten the distance between any activity and the Internet.

2013 Internet, SEO and Technology Predictions

December 31 2012 // Advertising + Marketing + SEO + Social Media + Technology // 15 Comments

I’ve made predictions for the past four years (2009, 2010, 2011, 2012) and think I’ve done pretty well as a prognosticator.

I'm sometimes off by a year or two, and some of my predictions were wrong because they were more personal wishes than forecasts. But it's interesting to put a stake in the ground so you can look back later.

2013 Predictions

2013 Predictions Crystal Ball

Mobile Payment Adoption Soars

If you follow my Marketing Biz column you know I’m following the mobile payments space closely. Research seems to indicate that adoption of mobile payments will take some time in the US based on current attitudes.

I believe smartphone penetration and the acceptance of other similar payments such as app store purchases and Amazon Video on Demand will smooth the way for accelerated mobile payment adoption. Who wins in this space? I’m still betting on Google Wallet.

Infographics Jump The Shark

Frankly, I think this has already happened but perhaps it’s just me. So I’m going to say I’m the canary in the coal mine and in 2013 everyone else will get sick and tired of the glut of bad Infographics.

Foursquare Goes Big

The quirky gamification location startup that was all about badges and mayorships is growing up into a mature local search portal. I expect to see Foursquare connect more dots in 2013, making Yelp very nervous and pissing off Facebook who will break their partnership when they figure out that Foursquare is eating their local lunch.

Predictive Search Arrives

Google Now is a monster. The ability to access your location and search history, combined with personal preferences allows Google to predict your information needs. Anyone thinking about local optimization should be watching this very closely.

Meme Comments

A new form of comments and micro-blogging will emerge where the entire conversation is meme based. Similar to BuzzFeed’s reactions, users will be able to access a database of meme images, perhaps powered by Know Your Meme, to respond and converse.

Search Personalization Skyrockets

Despite the clamor from filter bubble and privacy hawks, Google will continue to increase search personalization in 2013. They’ll do this through context, search history, connected accounts (Gmail field trial) and Google+.

The end result will be an ever decreasing uniformity in search results and potential false positives in many rank tracking products.

Curation Marketing

Not content with the seemingly endless debate of SEO versus Inbound Marketing versus Content Marketing versus Growth Hacking we’ll soon have another buzzword entering the fray.

Curation marketing will become increasingly popular as a way to establish expertise and authority. Like all things, only a few will do it the right way and the rest will be akin to scraped content.

Twitter Rakes It In 

I’ve been hard on Twitter in the past and for good reason. But in 2013 Twitter will finally become a massive money maker as it becomes the connection in our new multi-screen world. As I wrote recently, Twitter will win the fight for social brand advertising dollars.

De-pagination

After years of debate and literally hundreds of blog posts about the proper way to paginate, we'll see a trend toward de-pagination in the SEO community. The change will be brought on by the advent of new interfaces and capabilities. (Blog post forthcoming.)

Analytics 3.0 Emerges

Pulling information out of big data will be a trend in 2013. But I’m even more intrigued by Google’s Universal Analytics and location analytics services like Placed. Marketers are soon going to have a far more complete picture of user behavior, Minority Report be damned!

Ingress Becomes Important

I’m a bit addicted to Ingress. At first you think this is just a clever way for Google to further increase their advantage on local mapping. And it is.

But XM is essentially a map of Android usage. You see some in houses, large clusters at transit stops, movie theaters and doctor's offices, essentially anywhere there are lines. You also see it congregate at intersections and a smattering of it on highways.

Ingress shows our current usage patterns and gives Google more evidence that self-driving cars could increase Internet usage, which is Google’s primary goal these days.

Digital Content Monetization

For years we’ve been producing more and more digital content. Yet, we still only have a few scant ways to monetize all of it and they’re rather inefficient when you think about it. Someone (perhaps even me) will launch a new way to monetize digital content.

I Will Interview Matt Cutts

No, I don’t have this lined up. No, I’m not sure I’ll be able to swing it. No, I’m not sure the Google PR folks would even allow it. But … I have an idea. So stay tuned.

Ripples Bookmarklet

July 20 2012 // SEO + Social Media + Technology // 28 Comments

Who shared your post and how did it spread on Google+? That’s what Ripples can tell you, allowing you to find influencers and evangelists.

Google+ Ripples

You can find Ripples in the drop down menu on public posts.

Google Plus Ripples Drop Down

But I noticed that there was also a small URL entry field on the Ripples page.

Google Ripples URL Field

Sure enough you can drop in a URL and see Ripples for any page.

Google Ripples Example

(Interesting how each of my shares of this post are shown separately.)

Ripples Bookmarklet

I didn’t want to go traipsing back and forth to enter URLs, so I created a bookmarklet.

Find Ripples

Drag the link above to your bookmarks bar. Then click the bookmark whenever you want to see Ripples for the page you’re on. [Clarification] This is for non-Google+ URLs only. Ripples for Google+ URLs are only available via the drop-down menu.
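If the link above doesn't survive your feed reader, the bookmarklet itself is just one line of JavaScript that hands the current page's URL to Ripples. Here's a rough sketch; the ripple/details endpoint is the URL the Ripples page used when I built this and Google could change it.

```javascript
// Build the Ripples URL for a given page URL. The endpoint path
// (plus.google.com/ripple/details) is the one the Ripples page used
// at the time of writing and may change.
function rippleUrl(pageUrl) {
  return 'https://plus.google.com/ripple/details?url=' +
    encodeURIComponent(pageUrl);
}

// As a bookmarklet, wrapped in a javascript: URI (uses the page you're on):
// javascript:window.open('https://plus.google.com/ripple/details?url='+encodeURIComponent(location.href))
```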

So stop wondering and find out who’s sharing your content (or any content) on Google+.

Twitter Cards Are Rich Snippets For Tweets

June 18 2012 // SEO + Social Media + Technology // 28 Comments

On Thursday Twitter announced something called Twitter Cards. What are Twitter Cards? They’re essentially rich snippets for Tweets and I predict they’re going to be essential for making your content more portable.

Twitter Cards

There are actually three different types of cards: summary, photo and player. The summary is the default card while the photo and player cards are specifically for images and videos. Here’s the example Twitter provides for a summary card.

Twitter Card Example

Yes Twitter, you definitely have my attention.

Transforming Twitter?

Twitter Cards could transform Twitter from the text based default it has languished in for years to one that will compete with the more appealing and popular visual feeds like Instagram, Path, Foursquare, Tumblr, Google+ and Facebook, the latter two most notably on mobile.

If the summary card is shown by default your Twitter stream would look vastly different. It might also change the behavior of those using Twitter and cause people to trim the number of accounts they follow.

Twitter desperately needs to capture more time and attention to fully realize their advertising business. Transforming the feed through Twitter Cards could be a big step in the right direction.

Twitter Card Properties

All of the cards support some basic properties.

Basic Twitter Card Properties

You can optionally (and ideally) also include attribution in your Twitter Card.

Twitter Card Attribution

The summary card is probably the easiest one of the three with very few required properties.

Twitter Summary Card Properties

Note that you can only have one card per post. If you have the time, I recommend you read through the Twitter Card documentation.
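Pulled together, a summary card boils down to a handful of meta tags in the <head>. Something like this, with placeholder values (note that Twitter's markup uses a value attribute where Open Graph uses content):

```html
<!-- Twitter summary card: one card per post, declared in the <head> -->
<meta name="twitter:card" value="summary" />
<meta name="twitter:site" value="@sitehandle" />
<meta name="twitter:creator" value="@authorhandle" />
<meta name="twitter:url" value="http://example.com/my-post/" />
<meta name="twitter:title" value="My Post Title" />
<meta name="twitter:description" value="A summary of the post in 200 characters or less." />
<meta name="twitter:image" value="http://example.com/images/my-post.jpg" />
```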

Twitter and Open Graph Tags

You might be thinking to yourself, good god, I have to figure out another set of markup? Well, not exactly. Twitter will actually fall back on Open Graph tags should you already have those in place.

But the Open Graph tags aren’t comprehensive. So if you’ve got Open Graph tags in place then you’ll just need to add a few more to get the most out of Twitter Cards. In particular, you won’t get the attribution which is very attractive in my opinion.

As an aside, there’s no mention of whether Twitter will parse schema.org markup or fall back even further to standard markup like the title tag or meta description.

How To Implement Twitter Cards

I have the Open Graph tags on Blind Five Year Old but decided to implement all of the Twitter tags because I want to be certain I have full control over what is being delivered. I think portability is increasingly important so I’m not going to take any chances.

Now, a lot of what I’m going to show you is based on prior hacks and on the plugins I happen to use. So you may not be able to replicate what I do exactly, but it should give you an idea of how you can do it yourself.

Check Your Head

Check Your Head

The first thing to understand is where to put these tags. They go in the <head> of your posts. The <head> is essentially an area (invisible to the user) located before the actual content of a page. It’s where you give instructions to browsers and search engines about the page. This can be all sorts of things from the title to styling of a page. It’s also where you declare the values for all these tags.

Think of it this way, you need special glasses to watch that 3D movie, the <head> is where you’d be given those glasses.

View Page Source

You can see what’s in the <head> by doing a simple right mouse click on any page and selecting ‘View Page Source’.

View Page Source

That will open up a new tab with a whole mess of code for you to review and inspect.

Page Head

My <head> is a bit messy with all the stuff I’ve done and use, but it still works and at some point I’ll come back around to clean it up. Next, we’ll make sure these new Twitter tags show up here.

Edit Your Header

In WordPress, go to your Dashboard and select Appearance > Editor.

WordPress Appearance Editor

Next, select the header file which will likely be header.php.

Edit Header.php File

This is where you’re going to be placing your code.

Now before you go any further, copy all of the code in your header.php and paste it into a text editor. So if you happen to screw things up you can just copy back your old header.php file and start again. (Seriously, do this! I’ve broken my site so many times and it’s that backup copy I have in a text file that often saves the day.)

Drop In The Code

Now it’s time to actually put the code in place. You’re going to put it directly before the closing </head> tag.

Twitter Card Code

I’ve posted a version of the Twitter Card code on Pastebin so you can easily copy and tweak it for your own site. (Do not just copy and paste it into your own file!)

The first line is a comment and does not actually show up on the page nor give any instructions. It just makes it easier for me to see where this code resides once it’s live.

The second line starts with a statement that I only want this on posts. This is accomplished with the if(is_single()) function.

Next I declare the card type (summary) and then the creator (my Twitter handle). I’ve hard coded the creator since I’m the only author on Blind Five Year Old. If you run a single author blog then it’s easy to do this. If you run a multi-author blog or site you’ll have to build in some logic and get the Twitter handle for the author of that post.

To get the URL I simply echo the get_permalink() function. The echo is essentially saying to not only find the permalink but to put what it finds there into the code.

To get the title I echo the get_the_title() function. Yeah, that’s a pretty self explanatory function isn’t it?

For the description I echo the get_post_meta() function, which retrieves a post's meta data. I'm asking for a specific piece of that meta. In this case it's _aioseop_description, the meta description I've entered via the All In One SEO Pack.

I sort of cheated by doing a Google search that brought me to a WordPress Support thread that contained the right syntax for this field. If you didn’t know this you’d have to go and find the name of this field in your database via something like phpMyAdmin.

You might also be able to use the_excerpt() or to echo get_the_excerpt() here but I like the specificity since I know I’ve entered something for the meta description myself.

For the image, I’ve essentially replicated what I do to get the Open Graph image but changed the property to name (swapping og for twitter) and content to value. Again, you really don’t need to do this since Twitter says they’ll fall back on the Open Graph image. But I feel better having it explicitly spelled out.

Read through my Snippet Optimization post to learn more about how to use a simple custom field (og_img) to generate a featured image for each post. Seriously, it’s not that hard to do.
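Put together, the block looks roughly like this. It's a sketch of my particular setup (single author, All In One SEO Pack for the description, an og_img custom field for the image), so swap in your own handle and field names:

```php
<!-- Twitter Card tags -->
<?php if (is_single()) { ?>
<meta name="twitter:card" value="summary" />
<meta name="twitter:creator" value="@yourhandle" />
<meta name="twitter:url" value="<?php echo get_permalink(); ?>" />
<meta name="twitter:title" value="<?php echo get_the_title(); ?>" />
<meta name="twitter:description" value="<?php echo get_post_meta(get_the_ID(), '_aioseop_description', true); ?>" />
<meta name="twitter:image" value="<?php echo get_post_meta(get_the_ID(), 'og_img', true); ?>" />
<?php } ?>
```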

After you put your code in, hit Update File, then go to a post and view source. Hopefully you see the Twitter Card markup populating correctly. (Check this post for an example.) If not, go back and try again, paying close attention to the syntax of your code.

At present Twitter does not have a testing tool like Facebook or Google, but it’s something we may see in the future.

(Please comment if you can improve on, see errors in or can provide additional details such as tips for other platforms or field names for other plugins. A special thanks to Ron Kuris who helped to debug my PHP code.)

A Velvet Rope?

I need To See Some ID LOLcat

It is unclear who exactly will be able to participate in Twitter Cards initially.

To participate in the program, you should (a) read the documentation below, (b) determine whether you wish to support Twitter cards, and then (c) apply to participate. As we roll out this new feature to users and publishers, we are looking for sites with great content and those that drive active discussion and activity on Twitter.

It sounds like Twitter is going to review each site and create a whitelist for those they wish to support. But I have to think that this will become an open standard in short order. So get a jump on things and implement Twitter Cards now.

TL;DR

Twitter Cards are rich snippets for Tweets. Implementing Twitter Cards could transform Twitter into a more appealing visual feed and makes optimizing your Twitter Card an essential part of social portability.

Rich Snippets Testing Tool Bookmarklet

February 12 2012 // SEO + Technology // 68 Comments

Did you implement your Google Authorship markup correctly? Is your review microformat being recognized by Google? The best way to find out is to run it through Google’s Rich Snippets Testing Tool.

Rich Snippets Testing Tool Bookmarklet

I’ve been using Google’s Rich Snippets Testing Tool heavily as I help readers diagnose Authorship markup issues. This morning I was reviewing an interesting post by John Doherty about Google Author Search. In his post he provides a handy bookmarklet.

LOLcat Lightbulb

I realized I should create a Rich Snippets Testing Tool Bookmarklet so I don't have to continually go to the page manually. So I dusted off my limited javascript skills and after about half an hour of trial and error had it figured out.

Rich Snippets Testing Tool

Drag the link above to your bookmarks bar. Then click the bookmark whenever you want to test a specific page. It will create a new tab with the Rich Snippets Testing Tool results.
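For the curious, the bookmarklet just passes the current URL to the testing tool. A sketch along these lines; the tool's address and its query parameter are what I observed at the time and Google could change either.

```javascript
// Open Google's Rich Snippets Testing Tool for a given page URL.
// The endpoint and the 'q' parameter reflect the tool's URL at the
// time of writing; Google may change them.
function richSnippetsTestUrl(pageUrl) {
  return 'http://www.google.com/webmasters/tools/richsnippets?q=' +
    encodeURIComponent(pageUrl);
}

// Bookmarklet form (tests the page you're on in a new tab):
// javascript:window.open('http://www.google.com/webmasters/tools/richsnippets?q='+encodeURIComponent(location.href))
```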

Sample Rich Snippets Testing Tool Bookmarklet Result

This makes it ultra-easy to validate any page for rich snippets and has already (in my testing of the bookmarklet) revealed some bugs with the Rich Snippets Testing Tool itself.

Please let me know if you find this helpful and report any incompatibility issues or bugs you might find with my bookmarklet code.

2012 Internet, SEO and Technology Predictions

December 27 2011 // Analytics + SEO + Technology // 8 Comments

It’s time again to gaze into my crystal ball and make some predictions for 2012.

Crystal Ball Technology Predictions

2012 Predictions

For reference, here are my predictions for 2011, 2010 and 2009. I was a bit too safe last year so I’m making some bold predictions this time around.

Chrome Becomes Top Browser

Having already surpassed Firefox this year, Chrome will see accelerated adoption, surpassing Internet Explorer as the top desktop browser in the closing weeks of 2012.

DuckDuckGo Cracks Mainstream

Gabriel Weinberg puts new funding to work and capitalizes on the ‘search is about answers’ meme. DuckDuckGo leapfrogs over AOL and Ask in 2012, securing itself as the fourth largest search engine.

Google Implements AuthorRank

Google spent 2011 building an identity platform, launching and aggressively promoting authorship while building an internal influence metric. In 2012 they’ll put this all together and use AuthorRank (referred to in patents as Agent Rank) as a search signal. It will have a more profound impact on search than all Panda updates combined.

Image Search Gets Serious

Pinterest. Instagram. mlkshk. We Heart It. Flickr. Meme Generator. The Internet runs on images. Look for a new image search engine, as well as image search analytics. Hopefully this will cause Google to improve (which is a kind word) image search tracking within Google Analytics.

SEO Tool Funding

VCs have been sniffing around SEO tool providers for a number of years. In 2012 one of the major SEO tool providers (SEOmoz or Raven) will receive a serious round of funding. I actually think this is a terrible idea but … there it is.

Frictionless Check-Ins

For location based services to really take off and reach the mainstream they’ll need a near frictionless check-in process. Throughout 2012 you’ll see Facebook, Foursquare and Google one-up each other in providing better ways to check-in. These will start with prompts and evolve into check-out (see Google Wallet) integrations.

Google+ Plateaus

As much as I like Google+ I think it will plateau in mid-2012 and remain a solid second fiddle to Facebook. That’s not a knock of Google+ or the value it brings to both users and Google. There are simply too many choices and no compelling case for mass migration.

HTML5 (Finally) Becomes Important

After a few years of hype HTML5 becomes important, delivering rich experiences that users will come to expect. As both site adoption and browser compatibility rise, search engines will begin to use new HTML5 tags to better understand and analyze pages.

Schema.org Stalls

Structured mark-up will continue to be important but Schema.org adoption will stall. Instead, Google will continue to be an omnivore, happy to digest any type of structured mark-up, while other entities like Facebook will continue to promote their own proprietary mark-up.

Mobile Search Skyrockets

Only 40% of U.S. mobile users have smartphones. That’s going to change in a big way in 2012 as both Apple and Google fight to secure these mobile users. Mobile search will be the place for growth as desktop search growth falls to single digits.

Yahoo! Buys Tumblr

Doubling down on content Yahoo! will buy Tumblr, hoping to extend their contributor network and overlay a sophisticated, targeted display advertising network. In doing so, they’ll quickly shutter all porn related Tumblr blogs.

Google Acquires Topsy

Topsy, the last real-time search engine, is acquired by Google who quickly shuts down the Topsy API and applies the talent to their own initiatives on both desktop and mobile platforms.

Delicious Turns Sour

December 19 2011 // Rant + Technology + Web Design // 8 Comments

In April, the Internet breathed a sigh of relief when Delicious was sold to AVOS instead of being shut down by Yahoo. In spite of Yahoo’s years of neglect, Delicious maintained a powerful place in the Internet ecosystem and remained a popular service.

Users were eager to see Delicious improve under new management. Unfortunately the direction and actions taken by Delicious over the last 8 months make me pine for the days when it was the toy thrown in the corner by Yahoo!

Where Did Delicious Go Wrong?

Delicious Dilapidated Icon

I know new management means well and have likely poured a lot of time and effort into this enterprise. But I see problems in strategy, tactics and execution that have completely undermined user trust and loyalty.

Bookmarklets

The one mission-critical feature that fuels the entire enterprise has fallen into disrepair. Seriously? This is unacceptable. The bookmarklets that allow users to bookmark and tag links were broken for long stretches of time and continue to be rickety and unreliable. This lack of support is akin to disrespect of Delicious users.

Stacks

Here’s how they work. Select some related links, plug them into a stack and watch the magic happen. You can customize your stack by choosing images to feature, and by adding a title, description and comment for each link. Then publish the stack to share it with the world. If you come across another stack you like, follow it to easily find it again and catch any updates.

Instead of the nearly frictionless interaction we've grown accustomed to, we're now asked to perform additional and duplicative work. I've already created 'stacks' by bookmarking links with appropriate tags. Want to see a stack of links about SEO? Look at my bookmarks tagged SEO. It doesn't get much simpler than that.

Not only have they introduced complexity into a simple process, they’ve perverted the reason for bookmarking links. The beauty of Delicious was that you were ‘curating’ without trying. You simply saved links by tags and then one day you figured out that you had a deep reservoir of knowledge on a number of topics.

Stacks does the opposite and invites you to think about curation. I’d argue this creates substantial bias, invites spam and is more aligned with the dreck produced by Squidoo.

Here’s another sign that you’ve introduced unneeded complexity into a product.

Delicious Describes Stacks

In just one sentence they reference stacks, links, playlists and topics. They haven’t even mentioned tags! Am I creating stacks or playlists? If I’m a complete novice do I understand what ‘stack links’ even means?

Even if I do understand this, why do I want to do extra work that Delicious should be doing for me?

Design

Design over Substance

The visual makeover doesn’t add anything to the platform. Do pretty pictures and flashy interactions really help me discover content? Were Delicious users saying they would use the service more if only it looked prettier? I can’t believe that’s true. Delicious had the same UI for years and yet continued to be a popular service.

Delicious is a utilitarian product. It’s about saving, retrieving and finding information.

Sure, Flipboard is really cool but just because a current design pattern is in vogue doesn’t mean it should be applied to every site.

UX

There are a number of UX issues that bother me but I’ll highlight the three that have produced the most ire. The drop down is poorly aligned causing unnecessary frustration.

Delicious Dropdown Alignment

More than a few times I've gone across to click on one of the drop down links only to have it disappear before I could finish the interaction.

The iconography is non-intuitive and doesn’t even have appropriate hover text to describe the action.

Delicious Gray Icons

Delicious Icons are Confusing

Does the + sign mean bookmark that link? What’s the arrow? Is that a pencil?

Now, I actually get the iconography. But that’s the problem! I’m an Internet savvy user, yet the new design seems targeted at a more mainstream user. Imagine if Pinterest didn’t have the word ‘repin’ next to their double thumbtack icon?

Finally, the current bookmarklet supports the tag complete function. You begin typing in a tag and you can simply select from a list of prior tags. This is a great timesaver. It even creates a handy space at the end so you can start your next tag. Or does it?

Delicious Tag Problems

WTF!? Why is my tag all muddled together?

Delicious improved tagging by allowing spaces in tags. That means all tags now have to be separated by commas. I get that. It's not the worst idea either. But the tag complete feature should support this new structure. Instead it only looks like it works, inserting a space after the tag. Am I supposed to use tag complete and then backspace to add a comma?

It’s not the best idea to make your users feel stupid.

Uptime

Delicious Unavailable Page

The service has been unstable lately, as bad as Twitter at the height of its fail whale problem. I've seen that empty loft way too much.

What Should Delicious Do Instead?

It's easy to bitch, but what could Delicious have done instead? Here's what I think they should have done (and still could).

Filtering

An easy first step to improve Delicious would be to provide a better way to filter bookmarks. The only real way to do so right now is by adding additional tags. It would have been easy to introduce time (date) and popularity (number of times bookmarked) facets.

They could have gone an extra step and offered the ability to group bookmarks by source. This would let me see how many bookmarks I have from each site for a given tag. How many times have I bookmarked a Search Engine Land article about SEO? Not only would this be interesting, it maps to how we think and remember. You’ll hear people say something like: “It was that piece on management I read on Harvard Business Review.”
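To show how cheap these facets would be to compute, here’s a minimal sketch. The data model, field names and numbers are all my own invention, not anything from Delicious:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical bookmark records; Delicious's real schema is unknown to me.
bookmarks = [
    {"url": "http://searchengineland.com/a", "tags": {"seo"}, "saved": "2011-10-01", "saves": 120},
    {"url": "http://searchengineland.com/b", "tags": {"seo", "google"}, "saved": "2011-11-02", "saves": 45},
    {"url": "http://hbr.org/c", "tags": {"management"}, "saved": "2011-09-15", "saves": 300},
]

def facet(bookmarks, tag, sort_by="saved"):
    """Filter by tag, then sort by date ('saved') or popularity ('saves')."""
    hits = [b for b in bookmarks if tag in b["tags"]]
    return sorted(hits, key=lambda b: b[sort_by], reverse=True)

def by_source(bookmarks, tag):
    """Count my bookmarks per site for a given tag."""
    return Counter(urlparse(b["url"]).netloc for b in facet(bookmarks, tag))
```

With this, `by_source(bookmarks, "seo")` answers the “how many Search Engine Land articles about SEO” question directly, and swapping `sort_by` gives you the time and popularity facets.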

There are a tremendous number of ways that the new team could have simply enhanced the current functionality to deliver added value to users.

Recommendations

Recommendation LOLcat

Delicious could create recommendations based on current bookmark behavior and tag interest. The data is there. It just needs to be unlocked.

It would be relatively straightforward to create a ‘people who bookmarked this also bookmarked’ feature. Even better if it only displayed those I haven’t already bookmarked. That’s content discovery.

This could be extended to natural browse by tag behavior. A list of popular bookmarks with that tag but not in my bookmarks would be pretty handy.

Delicious could also alert you when it saw a new bookmark from a popular tag within your bookmarks. This would give me a quick way to see what was ‘hot’ for topics I cared about.

Recommendations would put Delicious in competition with services like Summify, KnowAboutIt, XYDO and Percolate. It’s a crowded space but Delicious is sitting on a huge advantage with the massive amount of data at their disposal.

Automated Stacks

Instead of introducing unnecessary friction, Delicious could create stacks algorithmically using tags. These could be personal (your own curated topics) or span the entire platform. Again, why Delicious is asking me to do something it can and should do itself is a mystery to me.

Also, the argument that people could select from multiple tags to create more robust stacks doesn’t hold much water. Delicious knows which tags appear together most often and on what bookmarks. Automated stacks could pull from multiple tags.

The algorithm that creates these stacks would also constantly evolve. They would be dynamic and not prone to decay. New bookmarks would be added and bookmarks that weren’t useful (based on age, lack of clicks or additional bookmarks) would be dropped.
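Since Delicious knows which tags appear together, a first cut at automated stacks could just mine tag co-occurrence. A sketch, with made-up bookmarks and tags:

```python
from collections import Counter
from itertools import combinations

# Hypothetical bookmarks: url -> tag set.
bookmarks = {
    "u1": {"seo", "google"},
    "u2": {"seo", "google", "analytics"},
    "u3": {"seo", "analytics"},
    "u4": {"recipes"},
}

def auto_stacks(bookmarks, top=3, min_size=2):
    """Build stacks from the tag pairs that co-occur most often."""
    pairs = Counter()
    for tags in bookmarks.values():
        pairs.update(combinations(sorted(tags), 2))
    stacks = {}
    for pair, _ in pairs.most_common(top):
        members = [u for u, tags in bookmarks.items() if set(pair) <= tags]
        if len(members) >= min_size:
            stacks[" + ".join(pair)] = members
    return stacks
```

Re-running this periodically is what would keep the stacks dynamic: new bookmarks join a stack as soon as they carry the right tags, and stale ones could be aged out with an extra filter on date or clicks.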

Delicious already solved the difficult human element of curation. It just never applied appropriate algorithms to harness that incredible asset.

Social Graph Data

Delicious could help order bookmarks and augment recommendations by adding social graph data. The easiest thing to do would be to determine the number of Likes, Tweets and +1s each bookmark received. This might simply mirror bookmark popularity though. So you would next look at who saved the bookmarks and map their social profiles to determine authority and influence. Now you could order bookmarks that were saved by thought leaders in any vertical.
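The blending described above might look something like this. The counts, authority scores and weights are all invented for illustration; picking real weights would take actual tuning:

```python
# Hypothetical inputs: per-bookmark social counts and per-user authority scores.
social = {"b1": {"likes": 10, "tweets": 4, "plus_ones": 2},
          "b2": {"likes": 1, "tweets": 0, "plus_ones": 0}}
saved_by = {"b1": ["novice1", "novice2"], "b2": ["expert"]}
authority = {"novice1": 0.1, "novice2": 0.2, "expert": 0.9}

def score(bid, w_social=1.0, w_authority=5.0):
    """Blend raw social counts with the authority of who saved the bookmark."""
    raw = sum(social[bid].values())
    auth = sum(authority[u] for u in saved_by[bid])
    return w_social * raw + w_authority * auth

ranked = sorted(social, key=score, reverse=True)
```

The point of the second term is exactly the one above: a bookmark saved by thought leaders can outrank one that merely collected raw Likes.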

A step further, Delicious could look at the comments on a bookmarked piece of content. This could be used as a signal in itself based on the number of comments, could be mined to determine sentiment or could provide another vector for social data.

Trunk.ly was closing in on this since it already aggregated links via social profiles. You gave it your Twitter account and it collected and saved the links you tweeted. This frictionless mechanism had some drawbacks but it showed a lot of promise. Unfortunately Trunk.ly was recently purchased by Delicious. Maybe some of that promise will show up on Delicious, but the philosophy behind stacks seems to be in direct conflict with how Trunk.ly functioned.

Analytics

Delicious could have provided analytics showing individuals how many times their bookmarks were viewed, clicked or re-bookmarked. The latter two metrics could also be used to construct an internal influence metric. If I bookmark something because I saw your bookmark, that’s essentially on par with a retweet.
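That influence metric could be dead simple. A toy version, with weights that are pure guesswork on my part:

```python
def influence(views, clicks, rebookmarks, w_click=1.0, w_rebookmark=3.0):
    """Toy influence score per view: a re-bookmark counts like a retweet, a click less so."""
    if views == 0:
        return 0.0
    return (w_click * clicks + w_rebookmark * rebookmarks) / views

# A user whose bookmarks get re-bookmarked often scores higher than one
# who only collects passive views.
```

Normalizing by views keeps prolific bookmarkers from winning on volume alone.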

For businesses, Delicious could aggregate all the bookmarks for that domain (or domains), providing statistics on the most bookmarked pieces as well as when they are viewed and clicked. A notification service when your content is bookmarked would also be low-hanging fruit.

Search

Delicious already has search and many use it extensively to find hidden gems from both the past and present. But search could be made far better. In the end Delicious could have made a play for being the largest and best curated search engine. I might be biased because of my interest in search but this just seems like a no-brainer.

Revenue

Building a PPC platform seems like a good fit if you decide to make search a primary feature of the site. It could even work (to a lesser extent) if you don’t feature search. Advertisers could pay per keyword search or tag search. I doubt this would disrupt user behavior since users are used to this design pattern thanks to Google.

Delicious could even implement something similar to StumbleUpon, allowing advertisers to buy ‘bookmark recommendations’. This type of targeted exposure would be highly valuable (to users and advertisers) and the number of bookmarks could provide long-term traffic and benefits. Success might be measured in a new bookmarks per impression metric.

TL;DR

The new Delicious is a step backward, abandoning simplicity and neglecting mechanisms that build replenishing value. Instead management has introduced complexity and friction while concentrating on cosmetics. The end result is far worse than the neglect Delicious suffered at the hands of Yahoo.