User Behavior as a Ranking Signal
The debate over whether behavioral metrics correlate with rankings has been going on for years. At first glance, the topic seems uncontroversial: after all, it's a yes-or-no question. Why, then, are opinions on it so divided?
The problem is that what Google representatives have said on the issue has been surprisingly inconsistent. They have confirmed it, denied it, and confirmed it again, but most often, they have equivocated.
Here’s a little thought experiment: for a moment, let us assume that user behavior is not a ranking signal. Can you think of a reason why Google employees would often publicly say the opposite? (You’ll find some pretty straightforward quotes later on in this article.)
Now, let’s assume for a second that Google does consider user metrics in its ranking algorithms. Why wouldn’t they be totally open about it? To answer that question, I’d like to take you back to the days when links largely determined where you ranked. Remember how everyone did little SEO apart from exchanging and buying backlinks in ridiculous amounts? Until Penguin and manual penalties followed, of course.
With that experience in mind, Google probably doesn’t have much faith left in the good intentions of humankind, and rightly so. Revealing the ins and outs of its ranking algorithms could be an honorable thing to do, but it would also be an incentive for many to use this information in a not-so-white-hat way.
If you think about it, Google’s aim is a very noble one: to deliver a rewarding search experience to its users, ensuring that they can quickly find exactly what they’re looking for. And logically, the implicit feedback of Google searchers is a brilliant way to achieve that.
In this article, you’ll find the evidence that Google uses behavioral metrics to rank webpages, the 3 most important metrics that can make a real SEO difference, and real-life how-tos on applying that knowledge to improve your pages’ rankings.
Using behavioral metrics as a ranking factor is logical, but it’s not just common sense that’s in favor of the assumption. There’s more tangible proof, and loads of it.
1. Google says so.
While some Google representatives may be reluctant to admit the impact of user metrics on rankings, many aren’t.
In a Federal Trade Commission court case, Google’s former Search Quality chief, Udi Manber, testified the following:
“The ranking itself is affected by the click data. If we discover that, for a particular query, hypothetically, 80 percent of people click on Result No. 2 and only 10 percent click on Result No. 1, after a while we figure probably Result 2 is the one people want. So we’ll switch it.”
Edmond Lau, who also used to work on Google Search Quality, said something similar:
“It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. Infrequently clicked results should drop toward the bottom because they’re less relevant, and frequently clicked results bubble toward the top.”
Want more? Back in 2011, Amit Singhal, one of Google’s top search engineers, mentioned in an interview with The Wall Street Journal that Google had added numerous “signals,” or factors, into its algorithm for ranking sites. “How users interact with a site is one of those signals.”
2. Google’s patents imply so.
While Google’s employees can equivocate about the correlation between user metrics and rankings in their public talks, Google can’t omit information like this from its patents. It can word it obliquely, but it can’t omit it.
Google has a whole patent dedicated to modifying search result ranking based on implicit user feedback.
There are a bunch of other patents that mention user behavior as a ranking factor. In those, Google clearly states that along with the good old SEO factors, various user data can be used by a so-called rank modifier engine to determine a page’s position in search results.
3. Real-life experiments show so.
A couple of months ago, Rand Fishkin of Moz ran an unusual experiment. Rand reached out to his Twitter followers and asked them to run a Google search for “best grilled steak,” click on result No. 1 and quickly bounce back, and then click on result No. 4 and stay on that page for a while.
Guess what happened next? Result No. 4 got to the very top.
The 3 user metrics that make a difference
Okay, we’ve figured out that user behavior can affect your rankings — but what metrics are we talking about exactly?
1. Click-through rate
It is clear from numerous patents filed by Google that it collects and stores information on the click-through rates of search results. A click-through rate (CTR) is the ratio of the number of times a given search listing was clicked on to the number of times it was displayed to searchers. There is, of course, no such thing as a universally good or bad CTR, and many of the factors that affect it are not directly within your control. Google is well aware of them.
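As a formula, that’s simply clicks divided by impressions. A minimal sketch (the listing numbers are made up for illustration):

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks divided by impressions."""
    if impressions == 0:
        return 0.0
    return clicks / impressions

# Hypothetical listing: displayed 2,000 times, clicked 240 times
print(f"{ctr(240, 2000):.1%}")  # 12.0%
```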
To start with, there’s presentation bias. Clearly, SERP CTR varies significantly for listings depending on where they rank in search results, with top results getting more clicks.
Secondly, different click-through rates are typical for different types of queries. For every query, Google expects a CTR in a certain range for each of the listings (e.g. for branded keywords, the CTR of the No. 1 result is around 50%; for non-branded queries, the top result gets around 33% of clicks). As we can see from Rand Fishkin’s test mentioned above, if a given listing gets a CTR that is seriously above (or below) that range, Google can re-rank the result in a surprisingly short time span.
One other thing to remember is that CTR affects rankings in real time. After Rand’s experiment ended, the listing users had been clicking and dwelling on eventually dropped back to position No. 4 (right where it was before). This shows that a temporary increase in clicks results only in a temporary ranking improvement.
2. Dwell time
Another thing that Google can use to modify rankings of search results, according to the patents, is dwell time. Simply put, dwell time is the amount of time that a visitor spends on a page after clicking on its listing on a SERP and before coming back to search results.
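In log terms, dwell time is just the gap between the click-through and the return to the SERP. A toy sketch with made-up timestamps:

```python
from datetime import datetime

# Hypothetical log events: a searcher clicks a listing, then returns to the SERP
click_time = datetime(2016, 5, 10, 14, 3, 12)
return_time = datetime(2016, 5, 10, 14, 8, 47)

# Dwell time: seconds spent on the page between the two events
dwell_seconds = (return_time - click_time).total_seconds()
print(dwell_seconds)  # 335.0
```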
Clearly, the longer the dwell time the better — both for Google and for yourself.
Google’s patent on modifying search results based on implicit user feedback lists dwell time among the kinds of user information that may be used to rank pages.
Google wants searchers to be satisfied with the first search result they click on (ideally, the No.1 result). The best search experience is one that immediately lands the searcher on a page that has all the information they are looking for, so that they don’t hit the back button to return to the SERP and look for other alternatives.
Bouncing off pages quickly to return to the SERP and look for another result is called pogo-sticking.
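If you were mining your own analytics logs for this pattern, a crude heuristic might look like the following. The 5-second cutoff is our assumption, purely for illustration; Google has not published any such threshold:

```python
POGO_THRESHOLD_SECONDS = 5  # assumed cutoff, purely illustrative

def is_pogo_stick(dwell_seconds: float) -> bool:
    """Treat a very quick return to the SERP as a pogo-stick."""
    return dwell_seconds < POGO_THRESHOLD_SECONDS

print(is_pogo_stick(3.2))   # True
print(is_pogo_stick(90.0))  # False
```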
All of this affects your website’s overall quality score
OK, this one is really big: good performance in terms of clicks and viewing times isn’t just important for the individual page (and query) you are going after. It can impact the rankings of your site’s other pages, too.
Google has a patent on evaluating website properties by partitioning user feedback. Simply put, for every search somebody runs on Google, a Document-Query pair (D-Q) is created. Such pairs hold the information about the query itself, the search result that was clicked on, and user feedback in relation to that page. Clearly, there are trillions of such D-Qs; Google divides them into groups, or “partitions”. There are thousands of “partition parameters”, e.g. the length of the query, the popularity of the query, its commerciality, the number of search results found in Google in response to that query, etc. Eventually, a single D-Q can end up in thousands of different groups.
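To make the idea concrete, here is a toy sketch of how one D-Q pair can land in several partitions at once. The partition names and rules below are invented for illustration; the patent does not disclose Google’s actual parameters:

```python
# One Document-Query pair: the query, the clicked result, and user feedback
dq_pair = {"query": "best grilled steak", "doc": "example.com/steak", "long_click": True}

# Invented partitioning rules, one per "partition parameter"
partitioners = {
    "query_length": lambda dq: "short" if len(dq["query"].split()) <= 3 else "long",
    "commerciality": lambda dq: "commercial" if "best" in dq["query"] else "informational",
}

# The same D-Q pair ends up in one group per parameter
groups = {name: rule(dq_pair) for name, rule in partitioners.items()}
print(groups)  # {'query_length': 'short', 'commerciality': 'commercial'}
```

With thousands of real parameters, the same pair would fall into thousands of groups, which is exactly the scale the patent describes.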
“So what?”, you might reasonably ask. The reason Google does all of this mysterious partitioning is to determine certain properties of a website, such as its quality. This means that if some of your pages perform poorly in certain groups compared to other search results, Google could conclude that yours is not a high-quality site overall, and down-rank many, or all, of its pages. The opposite is also true: if some of your pages perform well compared to competitors’, all of your site’s pages can be up-ranked in search results.
How-to: Optimize user metrics for better rankings
The three metrics above are a must to track and improve on. Offering a great user experience to visitors is a win-win per se; as a sweet bonus, it can also improve your organic rankings.
1. Boost your SERP CTR
The snippet Google displays for search results consists of a title (taken from the page’s title tag), a URL, and a description (the page’s meta description tag).
- Make sure your title and meta description meet the tech requirements
If you’re just starting at optimizing your listings’ organic click-through rate, the first thing you need to do is make sure that all your titles and meta descriptions are in line with SEO best practices.
- Find pages with poor CTR
Once you’re positive your title and meta description meet the technical requirements, it’s time to go after the CTR itself. The first thing you’d want to do is log in to your Google Webmaster Tools account, go to the Search Analytics report, and select Clicks, Impressions, CTR, and Position to be displayed.
While CTR values for different positions in Google SERPs can vary depending on the type of query, on average, you can expect at least 30% of clicks for a No. 1 result, 15% for a No. 2 result, and 10% for a No. 3 result, according to Chitika’s CTR study.
If the CTR for some of your listings is seriously below these averages, those are the problem listings you’d want to focus on first.
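Putting those per-position benchmarks into practice, here is a rough sketch that flags listings whose CTR falls well below the average for their position. The page names, numbers, and the 50% margin are all illustrative:

```python
# Approximate average CTRs by position, from the figures cited above
EXPECTED_CTR = {1: 0.30, 2: 0.15, 3: 0.10}

# Hypothetical Search Analytics rows
listings = [
    {"page": "/steak-guide", "position": 1, "clicks": 120, "impressions": 1000},
    {"page": "/grill-tips",  "position": 3, "clicks": 95,  "impressions": 1000},
]

def underperformers(listings, margin=0.5):
    """Flag listings whose CTR is far below the average for their position."""
    flagged = []
    for listing in listings:
        ctr = listing["clicks"] / listing["impressions"]
        expected = EXPECTED_CTR.get(listing["position"])
        if expected is not None and ctr < expected * margin:
            flagged.append(listing["page"])
    return flagged

print(underperformers(listings))  # ['/steak-guide']
```

Here /steak-guide ranks No. 1 but gets only a 12% CTR against an expected 30%, so it is the listing to fix first.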
By: Masha Maksimava
This article first appeared on SEOPowerSuite’s website.