Keywords are at the heart of any SEO strategy: they are the thread that links sites to customers via their search terms. Over the years there has been a huge amount of focus and speculation on exactly how keywords should be used in on-page content to signal relevance to search engines and to rank well.
Historically, much of the talk about keywords has been of the “how many angels can dance on the head of a pin” variety: where in the content should they be placed, how many words may separate the start of a page or heading from the keyword, what is the optimum ratio of keywords to other text, and so on. Some (not very good) SEOs strive to hit an exact keyword percentage, often producing content that isn’t very readable to humans. And some stick to the old tactic of writing as many pages as possible, stuffed with keyword variants and synonyms to cover every possible long-tail search.
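For the record, keyword density is usually defined as the number of times a keyword (or keyword phrase) appears on a page divided by the page’s total word count. Here is a minimal sketch of that calculation in Python; the tokenizer and the example text are our own illustration, not anything from Collier’s study:

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of `keyword` as a phrase, divided by total word count."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    kw = keyword.lower().split()
    n = len(kw)
    # Slide a window over the word list and count phrase matches.
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == kw)
    return hits / len(words)

print(keyword_density("SEO tips: great SEO tips for better SEO", "seo"))  # 0.375
```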
Until recently, little had been done to verify these speculations and strategies empirically. Mark Collier over at The Open Algorithm is at least attempting to do something about that.
He has embarked on a program of testing purported ranking signals to see how well they correlate with actual rankings, including collecting keyword density measurements for over a million pages across more than 12,000 keywords. Measured with Spearman’s rank correlation coefficient, the results showed no positive correlation between keyword density and SERP position, and even a slight negative one. We won’t go deeply into his methods here, or their flaws, but you can read about both at The Open Algorithm.
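Spearman’s coefficient works on ranks: it asks how well the ordering of one variable agrees with the ordering of another, returning +1 for perfect agreement, −1 for perfect inversion, and 0 for no relationship. Here is a toy version of the density test using SciPy, with invented numbers rather than Collier’s data:

```python
from scipy.stats import spearmanr

# Invented data for one SERP: keyword density of each result and its
# position (1 = top result). Illustrative only, not Collier's dataset.
densities = [0.021, 0.034, 0.010, 0.055, 0.017, 0.042, 0.008, 0.029, 0.013, 0.038]
positions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

rho, p_value = spearmanr(densities, positions)
print(f"Spearman's rho = {rho:.3f} (p = {p_value:.3f})")
# A rho near 0 means density tells you nothing about position. Note that a
# *positive* rho here would mean higher density at worse positions, since
# position 1 is the best rank.
```

Collier aggregated a statistic like this across thousands of SERPs; the sketch only shows the shape of the test.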
This is a result that should surprise no one working in SEO today, but there is another conclusion drawn from the data that should definitely give pause for thought. In another test, Collier examined whether the mere presence of a keyword on a page correlated with ranking. The results showed no correlation at all. That is, according to these tests, whether a page ranks for a keyword is unrelated to whether the page contains that keyword in its text.
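The presence test is the same machinery with a binary variable: flag each ranking page 1 if the keyword appears anywhere in its text and 0 if it does not, then correlate that flag with SERP position. Again, a toy sketch with made-up values:

```python
from scipy.stats import spearmanr

# Invented data: does the keyword appear in each of the top ten results?
keyword_present = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
positions = list(range(1, 11))

rho, _ = spearmanr(keyword_present, positions)
print(f"rho = {rho:.3f}")
# A rho near zero, which is what Collier reports, would mean that
# containing the keyword does not predict where the page ranks.
```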
How can that be?
There is a difference between being indexed for a keyword and being ranked highly for it. Having particular keywords on a page may lead to an association with that keyword in the index, but it seems to have no effect at all on where in the SERPs the page appears. Where a page ranks is probably much more influenced by other signals (and Collier’s results bear this out): the anchor text and domain authority of incoming links, the page title, meta tags, and so on.
So, should we stop caring about keywords? Absolutely not. Collier’s methodology is open to criticism (see the comments on his articles), but his results should spur a fascinating conversation in the SEO world about long-standing misconceptions, and they will hopefully encourage others to apply a more rigorous, mathematical, and scientific approach to search engine optimization.
What do you think? Is Collier on to something, or is his method so flawed that his results can be disregarded? Does your experience contradict his results? Let us know in the comments.