
Since the release of Google Panda, the company has been heavily criticised by a range of press outlets over the quality of the recent updates. So we thought we would throw together a list of areas where the search engine has lost its way, and what it can do to improve. This is not a list of grievances against Google, as there are plenty of other places where you can see those. This is more about what we think is affecting the quality of Google’s search algorithm, and what is causing some good-quality, relevant websites to have a lower overall position.

Still using title tags, not page content

This is, in my opinion, the number one failing of the way Google analyses search results. Based on a recent survey by SEO Moz, the title tag is the number-one on-page keyword factor determining the rank of a page. Using it as a page metric is a BIG mistake for two reasons.

Firstly, it encourages websites to have more pages with less content, because in order to use this tool you have to have a different page for every aspect you intend to promote. So, for example, if I am selling a plumbing service for unblocking drains, I would be better off having a separate page for each aspect of the unblocking process, detailing what those aspects are. If I want to promote that I unblock storm drains AND domestic drains, I have to have two separate pages. Basically, I can outrank my competitors by having two pages about blocked drains rather than one good one.
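As a purely hypothetical illustration (the file names, titles and business name below are invented for the example), the two-page approach only really differs from the single good page in its title tags:

    <!-- two thin pages, each targeting one phrase in its title tag -->
    <!-- unblock-storm-drains.html -->
    <title>Unblock Storm Drains | Joe’s Plumbing</title>
    <!-- unblock-domestic-drains.html -->
    <title>Unblock Domestic Drains | Joe’s Plumbing</title>

    <!-- versus one substantial page covering both topics -->
    <!-- blocked-drains.html -->
    <title>Blocked Drains: Storm and Domestic Drain Unblocking | Joe’s Plumbing</title>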

Secondly, the title tag of a page is one of THE easiest aspects of SEO and website design to manipulate. No one reads the tag except search engines, so it does not matter what you put there. I have seen websites that use their title tag the way they used to use the keywords meta tag. It has become a keyword dumping ground.
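For example, a keyword-dumping-ground title tag (an invented but typical pattern) looks something like this:

    <title>plumber, plumbing, blocked drains, drain cleaning, cheap plumber, emergency plumber</title>

No human would read that as a title, but it packs in every phrase the site wants to rank for.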

Search engines will argue that the page title is the only reliable way to determine the author’s intended subject for the page. After all, where else would they pull the title of an individual search result from?

I have two possible solutions for this. One is that they could start to implement and read some sort of rich snippet/XML solution, where designers can give a title to the paragraph(s) they are writing. So a page that has several paragraphs of sub-content could be broken up with a <subcontent> </subcontent> tag. But isn’t this just a different implementation of the title tag, and similarly open to abuse? Well, not really. For one, it is something that the user will actually have to read, so it will have to make sense. For two, it allows one page on one topic to be broken up, so search engines can see the relationship between topics on a single page. A page that has lots of sub-headings balanced with content is going to be worth more than one page without any other supplementing information. It also allows more natural language to be used in making a web page.
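A rough sketch of how that markup might look (the <subcontent> element and its title attribute are purely hypothetical here, not part of any existing standard):

    <!-- one page, one topic, broken into titled sub-sections -->
    <subcontent title="Unblocking storm drains">
      <p>How we clear leaves and silt from council storm drains...</p>
    </subcontent>
    <subcontent title="Unblocking domestic drains">
      <p>What to expect when we unblock a kitchen or bathroom drain...</p>
    </subcontent>

A search engine reading this could treat each titled block as a sub-topic of the one page, rather than forcing every sub-topic onto its own page.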

The other solution might be to read H1 tags the same way title tags are read now. Again, websites could have multiple H1 tags on a page, used much the same way internal page anchor links are used now. It would also mean that the text would HAVE to be readable.
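Something along these lines, assuming a search engine treated each H1 as the title of the section that follows it (the ids and headings are illustrative only):

    <h1 id="storm-drains">Unblocking storm drains</h1>
    <p>...</p>
    <h1 id="domestic-drains">Unblocking domestic drains</h1>
    <p>...</p>
    <!-- the same ids double as internal anchor links, as they do today -->
    <a href="#storm-drains">Storm drains</a> <a href="#domestic-drains">Domestic drains</a>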

Still valuing domain names

This is an obvious problem. Again, the guys at SEO Moz listed it as a factor that is likely to decrease in value over time anyway. The main problems with valuing domains are that anyone can buy one for a price that is often lower than the cost of putting the work into the content, and that there are few restrictions on the keywords a domain can contain.

Reading too much into social media

Search engines have always equated links with recommendations. If I link to you, that is my way of saying I trust you. They have spent a lot of time and effort working out who to trust and who to rely on for these links. However, social media does not work in the same way. Whereas it takes time and money to set up a website and get it trusted, a social page is cheaper and easier to manipulate. It costs nothing to set up 100 Twitter accounts all linking to each other. There are websites where you can buy 1,000 “likes” for your Facebook page. Google has also never publicly said that trading likes and affiliations on Facebook and Twitter is against its guidelines. Related to that, it is too easy to share links on social media. There are no trust issues associated with anyone linking to anyone’s page, and for all intents and purposes that link is then forgotten about.

Domain-wide signals focussing too much on the negative

This has really been seen since the latest update. It is speculated that content copied from any source, anywhere, can have a detrimental effect on the domain value. Looking at the sum total of ranking factors for a page, the quality of the domain plays a massive part. Primarily, all pages on a domain are linked to each other, so the domain’s rank has multiple impact points. Secondly, Google looks at values such as domain age and .tld status to give EVERY page points. This means the domain is probably the second biggest ranking factor after on-page factors (albeit indirectly).

What I am getting at is that, in the modern era, many company employees and contractors have an impact on a corporate website. One page that goes astray can easily affect the whole site.

Not looking at website “purpose”

The last thing Google can do to improve its search pages is to look at the “purpose” of a website and give ranking points based on that. Is the website a directory, a news aggregator, a business page, a community? All of these factors would determine whether the content of that page can be trusted and what value the links leaving that page have. Look at Wikipedia: it is clearly one of the most visited websites on the web. If you search for “Pilates” it is the top result. However, it is really just a collection of links referencing other websites. The information on there is good enough to pass the Google filter, but if you were looking for a local Pilates instructor, the history of Pilates or other information, you will go through Wikipedia to another site. Why not serve those sites up first? Or better yet, as Google has done with its “News” and “Blogs” filters, it could add a few more for “Social Media”, “Communities” and so on.

Overpersonalisation

This one was prompted by a BBC article online. Google personalises all search results now, so if you have clicked on or “liked” a search result before, you will get that result closer to the top of your searches. This is great if you are always looking for similar things. However, if you search for “lane cove dentist”, the first time you might want the number of your dentist; the next time you might want to see what the local council says about operating in the area. This is a very simplified view, but you get the idea. With the addition of Google “Plus One”, this is going a little further, with results from within your own network getting preference over other results too.

Overall, my main complaint is that all of these things are affecting the purity of Google’s results. Where once they had the most accurate and cleanest search engine, now they have something that is less accurate.

I am sure you have your own ideas for improvements; let us know!
