
Around the middle of January, Google rolled out "Search Plus Your World" (hereafter SPYW), which means that logged-in users will get their organic search results augmented with socially shared content and markup, ostensibly from Google+. Danny Sullivan has already written two pieces about it ("Google's Results Get More Personal": http://searchengineland.com/googles-results-get-more-personal-with-search-plus-your-world-107285 and "Real-Life Examples of How Search Plus Pushes Google+ Over Relevancy": http://searchengineland.com/examples-google-search-plus-drive-facebook-twitter-crazy-107554), which cover the changes brilliantly, so I suggest reading those before carrying on.


What a week this has been in search! On Tuesday Google changed the rules in a big way, and this time it could have a significant impact on search results. Don't be alarmed: it's unlikely to affect your results overnight, so there is time to embrace the changes that are needed. But change is definitely necessary unless you want to risk getting left behind.

Google are constantly updating their algorithm, which is why they continue to be the most popular search engine by a country mile. But this week they took a step that forces those serious about the success of their website to sit up and listen – by engaging with Google+.


You may have noticed in recent months that there is a new version of Analytics accessible within your existing account. It looks quite different but has much of the same data. The navigation is organised in a more sensible way, and the filtering options & advanced segments are easier to manage. It's a step forward in more ways than just the layout and usability of the interface.



Over recent months, even years, it’s become more and more evident that search as we know it is changing. The growth and popularity of social media and location information (via GPS & IP address) means that the search engines can tailor search results to an individual, meaning that people are starting to see completely different search results than the person sitting next to them at work. In the past few weeks, both Google and Facebook have announced new social features, which just reinforces our belief that a new age of search is coming.

Google Goes Social

In the past few weeks we have seen a big change in Google. This started with a change to the look of search results and the appearance of a black bar across the top. The filter options down the side have been given some colour, and even when you are signed out with personalisation turned off, information such as location is logged and search results are adjusted accordingly.

These changes are all part of Google's latest, and most promising, attempt at social media. Google+ is still in its early days, with limited user capacity, but the noises coming from Google+ are positive, with people really liking the interface and usability. So far this is looking to be Google's most successful attempt at social media, and one that is likely to stick.

There are a number of aspects of Google+ which show that Google have been paying attention to how people use other social media and what they've been unhappy with. One of the most widely hailed success stories so far has been 'Circles'. Google have realised that people don't necessarily want to share every part of their life with everyone they know, so when you add a new person to your friends you are asked to categorise them into a circle. With one circle for family and another for colleagues, you can choose to share something personal with your family and your colleagues won't see that update. Yes, Facebook have lists, and yes, if you go through the effort of clicking through you can choose a list to view your status update, but it's clunky, a lot of effort, and a lot of users just don't know how to do it! In Google+ Circles the process is quick and simple, and everyone likes quick and simple! If you see something on any Google site you want to share with one of your circles, you just need to look to the new black toolbar and voila, it is shared.

The next feature of Google+ is 'Sparks', which searches the internet and pulls in items it believes you will be interested in based on what you add to the search bar. The results feel a little lacking at the moment, but we believe that as more users join Google+ and Google collect more information on different people and their interests, the results will become more relevant. There is also a 'Featured Interests' section where you can see what other people are using Sparks for.

'Huddle' is a group messaging tool which works across Android, iPhone & SMS. It uses your circles and means that a group of people can be part of the same instant conversation as it unfolds. Closely related are 'Hangouts'; the best way to describe these is as the new age of forums. You log on and make yourself available for a video chat, friends see you are available and can come and go as they please, with up to ten people at a time in one hangout. This eliminates some of the issues with Skype where people can't or don't want to talk, as everyone in a hangout has chosen to be there.

Once a user has activated their Google+ profile, their standard Google profile will be removed so that Google+ becomes the central point for all things Google. There are also some pretty nice features associated with the profile: those with the Android app can auto-upload photos from their phone, and the profile will also show all of the +1 content the user has chosen.

What is +1?

Google +1 is Google’s version of a ‘Like’ button. When logged into Google, any search you do will have a +1 image next to the title tag. By clicking on this you are doing several things:

  • bookmarking this site for future use (available in your Google+ profile)
  • telling other friends that you like something as your profile image will appear in their search results as having +1’d this site
  • feeding information into Google and the owner of that site about who likes it and how they use it!


The +1 button can also be added to websites so users can +1 a page whilst on the site itself. The advantage to website owners of adding the button is that they can access an activity report in Webmaster Tools which shows how many times pages have been +1'd from site links and search links. They are also able to access an audience report which provides demographic information.
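For illustration, this is roughly what the standard snippet Google published for the button looked like at launch – a script include plus a placeholder tag – though you should take the exact code from Google's own documentation:

<!-- Load Google's +1 button script once per page -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<!-- Render a +1 button for the current page's URL -->
<g:plusone></g:plusone>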

Social Plugin Tracking in Google Analytics

Google haven't stopped at Google+ and their +1 button though! They have also introduced social plugin tracking for Google Analytics, which will track social media activity from Tweets to Facebook Shares to Google +1 clicks. By adding a bit of JavaScript to your site you can access a Social Engagement Report which breaks down site behaviour for all social visitors. This means you can see whether people who use the social interaction buttons spend longer on the site, and which pages they visit and share.
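As a rough sketch (assuming the standard asynchronous ga.js tracking code is already installed and that you load Facebook's and Twitter's own button scripts), each social interaction is pushed into Analytics with _trackSocial; +1 clicks were reported automatically when the +1 button and Analytics shared a page:

// Report a Facebook Like to Google Analytics (ga.js asynchronous syntax)
FB.Event.subscribe('edge.create', function(targetUrl) {
  _gaq.push(['_trackSocial', 'facebook', 'like', targetUrl]);
});

// Report a Tweet made through the official Tweet Button
twttr.events.bind('tweet', function(event) {
  _gaq.push(['_trackSocial', 'twitter', 'tweet']);
});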

This gives website owners new considerations. They may have pages which are 'hot' for a few days or weeks because they are topical, or because an e-newsletter has been sent out. But there may also be pages which are consistently 'liked' over a long period of time, which gives website owners the opportunity to optimise content which is valuable to their target audience.

There are some pros and cons to the tracking though: Analytics will only report on +1 interactions which occur on your website domain, whereas the activity report in Webmaster Tools will show all +1 interactions regardless of where on the web they happened. Analytics is also updated more frequently than Webmaster Tools, which means that the two will rarely tally up!

Google Acquired PostRank?

That's right, Google have acquired the company which, to date, has the best methods of aggregating social engagement information. This means that as well as all the new information Google will gain from Google+ and +1, they will also gain all of the social data PostRank has gathered over time. That said, the services offered by PostRank are impressive and definitely worth considering. Some of the highlights are:

  • real time tracking of where your visitors have come from, how they came to you, and what they’ve done on your site.
  • measurement of actual user activity, which translates into the relevance and influence of your site – off-site engagement can account for 80% of the attention your content receives!
  • finds the influencers for your brand
  • benchmarks your competition
  • tracks your engagement points
  • top posts widget for blogs, meaning your most popular content is always easily available

Finally, PostRank combined with Google Reader gives you the ability to score, filter, and track the performance of your RSS feeds.

Google Offers

Set to compete with group offer site Groupon, Google Offers will give companies the opportunity to get a specific offer out to potential customers who fit their designated demographic. These customers buy directly from Google in advance, meaning that the company presenting the offer receives a one-off payment from Google once the payments have gone through.

The staff at Google Offers will be advertising specialists who will be able to help with ad and offer creation, meaning that the company creating the offer will have support right the way through the process.

Authorship Markup

rel="author" is a form of content markup which enables you to tag content to highlight its author. This allows Google to distribute appropriate weight based on who the author is and how popular they are. The authorship markup also links to the author's Google+ profile, so that a list of all the content they have written is available from their profile. Pete Wailes summarised the process as "integrating Google+ and the authorship of the net".
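A minimal sketch of the markup (the profile URL below is a placeholder – use your own Google+ profile address) is simply an ordinary link to the author's profile with rel="author" added; the profile then needs to link back to the sites the author writes for:

<!-- On the article page: link to the author's Google+ profile (placeholder URL) -->
<a href="https://plus.google.com/YOUR_PROFILE_ID" rel="author">About the author</a>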

Facebook EdgeRank

Finally, the whole world doesn't revolve around Google, although they would like to think it does. Facebook have been pretty busy themselves. They recently announced their partnership with Skype, which will allow video calls from within the site. There has also been a lot of talk about EdgeRank: what it is and how it works. Basically, it's what Facebook uses to ensure that what appears in the News Feed is relevant to each user.

If the News Feed wasn't filtered it would be completely unmanageable, so Facebook created a formula in which every interaction, or 'edge', is scored and fed into the feed ranking (a rough sketch of the commonly quoted formula follows the list below). The components which make up each edge's score are:

  • Affinity Score – between the user and the status creator: how much interaction is there between you, how often do you visit each other's profiles, comment on each other's walls, etc.
  • Type of Edge – Different types of activity have a different amount of weight, so a status update may be worth more than merely pressing a like button
  • Time – The older the edge the less important it is
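Facebook have never published the exact formula, but the version commonly described at the time multiplies those three components for every edge (every interaction attached to a story) and adds the results together – something like:

EdgeRank ≈ Σ (affinity score × edge weight × time decay), summed over all edges attached to the story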
What it comes down to is that News Feed objects are more likely to show for people you interact with regularly. This means that companies need to be actually interacting with their followers and producing relevant content; getting people to 'Like' their page and hammering out sales blurb isn't going to be enough.
Join the discussion?
In summary, social is the next big thing, and SEO is becoming more influenced by what is happening in the social landscape. It is not enough to have the right keyphrases and content any more. As SEOs we have to become more socially minded, truly understand the direction the internet is taking, and embrace it! Companies need to take the time to answer customers' queries, respond to their questions and encourage discussion, not just set up Twitter and Facebook pages as token gestures. It's time to 'Join The Discussion'.

“My, but we’ve come a long way”, we’ll say on the day when Google’s list of links finally disappears. And that day will come sooner than many think.

Over the past eight or so years that I've been working in the search industry, I've seen a lot of changes. When I started, Google News & Froogle (what was to become the Shopping search interface) had only recently launched, Google's entire index was less than 6 billion pages, there was no Gmail, no mobile search, no YouTube and no Facebook, Bing was MSN Search powered by LookSmart & Inktomi, and Yahoo! was powered by Google's technology…

More interesting, though, has been the lack of innovation in results UI. Oh sure, we've got much richer results now than we've ever had before, and the underlying technology is far in advance of what it was then, but in terms of how we actually deliver results, I'm not so sure much has changed.

A Future Interface

Let me clarify. Based on some recent comments by people at both Google and Microsoft with regards to answering search queries, the interfaces of the future clearly aren't going to look like they do now. Instead, they're going to focus far more on actually answering the user's question. We've seen the start of this with Google's recipe search and Bing's travel search products.

However, these are just the beginnings of a greater shift in how we interact with the great database that is the Internet. For a more complete understanding, we, rather strangely, have to turn to the world of TV game shows.

Search? It’s Elementary My Dear Watson

Earlier this year, Watson, a supercomputer built by IBM, trounced the two greatest human Jeopardy! players at their own game. Much like a modern web search engine, Watson runs thousands of algorithms simultaneously to actually calculate the correct answer to a question. Now, this is fine where there is an actual answer (questions like 'what is the', 'in what year did', 'where can you', etc.), but for queries where a user decision is required, we need to look beyond this.

At this point, we get into the idea of a twin-structured search engine. In the first part, it'd simply attempt to answer a question presented to it. We can already see this done: ask an engine what the time is in a certain place, what a cinema is showing today, or for the answer to a calculation. It's simply an extension (albeit a huge one) of technology that's already in place.

In this particular area, SEO as we know it will die. Google will simply parse the question and deliver the answer. No links involved.

The second area though, where the user needs to decide based on information, is quite different. This is where the semantic web truly comes into its own.

Second Site

The semantic web is a fairly old idea, the crux of which is that one day, all the data on the web will be understandable by machines. To kick-start this, Google, Bing and Yahoo! recently announced the launch of schema.org, a protocol similar to XML sitemaps (but with far broader scope) in that it aims to get the entire web marked up in a way that will facilitate this.

In this new web, a search engine would be able to grab any piece of data from any website, understand it, and then use it to produce better answers for the user. So if I were to type in 'best small family car', my results page would show me various small family cars, ratings by various associations, new & used prices, ancillary information (videos, image galleries, etc.), and links to places to go to buy one.
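To make that concrete, here is a rough sketch of schema.org microdata on a product page (the product name and figures are invented purely for illustration); this is the kind of machine-readable labelling that would let an engine assemble such an answer:

<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example Small Family Car</span> <!-- hypothetical product -->
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.5</span> out of
    <span itemprop="bestRating">5</span> from
    <span itemprop="reviewCount">120</span> reviews
  </div>
</div>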

This offers an exciting possibility for consumers – instant, well presented information on any topic, with the option to go out and view the original source information, with greater expansion on the subject if required. Think of it like an uber-Wikipedia. For a live example of something like this working, take a look at this results page for ‘yoga poses’ in Bing.

Welcome to the Jungle

Now, for the record, I don’t know what Microsoft or Google’s intentions are. But it’s increasingly clear that if they wanted, this is a direction that they could move in. With their increasingly titanic data stores, they’re in an amazing position to completely transform how we interact with the world’s information. For now though, webmasters need to consider three things:

  • Marking up your data probably won’t help your rankings in any particular area at the moment
  • Not marking up your data almost certainly will stop you ranking in different forms of search interface in the future
  • The websites that act now will, as always, be better placed when change comes along

So do you need to worry about getting your data marked up today? No, but have it in the back of your mind, and make sure you do it sooner rather than later.


We speak to a lot of clients who don't realise how important it is for their site navigation (commonly referred to as internal links or information architecture) to be well considered, so that the right pages get indexed easily and regularly by the search engine spiders. Connected to the site architecture is the preference that no single page contains more than 100 links; this keeps the quality score assigned to each link at a respectable level and helps the spiders move through the site properly.

Crawl Priority

To start, it helps to understand how the spiders prioritise the pages, and then crawl the site.

Spiders will visit popular pages more often. Popular pages are defined by the number of back-links, and the site architecture should correlate with this. For example:

  • Your homepage, and chosen landing pages, should be the most popular with the most back-links
  • First and second level category pages should be fairly popular, but with fewer back-links than the homepage
  • At the bottom of the priority are the deepest pages: news pages, product pages, service price lists, etc.

The spiders will enter the site via a landing page, which doesn't need to be the homepage; they will then follow links through each page, looking to index the whole site. They don't like being sent in circles and they don't like feeling lost in too many links, so it's important that your site architecture makes it as easy as possible for the spiders to do their job, whilst getting all the pages which need indexing, indexed. Ideally you want the spiders to be able to index everything within three clicks of arriving on the site, regardless of whether that is your homepage or your deepest category page.

XML Site Maps

XML site maps are seen as the quick fix for architecture issues, and that is exactly what they are: a quick fix. They do not resolve problems in the site architecture and internal navigation; they merely hide the problems so that you are unaware of them.

In an ideal world, you would not add an XML site map until you know the website architecture is sound and secure and, most importantly, indexing on its own. Below are some basic architecture tips to get you started.
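For reference, an XML site map is nothing more than a list of your URLs in a standard format – a minimal example (with an invented URL and values) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.co.uk/category/product-page</loc>
    <lastmod>2011-07-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>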

Keep Architecture Flat

You want to keep your architecture as flat and easy to navigate as possible, whilst retaining the three-click rule (if a spider lands on one of your deeper pages, can it reach the other pages within three links?).

For a brand new website, the following structure is a common one, with 100 links per page being the absolute maximum you should have on any page:

At the top: Homepage – no more than 100 links
First level: Categories – no more than 100 pages (each page has no more than 100 links)
Second level: Sub-categories – no more than 10,000 pages (each page has no more than 100 links)
At the bottom: Detail/product pages – no more than 1,000,000 pages

Indexing and rankings are determined by how much authority each page has; the higher the domain authority of your site, the more links you can realistically get away with including on each page. As a rough guide, if your website already holds some domain authority (DA) you can increase the links on each page as follows:

DA 7-10 = 250 links
DA 5-7 = 175 links
DA 3-5 = 125 links
DA 0-3 = 100 links

So, the smaller the number of links the spiders have to follow to index the whole site, the happier they are and the more weight each page will hold.

Faceted Navigation

This is a common and useful aspect of ecommerce sites, which allows visitors to pick the facets of a product which are important to them. For example, you could pick the category T-Shirts, the colour black and the size medium, and the results you are shown then correspond directly with what you specifically want. In essence the website has ignored anything which doesn't match the facets you have chosen.

Setting up faceted navigation can be tricky, and you need to keep in mind that the primary facet pages won't rank; you want the deeper facet pages to rank, as these are the ones that will help the spiders discover all of the product pages.

When setting up faceted navigation, some of the things to keep in mind are:

URL

You must have a unique URL for each facet level. The URLs should be clear, not complicated and hard to follow:

Clear URL: www.tshirtdomain.co.uk/tshirts/black/medium
Unclear URL: www.tshirtdomain.co.uk/all/tshirts/all/black/all/medium

You also want to ensure that whatever route somebody takes to reach a given facet level, the same URL is shown. For example:

If somebody clicks on T-Shirts, then Medium, then Black, the URL they end up on should still be www.tshirtdomain.co.uk/tshirts/black/medium and not www.tshirtdomain.co.uk/tshirts/medium/black, which would create unnecessary duplicate content issues!
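One simple way to guarantee this – a sketch only, with an invented fixed facet order, rather than code for any particular platform – is to build the URL from the chosen facets sorted into a fixed order, whatever order the visitor clicked them in:

// Build one canonical facet URL regardless of the order the facets were picked in.
// The fixed ordering (category, then colour, then size) is just an example.
function facetUrl(facets) {
  var order = ['category', 'colour', 'size'];
  var parts = [];
  for (var i = 0; i < order.length; i++) {
    if (facets[order[i]]) {
      parts.push(facets[order[i]]);
    }
  }
  return 'http://www.tshirtdomain.co.uk/' + parts.join('/');
}

facetUrl({ size: 'medium', category: 'tshirts', colour: 'black' });
// -> "http://www.tshirtdomain.co.uk/tshirts/black/medium", whichever facet was clicked first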

Adding & Removing Facets

You should make it easy for your customers to add or remove additional facets as they see fit.

As they add facets to their search these should be displayed as follows so that any or all facets can be removed by the user:

Tshirts [remove]
black [remove]
medium [remove]

The facet options themselves can be automatically generated from the results' metadata, which makes it easy for you to display the number of results within each facet, for example:

Blue [35]
Green [23]
Yellow [1]

No Index

Any pages which could be considered duplicate content should be no-indexed; the spiders will still visit these pages but they won't index them. To keep a page out of the index you want to add some code to the page as follows:

<meta name="robots" content="noindex"> – This tells the search engines not to index the page
<link rel="canonical" href="http://domainname.co.uk/tshirts/black"> – This points the spiders back to the correct (canonical) version of the page.

Filtering & Pagination

Another common aspect of ecommerce sites is filtering results. This is where you can choose a filter which will sort the products in a certain way, for example only showing 10 items per page (creating pagination or multiple pages), or showing lowest priced items first.

The ideal way to deal with pagination in category results is to program the page to show all results, rather than splitting the results across page 1, page 2, etc.

Once the main category page has been created you can then use JavaScript to create the pagination. Search engine spiders don't follow JavaScript, so you don't risk duplicate content from having multiple pages under each category, but all of the products are still indexed.
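As a very rough sketch of that approach (assuming the full product list sits in a container with the id 'products' – an invented detail), all the items are present in the HTML for the spiders, and JavaScript simply hides everything outside the currently selected page for visitors:

// All products are output in the HTML, so spiders can index every one of them.
// JavaScript then hides all but the requested "page" of ten items for visitors.
var PER_PAGE = 10;
function showPage(page) {
  var items = document.getElementById('products').children; // assumes <ul id="products">
  for (var i = 0; i < items.length; i++) {
    var onPage = i >= (page - 1) * PER_PAGE && i < page * PER_PAGE;
    items[i].style.display = onPage ? '' : 'none';
  }
}
showPage(1); // show the first ten products when the page loads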

Plan, Plan, and Plan Again

Don't under-value the benefit of properly planning your website. Most of our examples have referred to ecommerce sites, but the same principle applies to brochure sites. Plan to succeed and your website will be a spider's navigational dream, and you will be rewarded with good search results and no duplicate content issues.

In summary, the number one rule to keep in mind when you are planning your navigation is that you want the spiders to follow as few links as possible, whilst still allowing each and every product page to be indexed.


Around a year ago, Google introduced their new database architecture, Caffeine. This changeover was done for several reasons, the first of which was to allow Google to continue to index all of the web in years to come, and the second of which was revealed last week: Instant.

The main issue with Instant, from Google's perspective, is that it generates between 7 and 10 times the volume of searches per second compared with the previous version, as Google loads search result pages constantly as people type. With the expected rollout of this into browser bar-based searches (like the Chrome bar, the Google Toolbar, etc.), it will almost certainly expand steadily from only appearing for logged-in users to being the default state for Google.

Ten Blue Links?

So, the main upshot of the changeover to the Caffeine system is that it allows for vast amounts of real-time data to be added to the index almost as fast as it’s created. But what does this mean in terms of rankings?

Well, in short, it allows fresh data to be displayed to users much more rapidly. As a result, we’ve seen greater emphasis on results featuring video, location-based services, news items, personalised results and the like over the last year. This has had the effect of changing the strategy for SEO in certain industries, as it has created new avenues for search marketers to reach their intended audiences.

Instant Coffee Anyone?

A lot has been written about Instant over the last couple of days, some of it accurate, some of it less so. To save time, I've compiled some basic takeaway points as to the nature of Instant, what it brings to the table, and how it affects SEO and PPC.

  • Does Google Instant kill SEO? No, but it does change keyword research slightly, as marketers need to pay greater attention to the suggested keyword searches
  • Negative keywords need closer attention in PPC, as a search for “U2 new” will return results for “U2 new album” even though a user might be typing the full query “U2 New Zealand tour dates”
  • PPC ad impressions will only count when:
    • the user clicks anywhere on the page after beginning to type a search query
    • the user chooses one of the predicted queries from Google Instant
    • the user stops typing and search results are shown for at least three seconds
  • The nuts and bolts of how SEO is conducted on-site and in linkbuilding hasn’t changed
  • The nuts and bolts of how PPC is conducted hasn’t changed either, although it is now pretty much the only good way of getting impression data for search volume numbers for keywords. Keyword tools will soon be relegated to being only useful for generating keyword ideas, not for estimating volume

To gain top listings in the search engines it helps if you understand how the search engines work with your website and how they determine which websites get to appear at the top.

Search engines use robots (also called spiders or crawlers), which are programs that visit websites by following links from one page to another. When a robot visits a page it will take a copy of that page and put it in the search engine's own database (this process is known as caching a page). Once the page is in the database, the search engine applies its algorithm to tag or index it so that its position in the SERPs for any given search term can be determined quickly.

Website pages need to be indexed regularly in order to stand any chance of performing. Each indexed page on a website can appear in the SERPs, so every indexed page should be considered a potential landing page. You can easily block any page that you do not want to get indexed and concentrate efforts on optimising important landing pages.

The algorithms used by the search engines take into consideration many different factors (Google has about 200 factors) when they evaluate a page and decide where it should appear in their results pages. SEO considers each page on a website and applies the best fit to get the best ranking for that page amongst its competition. However, competition for different phrases is not equal and so the same thing that is successful for one page is not necessarily effective for another page.

Search engines want to provide the best and most relevant results for their users, so our efforts are concentrated on meeting the demands of the algorithms through white hat methods. With Google holding up to 85% of the UK search market, we tend to favour Google's algorithm, which in turn favours good quality content combined with quality, relevant inbound links. These clean strategies also feed Yahoo!'s and MSN's algorithms.

For competitive market places it is harder to get top rankings for generic product and service related search terms because other websites have fought hard to get to the top and will defend their position against newcomers. With the right strategy, over time, top listings can be achieved but we would argue that the value is in the conversions achieved and conversions can come from a much wider source than the immediately obvious generic phrases.

Approximately 80% of searches performed every day are unique, and many use 3, 4 or 5 word phrases. These are quite specific phrases and so they tend to convert well – this is commonly referred to as the “long tail” of search, and its value should not be underestimated.

Our approach to optimising our clients' websites is to develop a two-pronged strategy that aims for the top competitive search engine listings and also top listings for a wide variety of long tail search phrases. We know that the bottom line for our clients is to make more money, so that is what we help them do.

If you are interested to know more about how the search engines are working with your own website, you can conduct your own mini website SEO audit. Follow the simple steps and you will find the answers to the questions below (some quick ways to check each one are sketched after the questions); you can also do the same with your competitor sites and see what you can learn to improve your own site.

Is my site indexed by the search engines?
How many links do I have pointing to my site?
Where do I rank in the search engine results pages (SERPs)?
What do my meta tags look like?
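As a rough guide (using the search operators Google supported at the time of writing), each question can be answered with a quick check like these:

site:www.yourdomain.co.uk – shows roughly how many of your pages Google has indexed
link:www.yourdomain.co.uk – shows a sample of the links Google knows about pointing to your site
Search for your target phrase and note the position your site appears at (setting results to 100 per page makes counting easier)
View the page source in your browser and look for the <title> and <meta name="description"> tags on each page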

You can do SEO on your website yourself and we encourage clients to have an understanding of how we work but doing your own optimisation is much like fixing your own roof – you can buy the tools, read a book and have a go… it will cost you time and energy and it may get the result you want, or you can leave it to the professionals and go do something else that is a better use of your own time.


In search engine results pages (SERPs), a single listing for a popular phrase will bring you traffic if it ranks high enough. However, having a second indented listing just under that main listing doesn't just stand to bring you 100% more traffic; it could bring you as much as 200% more. It's all to do with the way that people view the search results, fixate on certain points and click through to your website.

Studies of indented listings using heat maps show that where the primary listing is popular, the indented listing is even more popular: it gets more fixation points and more clicks than the primary listing does. You don't have to have a number 1 primary listing in order to get an indented listing; you can get it at positions 8 & 9, positions 4 & 5 or positions 23 & 24 – it doesn't make a difference where your primary listing appears. You want the highest possible listing, of course you do, but my point is that you don't have to be number one to achieve an indented listing.

An added bonus is that double listings can also help to increase your ranking positions and push you up the search engines results pages.

How do you get a double listing?

The primary listing is the page on your site that is deemed most relevant to the search phrase, and the second listing is the page that is deemed second most relevant – the key is to find out which is the second best page for that phrase. Double listings only show when both pages are ranked on the same SERPs page, so – SEO TIP – change your preferences on Google to show 100 results per page (the default is 10), use the 'find' function to locate your website and see if you have a double listing showing. If you do, then the second URL is the one you need to work on to strengthen the optimisation for the search phrase you are using.
