
Strategy Internet Marketing are seeking to build relationships with talented journalism and creative writing students.

We are currently looking for volunteer bloggers to write interesting and engaging blogs on a wide range of different topics. This is an excellent opportunity for any writers who are looking to gain some valuable industry experience and get published online.


Cloaked in secrecy, hiding in the offices of many internet marketing agencies, reside sneaky Bowling Ninjas who, on the quiet, possess legendary skills in the art of Ten Pin Bowling. Sadly, not many of them have taken up residency at Strategy Internet Marketing. Still, not liking to admit to possessing the collective bowling prowess of an out of date tin of Spam, the team decided a mission was needed. A social event was planned to find out answers. Most specifically, one answer. An answer to the ultimate question:

Who at SIM could lob a lump of plastic reactive resin down a bowling alley and cause the most devastating carnage to the neat formation of pins so beautifully arranged at its zenith?


Over recent months, even years, it has become more and more evident that search as we know it is changing. The growth of social media and of location information (via GPS and IP address) means that the search engines can tailor results to the individual, so people are starting to see completely different search results from the person sitting next to them at work. In the past few weeks, both Google and Facebook have announced new social features, which only reinforces our belief that a new age of search is coming.

Google Goes Social

In the past few weeks we have seen a big change in Google. It started with a change to the look of search results and the appearance of a black bar across the top. The filter options down the side have been given some colour, and even when you are signed out with personalization turned off, information such as location is logged and search results are adjusted accordingly.

These changes are all part of Google's latest, and most promising, attempt at social media. Google+ is still in its early days, with limited user capacity, but the early sounds are positive, with people really liking the interface and usability. So far this is looking to be Google's most successful attempt at social, and one that is likely to stick.

There are a number of aspects of Google+ which show that Google have been paying attention to how people use other social media and what they've been unhappy with. One of the most widely hailed successes so far has been 'Circles'. Google have realised that people don't necessarily want to share every part of their life with everyone they know, so when you add a new person you are asked to categorise them into a circle. With one circle for family and another for colleagues, you can share something personal with your family and your colleagues won't see the update. Yes, Facebook have lists, and yes, if you go through the effort of clicking through you can choose a list to view your status update, but it's clunky, a lot of effort, and a lot of users just don't know how to do it! In Google+ Circles the process is quick and simple, and everyone likes quick and simple! If you see something on any Google site that you want to share with one of your circles, you just need to look to the new black toolbar and voila, it is shared.

The next feature of Google+ is 'Sparks', which searches the internet and pulls in content it believes you will be interested in, based on what you add to the search bar. The results are a little thin at the moment, but we believe that as more users join Google+ and Google collects more information about different people and their interests, the results will become more relevant. There is also a 'Featured Interests' section where you can see what other people are using Sparks for.

'Huddle' is a group messaging tool which works across Android, iPhone and SMS. It uses your circles, meaning a group of people can be part of the same instant conversation as it unfolds. Closely related are 'Hangouts'; the best way to describe these is as the new age of forums. You can log on and make yourself available for a video chat in a hangout; friends will see you are available and can come and go as they please, with up to ten people at a time. This eliminates some of the issues with Skype, where people can't or don't want to talk: everyone in a hangout has chosen to be there.

Once a user has activated their Google+ profile, their standard Google profile will be removed so that Google+ becomes the central point for all things Google. There are also some pretty nice features associated with the profile: those with the Android app can auto-upload photos from their phone, and the profile will also show all of the +1 content the user has chosen.

What is +1?

Google +1 is Google's version of a 'Like' button. When you are logged into Google, every search result will have a +1 button next to its title. By clicking on this you are doing several things:

  • bookmarking the site for future use (available in your Google+ profile)
  • telling friends that you like something, as your profile image will appear in their search results as having +1'd the site
  • feeding information to Google and the owner of that site about who likes it and how they use it!


The +1 button can also be added to websites, so users can +1 a page whilst on the site itself. The advantage to website owners of adding the button is that they can access an activity report in Webmaster Tools which shows how many times pages have been +1'd from site links and search links. They are also able to access an audience report which provides demographic information.
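
For reference, the embed code Google were distributing at launch looked roughly like this (treat it as illustrative rather than definitive; the href is just a placeholder for whichever page you want the button tied to):

<!-- Illustrative launch-era snippet; the href is a placeholder -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<g:plusone size="medium" href="http://www.example.com/"></g:plusone>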

Social Plugin Tracking in Google Analytics

Google haven't stopped at Google+ and their +1 button, though! They have also introduced social plugin tracking for Google Analytics, which will track all social media activity, from Tweets to Facebook Shares to Google +1 clicks. By adding a bit of JavaScript to your site you can access a Social Engagement Report which breaks down site behaviour for all social visitors. This means you can see whether people who use the social interaction buttons spend longer on the site, and which pages they visit and share.
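
As a rough sketch of how that JavaScript hangs together: +1 clicks were reported automatically once Analytics was on the page, but Facebook and Twitter activity had to be pushed in via each network's own event callbacks. This assumes the standard asynchronous ga.js snippet (_gaq) and the Facebook and Twitter widget libraries are already loaded:

<script type="text/javascript">
// Sketch only: assumes ga.js (_gaq) plus the Facebook JS SDK and
// Twitter widgets.js are already on the page.

// Facebook's JS SDK fires 'edge.create' when someone Likes the page
FB.Event.subscribe('edge.create', function (targetUrl) {
    _gaq.push(['_trackSocial', 'facebook', 'like', targetUrl]);
});

// Twitter's widgets.js fires a 'tweet' event when the Tweet Button is used
twttr.events.bind('tweet', function (event) {
    if (event) {
        _gaq.push(['_trackSocial', 'twitter', 'tweet']);
    }
});
</script>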

This gives website owners new things to consider. They may have pages which are 'hot' for a few days or weeks because they are topical, or because an e-newsletter has gone out, but there may also be pages which are consistently 'liked' over a long period of time, giving owners the opportunity to optimise the content which is most valuable to their target audience.

There are some pros and cons to the tracking, though: Analytics will only report on +1 interactions which occur on your website domain, whereas the activity report in Webmaster Tools will show all +1 interactions regardless of where on the web they happened. Analytics is also updated more frequently than Webmaster Tools, which means the two will rarely tally up!

Google Acquired PostRank?

That's right, Google have acquired the company which, to date, has the best methods of aggregating social engagement information. As well as all the new information Google will gather from Google+ and +1, they will now also gain all of the social data PostRank have collected over time. The services offered by PostRank are impressive and definitely worth considering. Some of the highlights are:

  • real-time tracking of where your visitors have come from, how they reached you, and what they've done on your site
  • measurement of actual user activity, which translates into the relevance and influence of your site – off-site engagement can account for 80% of the attention your content receives!
  • identification of the influencers for your brand
  • benchmarking against your competition
  • tracking of your engagement points
  • a top posts widget for blogs, meaning your most popular content is always easily available

Finally, PostRank combined with Google Reader gives you the ability to score, filter, and track the performance of your RSS feeds.

Google Offers

Set to compete with flailing group-offer site Groupon, Google Offers will give companies the opportunity to get a specific offer out to potential customers who fit their designated demographic. These customers will buy directly from Google in advance, meaning that the company presenting the offer will receive a one-off payment from Google once the payments have gone through.

The staff at Google Offers will be advertising specialists able to help with ad and offer creation, meaning that the company behind the offer will have support right the way through the process.

Authorship Markup

Rel author is a form of content markup which enables you to tag content to highlight its author. This allows Google to distribute appropriate weight based on who the author is and how popular they are. The authorship markup will also link to the author's Google+ profile, so that a list of all the content they have written is available from their profile. Pete Wailes summarised the process as "integrating Google+ and the authorship of the net".
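
At its simplest, the markup is just a link from the content to the author's Google+ profile (the profile URL below is a placeholder, and the profile needs to link back to the site for the association to be verified):

<!-- Placeholder profile URL; the Google+ profile must link back to the site -->
<a rel="author" href="https://plus.google.com/your-profile-id">Written by A. N. Author</a>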

Facebook EdgeRank

Finally, the whole world doesn't revolve around Google, although they would like to think it does, and Facebook have been pretty busy themselves. They recently announced a partnership with Skype which will allow video calls from within the site. There has also been a lot of talk about EdgeRank: what it is and how it works. Basically, it's what Facebook uses to ensure that what appears in the News Feed is relevant to each user.

If the News Feed wasn't filtered it would be completely unmanageable, so Facebook created a formula. Every interaction with an object is an 'edge', and each edge is scored on three components which are then fed into the formula:

  • Affinity Score – between the user and the creator: how much interaction is there between you, how often do you visit each other's profiles, comment on each other's walls, etc.
  • Type of Edge – different types of activity carry different weight, so a status update may be worth more than merely pressing a Like button
  • Time – the older the edge, the less important it is
What it comes down to is that News Feed objects are more likely to show for people you interact with regularly. This means that companies need to be genuinely interacting with their followers and producing relevant content; getting people to 'Like' their page and then hammering out sales blurb isn't going to be enough.
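
Facebook have never published the actual formula, but as a back-of-an-envelope sketch of how the three components might combine (every weight and the decay curve below are invented purely for illustration):

<?php
// Hypothetical EdgeRank-style score: the sum of affinity x weight x decay
// over every edge (interaction) attached to a News Feed object.
function edgeRankScore(array $edges) {
    $score = 0;
    foreach ($edges as $edge) {
        $decay = 1 / (1 + $edge['ageInDays']); // older edges count for less
        $score += $edge['affinity'] * $edge['weight'] * $decay;
    }
    return $score;
}

// A fresh status update from a close friend outweighs a week-old Like
// from someone you barely interact with
echo edgeRankScore(array(
    array('affinity' => 0.9, 'weight' => 2, 'ageInDays' => 1),
    array('affinity' => 0.1, 'weight' => 1, 'ageInDays' => 7),
));
?>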

Join The Discussion?

In summary, social is the next big thing, and SEO is becoming ever more influenced by what is happening in the social landscape. It is not enough to have the right key-phrases and content anymore. As SEOs we have to become more socially minded, truly understand the direction the internet is taking, and embrace it! Companies need to take the time to answer customers' queries, respond to their questions, and encourage discussion, not just set up Twitter and Facebook pages as token gestures. It's time to 'Join The Discussion'.

Just a quick post for all you developers out there – I've quickly hacked together a function for getting the number of shares of a URL on Google+. I can't be the only one out there who needs this, so I thought I'd give back to the community with it. This implementation is in PHP, but it shouldn't be too hard to understand and port.


<?php
function getPlusOneCount($url) {
    $ch = curl_init();

    $encUrl = "https://plusone.google.com/u/0/_/+1/fastbutton?url=".urlencode($url)."&count=true";

    $options = array(
        CURLOPT_RETURNTRANSFER => true,  // return the page as a string
        CURLOPT_HEADER => false,         // don't return headers
        CURLOPT_FOLLOWLOCATION => true,  // follow redirects
        CURLOPT_ENCODING => "",          // handle all encodings
        CURLOPT_USERAGENT => 'spider',   // who am i
        CURLOPT_AUTOREFERER => true,     // set referer on redirect
        CURLOPT_CONNECTTIMEOUT => 5,     // timeout on connect
        CURLOPT_TIMEOUT => 10,           // timeout on response
        CURLOPT_MAXREDIRS => 3,          // stop after 3 redirects
        CURLOPT_URL => $encUrl,
        CURLOPT_SSL_VERIFYHOST => 0,
        CURLOPT_SSL_VERIFYPEER => false,
    );

    curl_setopt_array($ch, $options);

    $content = curl_exec($ch);
    $err = curl_errno($ch);
    $errmsg = curl_error($ch);

    curl_close($ch);

    if ($err !== 0) {
        print_r($errmsg);
        return false;
    }

    $dom = new DOMDocument;
    $dom->preserveWhiteSpace = false;
    @$dom->loadHTML($content);
    $domxpath = new DOMXPath($dom);

    // The share count sits in a div with the id 'aggregateCount'
    $filtered = $domxpath->query("//div[@id='aggregateCount']");
    if ($filtered->length === 0) {
        return false; // markup changed or the page didn't load as expected
    }
    return $filtered->item(0)->nodeValue;
}
?>
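
Usage is then just a case of passing in the URL you want the count for, e.g.:

echo getPlusOneCount('http://www.example.com/');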

Enjoy!


“My, but we’ve come a long way”, we’ll say on the day when Google’s list of links finally disappears. And that day will come sooner than many think.

Over the past eight or so years that I've been working in the search industry, I've seen a lot of changes. When I started, Google News & Froogle (what was to become the Shopping search interface) had only recently launched, Google's entire index was less than 6 billion pages, there was no Gmail, no mobile search, no YouTube, no Facebook, Bing was still MSN Search (powered by Looksmart & Inktomi), and Yahoo! was powered by Google's technology…

More interesting, though, has been the lack of innovation in results UI. Oh sure, we've got much richer results now than we've ever had before, and the underlying technology is far in advance of what it was then, but in terms of how we actually deliver results, I'm not so sure much has changed at all.

A Future Interface

Let me clarify. Based on some recent comments by people at both Google and Microsoft with regard to answering search queries, the interfaces of the future clearly aren't going to look like they do now. Instead, they're going to focus far more on actually answering the user's question. We've seen the start of this with Google's recipe search and Bing's travel search products.

However, these are just the beginnings of a greater shift in how we interact with the great database that is the Internet. For a more complete understanding we, rather strangely, have to turn to the world of TV game shows.

Search? It's Elementary, My Dear Watson

Earlier this year Watson, a supercomputer built by IBM, trounced the two greatest human Jeopardy! players at their own game. Much like a modern web search engine, Watson runs thousands of algorithms simultaneously to actually calculate the correct answer to a question. Now, this is fine where there is an actual answer (questions like 'what is the', 'in what year did', 'where can you' etc.), but for queries where a user decision is required, we need to look beyond this.

At this point we get into the idea of a twin-structured search engine. The first part would simply attempt to answer a question presented to it. We can already see this in action if you ask an engine what the time is in a certain place, what a cinema is showing today, or for the answer to a calculation. It's simply an extension (albeit a huge one) of technology that's already in place.

In this particular area, SEO as we know it will die. Google will simply parse the question and deliver the answer. No links involved.

The second area though, where the user needs to decide based on information, is quite different. This is where the semantic web truly comes into its own.

Second Site

The semantic web is a fairly old idea, the crux of which is that one day all the data on the web will be understandable by machines. To kick-start this, Google, Bing and Yahoo! recently announced the launch of schema.org, a protocol similar to XML sitemaps (but with far broader scope), in that it aims to get the entire web marked up in a way that will facilitate this.

In this new web, a search engine would be able to grab any piece of data from any website, understand it, and then use it to produce better answers for the user. So if I were to type in 'best small family car', my results page would show me various small family cars, ratings by various associations, new & used prices, ancillary information (videos, image galleries etc), and links to places to go to buy one.
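
To make that concrete, here is a rough example of what schema.org microdata looks like on a product page (the property names come from the schema.org Product type; the values are obviously made up):

<!-- Sketch only: property names from schema.org/Product, values invented -->
<div itemscope itemtype="http://schema.org/Product">
    <span itemprop="name">Example Small Family Car</span>
    <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
        Rated <span itemprop="ratingValue">4.2</span>/5
        from <span itemprop="reviewCount">87</span> reviews
    </div>
    <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        New from <span itemprop="price">£8,995</span>
    </div>
</div>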

This offers an exciting possibility for consumers: instant, well-presented information on any topic, with the option to go out and view the original source material for greater expansion on the subject if required. Think of it like an uber-Wikipedia. For a live example of something like this working, take a look at the results page for 'yoga poses' in Bing.

Welcome to the Jungle

Now, for the record, I don't know what Microsoft's or Google's intentions are. But it's increasingly clear that this is a direction they could move in if they wanted. With their increasingly titanic data stores, they're in an amazing position to completely transform how we interact with the world's information. For now though, webmasters need to consider three things:

  • Marking up your data probably won’t help your rankings in any particular area at the moment
  • Not marking up your data almost certainly will stop you ranking in different forms of search interface in the future
  • The websites that act now will, as always, be better placed when change comes along

So do you need to worry about getting your data marked up today? No, but have it in the back of your mind, and make sure you do it sooner rather than later.


We speak to a lot of clients who don't realise how important it is for their site navigation (commonly referred to as internal links or information architecture) to be well considered, so that the right pages get indexed easily and regularly by the search engine spiders. Connected to the site architecture is the preference that no one page contains more than 100 links; this keeps the quality score assigned to each link at a respectable level and helps the spiders move through the site properly.

Crawl Priority

To start, it helps to understand how the spiders prioritise pages and then crawl the site.

Spiders will visit popular pages more often. Popular pages are defined by the number of back-links, and the site architecture should correlate with this. For example:

  • Your homepage, and chosen landing pages, should be the most popular, with the most back-links
  • First and second level category pages should be fairly popular, but with fewer back-links than the homepage
  • At the bottom of the priority list are the deepest pages: news pages, product pages, service price lists etc

The spiders will enter the site via a landing page (this doesn't need to be the homepage) and will then follow links through each page, looking to index the whole site. They don't like being sent in circles and they don't like feeling lost in too many links, so it's important that your site architecture makes it as easy as possible for the spiders to do their job whilst getting all the pages which need indexing, indexed. Ideally you want the spiders to be able to index everything within three clicks of arriving on the site, regardless of whether that is your homepage or your deepest category page.

XML Site Maps

XML site maps are seen as the quick fix for architecture issues, and that is exactly what they are: a quick fix. They do not resolve problems in the site architecture and internal navigation; they merely hide the problems so that you are unaware of them.

In an ideal world, you would not add an XML site map until you know the website architecture is sound and secure and, most importantly, indexing on its own. Below are some basic architecture tips to get you started.

Keep Architecture Flat

You want to keep your architecture as flat and easy to navigate as possible, whilst retaining the three-click rule (if a spider lands on one of your deeper pages, can it reach all the other pages within three links?).

For a brand new website, the following structure is a common one, with 100 links per page being the absolute maximum you should have on each page:

At the top: Homepage – no more than 100 links
First level: Categories – no more than 100 pages (each with no more than 100 links)
Second level: Sub-categories – no more than 10,000 pages (100 categories × 100 links, each page again with no more than 100 links)
At the bottom: Detail/Product pages – no more than 1,000,000 (100 × 100 × 100)

Indexing and rankings are determined by how much authority each page has: the higher the domain authority of your site, the more links you can realistically get away with including on each page. As a rough guide, if your website already holds some domain authority (DA), you can increase the links on each page as follows:

DA 7-10 = 250 links
DA 5-7 = 175 links
DA 3-5 = 125 links
DA 0-3 = 100 links

So: the fewer links the spiders have to follow to index the whole site, the happier they are and the more weight each page will hold.

Faceted Navigation

This is a common and useful feature of ecommerce sites which allows shoppers to pick the facets of a product which are important to them. For example, you could pick the category T-Shirts, the colour Black and the size Medium, and the results you are shown then correspond directly with what you specifically want. In essence, the website has ignored anything which doesn't match the facets you have chosen.

Setting up faceted navigation can be tricky, and you need to keep in mind that the primary facet pages won't rank; you want the deeper facet pages to rank, as these are the ones that will help the spiders discover all of the product pages.

When setting up faceted navigation, some of the things to keep in mind are:

URL

You must have a unique URL for each facet level. The URLs should be clear, not complicated and hard to follow:

Clear URL: www.tshirtdomain.co.uk/tshirts/black/medium
Unclear URL: www.tshirtdomain.co.uk/all/tshirts/all/black/all/medium

You also want to ensure that whatever route somebody takes to reach a facet level, the same URL is shown. For example:

If somebody clicks on T-Shirts, then Medium, then Black, the URL they end up on should still be www.tshirtdomain.co.uk/tshirts/black/medium and not www.tshirtdomain.co.uk/tshirts/medium/black, which would create unnecessary duplicate content issues!
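
One way to enforce this is to build facet URLs server-side in a single fixed order, regardless of the order in which the user clicked. A minimal sketch in PHP (the facet names and domain are taken from the example above; the function itself is hypothetical):

<?php
// Always emit facets in the same canonical order: category, colour, size.
function canonicalFacetUrl(array $facets) {
    $order = array('category', 'colour', 'size');
    $parts = array();
    foreach ($order as $facet) {
        if (!empty($facets[$facet])) {
            $parts[] = rawurlencode($facets[$facet]);
        }
    }
    return 'http://www.tshirtdomain.co.uk/' . implode('/', $parts);
}

// Whether the user picked "Medium then Black" or "Black then Medium",
// both routes end up at /tshirts/black/medium
echo canonicalFacetUrl(array('size' => 'medium', 'colour' => 'black', 'category' => 'tshirts'));
?>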

Adding & Removing Facets

You should make it easy for your customers to add or remove additional facets as they see fit.

As they add facets to their search, these should be displayed so that any or all of them can be removed by the user:

Tshirts [remove]
black [remove]
medium [remove]

Facet options can also be generated automatically from the results metadata, making it easy for you to display the number of results within each facet, for example:

Blue [35]
Green [23]
Yellow [1]

No Index

Any pages which could be considered duplicate content should be noindexed; the spiders will still visit these pages but they won't index them. To keep a page out of the index, add code to the page as follows:

<meta name="robots" content="noindex"> – This will keep the page out of the index
<link rel="canonical" href="http://www.domainname.co.uk/tshirts/black"> – This will point the spiders back to the correct page.

Filtering & Pagination

Another common aspect of ecommerce sites is filtering results. This is where you choose a filter which sorts the products in a certain way, for example showing only 10 items per page (creating pagination, i.e. multiple pages), or showing the lowest-priced items first.

The ideal way to deal with pagination in category results is to programme the page to show all results, rather than serving each page of results as page 1, page 2, and so on.

Once the main category page has been created, you can then use JavaScript to create the pagination. Search engine spiders don't follow JavaScript, so you don't risk duplicate content from having multiple pages under each category, but all of the products are still indexed.
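
A minimal sketch of the idea: serve the category page with every product in one list, then layer the pagination on client-side (the element id and page size here are made up for the example):

<script type="text/javascript">
var PAGE_SIZE = 10;

function showPage(pageNumber) {
    // Spiders that ignore JavaScript still see, and index, the full list;
    // human visitors only see one page of products at a time.
    var products = document.getElementById('product-list').children;
    var start = (pageNumber - 1) * PAGE_SIZE;
    for (var i = 0; i < products.length; i++) {
        products[i].style.display =
            (i >= start && i < start + PAGE_SIZE) ? '' : 'none';
    }
}

showPage(1); // initial view
</script>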

Plan, Plan, and Plan Again

Don't undervalue the benefit of properly planning your website. Most of our examples have referred to ecommerce sites, but the same principle applies to brochure sites. Plan to succeed and your website will be a spider's navigational dream, and you will be rewarded with good search results and no duplicate content issues.

In summary, the number one rule to keep in mind when planning your navigation is that you want the spiders to crawl as few pages as possible, whilst still allowing each and every product page to be indexed.


Yes, in case you hadn't heard, after a two-year absence I'm going to be back speaking at SMX London Advanced. The session (copied from the agenda website) is:

Link Alchemy: Creative Ways Of Conjuring SEO Gold

Despite all the recent changes in search engine algorithms, links remain the single most important part of an effective search marketing campaign. And to successfully compete, you need to go beyond traditional link building techniques to create natural but scalable campaigns. What tools are available to analyse competitor links? What non-traditional channels, such as .edu links and retweets can be used? Our speakers show you how to reinvigorate your link building campaigns and take them to the next level.

Speakers:

I’ll also be co-moderating:

What’s New In Local & Mobile

According to Google, as many as 30% of all search queries have local intent. And according to IDC, more internet-capable mobile devices will be sold this year than computers. In short, local and mobile are both here and huge, and will continue to be an important part of many search marketers' activities. This session looks at new developments in local search, location services, mobile apps and ads.

  • Moderator: Greg Sterling, Founding Principal, Sterling Market Intelligence
  • Q&A Moderator: Me, Here

Speakers:

It's an honour to be speaking to the industry again after an extended break, and I look forward to seeing you all there!


People are always interested to know how many more clicks they are likely to receive if they appear at position 1 of the search engine results pages for their chosen key-phrases. There are many different studies and statistics available about organic click-through rates, many of which are contradictory. This post details the findings from some of the better-known studies and how much you can actually learn from them.

Study 1

Below is a chart detailing the data from the famous AOL data leak in 2006. Although old, this data is still often quoted as gospel.

[Chart: click-through rate by results position, from the 2006 AOL data leak]

Study 2

This study was conducted by Neil Walker, a UK-based SEO expert. Some blog posts suggest Walker's study is based on Webmaster Tools data across 2,700 keywords. Walker himself says the data comes from a study of Webmaster Tools in 2010, the AOL data of 2006 and an eye-tracking study conducted in 2004.

[Chart: organic click-through rate by position – Study 2]

Study 3

Another well-known study, conducted in 2010, was by Chitika, a data analytics company in the advertising business. For their study, they looked at a sample of traffic coming into their advertising network from Google and broke it down by Google results placement.

[Chart: traffic by Google results position – Study 3]

What can we actually learn from this?

Well, it is clear that if you are at position 1 in the search engine results pages, you are very likely to receive substantially more clicks. However, there are always exceptions to this rule.

A famous example:

For a long time, if you searched for 'mail' on Google, Gmail would come up at position 1 and Yahoo! Mail at position 2. Still, Yahoo received in excess of 50% of the click-throughs. Studies indicated that this was because people searching for 'mail' were looking to log in to their Yahoo mail account.

This example illustrates that if people are looking for something specific, they will not always click on position 1 if it doesn't seem to offer what they are looking for. Another example is Wikipedia: it is often displayed in high positions for a wide range of phrases, but won't always receive a high click-through rate because people aren't always looking for general information.

In summary, at position 1 of the search engine results pages you are extremely likely to receive the most clicks, but exactly how many more than the lower positions is impossible to say. However, the figures in the studies detailed above give a good indication of what to expect. Search engine results positions and click-through rates will always be dependent on high-quality SEO, your choice of key-phrases, and the area of business in which you operate.


Most companies with a website are probably guilty of claiming in their website and internet marketing activities that they are 'the best' in their field. However, this will be a risky claim to make from 1st March 2011, when the Advertising Standards Authority (ASA) extends the advertising rules around making such claims to include online advertising.
