
So, another year and another SearchLove. Firstly, a huge thanks to all the Distilled crew, but especially to Lynsey and Lauren, without whom SearchLove wouldn’t be half the conference it is.

So, with that aside, what have we learned from two days of brain-crushing content?

on mobile…

Mobile devices are here to stay. We’ve already passed a billion devices, and that’s set to more than double over the next five years. Put simply, if you don’t have a mobile strategy, you’re going to be out-manoeuvred by your competitors who do. Continue Reading



session 1 – david mihm: the need to know of local seo

30% of all searches are local on desktop, 50% on mobile.

Mobile SEO used to look much like anything else – 10 blue links. It has been changing since about 2009. We’re now seeing author markup, user reviews, local data etc.

The Venice update blew it open – before, about 31% of results were blended; now it’s way past 60%.

Ranking factors include business name, physical location, customer reviews, references and anchor text, citations etc… Continue Reading


SEO, over the time that I’ve been working on it, has changed drastically. Back in the mists of time, it was fairly easy to simply create a site, get the title, meta keywords and description tags right, have OK content and you’d rank. Nowadays it’s somewhat more complex.

There are various aspects that haven’t traditionally been considered part of SEO which have absolutely become part of it. Over a series of posts, I’m going to deconstruct each of these and look at what needs to be taken into account as part of it. Continue Reading


Around the middle of January, Google rolled out “Search Plus Your World” (hereon called SPYW), which means that logged-in users will get their organic search results augmented with socially shared content and markup, ostensibly from Google+. Danny Sullivan has already written two pieces about this – “Google’s Results Get More Personal” (http://searchengineland.com/googles-results-get-more-personal-with-search-plus-your-world-107285) and “Real-Life Examples of How Search Plus Pushes Google+ Over Relevancy” (http://searchengineland.com/examples-google-search-plus-drive-facebook-twitter-crazy-107554) – which cover the changes brilliantly, so I suggest reading those before carrying on. Continue Reading


Just a quick post for all you developers out there – I’ve quickly hacked together a function for getting the number of shares of a URL on Google+. I can’t be the only one out there who needs this, so I thought I’d give back to the community with it. This implementation is in PHP, but it shouldn’t be too hard to understand and port.

function get_plusones($url) {
    $ch = curl_init();

    $encUrl = "https://plusone.google.com/u/0/_/+1/fastbutton?url=" . urlencode($url) . "&count=true";

    $options = array(
        CURLOPT_RETURNTRANSFER => true, // return web page
        CURLOPT_HEADER => false, // don't return headers
        CURLOPT_FOLLOWLOCATION => true, // follow redirects
        CURLOPT_ENCODING => "", // handle all encodings
        CURLOPT_USERAGENT => 'spider', // who am i
        CURLOPT_AUTOREFERER => true, // set referer on redirect
        CURLOPT_CONNECTTIMEOUT => 5, // timeout on connect
        CURLOPT_TIMEOUT => 10, // timeout on response
        CURLOPT_MAXREDIRS => 3, // stop after 3 redirects
        CURLOPT_URL => $encUrl,
    );

    curl_setopt_array($ch, $options);

    $content = curl_exec($ch);
    $err = curl_errno($ch);
    $errmsg = curl_error($ch);
    curl_close($ch);

    if ($err != 0 || $errmsg != '') {
        return false; // the request failed
    } else {
        $dom = new DOMDocument;
        $dom->preserveWhiteSpace = false;
        @$dom->loadHTML($content); // @ suppresses warnings from imperfect markup

        // the share count lives in a div with id "aggregateCount"
        $domxpath = new DOMXPath($dom);
        $filtered = $domxpath->query("//div[@id='aggregateCount']");
        return $filtered->item(0)->nodeValue;
    }
}
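If you just want to see the DOMXPath extraction step in isolation, here’s a minimal, self-contained sketch – the sample HTML fragment below is made up purely for demonstration, but the parsing and querying work exactly as in the function above:

```php
// Standalone sketch of the XPath step: parse an HTML fragment and
// read the text of the div with id "aggregateCount".
// The sample markup here is invented for demonstration only.
$html = '<html><body><div id="aggregateCount">1,234</div></body></html>';

$dom = new DOMDocument;
$dom->preserveWhiteSpace = false;
@$dom->loadHTML($html); // @ suppresses warnings on imperfect markup

$xpath = new DOMXPath($dom);
$nodes = $xpath->query("//div[@id='aggregateCount']");

echo $nodes->item(0)->nodeValue; // prints 1,234
```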



“My, but we’ve come a long way”, we’ll say on the day when Google’s list of links finally disappears. And that day will come sooner than many think.

Over the past eight or so years that I’ve been working in the search industry, I’ve seen a lot of changes. When I started, Google News & Froogle (what was to become the Shopping search interface) had only recently launched, Google’s entire index was less than 6 billion pages, there was no Gmail, no mobile search, no YouTube, no Facebook, Bing was MSN Search and powered by Looksmart & Inktomi, and Yahoo! was powered by Google’s technology…

More interesting though has been the lack of innovation in result UI. Oh sure, we’ve got much richer results now than we’ve ever had before, and the underlying technology is far in advance of what it was then, but in terms of how we actually deliver results, I’m not so sure.

A Future Interface

Let me clarify. Based on some recent comments by people at both Google and Microsoft, with regards to answering search queries, the interfaces of the future clearly aren’t going to look like they do now. Instead, they’re going to focus far more on actually answering the user’s question. We’ve seen the start of this with Google’s recipe search, and Bing’s travel search products.

However, these are just the beginnings of a greater shift in how we interact with the great database that is the Internet. For a more complete understanding, we, rather strangely, have to turn to the world of TV game shows.

Search? It’s Elementary My Dear Watson

Earlier this year, Watson, a supercomputer built by IBM, trounced the two greatest human Jeopardy! players at their own game. Much like a modern web search engine, Watson runs thousands of algorithms simultaneously to actually calculate the correct answer to a question. Now, this is fine where there is an actual answer (questions like ‘what is the’, ‘in what year did’, ‘where can you’ etc), but for ones where a user decision is required, we need to look beyond this.

At this point, we get into the idea of a twin-structured search engine. In the first part, it’d simply attempt to answer a question presented to it. We can already see this done, if you ask an engine what the time is in a certain place, what a cinema is showing today, or if you want an answer to a calculation. It’s simply an extension (albeit a huge one) of technology that’s already in place.

In this particular area, SEO as we know it will die. Google will simply parse the question and deliver the answer. No links involved.

The second area though, where the user needs to decide based on information, is quite different. This is where the semantic web truly comes into its own.

Second Site

The semantic web is a fairly old idea, the crux of which is that one day, all the data on the web will be understandable by machines. To kick-start this, Google, Bing and Yahoo! recently announced the launch of schema.org, a protocol similar to XML sitemaps (but with far broader scope) in that it aims to get the entire web marked up in a way that will facilitate this.

In this new web, a search engine would be able to grab any piece of data from any website, understand it, and then use it to produce better answers for the user. So if I were to type in ‘best small family car’, my results page would show me various small family cars, ratings by various associations, new & used prices, ancillary information (videos, image galleries etc), and links to places to go to buy one.

This offers an exciting possibility for consumers – instant, well presented information on any topic, with the option to go out and view the original source information, with greater expansion on the subject if required. Think of it like an uber-Wikipedia. For a live example of something like this working, take a look at this results page for ‘yoga poses’ in Bing.

Welcome to the Jungle

Now, for the record, I don’t know what Microsoft or Google’s intentions are. But it’s increasingly clear that if they wanted, this is a direction that they could move in. With their increasingly titanic data stores, they’re in an amazing position to completely transform how we interact with the world’s information. For now though, webmasters need to consider three things:

  • Marking up your data probably won’t help your rankings in any particular area at the moment
  • Not marking up your data almost certainly will stop you ranking in different forms of search interface in the future
  • The websites that act now will, as always, be better placed when change comes along

So do you need to worry about getting your data marked up today? No, but have it in the back of your mind, and make sure you do it sooner rather than later.


Yes, in case you hadn’t heard, after a two-year absence, I’m going to be back speaking at SMX London Advanced. The session (copied from the agenda website) is:

Link Alchemy: Creative Ways Of Conjuring SEO Gold

Despite all the recent changes in search engine algorithms, links remain the single most important part of an effective search marketing campaign. And to successfully compete, you need to go beyond traditional link building techniques to create natural but scalable campaigns. What tools are available to analyse competitor links? What non-traditional channels, such as .edu links and retweets can be used? Our speakers show you how to reinvigorate your link building campaigns and take them to the next level.


I’ll also be co-moderating:

What’s New In Local & Mobile

According to Google, as many as 30% of all search queries have local intent. And according to IDC, more internet-capable mobile devices will be sold this year than computers. In short, local and mobile are both here and huge, and will continue to be an important part of many search marketers’ activities. This session looks at new developments in local search, location services, mobile apps and ads.

  • Moderator: Greg Sterling, Founding Principal, Sterling Market Intelligence
  • Q&A Moderator: Me, Here


It’s an honour to be back speaking to the industry again, and to be back as a participant after an extended period, and I look forward to seeing you all there!
