Between the Lines of Google’s Search Algorithm Improvements

Google recently released a video detailing its internal process for implementing changes to its search engine ranking algorithms.  We here at Conductor wanted to take the time to break down the individual components that exemplify successful product management within internet marketing.  Below you will find a transcript of this video and some detail on how the Search Engine Giant follows proven business models in advancing one of the most complex (and sacred) mathematical equations on the internet.

Embedded within the video transcript (content below the video in quote blocks) you will find additional insight and knowledge as we “read between the lines”.  Enjoy!

Every year Google implements over 500 improvements to its search algorithms

This is a look at how each potential change is evaluated

Rajan Patel, Search Scientist: “the google search algorithm is made up of several hundred signals that we try to put together to serve the best results for each user”

One common challenge for SEOs is identifying the “low hanging fruit”. There are literally hundreds of triggers that account for a distinct percentage of your relevant page authority at the keyword level.  It can be difficult for search marketers to show positive, measurable gains out of the gate, especially if they inherit a large-scale site with a less than optimal architecture.  The important factor here is educating all teams on industry standards to minimize the number of signals an SEO needs to obsess over on a daily basis.  Proper company evangelism of internet marketing requires innate people skills, as you learn to communicate with a wide diversity of personality types.  For enterprise-level clients, building your company’s knowledge of search marketing team by team is a critical element of your road map.

Amit Singhal, Google Fellow: “just last year we launched over 500 changes to our algorithm so by some count we change our algorithm almost every day almost twice over”

We easily identify the large tweaks to the algorithm and they often come with a name like “Big Daddy”, “Vince” or “Panda” but the little ones often go unnoticed.  The bulk of these typically work in our favor as long as we are focused on relevant terms that are pertinent to our conversion metrics.  This is why we often see sites with similar conversion types converging on page one.  This is also why we need to educate our company that position shifts are a normal event.  In most cases the higher you rank on page one, the more static your results appear.  After all, these are the most relevant pages with the highest authority (in theory).

Scott Huffman, Engineering Director: “we really analyze each potential change very deeply to try and make sure that it’s the right thing for the users”

Let’s be honest.  At the end of the day, Google is completely obsessed with the user.  As internet marketers, we should be as well.  Analyzing user behavior, conversion funnels, and response rates, then using that data to create a good experience, is the roadmap to success.

Mark Paskin, Software Engineer: “The first step in improving Google search is coming up with an idea”

Scott Huffman: “There are almost always a set of motivating searches… and these searches are not performing as well as we would like… Ranking engineers then come up with a hypothesis about what signal, what data could be integrated into our algorithm.”

What determines good Google performance?  Is it bounce rate?  Time on the SERP page?  Google’s valuation of time on a SERP is completely different from a typical marketer’s: search engines want you to find the proper result for your search query and leave their site.  A good experience yields returning visitors.  With the amount of testing that Google does, it would be ridiculous for them to ignore natural search click-through rates and visitors that bounce back quickly.  SEO professionals have theorized about both of these rank-quality metrics for years.
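The two theorized signals above can be made concrete with a small sketch. This is purely illustrative: the event-log format, field names, and the 10-second “quick bounce” threshold are all assumptions, not anything Google has documented.

```python
# Hypothetical sketch: estimating two rank-quality signals SEOs have long
# theorized about -- organic click-through rate and quick bounce-backs
# (a click followed by a fast return to the results page).
# The event format and threshold below are invented for illustration.

QUICK_BOUNCE_SECONDS = 10  # assumed threshold, not a documented value

def serp_signals(events):
    """events: list of dicts like
    {"type": "impression"} or {"type": "click", "dwell_seconds": 4.2}.
    Returns (click_through_rate, quick_bounce_rate)."""
    impressions = sum(1 for e in events if e["type"] == "impression")
    clicks = [e for e in events if e["type"] == "click"]
    if not impressions or not clicks:
        return 0.0, 0.0
    ctr = len(clicks) / impressions
    quick = sum(1 for c in clicks if c["dwell_seconds"] < QUICK_BOUNCE_SECONDS)
    return ctr, quick / len(clicks)

log = [
    {"type": "impression"}, {"type": "impression"},
    {"type": "impression"}, {"type": "impression"},
    {"type": "click", "dwell_seconds": 3.0},
    {"type": "click", "dwell_seconds": 95.0},
]
ctr, bounce = serp_signals(log)
print(ctr, bounce)  # 0.5 0.5 for this toy log
```

A high quick-bounce rate on a result that ranks well would be exactly the kind of mismatch between rank and user satisfaction these metrics are meant to surface.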

Amit Singhal: “We test all these reasonable ideas through rigorous scientific testing”

Mark Paskin: “The first is with raters.  These are external people that have been trained to judge whether one ranking is more relevant and higher quality than another”

I assume “external people” means that Google outsources their raters.  This gives another definition of “Paid Search”.  What I do like about this process is the fact that they look to third parties in order to validate their assumptions before investing a lot of time and resources in development of a theory or opinion.

Random Googler: “We show these raters a side by side for queries that the engineer’s experiment might be affecting… we also confirm these changes with live experiments on real users”

Mark Paskin: “And we do this in something that’s called a sandbox… We send a very small fraction of actual Google traffic to the sandbox… We compute lots of different metrics”
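A common way to divert a small, stable slice of traffic to an experiment like this sandbox is deterministic hash bucketing. The 1% diversion rate and the scheme below are assumptions for illustration, not Google’s actual mechanism.

```python
# Minimal sketch of hash-based experiment bucketing: the same user always
# lands in the same bucket, and different experiments bucket independently.
import hashlib

EXPERIMENT_FRACTION = 0.01  # assume ~1% of traffic goes to the sandbox

def in_sandbox(user_id: str, experiment: str) -> bool:
    """Deterministically assign a user to the experiment bucket by hashing
    the (experiment, user) pair into a value in [0, 1)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < EXPERIMENT_FRACTION

users = [f"user-{i}" for i in range(100_000)]
diverted = sum(in_sandbox(u, "full-page-replacement") for u in users)
print(diverted)  # roughly 1,000 of 100,000 users
```

Hashing rather than random sampling is what makes the assignment stable across requests, so a user sees a consistent experience for the life of the experiment.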

Do you ever get that call late at night from your boss asking why your term is getting beaten out, or congratulating you on your win, while you see something completely different?  Always celebrate your wins after an established period of ranking stability.  It is common to see rankings “dance” around slightly, but we should always take note of the potential shift that may be about to take place.  Report your big wins just as you do your losses, with complete data analysis.  Above all, be prepared as far in advance as possible if you start seeing inconsistencies or instability across your rankings.
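One way to operationalize “an established period of ranking stability” is a simple volatility check on recent daily positions. The 7-day window and 1-position threshold below are arbitrary choices for illustration.

```python
# Sketch: only call a ranking change real once daily positions have settled.
from statistics import pstdev

STABLE_DAYS = 7
MAX_STDDEV = 1.0  # maximum spread (in positions) we'll call "stable"

def rank_is_stable(daily_positions):
    """daily_positions: a keyword's daily rank, most recent last."""
    recent = daily_positions[-STABLE_DAYS:]
    if len(recent) < STABLE_DAYS:
        return False  # not enough history to make the call
    return pstdev(recent) <= MAX_STDDEV

dancing = [8, 3, 9, 2, 7, 4, 10]
settled = [4, 3, 3, 3, 4, 3, 3]
print(rank_is_stable(dancing), rank_is_stable(settled))  # False True
```

With a check like this, the late-night call can wait until the “dance” has stopped and the data supports a verdict either way.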

Scott Huffman: “In 2010 we ran over 20,000 different experiments… All the information from the human evaluation and the live experiment are then rolled up by a search analyst”

Sangeeta Das, Quantitative Analyst: “For each project it’s usually one analyst assigned from the moment that we’re talking to the engineers, trying to learn about their change”

Scott Huffman: “We then have a launch decision meeting where the leadership of the search team then looks at that data and makes the decision”

Sangeeta Das: “Ultimately the goal of the search eval analyst team is to provide informed, data-driven decisions and present an unbiased view”

Amit Singhal: “If our scientific testing says this is a good idea for Google users, we will launch it on Google”

I love the fact that Google is so focused on multivariate testing.  The amount of testing and data aggregations that they analyze is absolutely immense.  Making changes to your user experience based on statistical significance is a never ending process.  As soon as a winner is identified and put in place, it becomes the target for the next iteration to outperform.  A consistent focus on conversion and response rates will yield amazing results over time.  Small wins stacked one after another can become impressive year over year gains.
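“Statistical significance” in this kind of testing usually means something like a two-proportion z-test comparing a control against a variant. The conversion counts below are made up for illustration; only launch-worthy differences clear the significance bar.

```python
# Sketch of a significance check for an A/B test: is the variant's
# conversion rate reliably better than the control's, or just noise?
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference between two conversion
    rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 4.8% control conversion vs 5.6% variant conversion.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
significant = abs(z) > 1.96  # ~95% confidence, two-sided
print(round(z, 2), significant)
```

Stacking small wins requires exactly this discipline: each iteration must beat the reigning version by more than chance before it becomes the new baseline.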

An actual example

Mark Paskin: “For many years now Google has been offering spelling suggestions for queries that contain typos or misspellings.  So sometimes you will type a query and you might see ‘did you mean’ and then an alternate query.  If you type a misspelling of your medicine and you don’t click on ‘did you mean’, you might be getting results that contain that misspelling and they tend not to be high quality results… so we thought about a different kind of interface that we call Full Page Replacement and instead of ‘did you mean’ you’ll see at the top of your page ‘showing results for’ and in the case that we made a mistake there’s another link ‘search instead for’ and it has the query that you typed… We call that link The Escape Hatch… For every time the user had to click that escape hatch because the spelling algorithm made a mistake we wanted to make sure that there were 50 other times that they got the right spelling suggestion and they didn’t have to click the ‘did you mean’… And they were also looking to see in the live traffic data how often users were clicking on that Escape Hatch to make sure that the user signal that we get from live experiments was lining up with the signal that we get from our regular evaluations… We brought it to Launch Committee and based on the rater evaluations and the live experiments it was pretty clear that the engineers had done what they were supposed to do and so we launched it.”

I am a big fan of Full Page Replacement and the Escape Hatch, so I can completely relate to the example Mark talks about above.  What we like about this test case and real-world scenario is that Google was not satisfied with the initial win on misspellings and typos.  They identified an industry need and continued to build on that success.  Google successfully measured an evolving piece of functionality that many people take for granted, taking an initial win and immediately testing against it to provide an even deeper level of relevance and usability.
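The 50:1 guardrail Mark describes is easy to express as a launch check. The function and the counts below are invented for illustration; only the ratio itself comes from the video.

```python
# Sketch of the Escape Hatch launch bar from the video: at least 50 correct
# auto-corrections for every time a user clicked "search instead for".

REQUIRED_RATIO = 50  # stated in the video: 50 good corrections per escape

def escape_hatch_ok(auto_corrections: int, escape_clicks: int) -> bool:
    """True when correction quality clears the 50:1 bar."""
    if escape_clicks == 0:
        return auto_corrections > 0
    return auto_corrections / escape_clicks >= REQUIRED_RATIO

print(escape_hatch_ok(auto_corrections=5_200, escape_clicks=100))  # True
print(escape_hatch_ok(auto_corrections=4_000, escape_clicks=100))  # False
```

Framing the bar as a ratio of helped users to inconvenienced users is a nice pattern: it directly encodes the cost of a wrong correction into the launch decision.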

Amit Singhal: “When you align Google’s interest with users interest as we have aligned, good things happen.”

Mark Paskin: “We’ve put a huge investment into understanding what works for users.”

Random Googler: “Is this change going to help users not only in the United States or in English but all over the world”

Scott Huffman: “We get excited when we feel like weve hit on an idea that really helps a lot of users”

Amit Singhal: “Users keep coming back to Google even though they have a choice of a search engine every time they open a browser.”

Learn more at:

At the end of the day there are a couple of things we should take away from this video.  First of all, we have confirmation of what we already know: Google is a company of data scientists who build a quality search engine based on user interaction, statistical significance, and data-driven decisions.  They are never satisfied with the way things are and have an endless drive to increase relevance in the face of evolving user behavior and search query complexity.  Secondly, they stress the importance of multivariate testing and conversion rate optimization in a fast-paced environment.  They don’t just jump into a redesign based on the opinion of a small handful of designers; they systematically release upgrades in (usually) small iterations while measuring the impact.  Most importantly, Google analyzes user feedback through multiple means and constructs an experience that keeps users coming back. These are all traits of the most successful internet marketers.

A special thanks goes out to Rajan Patel, Amit Singhal, Scott Huffman, Mark Paskin and Sangeeta Das for their insight and focus on quality results.