A Glimpse Of Google Panda Update

Google Panda was initially launched in February 2011 as part of Google's efforts to curb black-hat SEO tactics. At the time, users were complaining about the growing influence of "content farms," which had become widespread. In response, the Panda algorithm assigned web pages an internal quality classification, modeled on human quality ratings, which was then used as a ranking factor.
In 2010, the deteriorating quality of Google's search results and the rise of the "content farm" business model became hot topics that were continuously making the rounds in the news.
As Google's Amit Singhal later told Wired at TED, the "Caffeine" update of late 2009, which had dramatically sped up Google's ability to index content, had also introduced "some not so good" content into the index.
Google’s Matt Cutts told Wired this new content issue wasn’t really a spam issue, but one of “What’s the bare minimum that I can do that’s not spam?”
In January 2011, Business Insider published a headline that said: "Google's Search Algorithm Has Been Ruined, Time to Move Back to Curation."
Without any doubt, such headlines significantly influenced Google, which responded by developing the Panda algorithm.
On 24th February 2011, Google published a blog post about the update, indicating that they had "launched a pretty big algorithmic improvement to our ranking – a change that noticeably impacts 11.8% of our queries."
The main purpose of the update was to reduce rankings for low-quality sites – sites that add little value for users, copy content from other websites, or are simply not very useful. At the same time, it would provide better rankings for high-quality sites – sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.
Google later revealed that internally the update had been referred to as "Panda," after the engineer who came up with the key algorithmic breakthrough. The sites hit hardest were familiar to anyone in the SEO industry at the time, among them wisegeek.com, ezinearticles.com, suite101.com, hubpages.com, buzzle.com, articlesbase.com and so on.
One of the most apparent changes in the SEO industry was how heavily Panda hit "article marketing," a practice in which practitioners posted low-quality articles on such sites. To develop the Panda algorithm, Google sent test documents to human quality raters, who were asked a set of 23 questions about each one.
The path to recovery from Panda is straightforward in principle but challenging in practice. Because Panda boosts the performance of sites with high-quality content, the solution is to increase the quality and uniqueness of the content on your site.
One of the biggest myths about Panda is that it is all about duplicate content. In fact, duplicate content is handled independently of Panda. Panda rewards unique, high-quality content, and this goes much deeper than simply avoiding duplication: Panda looks for original, high-quality information that provides real value to users.
When it comes to resolving Panda issues, deleting content is rarely the answer; it is better to add more high-quality content. It is the site's overall quality that should be improved, so that users can trust the content.
Panda does not specifically target user-generated content; it targets low-quality content wherever it appears, and user-generated pages are affected only when they are thin or low value. This does not mean removing user-generated content, whether blogs, articles or forum posts: many high-ranking sites rely heavily on user-generated content, and removing it wholesale would mean a heavy loss of traffic and a corresponding drop in search engine rankings.
Word count is not a factor Panda considers either; the idea of a minimum word limit is simply a myth.