Wednesday, April 29, 2009

The Six Landing Page Conversion Rate Factors

Here is an article about the first elements to check in order to improve the conversion rate:

This article is an introduction to the WiderFunnel Landing Page Influence Function for Tests™ (or LIFT™) Model, a framework WiderFunnel Marketing uses to analyze conversion pages and develop test hypotheses. We have used this tool as part of a structured process to lift each of our clients’ conversion rates by between 10% and 277%.

Read more here.

Thursday, April 23, 2009

Reminders about SEO

Since it is always good to be reminded of what to do for SEO, and because these articles are good, here are some reminders about what you should not forget to do for your website.

Article 1: 15 Essential Checks Before Launching Your Website

Your website is designed, the CMS works, content has been added and the client is happy. It’s time to take the website live. Or is it? When launching a website, you can often forget a number of things in your eagerness to make it live, so it’s useful to have a checklist to look through as you make your final touches and before you announce your website to the world.

This article reviews some important and necessary checks that websites should go through before the official launch. Little details are often forgotten or ignored, but, if handled in time, they can add up to a better overall user experience and avoid unnecessary costs after the official site release.

Click here to read more.

Article 2: Masters Of The Google Universe: How To Achieve Top Google Rankings

For years, it has been well known that Google’s search algorithm is driven by the number and quality of links pointing to a particular URL. And as a result, it was all the rage for some time to buy links on web pages that had a high Google PageRank (PR).

But in March of 2007, Google’s mouthpiece Matt Cutts declared that Google was going to fight back against Paid Links. Google put a shot across the bow of many online marketers, letting them know that the days of easily buying links from high PageRank pages in order to influence a website’s ranking in Google were over.

Click here to read more.

And directories do help!

I have read a few times that submitting a site to directories doesn't bring anything SEO-wise and that it harms more than it helps. I don't agree with that, though I do agree that you should be careful about which directories you submit your site to.

Here is an article about the reasons why you should keep making use of directories:

Most website owners fail to differentiate between a directory and a search engine, and this failure has kept them from harnessing the power of Internet directories effectively.

A search engine uses spiders (automated software programs) to locate and collect data from web pages for inclusion in its database and to follow links to find new pages on the World Wide Web. Directories, on the other hand, depend on human editors, who in most cases examine every single new listing before it is added to their directory. Most major search engines these days use links from human-edited directories to measure the quality of the sites they index. That is why you should pay attention to the type of website or directory you submit to and how to do that effectively.

Click here to read more.

Friday, April 17, 2009

A Deeper Look At Robots.txt

SEO is about being found and indexed by search engines, but for various reasons there are some pages you don't want indexed. An easy way to control where the robots may go, and where they should not, is to have a robots.txt file.
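As a quick illustration (the paths here are hypothetical, just for the example), a minimal robots.txt placed at the root of the site might look like this:

  User-agent: *
  Disallow: /admin/
  Disallow: /checkout/

  User-agent: Googlebot
  Disallow: /checkout/

The first group applies to every crawler; when a group names a specific user agent, that crawler follows its own group instead of the generic one, so here Googlebot would be allowed into /admin/ but kept out of /checkout/.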

Here is a recent article about the robots.txt file:

The Robots Exclusion Protocol (REP) is not exactly a complicated protocol and its uses are fairly limited, and thus it’s usually given short shrift by SEOs. Yet there’s a lot more to it than you might think. Robots.txt has been with us for over 14 years, but how many of us knew that in addition to the disallow directive there’s a noindex directive that Googlebot obeys? That noindexed pages don’t end up in the index but disallowed pages do, and the latter can show up in the search results (albeit with less information since the spiders can’t see the page content)? That disallowed pages still accumulate PageRank? That robots.txt can accept a limited form of pattern matching? That, because of that last feature, you can selectively disallow not just directories but also particular filetypes (well, file extensions to be more exact)? That a robots.txt disallowed page can’t be accessed by the spiders, so they can’t read and obey a meta robots tag contained within the page?
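To make those points a bit more concrete, here is a small sketch with hypothetical paths (the wildcard matching is an extension supported by the major engines rather than part of the original standard, and Noindex is the unofficial directive the article says Googlebot obeyed at the time):

  User-agent: *
  # Excluded from crawling, yet the URL can still show up in search results and accumulate PageRank
  Disallow: /private/
  # Pattern matching: * matches any string of characters, $ anchors the end of the URL, so this blocks .pdf files
  Disallow: /*.pdf$
  # Unofficial directive: keeps matching pages out of the index entirely (Googlebot only, per the article)
  Noindex: /drafts/

And because /private/ is disallowed, the spiders never fetch those pages, so a meta robots tag placed inside them would never be read, which is the last point in the excerpt above.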


To read more, go here.