Monday, August 30, 2010

SEO 101 - Basic Optimization Techniques

Believe it or not, basic SEO is all about common sense and simplicity. The purpose of search engine optimization is to make a website as search engine friendly as possible. It's really not that difficult. SEO 101 doesn't require specialized knowledge of algorithms, programming or taxonomy but it does require a basic understanding of how search engines work.

For the purpose of brevity, this piece starts with a few assumptions. The first assumption is that a single, small business site is being worked on. The second assumption is that the site in question is written in fairly standard mark-up such as HTML, or generated by a server-side language such as PHP. The last assumption is that some form of keyword research has already taken place and the webmaster is confident in the selection of keyword targets.

There are two aspects of search engines to consider before jumping in. The first is how spiders work. The second is how search engines figure out what pages relate to which keywords and phrases.

In the simplest terms, search engines collect data about a website by sending an electronic spider to visit the site and copy its content, which is stored in the search engine's database. Generally known as 'bots', these spiders are designed to follow links from one page to the next. As they copy and assimilate content from one page, they record its links and send other bots to make copies of content on those linked pages. This process continues ad infinitum. By sending out spiders and collecting information 24/7, the major search engines have established databases that measure their size in the tens of billions of documents.

Knowing how spiders read information on a site is the technical end of basic SEO. Spiders are designed to read site content much the way you and I read a newspaper. Starting in the top left-hand corner, a spider reads site content line by line from left to right. If columns are used (as they are on most sites), spiders will follow the left-hand column to its conclusion before moving on to the central and right-hand columns. If a spider encounters a link it can follow, it will record that link and send another bot to copy and record the data found on the page the link leads to. The spider will proceed through the site until it records everything it can possibly find there.

As spiders follow links and record everything in their paths, one can safely assume that if a link to a site exists, a spider will find that site. There is no need to manually or electronically submit your site to the major search engines; the search spiders are perfectly capable of finding it on their own, provided a link to your site exists somewhere on the web. Search engines have an uncanny ability to judge the topic or theme of the pages they examine, and they use that ability to judge the topical relationship of pages that are linked together. The most valuable incoming links come from sites that share topical themes.

Once a search spider finds your site, helping it get around is the first priority. One of the most important basic SEO tips is to provide clear paths for spiders to follow from point A to point Z in your website. This is easily accomplished by providing easy-to-follow text links directed to the most important pages on the site, either in the navigation menu or simply at the bottom of each page. One of these text links should lead to a text-based sitemap, which lists and provides a text link to every page in the site. The sitemap can be the most basic page in the site, as its purpose is more to direct spiders than to help lost site visitors, though designers should keep site visitors in mind when creating it. Google also accepts more advanced, XML-based sitemaps, which you can read about in its Webmaster Help Center.
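As an illustration, a minimal sketch of such text links, placed at the bottom of each page, might look like this (the page names are hypothetical):

    <!-- Plain text links a spider can follow from any page -->
    <a href="index.html">Home</a> |
    <a href="products.html">Products</a> |
    <a href="contact.html">Contact</a> |
    <a href="sitemap.html">Site Map</a>

The sitemap.html page referenced above would simply hold a plain text link to every page on the site.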

There will be cases where allowing spiders free access to every page on a site is not desirable. You'll therefore need to know how to tell spiders that some site content is off limits and should not be added to their database, which is done with a "robots.txt" file. (To learn more about setting up your robots.txt file, start with Jennifer Laycock's article on robots.txt basics.)
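As a quick illustration, a minimal robots.txt file placed in the site's root might look like the following (the /private/ directory is hypothetical):

    # Allow all spiders, but keep them out of one directory
    User-agent: *
    Disallow: /private/

Leaving the Disallow line empty would instead give spiders access to the entire site.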

Offering spiders access to the areas of the site you want them to access is half the battle. The other half is found in the site content. Search engines are supposed to provide their users with lists of pages that relate to the search terms people enter in their search box. Search engines need to determine which of billions of pages are relevant to a small number of specific words. In order to do this, the search engine needs to know that your site relates to those words.

To begin with, there are a few elements a search engine looks at when examining a page. After the URL of a site, a search spider records the site title. It also examines the description meta tag. Both of these elements are found in the "head" section of the source code.

Titles should be written using the strongest keyword targets as the foundation. Some titles are written using two or three basic two-keyword phrases. A key to writing a good title is to remember that human readers will see the title as the reference link on the search engine results page. Don't overload your title with keyword phrases. Concentrate on the strongest keywords that best describe the topic of the page content.

The description meta tag is also fairly important. Search engines tend to use it to gather information on the topic or theme of the page. A well-written description is phrased in two or three complete sentences with the strongest keyword phrases woven into each sentence. As with the title tag, some search engines will display the description on the search results pages, generally using it in whole or in part to provide the text that appears under the reference link.

Due to abuse by webmasters, such as using irrelevant terms, search engines place minor (if any) weight on the keywords meta tag. As such, it is not necessary to spend a lot of time worrying about it.
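Tying these head elements together, the head section of a page about blue widgets might look something like the sketch below; the wording is only an example, built on the "Blue Widgets" copy used later in this piece:

    <head>
      <title>Blue Widgets - Construction Widgets from Smith and Co.</title>
      <meta name="description" content="Blue Widgets by Smith and Co. are the
        strongest construction widgets available. Our construction widgets are
        trusted by leading builders and contractors.">
      <!-- The keywords tag carries little or no weight and can safely be brief -->
      <meta name="keywords" content="blue widgets, construction widgets">
    </head>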

After reading information found in the "head" section of the source code, spiders continue on to examine site content. It is wise to remember that spiders read the same way we do, left to right and following columns.

Good content is the most important aspect of search engine optimization. The easiest and most basic SEO rule is that search engine spiders can be relied upon to read basic body text 100% of the time. By providing a search engine spider with basic text content, you offer the engines information in the easiest format for them to read. While some search engines can strip text and link content from Flash files, nothing beats basic body text when it comes to providing information to the spiders. You can almost always find a way to work basic body text into a site without compromising the designer's intended look, feel and functionality.

The content itself should be thematically focused. In other words, keep it simple. Some sites cram multiple topics onto each page, which is confusing for spiders. The basic SEO rule here is that if you need to express more than one topic, you need more pages. Fortunately, creating new pages with unique, topic-focused content is one of the most basic SEO techniques, and it makes a site simpler for both live visitors and electronic spiders.

When writing page content, try to use the strongest keyword targets early in the copy. For example, a site selling "Blue Widgets" might use the following as a lead sentence:

"Blue Widgets by Smith and Co. are the strongest construction widgets available and are trusted by leading builders and contractors."

The primary target is obviously construction applications for the blue widget. By placing the keyword phrases "blue widgets" and "construction widgets" alongside other keywords such as "strongest", "trusted", "builders" and "contractors", the sentence is crafted to help the search engine see a relationship between these words. Subsequent sentences would also have keywords and phrases woven into them. One thing to keep in mind when writing page copy is that unnecessary repetition of keywords (keyword stuffing) is often considered spam by search engines. Another thing to remember is that, ultimately, the written copy is meant to be read by human eyes as well as search spiders. Read your copy out loud. Does it make sense and sound natural? If not, you've overdone the use of keyword phrases and need to make adjustments.
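In the page mark-up itself, that copy is nothing more exotic than basic body text. One possible sketch, using the same example sentence:

    <h1>Blue Widgets for Construction</h1>
    <p>Blue Widgets by Smith and Co. are the strongest construction widgets
       available and are trusted by leading builders and contractors.</p>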

Another important element a spider examines when reading the site (and later relating the content to user queries) is the anchor text used in internal links. Using relevant keyword phrases in the anchor text is a basic SEO technique aimed at solidifying the search engine's perception of the relationship between pages and the words used in the link. For example, we also have a popular series of articles on the basics of SEO written by Stoney deGeyter. Linking the term "basics of SEO" is an example of using keyword phrases in the anchor text. Terms such as "SEO 101" or "SEO for beginners" could also have been used.
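In mark-up terms, a keyword-rich internal link is as simple as the line below (the file name is hypothetical), and it tells the engine far more than a generic "click here" link would:

    <a href="seo-basics.html">basics of SEO</a>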

Remember, the foundation of successfully optimizing your site is simplicity. The goal is to make a site easy to find, easy to follow, and easy to read for search spiders and live visitors alike, with well-written topical content and relevant incoming links. While basic SEO can be time consuming in the early stages, the results are worth the effort and set the stage for more advanced future work.

Wednesday, August 25, 2010

Introduction – What Is SEO

Search Engine Optimization (SEO) is often considered the more technical part of Web marketing. This is true because SEO does help in the promotion of sites and at the same time it requires some technical knowledge – at least familiarity with basic HTML. SEO is sometimes also called SEO copywriting because most of the techniques used to promote sites in search engines deal with text. Generally, SEO can be defined as the activity of optimizing Web pages or whole sites in order to make them more search engine friendly, thus getting higher positions in search results.

One of the basic truths in SEO is that even if you do everything that needs to be done, this does not automatically guarantee you top rankings, but if you neglect basic rules, it certainly will not go unnoticed. Also, if you set realistic goals – e.g. to get into the top 30 results in Google for a particular keyword, rather than to be number one for 10 keywords in 5 search engines – you will feel happier and more satisfied with your results.

Although SEO helps to increase the traffic to one's site, SEO is not advertising. Of course, you can be included in paid search results for given keywords, but the basic idea behind SEO techniques is to get top placement because your site is relevant to a particular search term, not because you pay.

SEO can be a 30-minute job or a permanent activity. Sometimes it is enough to do some generic SEO in order to get high in search engines – for instance, if you are a leader for rare keywords, then you do not have a lot to do in order to get decent placement. But in most cases, if you really want to be at the top, you need to pay special attention to SEO and devote significant amounts of time and effort to it. Even if you plan to do some basic SEO, it is essential that you understand how search engines work and which items are most important in SEO.

1. How Search Engines Work

The first basic truth you need to learn about SEO is that search engines are not humans. While this might be obvious to everybody, the differences between how humans and search engines view web pages are not. Unlike humans, search engines are text-driven. Although technology advances rapidly, search engines are far from intelligent creatures that can feel the beauty of a cool design or enjoy the sounds and movement in movies. Instead, search engines crawl the Web, looking at particular site items (mainly text) to get an idea of what a site is about. This brief explanation is not the most precise because, as we will see next, search engines perform several activities in order to deliver search results – crawling, indexing, processing, calculating relevancy, and retrieving.

First, search engines crawl the Web to see what is there. This task is performed by a piece of software called a crawler or a spider (or Googlebot, as is the case with Google). Spiders follow links from one page to another and index everything they find on their way.

Bearing in mind the number of pages on the Web (over 20 billion), it is impossible for a spider to visit a site daily just to see if a new page has appeared or if an existing page has been modified. Sometimes crawlers will not visit your site for a month or two, and during this time your SEO efforts will not be rewarded. There is nothing you can do about it, so just be patient.

What you can do is check what a crawler sees on your site. As already mentioned, crawlers are not humans and they do not see images, Flash movies, JavaScript, frames, password-protected pages or directories, so if you have tons of these on your site, you'd better run a spider simulator to see if these goodies are viewable by the spider. If they are not viewable, they will not be spidered, not indexed, not processed, etc. – in a word, they will be non-existent for search engines.
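To make the contrast concrete, compare the two hypothetical navigation links below; a spider can read and follow the plain text version, while the script-driven version gives it little or nothing to work with:

    <!-- Spider-friendly: a plain text link -->
    <a href="blue-widgets.html">Blue Widgets</a>

    <!-- Largely invisible to a spider: no crawlable URL and no anchor text -->
    <a href="#" onclick="window.location='blue-widgets.html'"><img src="button.gif"></a>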

2. Differences Between the Major Search Engines

Although the basic principle of operation of all search engines is the same, the minor differences between them lead to major changes in results relevancy. Different factors are important to different search engines. There were times when SEO experts joked that the algorithms of Yahoo! were intentionally made just the opposite of those of Google. While this might have a grain of truth, it is a matter of fact that the major search engines like different stuff, and if you plan to conquer more than one of them, you need to optimize carefully.

There are many examples of the differences between search engines. For instance, for Yahoo! and MSN, on-page keyword factors are of primary importance, while for Google links are very, very important. Also, for Google sites are like wine – the older, the better – while Yahoo! generally has no expressed preference for sites and domains with tradition (i.e. older ones). Thus you might need more time for your site to mature enough to be admitted to the top in Google than in Yahoo!.
