Search for a list of SEO factors and you’ll find that most feature at least 50.
That’s 50+ elements of your website that influence your ability to rank in search engines. Sounds complicated, doesn’t it?
Some SEO consultants will tell you that ranking in search engines is about applying a precise formula to these 50+ elements - about using “special proprietary techniques” fine-tuned to search algorithms to boost your website above the competition.
Not exactly.
In reality, search engines use 200+ signals when ranking websites.
Imagine trying to reverse-engineer something like that. Sounds impossible, right?
That’s because it is.
The good news: it doesn’t matter.
You don’t need to be a computer engineer to rank well in search engines. A relief, isn’t it?
The truth is that everything boils down to three factors:
- Search-friendly pages
- Relevant content
- A trusted website
All of those other factors and elements of SEO? They all fit into one of these three basic categories.
You don’t need to be a search scientist to understand the basics of what’s going on with these three factors and improve them for your website.
1) Search-friendly pages
Essentially, this first factor has to do with the technical aspects of how your website and pages work.
Search engines use crawlers (or “bots”) to browse the web by following links. As they browse, these crawlers scan the content they see and store it in databases. These databases form the search engine’s web index - and when a user enters a search phrase, the index is scanned for pages that match.
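To make that pipeline concrete, here’s a heavily simplified sketch in Python. The URLs and page text are invented, and a real crawler works over HTTP at an enormously larger scale - but the crawl-then-index-then-search loop is the same idea:

```python
from collections import defaultdict

# A toy "web": hypothetical URLs mapping to (visible text, outbound links).
# Real crawlers fetch these pages over HTTP; this sketch keeps them in memory.
PAGES = {
    "http://example.com/": ("welcome to our widget store",
                            ["http://example.com/widgets"]),
    "http://example.com/widgets": ("blue widgets and red widgets for sale", []),
}

def crawl(start_url):
    """Follow links from start_url, building an inverted index of word -> pages."""
    index = defaultdict(set)
    queue, seen = [start_url], set()
    while queue:
        url = queue.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        text, links = PAGES[url]
        for word in text.split():   # "scan the content they see"
            index[word].add(url)    # "store it in databases"
        queue.extend(links)         # "browse the web by following links"
    return index

def search(index, query):
    """Return pages containing every word of the query."""
    results = [index.get(word, set()) for word in query.split()]
    return set.intersection(*results) if results else set()

index = crawl("http://example.com/")
print(search(index, "blue widgets"))   # {'http://example.com/widgets'}
```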
The basic idea: you want to make sure your pages, and the content that fills them, are visible to search engine crawlers.
There are a few things you should know about crawlers:
- They don’t support JavaScript - so that rollover menu, those drop-down links, etc., might not be visible to search engine crawlers.
- They don’t support Flash (mostly) - while there have been a few developments in this regard recently, Flash websites still aren’t very search engine friendly.
- They can’t “see” - sometimes designers use images instead of HTML text (usually because they want to use a certain font that isn’t web-safe), and search engine crawlers can’t read or index this text. Crawlers can only read code - and if your content isn’t found there, it’s essentially invisible to search engines.
- They skimp on resources - it takes a lot of energy, time and money to crawl the web (there are a lot of pages out there), so crawlers are usually programmed to be conservative with how far they’ll dive into a page. If your web pages take a long time to load or feature a tremendous amount of content, crawlers might leave without scanning/indexing everything.
There are some other things crawlers can’t/won’t do. To get a sense of what they can see on your own website, try SEO-Browser.com. This tool allows you to enter the address of a web page and see it as search crawlers see it.
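If you’d like a rough do-it-yourself version of that crawler’s-eye view, the sketch below (the sample markup is invented for illustration) strips a snippet of HTML down to the text a non-JavaScript, text-only crawler would actually pick up. Notice that the image “heading” and the script-generated menu contribute nothing:

```python
from html.parser import HTMLParser

class CrawlerView(HTMLParser):
    """Rough approximation of what a text-only crawler extracts from a page."""
    def __init__(self):
        super().__init__()
        self.text = []
        self.skip = False   # inside <script>/<style>, which crawlers ignore

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.text.append(data.strip())

page = """
<h1>Our Widgets</h1>
<img src="fancy-heading.png">   <!-- text baked into an image: invisible -->
<script>document.write("Menu links");</script>   <!-- JS output: invisible -->
<p>Hand-crafted blue widgets.</p>
"""
parser = CrawlerView()
parser.feed(page)
print(" / ".join(parser.text))   # Our Widgets / Hand-crafted blue widgets.
```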
The bottom line: you might have the best content in the world, but if crawlers can’t see it you won’t rank for relevant keywords.
2) Relevant content
This factor is all about the words on your pages.
As we discussed above, the visible content on your pages is stored and searched every time someone uses a search engine. If the keyword or phrase entered doesn’t occur on your page, you probably won’t show up.
There are a few key places where you’ll want to use the right language on your pages (a quick way to check them is sketched after this list):
- Title tags
- Headlines
- Body copy
- Anchor text (the clickable text of links pointing to your internal pages)
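Here’s that quick check, sketched in Python with the standard library’s HTML parser - the page markup and target keyword are made up for illustration:

```python
from html.parser import HTMLParser

class KeywordAudit(HTMLParser):
    """Report which key on-page spots (title, headline, anchors, body) use a keyword."""
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword.lower()
        self.stack = []      # open tags, so we know where each bit of text lives
        self.found = set()

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        if self.keyword not in data.lower():
            return
        tag = self.stack[-1] if self.stack else ""
        if tag == "title":
            self.found.add("title tag")
        elif tag in ("h1", "h2", "h3"):
            self.found.add("headline")
        elif tag == "a":
            self.found.add("anchor text")
        else:
            self.found.add("body copy")

# Hypothetical page markup for illustration.
page = """<title>Blue Widgets | Example Store</title>
<h1>Hand-made blue widgets</h1>
<p>Our blue widgets ship worldwide. <a href="/blue-widgets">Browse blue widgets</a></p>"""

audit = KeywordAudit("blue widgets")
audit.feed(page)
print(sorted(audit.found))   # ['anchor text', 'body copy', 'headline', 'title tag']
```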
As you browse the web you’ll probably notice that lots of webmasters have gotten a bit, shall we say, “overzealous” with optimizing their content. Title tags stuffed to the brim with dozens of keyword variations are common. Sometimes even the body copy itself is stuffed with keywords in an attempt to boost rankings.
You might be tempted to do this yourself to try and enhance your chances of ranking for a given keyword.
Don’t do it. Please.
Why not? Try reading a page that’s been stuffed with keywords this way. It’s an awful experience, right? Certainly enough to stop your reading flow and send you to another website, isn’t it?
Don’t sacrifice your users’ reading experience for the sake of ranking for a given keyword. It’s not worth it. All of the traffic in the world won’t mean a thing if the users who land on your pages are turned off and leave. Your competitors are just a few painless clicks away.
To learn what keywords people use when they search for your products/services/info, try Google’s AdWords Keyword Tool - enter either your website address or a keyword, and the tool will return a list of related keywords along with numbers on how many people search for them.
The bottom line: it’s rare to rank for a keyword that doesn’t occur on your pages, so use the language your users use when they search. Don’t overdo it and stuff keywords, though - you’ll annoy your visitors (and search engines don’t like it either; they might flag you as SPAM).
3) A trusted website
When you’ve got 1) search-friendly pages and 2) relevant content, it’s still not time to sit back and let the search traffic pour in.
The truth is that most of your competitors will have looked into these factors already - they’re kind of the “low-hanging fruit” of SEO, because they’re not usually terribly difficult to work out.
Trust is what sets you apart. It is by far the most important of the three factors.
Before Google came onto the scene using PageRank (a measure of link popularity) to rank websites, search engines generally based their rankings on the first two factors we’ve discussed.
What was the problem with that approach?
Webmasters are greedy. We can’t help ourselves. We love traffic.
Keyword stuffing was rampant, and webmasters rarely stuck to the honest truth about what their websites were relevant to. The result: search results littered with SPAM, porn and just about anything else with very little relevance.
The reason links were a better signal to Google was simple - they’re harder to game. While you can control the content and keywords on your own website, it’s a lot harder to control them on someone else’s. It’s pretty tough to get someone to link to you against their will.
The model simply worked - Google’s results were better. The other search engines quickly caught on and looked to signals of trust for sorting through the SPAM.
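For the curious, the heart of the original PageRank calculation fits in a few lines. This sketch (with an invented three-site link graph) repeatedly lets each page share its score among the pages it links to, so pages that attract links accumulate authority:

```python
# A minimal sketch of the PageRank idea: a page's score depends on the
# scores of the pages linking to it. Toy graph; real ranking uses far more.
links = {                      # hypothetical sites and who they link to
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}
pages = list(links)
damping, n = 0.85, len(pages)
rank = {p: 1 / n for p in pages}

for _ in range(50):            # power iteration until the scores settle
    new = {}
    for p in pages:
        # Sum the "votes" flowing in: each linker splits its own rank
        # equally among everything it links out to.
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - damping) / n + damping * inbound
    rank = new

print(rank)   # c.com ends up highest: it earns links from both other sites
```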
Some signals that search engines use to determine whether they can trust your website:
- Inbound links - quality is more important than quantity here - that’s why those “500 directory links for $49.95” deals are worthless. The easiest links to get are the least valuable/powerful. A single link from Google.com, for example, would outweigh tens of thousands of weaker links - that’s how much quality matters.
- Website age - if your website is new there’s not much you can do about it without a DeLorean and a working flux capacitor (“Marty, the website is in place - now we gotta go back to the future!”). A website that’s been around for a while is simply more trusted by search engines.
- Who you link to - it’s not just about inbound links. Search engines also look at what websites you link to from your pages. If you’re linking out to SPAMMY websites selling Viagra, online poker or similar SPAM-saturated topics, they might consider you part of that “bad neighborhood” and penalize your website. Be careful who you vouch for.
There are other signals involved, but if you’ve got these three trust factors working in your favor you’re very likely to dominate the competition.
The bottom line: search engines don’t like getting burned by ranking SPAMMY websites. They want to know they can trust your website. Once you’ve got your on-page factors right (#1 and #2 above) you’ll need to build trust signals before your website will rank competitively.