11th May

SEO Best Practice – 13 Top Tips for Web Designers

SEO fundamentally starts at the development stage of every website. Semantic code and SEO-related structuring of the site’s source code will make you a favorite amongst the SEO collective in the department next door, whilst equipping the website with the tools it needs to battle the deadly SERPs. Here are 13 SEO tips for every developer and his dog/penguin.

Unique META tags

All META tags – keywords, description and, most importantly, title – must be unique on each page and describe that page’s content. WordPress users should consider the ‘All in One SEO‘ plugin, which will take care of all this for you.
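As a sketch (the page, title and description here are hypothetical), each page’s head might look something like this:

```html
<!-- Hypothetical product page: title and description are unique to this page -->
<head>
  <title>Big Brown Sofa - Example Furniture Store</title>
  <meta name="description" content="Our big brown three-seater sofa, with free UK delivery.">
  <meta name="keywords" content="brown sofa, three seater sofa, furniture">
</head>
```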

Canonical URLs

Use a consistent URL scheme across the site – don’t mix http://domain.com with http://www.domain.com/, and never link to index.php or index.php?page=home. In the latter cases, a link back to the web root is preferable, eg. http://www.domain.com. WordPress users should consider Yoast’s ‘Canonical URLs for WordPress’ plugin.
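On Apache, one way to enforce a single hostname is a 301 rule in .htaccess – a minimal sketch, assuming mod_rewrite is enabled and www.domain.com is the preferred form:

```apache
# Hypothetical .htaccess: permanently redirect non-www requests to www
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```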

Search friendly URLs and absolute URLs

Using URL rewriting, create URLs that make sense to the human eye, eg. http://www.domain.com/products/sofas/big-brown-sofa.html. NOTE: NEVER use underscores in URLs or for file naming; always use hyphens. Use absolute links throughout the site, eg. include the http://www.domain.com in every A HREF. When linking to a page, as opposed to an index, always suffix “.html” onto the URL. All URLs should end with “/” or “.html”.
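A rough example of such a rewrite rule in Apache – the script name and parameters here are hypothetical, so adapt them to your own back end:

```apache
# Hypothetical rewrite: map the friendly URL to the real script, eg.
# /products/sofas/big-brown-sofa.html -> product.php?category=sofas&slug=big-brown-sofa
RewriteEngine On
RewriteRule ^products/([a-z-]+)/([a-z0-9-]+)\.html$ product.php?category=$1&slug=$2 [L]
```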

Proper use of H1 tags

The H1 should be the main heading of the page, semantically followed by H2, H3 etc. Do not use H1 for the logo; use an alternative tag. Your H1 should also be reflected in the META Title of the page.
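A quick sketch of that hierarchy (site and headings are hypothetical) – note the logo sits in a plain DIV, not the H1:

```html
<!-- The logo stays out of the H1; the H1 echoes the META Title -->
<div id="logo"><a href="http://www.domain.com/">Example Furniture Store</a></div>
<h1>Big Brown Sofa</h1>
<h2>Dimensions</h2>
<h2>Delivery</h2>
```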

Logo naming

Don’t name your logo image Logo.jpg. Instead, name it something relevant to the site – for a jewellery site, for example, name it jewellery.jpg.

NoFollow links

The rel=”nofollow” attribute prevents search engines from leaking PageRank to low-value pages – for example, terms and conditions, the privacy policy and, in most cases, contact pages (unless they include a physical address, for example).
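In markup that simply means adding the attribute to each of those links, eg. (hypothetical URLs):

```html
<!-- rel="nofollow" on links to pages that don't need to rank -->
<a href="http://www.domain.com/terms.html" rel="nofollow">Terms</a>
<a href="http://www.domain.com/privacy.html" rel="nofollow">Privacy</a>
```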

XML Sitemap

There is a Google sitemap generator available that will create an XML sitemap and update it on the fly, informing Google of any new pages. All sites should contain an XML sitemap as well as an XHTML one. Show both sitemaps in the footer like this:

Contact | Privacy | Terms | Sitemap (XML Version)

WordPress users should consider the Google XML Sitemap plugin.
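For reference, a minimal XML sitemap follows the sitemaps.org format – the URLs below are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.domain.com/</loc>
    <lastmod>2009-05-11</lastmod>
  </url>
  <url>
    <loc>http://www.domain.com/products/sofas/big-brown-sofa.html</loc>
  </url>
</urlset>
```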

Images with ALT attributes

All images should have ALT attributes, primarily to describe the image for screenreaders, but they can have an SEO benefit by including site keywords.
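For example (hypothetical image), describe what the picture actually shows and work a keyword in naturally:

```html
<!-- Descriptive ALT text: useful to screenreaders first, keywords second -->
<img src="big-brown-sofa.jpg" alt="Big brown three-seater leather sofa">
```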

301 redirects

Especially important when redeveloping an old site – ensure old URLs are 301 redirected, ideally to their equivalent new pages, falling back to the homepage. A 301 redirect tells search engines “this content has moved permanently”.
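On Apache the simplest form is a Redirect line per old URL in .htaccess – the paths here are hypothetical:

```apache
# Hypothetical .htaccess entries: map old URLs to their new homes
Redirect 301 /old-sofas.php http://www.domain.com/products/sofas/
Redirect 301 /about-us.php http://www.domain.com/about.html
```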

Don’t use frames

We’re sure you don’t use them… do you? Well, don’t – because, amongst other things, the content within a frame isn’t accessible to search engines. The same applies to iframes.

Check for W3C compliance

This is part of best practice and has a positive benefit for SEO.

Don’t call links page “links”

This has negative connotations to link farms, reciprocal linking etc. Instead, use a term like “Resources”, “Related Sites” etc. You should preferably come up with a term which is unique to the site, whilst still describing the page accurately.

Keep CSS and JS external

The less clutter there is in the page, the less markup search engines have to sift through. Also, avoid inline styles and inline javascript for the same reason.
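In practice that means linking stylesheets and scripts from the head rather than embedding them – file names here are hypothetical:

```html
<!-- Keep styles and scripts in external files, not inline in the markup -->
<link rel="stylesheet" type="text/css" href="http://www.domain.com/css/style.css">
<script type="text/javascript" src="http://www.domain.com/js/site.js"></script>
```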


Leave a reply below

  • May 11, 2009 at 12:44 pm // Reply

    Great article. There are a few gems here that I’ll put into practice when I get home from work!

    Thank you!

    Michael Wilson’s last blog post..Weekly Web Inspiration #1

  • May 28, 2009 at 3:00 am // Reply

    Some great plugins listed here, as well as some good simple advice to follow. Thanks.

    Heather’s last blog post..Floral Vector Shapes

  • Maxstar
    May 28, 2009 at 5:42 am // Reply

    Very succinct, practical and relevant blog.

  • May 28, 2009 at 6:35 am // Reply

    Nice tips!

    art2code’s last blog post..SEO Best Practice – 13 Top Tips for Web Designers

  • May 28, 2009 at 7:47 pm // Reply

    Useful article, thanks.

    T-Law’s last blog post..Illustrator Freebies: Clipboard & Pencils Vector

  • Ryan
    May 28, 2009 at 7:57 pm // Reply

    Why shouldn’t we use underscores in file names?

  • squid
    May 29, 2009 at 6:06 am // Reply

    Using absolute links that include the domain name sounds like a short-sighted idea and I doubt very much that it helps with SEO – not sure where that idea has come from.

    I have heard (but haven’t seen statistics to confirm) that whilst having human-readable URLs is good practise and should be employed in projects from the ground up, the benefit in retro-fitting it to existing sites is minimal and ought to be quite low down on the “todo list” if you’re improving such a site. Not sure what you think about that?

  • squid
    May 29, 2009 at 6:17 am // Reply

    With regards to 301 redirects. The redirect should go to the page where it has been moved to. If it doesn’t exist in your new naming structure and you removed the old URL then in my honest opinion you should send a 404 to the client – not send a 301 to the home page. Naturally best practise is that no content should be deleted unless necessary – so you should always try as hard as you can to 301 to the new place.

    With regards to XML sitemaps – what’s the point of giving a hyperlink to it? Only machines read it and if a search engine follows the link it’s not going to magically know that it’s a sitemap – as far as I know there is no HTTP content-type you can send for a sitemap so it would have to inspect the contents which I doubt they have time to do.

    You can however use robots.txt to inform machines where your sitemap is using the ‘Sitemap’ directive, e.g. ‘Sitemap: http://www.thefloatingfrog.co.uk/sitemap.xml’. This is useful as it means that bots can automatically discover your sitemap without you having to submit it manually to all the major search engines, and without relying on them to guess that some random XML file you linked to was in fact your sitemap.

    Additionally you will find it useful for large sites (e.g. e-commerce sites) to save your bandwidth and use gzip compression; additionally you can split your sitemap into multiple files (necessary on really big sites as there is a limit on the number of urls which you can include).

    There’s loads more info here: http://www.sitemaps.org/protocol.php

  • May 29, 2009 at 12:23 pm // Reply

    @Squid Using absolute URLs is best practice from an SEO perspective. Not only does it make sure that search engines know exactly where the link goes, it means that if someone scrapes your content (which is surprisingly widespread) then all of the links within the text still work. Also, it means you get a link from the scraping site.

    You’re right that a 301 redirect should go to the page the content has moved to, but I think for deleted content you need a custom 404 error page with suggested content on it – at least then you’re returning something that might be of use to the browser. If you want to pass on the ranking power of an old page, however, a 301 redirect is the way to go.

    Again, it’s best practice to include an XML sitemap, and including it in the footer links is probably of more use to visitors than to search engines. You should definitely have an XML sitemap, however, and Google do ask you to submit it manually through Webmaster Tools. That link looks a good resource though, I’ll have to give it a read :)

    Piggynap’s last blog post..Earl Fans Did Not Make Up ‘Twitition’

    • May 29, 2009 at 12:34 pm // Reply

      With regards to absolute URLs: there is no ambiguity about where the link goes – the dereferencing of links relative to the current domain is straightforward and well documented – see RFC1808 and the HTML 4.0 Recommendation.

      The web scraping issue is interesting however and not something I had considered. The problem I have with working around that in this manner is that:

      1. I don’t think we should ‘break’ the Internet by using absolute URIs just to circumvent the ‘evil doers’; and
      2. It’s so trivial to circumvent – the webscraping tool/person just needs to replace all occurrences of ‘your domain’ with ‘my domain’ or just remove it.

      This is assuming that ‘scraping’ is regarded as a black hat practise of course – maybe it is acceptable?

  • Pingback: My Favorite Design Links of the Week | eGrace Creative Web Solutions

  • Pingback: Sweet Tweets: Design Tweets of the Week #2 | The Design Cubicle

  • Pingback: 9 SEO Tips You’ve Probably Forgot About

  • June 30, 2009 at 11:49 am // Reply

    Renaming the logo image to something relevant to the site’s content was new to me – I’ll surely apply it.
    Thanks for the tips.

  • July 1, 2009 at 8:35 am // Reply

    Really like this post. The part that I took away is to nofollow links to superfluous pages such as the privacy policy etc.


  • July 3, 2009 at 10:32 pm // Reply

    Very nice blog, greetings from Argentina!!!! [translated from Spanish]

  • Pingback: 9 SEO Tips You’ve Probably Forgot About « test

  • Pingback: 9 SEO Tips You’ve Probably Forgot About | X Design Blog
