Website Optimization Measures, Part II

Published on February 15, 2008 (↻ February 5, 2024).

This and many other posts are also available as a pretty, well-behaved ebook: On Web Development. And speaking of which, here’s a short treatise just about managing the quality of websites: The Little Book of Website Quality Control (updated).

Now that we’ve talked about blog cleanups, structure and element revisions, as well as search engine verification in part I, here are some additional suggestions: small improvements covering .htaccess, SEO, and consistency checks.

Those were a few more refactoring measures. I hope you enjoyed them; I might write about other optimization efforts again soon, for there are still many things to improve. Of course.

This is part of an open article series. Check out some of the other posts!

Comments (Closed)

  1. On February 18, 2008, 6:17 CET, Lazar said:

    Regarding the supplemental index, the way to check it is to compare, in Google, the number of search results for:

    site:meiert.com/

    with

    site:meiert.com/*

    That seems to be what mapelli.info is doing. I personally have my doubts about using * for detecting non-supplemental results, as I got some strange results a few times. Since you work for Google now, and I’ve heard it has amazing transparency among employees of all departments, you can give us a hint about the meaning of * ;o)

    Thanks for mentioning UITest.com; it has a really nice collection of links.

  2. On February 19, 2008, 10:34 CET, Jens Oliver Meiert said:

    “site:meiert.com/

    with

    site:meiert.com/*

    That seems to be what mapelli.info is doing.”

    Right, it appears to do nothing else 😊

    “Thanks for mentioning UITest.com; it has a really nice collection of links.”

    Thank you!

  3. On February 20, 2008, 1:16 CET, Bennett said:

    I would love to hear about the robots.txt improvements to avoid indexing of automatically generated duplicate content. I suppose the obvious thing is to block all archive pages (categories, months, etc.) so only individual posts are crawlable. Is this what you did?

  4. On February 20, 2008, 10:53 CET, Jens Oliver Meiert said:

    Bennett, yes, basically. The most important thing was to get rid of all the duplicate content generated by WordPress (mostly caused by the archives) while still leaving a “path” (by allowing access to the categories), and then to look for other instances of duplicate content. For example, a few files had been available in different formats. It took some time but appears to pay off already. (Not blaming WordPress, not now.)
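    I won’t reproduce the exact file here, but a minimal sketch of the idea, with placeholder paths rather than my real rules, looks roughly like this:

    User-agent: *
    # Archive views that merely duplicate the posts (placeholder paths)
    Disallow: /author/
    Disallow: /tag/
    Disallow: /feed/
    # Categories remain crawlable as a “path” to the individual posts
    Allow: /category/

    (Allow and wildcard rules are extensions that not every crawler honors; the safest rules are plain path prefixes.)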

  5. On March 1, 2008, 17:45 CET, Robert said:

    I would suggest removing Apache directives that are neither directory-specific nor particularly volatile over time from .htaccess and dropping them into an appropriate httpd.conf include. These were some of my candidates:

    AddCharset utf-8 .css
    AddDefaultCharset utf-8
    CheckSpelling On
    ContentDigest On
    DefaultLanguage en
    

    .htaccess parsing costs performance, so why add to that cost with settings that fit into the startup configuration just as well?
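    For example, a minimal sketch of such an include (file name, paths, and the Include line are placeholders), pulled in once at server startup, could look like this:

    # conf.d/site-defaults.conf, loaded from httpd.conf via “Include conf.d/site-defaults.conf”
    AddCharset utf-8 .css
    AddDefaultCharset utf-8
    CheckSpelling On
    # (CheckSpelling requires mod_speling to be loaded)
    ContentDigest On
    DefaultLanguage en

    # With everything moved, .htaccess lookups can be switched off entirely
    <Directory "/var/www/example">
        AllowOverride None
    </Directory>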

  6. On March 2, 2008, 17:24 CET, Jens Oliver Meiert said:

    Robert, you’re absolutely right; for several reasons it can be advisable to disable .htaccess altogether. I just don’t have access to my server’s httpd.conf file.

  7. On February 24, 2010, 20:08 CET, SEO Process said:

    Okay, it took a while to analyze the .htaccess sorting and related details. I tried implementing it on 3 different sites with different natures, architectures, and rewriting techniques. In my experience, coming up with generic rules using wildcards (e.g. ‘*’) could be more helpful. With wildcards, you can apply almost the same rules to as many sites as you want, and every time you come back for administration, you don’t need to recall each site’s page structure.

    So in my case, keeping the .htaccess directives generic could be more helpful in optimizing a website, whether for SEO or for webmaster activities.

    Makes sense?
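    For instance, a host-canonicalization rule like the following is completely generic (just an illustration, not anyone’s actual configuration) and needs no knowledge of a site’s page structure:

    # Requires mod_rewrite: send any “www” request to the bare domain, keeping the path
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
    RewriteRule ^(.*)$ http://%1/$1 [R=301,L]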

  8. On March 12, 2010, 12:47 CET, Linda Jobs said:

    Could I ask for assistance in preventing duplicate content from being indexed, using the .htaccess method you explained above? I feel that’s the only thing not explained well in your article; otherwise it’s great stuff.

    Many thanks in advance for your help!