Website Optimization Measures, Part II
Post from February 15, 2008 (↻ August 23, 2018), filed under Web Development.
This and many other posts are also available as a pretty, well-behaved e-book: On Web Development. And speaking of which, here’s a short treatise just about managing the quality of websites: The Little Book of Website Quality Control.
Now that we’ve talked about blog clean-ups, structure and element revisions, as well as search engine verification in part I, here are some additional suggestions—small measures for improvement consisting of .htaccess stuff, SEO, and consistency checks.
Sorting .htaccess directives and adding standardized comments. Quick and dirty: I love to be organized, and I discovered some potential within my projects’ .htaccess files. I didn’t really add new stuff, as many useful directives were already in place, but I went for alphabetical sorting in certain sections, and labeled these sections quite “metaphorically”:
# Authentication
## Authentication directives

# Startup Routine
## Various alphabetically sorted directives, e.g.
AddCharset utf-8 .css
AddDefaultCharset utf-8
CheckSpelling On
ContentDigest On
DefaultLanguage en

# Course Correction
## URL rewrite directives

# Course Correction: P1-P3
## Redirect and RedirectMatch directives

# Emergency
## ErrorDocument directives
Getting additional assistance with SEO. Sure, this involves actual optimization as well, but first I need to thank John Britsios for helping me with a few severe issues. The main measure I had to take was a robots.txt update, made necessary by WordPress’s apparently lousy archive and pagination handling: for the English part of this site alone, about 74% of my pages sat in the supplemental index (promotion: see more of these tools over at the recently face-lifted UITest.com). Way too much, caused by a lot of automatically generated duplicate content. So John analyzed this site and came up with a few solutions, and I’m both confident of and curious about the real outcome over the coming weeks and months. Thoughts I had about dates in URLs didn’t really matter yet.
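The exact rules aren’t spelled out here, but as a rough sketch—assuming standard WordPress URL patterns rather than this site’s actual ones—keeping the auto-generated archive views out of the index might look like this in robots.txt:

User-agent: *
# Block auto-generated archive and pagination views (duplicate content);
# assumes individual posts do not live under these paths
Disallow: /category/
Disallow: /tag/
Disallow: /page/
Disallow: /feed/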
Checking and improving UI and code consistency. There have been many other improvements, but I’ll file them under “consistency efforts.” The lesson I continuously learn from my QA initiative (with quite a few people pointing out mistakes) applies to checking code as well: no matter how hard you try, some mistakes always slip through. Checking both CSS and HTML files revealed a few, though minor, issues—be it unnecessary references or even leftover support for IE 5 in one project (extra code I just don’t carry around anymore).
Considering but dropping hidden file extensions. No wonder I dropped this idea: I wasted too much time on mod_rewrite experiments. Okay, that time wasn’t really wasted since I learned a lot, but what I ultimately noticed was that hiding file extensions (and the implications for my personal projects) wasn’t really worth the effort, and I stopped changing things when I began to suspect this would become a maintenance issue. Just because you can doesn’t mean you should.
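For the record, the kind of rule such experiments revolve around—a minimal sketch assuming .html files, not my actual configuration:

RewriteEngine On
# Serve /foo.html when /foo is requested and the file exists on disk
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^([^.]+)$ $1.html [L]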
So much for a few more refactoring measures. I hope you enjoyed them; I might write about other optimization efforts again soon, for there are still many things to improve. Of course.
About the Author
Jens Oliver Meiert is a technical lead and author (sum.cumo, W3C, O’Reilly). He loves trying things, including in the realms of philosophy, art, and adventure. Here on meiert.com he shares and generalizes and exaggerates some of his thoughts and experiences.
If you have any thoughts or questions (or recommendations) about what he writes, leave a comment or a message.
Regarding the supplemental index, the way to check it is to compare in Google the number of search results of site:example.com (all indexed pages) with that of site:example.com/* (main index only).
That seems to be what mapelli.info is doing. I personally have my doubts about using * for detecting non-supplemental results, as I got some strange results a few times. Since you work for Google now, and I’ve heard it has amazing transparency among employees of all departments, you can give us a hint about the meaning of * ;o)
Thanks for mentioning UITest.com; it has a really nice collection of links.
I would love to hear about the robots.txt improvements to avoid indexing of automatically generated duplicate content. I suppose the obvious thing is to block all archive pages (categories, months, etc.) so only individual posts are crawlable. Is this what you did?
I would suggest completely removing Apache directives that aren’t either directory-specific or rather volatile over time from .htaccess, and dropping them into an appropriate httpd.conf include. These were some of my candidates:
AddCharset utf-8 .css
AddDefaultCharset utf-8
CheckSpelling On
ContentDigest On
DefaultLanguage en
.htaccess parsing costs performance, so why add to the cost with settings that fit into the startup configuration just as well?
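For example, a sketch with hypothetical paths: move the directives into the server configuration and switch .htaccess parsing off entirely:

# In httpd.conf or an included file:
<Directory "/var/www/example.com">
    # No .htaccess lookups for this tree
    AllowOverride None
    AddCharset utf-8 .css
    AddDefaultCharset utf-8
    # CheckSpelling requires mod_speling to be loaded
    CheckSpelling On
    ContentDigest On
    DefaultLanguage en
</Directory>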
On February 24, 2010, 20:08 CET, SEO Process said:
Okay, it took a while to analyze the .htaccess sorting and related measures. I tried implementing it on 3 different sites with different natures, architectures, and rewriting techniques. In my experience, coming up with generic rules using wildcards (e.g., “*”) could be more helpful. Using wildcards, you can apply almost the same setup to as many sites as you want, and every time you come back for administration, you don’t need to recall the page structures.
So in my case, being generic with .htaccess directives could be more helpful in optimizing a website, whether for SEO or for webmaster tasks.
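For example, a single wildcard rule (hypothetical paths) can replace a whole batch of page-specific redirects:

# One generic pattern instead of one Redirect per page
RedirectMatch 301 ^/old-section/(.*)$ http://example.com/new-section/$1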
On March 12, 2010, 12:47 CET, Linda Jobs said:
Could I ask for assistance in preventing duplicate content from being indexed, using the .htaccess method you explained above? I feel that’s the only thing not explained well in your article; otherwise it’s great stuff.
Many thanks in advance for your help!