Website Optimization Measures, Part II
This and many other posts are also available as a pretty, well-behaved ebook: On Web Development. And speaking of which, here’s a short treatise just about managing the quality of websites: The Little Book of Website Quality Control (updated).
Now that we've talked about blog cleanups, structure and element revisions, as well as search engine verification in part I, here are some additional suggestions, small options for improvement consisting of .htaccess stuff, SEO, and consistency checks.
Sorting .htaccess directives and adding standardized comments. Quick and dirty: I love to be organized, and I discovered some potential within my projects’ .htaccess files. I didn’t add new stuff as many useful directives had already been in place, but I went for alphabetical sorting in certain sections, and these sections themselves have been labeled “metaphorically”:
# Authentication
## Authentication directives

# Startup Routine
## Various alphabetically sorted directives, e.g.
AddCharset utf-8 .css .html .js .txt .xml
AddDefaultCharset utf-8
CheckSpelling On
ContentDigest On
DefaultLanguage en

# Course Correction
## URL rewrite directives

# Course Correction: P1-P3
## Redirect and RedirectMatch directives

# Emergency
## ErrorDocument directives
Getting additional assistance with SEO. Sure, this involves actual optimization as well, but first I need to thank John Britsios for helping me with a few severe issues. The main measure I needed to perform was a robots.txt update, which had become necessary due to the apparently lousy archive and pagination handling of WordPress; as for the English part of this site, about 74% of my pages were in the supplemental index (promotion: see more of these tools over at the recently face-lifted UITest.com). Way too much, caused by a lot of automatically generated duplicate content. So John analyzed this site and came up with a few tweaks, and I'm both curious and confident about the real outcome over the next weeks and months.
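The post doesn't list the actual tweaks, so as a rough sketch only: assuming default WordPress URL structures (the paths below are assumptions, not the rules John wrote), robots.txt directives along these lines are a common way to keep auto-generated archive and pagination pages out of the index while leaving individual posts crawlable:

```
User-agent: *
# Hypothetical rules for default WordPress URL structures:
Disallow: /category/
Disallow: /tag/
Disallow: /author/
Disallow: /feed/
# Paginated archive pages; note that "*" in Disallow paths is a
# nonstandard extension, supported by Googlebot but not all crawlers:
Disallow: /*/page/
```

Whether date-based archives need blocking too depends on the permalink structure in use; the general idea is that each piece of content should be reachable at exactly one indexable URL.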
Checking and improving UI and code consistency. There have been many improvements here, which I file under “consistency efforts.” The lesson I keep learning from my QA initiative (with many people pointing out mistakes) applies to checking code as well: no matter how hard you try, some mistakes always sneak in. Checking both CSS and HTML files revealed a few minor issues, like unnecessary references and leftover support for IE 5 in one project (extra code I don’t carry around anymore).
Considering but dropping hidden file extensions. No wonder I dropped this idea, having wasted too much time with mod_rewrite experiments. Okay, that time wasn’t truly wasted since I learned a lot, but what I ultimately noticed was that hiding file extensions (and the implications for my personal projects) wasn’t worth the effort, and I stopped changing stuff when I suspected this to become a maintenance issue. Just because you can doesn’t mean you should.
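For readers curious what those mod_rewrite experiments typically look like: the post doesn't show the actual rules, but a minimal sketch of the usual approach to hiding .html extensions (the conditions and rules below are assumptions, not the configuration that was tested here) would be:

```apache
# Hypothetical sketch: serve /about from about.html
RewriteEngine On
# Only rewrite if no directory of that name exists
RewriteCond %{REQUEST_FILENAME} !-d
# ...and a matching .html file does exist
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.+)$ $1.html [L]

# Optionally redirect explicit .html requests to the extensionless URL:
RewriteCond %{THE_REQUEST} \s/([^.]+)\.html[\s?]
RewriteRule ^ /%1 [R=301,L]
```

The maintenance issue alluded to above is real: every rule like this is one more thing that can break on a server move or interact badly with other rewrites, which is a fair reason to drop the idea.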
Those were a few more refactoring measures. I hope you enjoyed them; I might write about other optimization efforts again soon, for there are still many things to improve. Of course.
This is part of an open article series. Check out some of the other posts!
I’m Jens, and I’m an engineering lead and author. I’ve worked as a technical lead for companies like Google, I’m close to W3C and WHATWG, and I write and review books for O’Reilly and Frontend Dogma. I love trying things, not only in web development, but also in other areas like philosophy. Here on meiert.com I share some of my views and experiences.
If you have a question or suggestion about what I write, please leave a comment (if available) or a message. Thank you!
Regarding the supplemental index, the way to check it is to compare the number of Google search results for:
That seems to be what mapelli.info is doing. I personally have my doubts about using * for detecting non-supplemental results, as I got some strange results a few times. Since you work for Google now, and I’ve heard there’s amazing transparency among employees across all departments, maybe you can give us a hint about the meaning of * ;o)
Thanks for mentioning UITest.com, it has a really nice collection of links.
I would love to hear about the robots.txt improvements to avoid indexing of automatically generated duplicate content. I suppose the obvious thing is to block all archive pages (categories, months, etc.) so only individual posts are crawlable. Is this what you did?
I would suggest completely removing Apache directives which aren’t either directory-specific or rather volatile over time from .htaccess, and dropping them into an appropriate httpd.conf include. These were some of my candidates:
AddCharset utf-8 .css
AddDefaultCharset utf-8
CheckSpelling On
ContentDigest On
DefaultLanguage en
.htaccess parsing costs performance, so why add to that cost with settings that fit into the startup configuration just as well?
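The commenter's suggestion can be sketched roughly as follows; the directory path and include file name are hypothetical, and CheckSpelling additionally requires mod_speling to be loaded:

```apache
# In httpd.conf (assumes you control the server configuration):
Include conf/extra/site-defaults.conf

# conf/extra/site-defaults.conf — directives moved out of .htaccess:
<Directory "/var/www/example">
    AddCharset utf-8 .css .html .js .txt .xml
    AddDefaultCharset utf-8
    ContentDigest On
    DefaultLanguage en
    # With the directives here, per-request .htaccess lookups
    # can be disabled entirely:
    AllowOverride None
</Directory>
```

Setting AllowOverride None is what actually buys the performance: Apache then skips checking for .htaccess files in that directory tree on every request.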
On February 24, 2010, 20:08 CET, SEO Process said:
Okay, it took a while to analyze the .htaccess sorting and related ideas. I tried implementing it on 3 different sites with different natures, architectures, and rewriting techniques. In my experience, generic rules using wildcards (e.g., ‘*’) could be more helpful. With wildcards, you can apply almost the same rules to as many sites as you want, and every time you come back for administration, you don’t need to recall the page structures.
So in my case, generic .htaccess directives could be more helpful when optimizing a website, whether for SEO or for webmaster tasks.
On March 12, 2010, 12:47 CET, Linda Jobs said:
Could I ask for assistance in preventing duplicate content from being indexed using the .htaccess method you explained above? I feel that’s the only thing not explained well in your article; otherwise, it’s great stuff.
Many thanks in advance for your help!