Website Optimization Measures, Part II
Published on February 15, 2008 (updated February 5, 2024), filed under Development (RSS feed for all categories).
This and many other posts are also available as a pretty, well-behaved ebook: On Web Development. And speaking of which, here's a short treatise just about managing the quality of websites: The Little Book of Website Quality Control (updated).
Now that we've talked about blog cleanups, structure and element revisions, as well as search engine verification in part I, here are some additional suggestions: small improvements involving .htaccess, SEO, and consistency checks.
-
Sorting .htaccess directives and adding standardized comments. Quick and dirty: I love to be organized, and I discovered some potential within my projects' .htaccess files. I didn't add new stuff, as many useful directives had already been in place, but I went for alphabetical sorting in certain sections, and these sections themselves have been labeled "metaphorically":
# Authentication
## Authentication directives

# Startup Routine
## Various alphabetically sorted directives, e.g.
AddCharset utf-8 .css .html .js .txt .xml
AddDefaultCharset utf-8
CheckSpelling On
ContentDigest On
DefaultLanguage en

# Course Correction
## URL rewrite directives

# Course Correction: P1-P3
## Redirect and RedirectMatch directives

# Emergency
## ErrorDocument directives
-
Getting additional assistance with SEO. Sure, this involves actual optimization as well, but I need to thank John Britsios for helping me with a few severe issues first. The main measure I needed to perform was a robots.txt update that became necessary due to the apparently lousy archive and pagination handling of WordPress: as for the English part of this site, I had about 74% of my pages in the supplemental index (promotion: see more of these tools over at the recently face-lifted UITest.com). Way too much, caused by a lot of automatically generated duplicate content. So John analyzed this site and came up with a few tweaks (a generic sketch follows after these measures), and I'm both curious and confident about the real outcome over the coming weeks and months.
-
Checking and improving UI and code consistency. There have been many improvements here, but I file them under "consistency efforts." The lesson I continuously learn from my QA initiative (with many people pointing out mistakes) applies to checking code as well: no matter how hard you try, some mistakes always sneak in. So checking both CSS and HTML files revealed a few minor issues, like unnecessary references and support for IE 5 in one project (whose extra code I don't carry around anymore).
-
Considering but dropping hidden file extensions. No wonder I dropped this idea, having wasted too much time on mod_rewrite experiments. Okay, that time wasn't truly wasted since I learned a lot, but what I ultimately noticed was that hiding file extensions (and the implications for my personal projects) wasn't worth the effort, and I stopped changing stuff when I suspected this would become a maintenance issue. Just because you can doesn't mean you should.
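To illustrate what these extension-hiding experiments typically involve, here's a minimal, generic mod_rewrite sketch; it's boilerplate for illustration only, not the actual rules I tried:

RewriteEngine On

# Redirect explicit ".html" requests to their extensionless URLs ...
RewriteCond %{THE_REQUEST} \.html[\s?] [NC]
RewriteRule ^(.+)\.html$ /$1 [R=301,L]

# ... and map extensionless URLs back to the underlying ".html" files.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.+)$ $1.html [L]

Rules like these do work, but every special case (directories, query strings, alternative formats) needs additional conditions, and that is exactly the kind of maintenance overhead that made me drop the idea.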
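And regarding the SEO measure above: John's actual robots.txt tweaks aren't reproduced here, but the general idea, assuming typical WordPress archive and paging URLs (the paths below are placeholders), looks roughly like this:

User-agent: *
# Block crawling of automatically generated archive and paging URLs
# (placeholder paths; the real ones depend on the permalink structure):
Disallow: /2007/
Disallow: /2008/
Disallow: /*/page/
# Category pages are deliberately not listed here; they remain
# crawlable as a "path" to the individual posts.

Whether wildcard patterns like these are honored depends on the crawler, which is one more reason to treat this as a sketch rather than a drop-in file.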
Those were a few more refactoring measures. I hope you enjoyed them; I might write about other optimization efforts again soon, for there are still many things to improve. Of course.
This is a part of an open article series. Check out some of the other posts!
About Me
I'm Jens (long: Jens Oliver Meiert), and I'm a frontend engineering leader and tech author/publisher. I've worked as a technical lead for companies like Google and as an engineering manager for companies like Miro, I'm a contributor to several web standards, and I write and review books for O'Reilly and Frontend Dogma.
I love trying things, not only in web development (and engineering management), but also in other areas like philosophy. Here on meiert.com I share some of my experiences and views. (Please be critical, interpret charitably, and give feedback.)
Comments (Closed)
-
On February 18, 2008, 6:17 CET, Lazar said:
Regarding the supplemental index, the way to check it is to compare the number of Google search results for:
site:meiert.com/
with
site:meiert.com/*
That seems to be what mapelli.info is doing. I personally have my doubts about using * for detecting non-supplemental results, as I got some strange results a few times. Since you work for Google now, and I've heard it has amazing transparency among employees of all departments, maybe you can give us a hint about the meaning of * ;o)
Thanks for mentioning UITest.com, it has a really nice collection of links.
-
On February 19, 2008, 10:34 CET, Jens Oliver Meiert said:
site:meiert.com/
with
site:meiert.com/*
That seems to be what mapelli.info is doing.
Right, it appears to do nothing else.
Thanks for mentioning UITest.com, it has a really nice collection of links.
Thank you!
-
On February 20, 2008, 1:16 CET, Bennett said:
I would love to hear about the robots.txt improvements to avoid indexing of automatically generated duplicate content. I suppose the obvious thing is to block all archive pages (categories, months, etc.) so only individual posts are crawlable. Is this what you did?
-
On February 20, 2008, 10:53 CET, Jens Oliver Meiert said:
Bennett, yes, basically. The most important thing was to get rid of all duplicate content generated by WordPress (mostly caused by the archives) while leaving a "path" by allowing access to the categories, and then looking for other instances of duplicate content. For example, there have been a few files available in different formats. It took some time but appears to pay off already. (Not blaming WordPress, not now.)
-
On March 1, 2008, 17:45 CET, Robert said:
I would suggest completely removing Apache directives that aren't either directory-specific or rather volatile over time from .htaccess and dropping them into an appropriate httpd.conf include. These were some of my candidates:
AddCharset utf-8 .css
AddDefaultCharset utf-8
CheckSpelling On
ContentDigest On
DefaultLanguage en
.htaccess parsing costs performance, so why would you add to the cost with settings that fit into a startup configuration item just as well?
-
On March 2, 2008, 17:24 CET, Jens Oliver Meiert said:
Robert, you're absolutely right; for several reasons it can be advisable to disable .htaccess altogether. I just don't have access to my server's httpd.conf file.
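For those who do have that access, a sketch of what Robert suggests, assuming an editable httpd.conf and a placeholder document root, could look like this:

# In httpd.conf (or an included configuration file):
<Directory "/var/www/example.com/htdocs">
  # Directives moved out of .htaccess:
  AddCharset utf-8 .css
  AddDefaultCharset utf-8
  CheckSpelling On
  ContentDigest On
  DefaultLanguage en
  # Skip per-request .htaccess lookups entirely:
  AllowOverride None
</Directory>

With AllowOverride None in place, Apache no longer checks for .htaccess files in that tree on every request, which is where the performance benefit Robert mentions comes from.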
-
On February 24, 2010, 20:08 CET, SEO Process said:
Okay, took a while to analyze the .htaccess sorting and such. I tried implementing it on 3 different sites with different natures, architectures, and rewriting techniques. In my experience, coming up with generic rules using wildcards (e.g. '*') could be more helpful. Using wildcards, you can apply almost the same rules to as many sites as you want, and every time you come back for administration, you don't need to recall the page structures.
So in my case, being generic with .htaccess directives could be more helpful in optimizing a website, whether for SEO or for webmaster activities.
Makes sense?
-
On March 12, 2010, 12:47 CET, Linda Jobs said:
Could I ask for assistance in preventing duplicate content from being indexed, using the .htaccess method you explained above? I feel that's the only thing not explained well in the article; otherwise it's great stuff.
Many thanks in advance for your help!
Read More
Maybe of interest to you, too:
- Next: "helvetica, arial", Not "arial, helvetica"
- Previous: Website Optimization Measures, Part I
- More under Development
- More from 2008
- Most popular posts
Looking for a way to comment? Comments have been disabled, unfortunately.
Get a good look at web development? Try WebGlossary.info and The Web Development Glossary 3K. With explanations and definitions for thousands of terms of web development, web design, and related fields, building on Wikipedia as well as MDN Web Docs. Available at Apple Books, Kobo, Google Play Books, and Leanpub.