Web Standards

Don’t do this: Initially visible ‘loading’ images

October 10th, 2010

A common pattern on interactive web sites is to include a ‘loading’ image (sometimes called a ‘throbber’) and display it while an asynchronous action is taking place. This is usually implemented by placing an animated GIF inside a div that is initially hidden, then displayed by JavaScript when the action begins.

However, bad things can happen when you do not initially hide the loading image using CSS. I’m an advocate of using NoScript to block JavaScript execution on untrusted sites, and I found a site with a throbber spinning in my face while I was trying to read a blog post. Because NoScript blocked the JavaScript, the initialization code that hides the loading image never executed, and the annoying circular ‘spinning wheel’ made me move on without reading the post.

The way it should work:

  1. Set the initial style of the loading image <div> to include display: none;
  2. When the action starts, modify the <div>’s style to display: block;
  3. When the action ends, return to display: none;
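
Here’s a minimal sketch of those three steps (the element id, function names and image path are mine, for illustration):

<div id="loading" style="display: none;">
  <img src="loading.gif" alt="Loading...">
</div>

<script type="text/javascript">
// Step 2: when the asynchronous action begins, show the throbber.
function actionStarted() {
  document.getElementById("loading").style.display = "block";
}
// Step 3: when the action completes, hide it again.
function actionEnded() {
  document.getElementById("loading").style.display = "none";
}
</script>

Because the image starts out hidden by plain CSS, a visitor with JavaScript disabled never sees it at all.
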
Methodology

Code coverage metrics

August 8th, 2010

Randy Johnson (The Big Unit)

A common evil that often rides along with unit testing is the unit testing metric. These metrics come in many flavors, including percent code coverage, number of tests and percentage of tests passing. Like any statistic, there is potential for misuse and misinterpretation, which is the point of a post on Javalobby titled “Effective Code Coverage (and Metrics in General)”.

Management mandates to have X% of code coverage.
… Upper management, without a clue about the health of the codebase, mandates that developers should reach X% of code coverage. Period. … In many cases, when the codebase was not designed with testability in mind, I’ve seen developers writing a lot of test cases, without any assertions! All the tests pass, and the code coverage goal is met.

This seems like a pretty easy problem to solve. Review the unit tests. Ensure that there are actual assertions (i.e. tests) going on inside each test method. Problem solved.
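
To illustrate (JUnit 4, with a hypothetical Invoice class; none of this is from the quoted post), the difference is easy to spot in review:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceTest {

    // Executes the code (and earns the coverage credit) but verifies nothing.
    @Test
    public void testCalculateTotal() {
        new Invoice().calculateTotal();
    }

    // A real test: it makes an actual claim about the result.
    @Test
    public void calculateTotalSumsLineItems() {
        Invoice invoice = new Invoice();
        invoice.addLineItem("widget", 2, 5.00);
        assertEquals(10.00, invoice.calculateTotal(), 0.001);
    }
}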

Unit tests are code. Just as code should be reviewed, both by peers and architects, so should unit tests.

I have never seen this happen.  However, if I ever do encounter it, here’s what I would do (in this specific order):

  1. Re-examine the company’s culture of mandating unit testing, and determine what it was that would cause otherwise reputable developers (you don’t hire anything but, right?) to go to such lengths to play the system.
  2. If you can’t find anything wrong with the company’s culture, look harder. A good, honest developer will not do this unless pushed to the limit. Are arbitrary and unrealistic deadlines being set? Do developers have the tools they need to write the tests? How about training? The company, from the top executive down, must embrace and support developer-driven testing.

What I have seen, however, are projects with a lack of unit tests. This is almost always due to technical leadership (architects and team leads) failing to emphasize the importance of unit testing, or the aforementioned lack of executive sponsorship.

Web Standards

Penny-wise, pound-foolish development decisions

August 7th, 2010

Recently, I was reading “Reductionism in Web Design”, a generally good “less is more” article that touched on reducing content, code and design down to their minimums without sacrificing quality.

However, there was one guideline in the article about code reduction that made me think twice:

Try to write code natively before using an abstracted layer (like MooTools or jQuery)

This really depends upon what it is you’re trying to reduce: the total size of the application code, or the size of YOUR code? The whole point of code frameworks is to let you do more while writing less. Since you spend less time reinventing the wheel, you can finish coding more quickly, and with higher quality (since you’re building on somebody else’s tested code).

This is particularly true with jQuery, especially when you apply multiple functions on a set of elements. Since jQuery functions return a jQuery object, you can “chain” multiple function calls, which looks something like this:

$(".classname").function1().function2().function3();

In order to do this without jQuery, you’d need to find each element with the class ‘classname’, then call each function on it in turn. With jQuery, you only need to write one line of code, but your total application size increases by the size of the fully cross-browser-tested jQuery library (a whopping 24 kilobytes). Done by hand, it takes many more lines of code, the application is smaller, and the entire iterative process for calling the multiple functions requires you to expend additional effort testing across multiple browsers.
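
For comparison, a hand-rolled version might look like the sketch below (function1 through function3 stand in for whatever the chained calls did). Note that getElementsByClassName wasn’t supported by older versions of Internet Explorer, which is exactly the kind of cross-browser burden mentioned above:

// Find every element with the class, then apply each step to it yourself.
var elements = document.getElementsByClassName("classname");
for (var i = 0; i < elements.length; i++) {
    function1(elements[i]);
    function2(elements[i]);
    function3(elements[i]);
}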

This may be an application of the penny-wise, pound-foolish pattern: writing code natively may make your total application size smaller, but at the cost of maintainability and testing.

Web Standards

Firefox 4 beta 1: A very quick first look

July 22nd, 2010

There’s not much wrong with the first beta release of Firefox 4. It performs well, never crashed during several days of use, and, thanks to changes in the menu bar, is much better at using available screen real estate. Unfortunately, it’s not an everyday-use browser. Yet.

It’s the lack of compatible extensions that keeps this from being an everyday-use browser. It has become very apparent just how much I rely on extensions in my day-to-day use of Firefox. Between Firebug, Greasemonkey, Read It Later and Xmarks, I can’t do much more than review how my sites look and perform in the new browser. The good news is that NoScript and AdBlock Plus *are* available now, so at least you’re not browsing unprotected.

Most of the changes just take a bit of getting used to. Finding where the menu bar went and how to get there was a bit of a challenge, but, frankly, there aren’t many daily-use things in the menu bar that are not in the Firefox drop down menu.

During the time I used beta 1, I found only one rendering glitch, and that was with the pan control in the aerial view feature of Google Maps. There appeared to be a ‘ghost’ control behind the main control, and as a result, I couldn’t move to a westerly view. Closing the browser and starting over seemed to take care of the problem, though.

It appears there are new features still on the horizon; the extensions page says to ‘watch for something new’. And while eliminating dialog boxes was a focus of this release, several still remain, including the error console; the extensions page is one place where a dialog box has already gone away. Expect to see more of that in upcoming beta releases.

Database

Implementing an intersect in MySQL

July 19th, 2010

For those of you looking for information on the TV show “Chuck”, this is not for you. If you don’t understand what that means, then this might be for you. Standard SQL provides a construct called union that combines the results of two queries, acting as a logical OR across two or more datasets. I recently found myself needing similar functionality, but with an AND operation. SQL Server provides a construct called intersect that does just that, but MySQL, which I was using, doesn’t implement it, so that didn’t help me. I did find a way to get the data I needed, however, using a combination of grouping and the having clause.

Here’s the problem: I had two tables of data forming a many-to-many relationship (a map table was the third table). The first table contained generic data, and the second contained free-form meta information about the records in the first. For the purpose of illustration, imagine a set of records with a name, address and astrological sign; each of those records could have one or more free-form meta fields attached to it.
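
In rough DDL terms, that might look something like the following (column names beyond the ones used in the queries below are illustrative guesses):

create table a (id int primary key, name varchar(50), address varchar(100), sign varchar(20));
create table b (id int primary key, meta varchar(50));
create table abmap (a_id int, b_id int);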

My assignment was to select all records in table a that matched all of the free-form meta fields submitted from a user.  At first this seemed simple:

select * from a, b, abmap where a.id = abmap.a_id and abmap.b_id = b.id and b.meta in ('meta1', 'meta2', 'meta3');

Unfortunately, this doesn’t implement an AND; it also includes records from table a that have only one or two of the requested meta values, alongside those that have all three. I was looking for something like:

select all records that have meta=meta1 AND meta=meta2 AND meta=meta3

What I needed to do was group on a field I knew was unique in table a (in the example case, ‘name’), then use the having clause to count the number of rows for each name in the result set. If the number of returned rows for a given name matches the number of meta values provided by the user, the record is a match.

select a.* from a, b, abmap where a.id = abmap.a_id and abmap.b_id = b.id and b.meta in ('meta1','meta2','meta3') group by a.name having count(a.name) = 3;

The count function normally returns the number of rows in the query, but since the query is grouped by the name column, count returns the number of rows in each group. Since the user specified three different meta values, I wanted to find groups with exactly three records. With this query, you get all records from table a that have the values ‘meta1’, ‘meta2’ and ‘meta3’ attached through abmap; no more, no less.
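
One caveat: this counts rows, so it assumes each (a_id, b_id) pair appears in abmap only once. If duplicates were possible, counting distinct meta values would be the safer variant:

select a.* from a, b, abmap where a.id = abmap.a_id and abmap.b_id = b.id and b.meta in ('meta1','meta2','meta3') group by a.name having count(distinct b.meta) = 3;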

Tools

Finding the error console in Firefox 4 beta…and more!

July 7th, 2010

The Firefox 4 beta is out, and I’ve run it through the gamut of applications that I’ve written or am somewhat responsible for (good news: everything works!), but I ran into a bit of a problem trying to find the JavaScript (or error) console. The reason is that Firefox 4 no longer has a menu bar by default; instead, there’s a single Firefox drop down menu in the upper left of the window. I like this, because it gives more room for what you’re on the Internet for: content. Since the advent of tabs and toolbars, content has continually had its real estate stolen, so it’s good to see some content-area reclamation taking place.

But without the menu bar, there’s no (visible) way of getting to the JavaScript console. However, it’s still possible to get the menu bar to appear temporarily by pressing the alt key; the tab area pushes down, and exposes a menu bar near the top of the screen. Once the menu bar appears, you can go to Tools -> Error Console to get your JavaScript debug on. While looking at the Tools menu, remember that the key combination Ctrl-Shift-J will bring up the console directly.

If you want the menu bar back permanently, go to the Firefox drop down menu, select Customize, then check Menu Bar, and the menu bar stays, leaving the JavaScript console at your beck and call.

BUT WAIT!

The Error Console is old news. It’s soooo 2008. Now all the cool kids are using the Heads-Up Display, which, from what I’ve seen, is the Error Console on steroids.  There are more types of events to filter, including DOM mutation (which is great for AJAX debugging). Check it out.

Languages

How to fix the font in the MySQL Workbench editor

July 6th, 2010

Just ran into an annoying little problem with version 5.1.12 of MySQL Workbench: the font used in the SQL editor is really small, and there’s no way to change it through the UI. The preferences dialog shows you what the font is, but even though you can type in the text field, it appears to be read-only.

To fix it, go old school and edit the configuration file.  On Linux, it’s in your home directory in .mysql/workbench/wb_options.txt. Be sure to quit MySQL Workbench before editing the config file to avoid having your work overwritten.

Once you’ve opened the file, look for the key workbench.general.Editor:Font, and increase the font size to something usable.  I chose 11, but some may still find that too small.
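
For reference, the entry should look something like the line below; the font name will vary by system, and I make no promises about the exact delimiter:

workbench.general.Editor:Font = Bitstream Vera Sans Mono 11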

Save the file, then start MySQL Workbench. You should now be able to read what you’re writing in the editor.

Hardware and Servers

Using Nautilus as a super user in Ubuntu 10.04

June 9th, 2010

Of course, regardless of the amount of time you spend writing something, the instant you finish it and make it available for public viewing, you’ll find a problem. Or three. My last post, “Ubuntu 10.04: A first look”, contained a few issues I didn’t see until after hitting the publish button. And don’t get me started about the typo in the freakin’ title.

When mentioning the new features with GNOME’s file manager, Nautilus, I wrote:

I only wish it allowed you to sudo from within the application.

So, it turns out you can, just not with sudo. The trick is gksudo, which is basically sudo for GUI applications. You get a root-privileged file browser by executing the following in a terminal:

gksudo nautilus

Granted, it’s a pain to start a terminal every time you want to browse as root, but there’s a solution for that too. In the System -> Preferences menu, there is an application called ‘Main Menu’ that allows you to modify, or add items to, the system menu.

New Menu Item button

The Main Menu window

After starting Main Menu, you’ll see the window above. Click the ‘New Item’ button to open the properties window shown below, and create the menu item, which I decided to call ‘sudo nautilus’.

New Menu Item properties

The New Menu Item properties window

I chose to put the menu in the ‘Accessories’ menu, but you can put it in any other menu by selecting the menu of your choice prior to clicking the ‘New Item’ button.

Hardware and Servers

Ubuntu 10.04: A first look

June 8th, 2010

What I’m looking for

Ubuntu’s recent focus has been on usability as an everyday desktop environment, even for non-techies like ol’ mom and pop. There have been some interesting additions to meet this goal, such as a music store and embedded social networking, but that’s not what I’m looking for.  I write web applications using a variety of programming languages and tools, and I need a platform that will boost my productivity. I need good, fast, and (preferably) free tools to help me do my job. I do not need expensive, resource-intensive tools that crash at the drop of a hat.

Why Linux?

Linux is the most popular web server operating system, and runs on six of the top 10 hosting services. Most importantly, though, it’s what’s used where I work (and has been for the last 10 years, across three different employers). I find it easier to automate tasks on Linux than on Windows, and that’s essential for productivity gains. The Windows command prompt feels like an add-on, while a terminal on Linux feels like it’s truly a part of the OS (because it is). Ever try to select multiple lines in a Windows command prompt? The block-level selection drives me nuts. In a terminal window, it works just as expected.

Another benefit I’ve found is the ‘centralized repository’ updating scheme. When you need to do software updates on a Linux box, it’s usually straightforward: run the system updater, get a list of things that need updating, select what you want, and update. It’s also usually possible to do this for all of your installed software (with a few exceptions)! It’s kind of like the oft-maligned Apple App Store for the iPhone: a one-stop shop for all your applications. The Linux updaters, though, diverge from the App Store analogy when it comes to third-party apps: they’re not a problem, because the system is completely open.
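
On Ubuntu, for example, updating everything installed through the package system boils down to two commands (or a few clicks in the graphical Update Manager):

sudo apt-get update    # refresh the package lists from the repositories
sudo apt-get upgrade   # list the available updates, then apply the ones you approve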

Why Ubuntu?

This came down to just “it’s what’s being used at work.”  I’ll admit that I’m not an Ubuntu fan; prior to my current job, all Linux OSs that I’ve used were Red Hat or Fedora distributions, and I am biased toward those distros.  Because of my lack of experience with Ubuntu, and my need to become familiar with it to comply with my job requirements, I’m taking a good, long, hard look at Ubuntu with this new 10.04 LTS release.

The 10.04 release is what Ubuntu refers to as a “Long Term Support” release, or LTS. Normally, Ubuntu releases are supported for 18 months, meaning security updates and fixes are published for 18 months after the initial release. An LTS release is supported for three years on the desktop (five years for the server version). This longer window means you don’t have to worry as often about upgrading because of end-of-life issues. Fedora does not directly offer a free long-term release (that’s what its big brother, Red Hat Enterprise Linux, does, but for a fee); all Fedora releases are supported for only 13 months.

Let me give you an example of why a long-term support release is important to me. I was setting up a new home server last year, and started the process of installing Fedora 11. I knew that Fedora 12 was only a month off, but I needed a server then, not a month later, so I went ahead with the Fedora 11 install. Fedora 11 being already five months into its 13-month lifecycle, I knew that meant only eight months of use, max, before needing a major upgrade, with all the configuration headaches associated with it. A couple of weeks ago, knowing that Fedora 13 was approaching, I decided to upgrade from Fedora 11 to 12 in the hope that doing so would make the eventual upgrade to Fedora 13 easier. The upgrade itself went fine. Until I started using the server. Version mismatches caused httpd and subversion to fail to start up. So, instead of committing some code I’d just worked on, I spent 30 minutes tracking down the problem, and found that I needed to install a newer mod_ssl that didn’t come across in the upgrade. I don’t relish the thought of breaking my development infrastructure every six months, so I’m really interested in how this three-year support cycle works out.

What’s with the names?

Ubuntu likes to give its releases alliterative animal names. This 10.04 release is usually referred to as Lucid Lynx (not Limping Lion or Lethargic Lemur, tempting as those are). I think the names are silly (as if you couldn’t already tell), and just refer to the release’s version number. So whenever you see Lucid Lynx, think ‘10.04’ (or April 2010) release, and we’ll get along just fine.

Installation

The Ubuntu installer is a very simple ‘live boot’ CD that provides only the minimal set of packages; you’re forced into a ‘lowest common denominator’ installation. You can’t choose any extras until after the installation has completed. It would be nice to have a pre-configured ‘developer’ installation with common development tools. The “Ubuntu Software Center” helps a bit, with categorized packages, including a Development Tools package.
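
The package managers soften the blow, at least; pulling in the basic compiler toolchain afterward is a one-liner (my usual first step, not an official ‘developer’ profile):

sudo apt-get install build-essential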

Dude, where’s my keyboard?

It wasn’t long after installation that the first problem appeared, and it was a doozy: the keyboard didn’t work in the login screen.  I could mouse to my user name, click it, and then get the password prompt, but I could not get the keyboard to work. I was hoping it was just a keyboard driver issue, but changing the keyboard settings on the login page didn’t help.

Universal access preferences

The universal access preferences menu on the login page.

It was at that point I noticed the accessibility icon, and that it had an on-screen keyboard feature. I turned the on-screen keyboard on, but nothing happened. Or so I thought. After a closer inspection, I saw a small speck (which could have been mistaken for a dead pixel or smudge) that turned out to be the on-screen keyboard. For whatever reason, its default size was one pixel wide. I was able to drag it larger, and then used the on-screen keyboard to log in just fine. After logging in, the real keyboard worked fine. I thought the problem was fixed; however, on the next reboot, the real keyboard was dead again.

The missing keyboard

There's a keyboard in the upper left. Really.

After a bit of research, I found an online posting about a default setup issue with the keyboard. The problem is in the /etc/default/console-setup file, due to an errant XKBVARIANT setting. Once I made the change mentioned in the post, the keyboard worked fine with the login screen. This problem was initially encountered in the Beta 2 release, but was still present in the initial final release. I’ve noticed a recent installation (about two weeks after the final release) does not exhibit the problem. Also, to clarify, all of these problems occurred when running Ubuntu in VMware Player. I have not done a clean native install; the one standalone Ubuntu box I have was upgraded from 9.10 to 10.04 via the Synaptic Package Manager.

New themes

A new default theme arrives with Ubuntu 10.04, with some significant changes from prior versions. One of the biggest (and most annoying) is that the window action icons have moved from the upper right corner to the upper left. This was almost a dealbreaker for me, as nearly every OS I’ve ever used has had its close box in the upper right, but fortunately, there’s a fix: use the Clearlooks theme. Clearlooks puts the minimize, maximize and close buttons in their rightful place, the upper right part of the window.

themes

The Clearlooks and Radiance themes

Had the window controls not moved me away from the Radiance theme, the color scheme would have. I found the menu background a bit too dark, or the text lacking contrast against it. Whatever it was, I found the menus hard, or at least annoying, to read. The Clearlooks theme provides menus with dark text on a light background, which works just fine for me.

As I continued through several follow-up installations for my development box, I noticed that the Synaptic Package Manager was getting slower and slower at displaying the package list after an install. I never did resolve the problem, but I haven’t experienced that kind of delay since the initial round of installs, so I’ll chalk it up to the system being busy handling all those package installation requests over a short period of time.

Now for the really big stuff: the actual development environment.

Eclipsed

I’ve always had a love/hate relationship with Eclipse. To be truthful, it’s been more of an “I’ll tolerate you if I have to”/hate relationship. It is a very powerful IDE, and some of the refactoring, templating and typeahead features are great. The problem is that I have so much trouble getting it to work that any time those features buy me, I lose again to troubleshooting. Hoping those problems would stay behind in the Windows world, I started to install Eclipse from Synaptic.

The Eclipse package in Synaptic is actually a meta-package, which includes the core Eclipse code, the Java developer tools, the Eclipse Plug-in Development kit, and the Rich Client Platform plug-in.  Once those are installed, you need to add the other features you want. For me, this was Subversive and M2Eclipse.  I started with Subversive, because that’s in the ‘Collaboration’ package of the main Galileo repository. Only one problem: I could not connect to the main Galileo repository. Nor could I connect to the external M2Eclipse repository. Or any repository. I plain could not get Eclipse to connect to any repository to add plug-ins.

So I turned to my smart friend, Google, for answers. One suggestion was to use the Sun JDK instead of the OpenJDK. This involves adding the Sun repository to APT (the packaging system behind front-ends like Synaptic and aptitude), then running the update-java-alternatives command to switch from the default OpenJDK to Sun’s JDK.

First, to add the “partner” repository, which contains information on Sun’s JDK, execute the following in a terminal window:

sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
sudo aptitude update
sudo aptitude install sun-java6-jdk

Next, execute update-java-alternatives:

sudo update-java-alternatives -s java-6-sun

This command will recreate symbolic links to point to Sun’s Java tools instead of the default OpenJDK tools. Unfortunately, after going through all this, I still could not get Eclipse to load repositories.  Grasping at straws, and believing IPv6 to be an issue, I turned off IPv6 support, but still no luck.

Then the real long shot: Look at Eclipse configuration, specifically proxy settings, and ensure proxies aren’t being used. Specifically, in org.eclipse.core.net.prefs, ensure that both systemProxiesEnabled and proxiesEnabled are false.  However, even after doing that, I got the error: “An error occurred during the org.eclipse.equinox.internal.provisional.p2.engine.phases.CheckTrust phase.” My response: Screw this, I’m using NetBeans.
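
(If you want to try the same long shot: the file lives in the workspace under .metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.core.net.prefs, if memory serves, and the relevant lines end up looking like this.)

eclipse.preferences.version=1
proxiesEnabled=false
systemProxiesEnabled=false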

NetBeans to the rescue

Like Eclipse, the NetBeans install from Synaptic is straightforward. Fortunately, adding plug-ins to NetBeans is much more straightforward than in Eclipse. Even better, many of those ‘extras’ in Eclipse, like Maven support and Subversion, come right out of the box in NetBeans. After a bit of tweaking, mainly with appearance, I was writing code without spending a lot of time setting up the IDE. I may have a new favorite IDE.

The other software

One of the big wins with Linux is the availability of good, free software that will do just about anything you want. One of the aforementioned expensive, buggy Windows applications that I’ve tried hard to avoid is Photoshop. On the Linux side, GIMP (GNU Image Manipulation Program) is used in Photoshop’s place. While GIMP is no longer installed by default starting with 10.04, it is still available through any of the software package managers, and still supported; you just need to go through the extra step of installing it. One of the reasons it was not included by default was that it takes a lot of disk space, and may not have easily fit onto the Ubuntu live CD.  All of the original graphics and screenshots on this site have been either edited or touched up with GIMP.

File manager applications are usually nothing to write home (not to mention blog) about, but there’s a new feature in Nautilus, GNOME’s GUI file manager. The new ‘Extra Pane’ feature is very nice, especially when copying from one directory to another. This is very similar to a tool I use on Windows called xplorer2. I only wish it allowed you to sudo from within the application. Update: Why, yes, you can use Nautilus as a super user.

The office suite is where Linux lets me down. The most popular office suite for Linux is OpenOffice, which has adequate tools except for the word processor, OpenOffice Writer. It just doesn’t measure up to Microsoft Word. The dealbreaker for me is the lack of an outlining option in Writer. Word’s outliner shines, and helps me compose my thoughts for a document before writing it. I’ve replaced Word on non-Office PCs with FreeMind, a mind-mapping application. Mind mapping is much like outlining, except that it’s more free-form than Word’s outliner, which is a good thing. Fortunately, FreeMind is available in the ‘partner’ software repository, and works just like it does on Windows (as it should, since it’s a Java app).

As mentioned earlier, this release represents a big push to make Ubuntu more of a consumer operating system, including new integration with social media. However, this, along with the new UbuntuOne music platform, isn’t what I’m looking to Linux for; I use Windows for that stuff.

Ubuntu uniqueness

Ubuntu packages many things in ways I don’t expect. Again, as a long-time Red Hat/Fedora user, I’ve become accustomed to seeing certain files in certain locations. As an example, early in my Ubuntu exposure, I spent quite some time looking for the Apache configuration file, httpd.conf. In fact, I couldn’t even find /etc/httpd, the directory that ‘normally’ contains those configuration files. It turns out that one of those Ubuntu uniquenesses is replacing ‘httpd’ with ‘apache2’. Thus, there is no ‘httpd’ process; there is an apache2 process. There is no /etc/httpd; there is /etc/apache2. To make things even more confusing, there is a /etc/apache2/httpd.conf, but there’s nothing in it. The real configuration file is /etc/apache2/apache2.conf.
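
From memory, the Debian-style layout looks roughly like this (annotations mine):

/etc/apache2/
    apache2.conf       # the real main configuration file
    httpd.conf         # present, but empty
    ports.conf         # Listen directives
    mods-available/    # per-module config, enabled via symlinks in mods-enabled/
    sites-available/   # per-site config, enabled via symlinks in sites-enabled/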

There’s a similar issue with Tomcat, though the confusion may be my fault because of the way I’ve always installed Tomcat: just uncompress the package and go! Ubuntu’s packaging of Tomcat makes heavy use of symbolic links (hardly unusual in the world of Unix-based OSs). The core of Tomcat, the binaries and the webapps directory, is in /usr/share/tomcat6. Or is it? Because there’s also a webapps directory in /var/lib/tomcat6, without a bin directory, and with symlinks for conf (pointing to /etc/tomcat6, to comply with the ‘all configuration in /etc’ rule), logs (pointing to /var/log/tomcat6, to comply with the ‘all logs in /var/log’ rule) and work (pointing to /var/cache/tomcat6). This is where the difference between CATALINA_HOME and CATALINA_BASE comes into play.
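
In other words, under Ubuntu’s layout the two variables end up pointing at different trees, something like this:

CATALINA_HOME=/usr/share/tomcat6   # the shared binaries
CATALINA_BASE=/var/lib/tomcat6     # this instance's conf, logs, webapps and work (mostly symlinks)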

There’s no explicit ‘root’ login in Ubuntu. Instead of logging in as the super user, you’re expected to use sudo. The first user created (which happens during installation) has full sudo privileges by default, so that user doesn’t need to do anything to gain super user powers, other than the usual second password entry and prepending ‘sudo’ to every command. However, in a pinch, you can still gain global super user powers by executing sudo su -. You cannot use sudo with some commands; cd is one example (it’s a shell builtin, so there is no cd program for sudo to run). This becomes troublesome when you need to view or edit a file deep in a hierarchy of directories that your normal user account cannot access. You end up doing iterative ls commands:

$ sudo ls dir1
$ sudo ls dir1/dir2
$ sudo ls dir1/dir2/dir3

In such cases I usually end up doing the sudo su - trick.

More to follow…

My trial migration isn’t something I can wrap up in the month it took to write this; I’m making a comparison to an operating system that I’ve been using for about 10 years, so there will certainly be more to come. As I run into roadblocks and triumphs, I’ll be posting it here…

General

Advertising, the Internet, and guilt trips

March 14th, 2010

Last month, Ars Technica decided to run an “experiment” in which they denied site access to browsers that employed an ad blocker, following it with an impassioned plea to turn off your ad blocker. The impetus was that an estimated 40% of site visitors were using ad blockers, and since Ars uses ad views (as opposed to click-throughs) for its ad metrics, users with ad blockers were denying Ars ad revenue, thus ‘stealing’ site content.

Even by Ars’ own admission, the reaction to this act was “mixed”. Many people whitelisted Ars, some even subscribed. However, there were some commenters in Ars’ original announcement (which, unfortunately, are no longer online), who weren’t too happy about it.

The reason may have been the way in which content was denied: Ars simply served a blank page. There was no indication as to why, or what could be done to remedy the problem. It wasn’t until Ars published their plea that it became known what was happening.

Perhaps if Ars had swapped the order of events, things would have turned out better. Had the post explaining how ad blocking was affecting the company come before the “experiment”, regular visitors could have prepared for the blank pages, or at least known why they were being served. According to comments on Ars’ post-experiment post, some people needed nothing more than to be asked before whitelisting the site.

I do enjoy Ars’ content, and read their RSS feed daily. I reluctantly decided to run my own experiment and whitelist arstechnica.com, and continue to do so to this day. I find that their advertisements aren’t distracting (for the most part, there is the occasional animated Flash ad), but I feel as if I was guilt tripped into doing it.

In the month since whitelisting, I haven’t regretted doing so. The ads served by Ars do not detract from the content for the most part, and are usually related to what I’m reading (they’re usually tech focused). That’s fine with me, as long as the substance AND STYLE of the ad match the article. I don’t mind seeing animated or video advertising on a site that provides video content, but if I’m reading an article of static text, then the ad should be static as well, not Flash or an animated GIF.

How many animated advertisements do you see in a newspaper or magazine? Unless you’ve been dipping into Timothy Leary’s personal stash, the answer is “none.” Put simply, Ars Technica is an online version of a magazine. The advertising present on arstechnica.com should basically follow how advertising works in print magazines and newspapers. Print ads don’t flash or jiggle, make sound or appear in the middle of the page like magic; neither should ads serving static content.

Print ads also don’t have the ability to see what magazine or newspaper I read next, providing you ignore the possibility of following the trail of filler cards that fall out of a print magazine. There are some online ad purveyors that do like to follow where you go, much like a cyber stalker.

After whitelisting Ars, I noticed I wasn’t seeing ads on every visit. On some visits there would be a banner ad, usually in-house references to other Condé Nast sites, in the header; on other visits the banner area would be blank. I confirmed my whitelist settings, then realized I was still blocking JavaScript. Ads served from the nefarious doubleclick.net were being blocked because I specifically do not allow JavaScript from that domain to be executed because of their aggressive use of tracking cookies.

I don’t mind a web site tracking my visits. I very much mind when a third party, such as an ad server, tracks which sites I visit, and for how long. This is what Double Click did prior to their acquisition by Google. In the days prior to ad blocking extensions, I avoided doubleclick.net by using the host file trick to redirect doubleclick.net references to localhost, so nothing would appear, JavaScript and cookies wouldn’t be downloaded, and my actions wouldn’t be tracked. I’m still not convinced they’re behaving like a good net citizen, and I refuse to whitelist them.
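
The hosts file trick is simple; the entries look something like this (add whichever ad-serving hostnames you want to black-hole):

# /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts)
127.0.0.1    doubleclick.net
127.0.0.1    ad.doubleclick.net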

Unfortunately, Ars uses Double Click. And because Double Click doesn’t just serve ads, but JavaScript as well, the NoScript extension in Firefox blocks the JavaScript download, which prevents the ad from loading. (The fact that the ad won’t even display if JavaScript is disabled is troubling to me, and it should be troubling to Ars as well.) Even more unfortunate, NoScript will not let you whitelist scripts for only a single site. In order to view the Double Click ads on Ars, I would need to allow doubleclick.net JavaScript on every site I visit. I’m not willing to do that.

Shortly after the Ars “experiment”, I started having trouble accessing some stories on my local newspaper’s site. Most of the time, visiting sacbee.com (The Sacramento Bee) would show the story. Every now and again, though, I would get a very confusing message about needing to be logged in to see a story.

Further down that very page, it said I was, in fact, logged in. Regardless of what I did, including logging out and back in, and a forced refresh (Ctrl-F5), I would get this message, and only on certain stories. Then, one day, completely by accident, I used a browser without any JavaScript or ad blocking (it was Internet Explorer, which I use only by accident), and stories that had previously shown the error displayed just fine. I tried with Firefox again, and the very same story I’d just been viewing was still blocked.

It turns out that sacbee.com also uses Double Click, and it appears that either Double Click or their clients (in this case, sacbee.com) are “pulling an Ars” and refusing content to browsers with ad blocking.  I was able to confirm this by turning off the ad blocking and JavaScript blocking software in Firefox, and the previously blocked article suddenly started appearing.

I’m willing to work with sites and whitelist them if their ads are relevant and not distracting, but don’t expect me to adopt unsafe browsing habits just so I can see your ads. There are some newspaper sites (ahem) that trigger XSS (cross-site scripting) alerts when JavaScript blocking is turned off. That is a security risk, and I’m not willing to accept it just to see ads. If the Sacramento Bee, or any other site serving potential malware, doesn’t want me viewing their pages unless I lower my malware defenses, then I won’t view their pages.

Aside from showing advertising in dissimilar media, I don’t like advertisements that slow down page loads. The next time you find yourself waiting for a page to finish loading, look at your browser’s status bar and see if it’s waiting on an advertiser. I find that most page “hangs” are due to advertisements. Having content blocked while waiting on an overloaded ad server is infuriating, even more so when you’re being told it’s bad to block ads.

I certainly want the sites I use and enjoy to continue producing content and services, and if that means viewing advertisements, I’m all for it as long as my guidelines are met. Ars appears to be meeting those guidelines (for the most part), so I’m willing to help them out. A great example of how I believe advertising should be done can be found at Instapaper. Instapaper is an offline web reader that offers excellent Kindle integration, and I find it an invaluable resource. Instapaper displays a single, small, relevant ad, served by The Deck, an advertising company I find to be reputable (check out their web site to understand what I mean by ‘reputable’). Any site thinking of using advertising should look to Instapaper (or any of The Deck advertising clients) as an example.