Archive for the ‘Hardware and Servers’ Category

Using Nautilus as a super user in Ubuntu 10.04

Wednesday, June 9th, 2010

Using Nautilus as a super user in Ubuntu 10.04

Of course, regardless of the amount of time you spend writing something, the instant you finish it and make it available for public viewing, you’ll find a problem. Or three. My last post, “Ubuntu 10.04: A first look”, contained a few issues I didn’t see until after hitting the publish button. And don’t get me started about the typo in the freakin’ title.

When mentioning the new features with GNOME’s file manager, Nautilus, I wrote:

I only wish it allowed you to sudo from within the application.

It turns out you can, though not with sudo itself but with gksudo, which is basically sudo for GUI applications. You get a root-privileged file browser by executing the following in a terminal:

gksudo nautilus

Granted, it’s a pain to start a terminal every time you want to browse as root, but there’s a solution for that too. In the System -> Preferences menu, there is an application called ‘Main Menu’ that allows you to modify, or add items to, the system menu.

New Menu Item button

The Main Menu window

After starting Main Menu, you’ll see the window above. Click the ‘New Item’ button as shown below to create the menu item that I decided to call ‘sudo nautilus’.

New Menu Item properties

The New Menu Item properties window

I chose to put the menu in the ‘Accessories’ menu, but you can put it in any other menu by selecting the menu of your choice prior to clicking the ‘New Item’ button.
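Under the hood, the Main Menu tool just writes a .desktop file into ~/.local/share/applications, so you can also create the launcher by hand. A minimal sketch of what such an entry might look like (the Icon name here is my own guess; any installed icon name will do):

```ini
[Desktop Entry]
Type=Application
Name=sudo nautilus
Comment=Browse files with root privileges
Exec=gksudo nautilus
Terminal=false
Icon=file-manager
Categories=GNOME;Utility;
```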

Ubuntu 10.04: A first look

Tuesday, June 8th, 2010

Ubuntu 10: a first look

What I’m looking for

Ubuntu’s recent focus has been on usability as an everyday desktop environment, even for non-techies like ol’ mom and pop. There have been some interesting additions to meet this goal, such as a music store and embedded social networking, but that’s not what I’m looking for.  I write web applications using a variety of programming languages and tools, and I need a platform that will boost my productivity. I need good, fast, and (preferably) free tools to help me do my job. I do not need expensive, resource-intensive tools that crash at the drop of a hat.

Why Linux?

Linux is the most popular web server operating system, and runs on six of the top 10 hosting services. Most important, however, is that it’s what is used where I work (and has been for the last 10 years with three different employers). I find it easier to automate tasks on Linux than on Windows, something that is essential for productivity gains. The Windows command prompt feels like an add-on, whereas a terminal app on Linux feels like it’s truly a part of the OS (because it is). Ever try to select multiple lines in a Windows command prompt? The block-level selection drives me nuts. But it works just as expected in a terminal window.

Another benefit I’ve found is the ‘centralized repository’ updating scheme.  When you need to do software updates on a Linux box, it’s usually straightforward: run the system updater, get a list of things that need updating, select what you want to update, and then do the update. It’s also usually possible to do this for all of your installed software (with a few exceptions)! It’s kind of like the oft-maligned Apple App Store for the iPhone: a one-stop shop for all your applications. The Linux updaters, though, diverge from the App Store analogy when it comes to third-party apps: they’re not a problem, because the system is completely open.

Why Ubuntu?

This came down to just “it’s what’s being used at work.”  I’ll admit that I’m not an Ubuntu fan; prior to my current job, all Linux OSs that I’ve used were Red Hat or Fedora distributions, and I am biased toward those distros.  Because of my lack of experience with Ubuntu, and my need to become familiar with it to comply with my job requirements, I’m taking a good, long, hard look at Ubuntu with this new 10.04 LTS release.

The 10.04 release is what Ubuntu refers to as a “Long Term Support” release, or LTS. Normally, Ubuntu releases are supported for 18 months, meaning that security updates and fixes will be published for 18 months after the initial release. An LTS release is supported for three years for the desktop version (or five years for the server version). This longer window means you don’t have to worry as often about upgrading because of end-of-life issues. Fedora does not directly offer a free long-term release (that’s what its big brother Red Hat Enterprise Linux does, but for a fee). All Fedora releases are supported for only 13 months.

Let me give you an example of why a long-term support release is important to me. I was setting up a new home server last year, and started the process of installing Fedora 11. I knew that Fedora 12 was only a month off, but I needed a server then, not a month later, so I went ahead with the Fedora 11 install.  Since Fedora 11 was already five months into its 13-month lifecycle, that meant only eight months of use, max, before needing to do a major upgrade, with all the configuration headaches associated with it. A couple weeks ago, knowing that Fedora 13 was approaching, I decided to do the upgrade from Fedora 11 to 12 in the hope that doing so would make the eventual upgrade to Fedora 13 easier. The upgrade itself went fine. Until I started using the server. There were some version mismatches that caused httpd and subversion to fail to start up. So, instead of committing some code I’d just worked on, I spent 30 minutes tracking down the problem, and found that I needed to install a newer mod_ssl that didn’t come across in the upgrade. I don’t relish the thought of breaking my development infrastructure every six months, so I’m really interested in how this three-year support cycle works out.

What’s with the names?

Ubuntu likes to give its releases alliterative names based on animals. This 10.04 release is usually referred to as Lucid Lynx (Limping Lion and Lethargic Lemur were apparently taken). I think the names are silly (as if you couldn’t already tell), and just refer to the release’s version number. So whenever you see Lucid Lynx, think ‘10.04’ (or April 2010) release, and we’ll get along just fine.

Installation

The Ubuntu installer is a very simple ‘live boot’ CD that provides only the minimal set of packages; you’re forced into a ‘lowest common denominator’ installation. You can’t choose any extras until after the installation has completed. It would be nice to have a pre-configured ‘developer’ installation with common development tools. The “Ubuntu Software Center” helps a bit, with categorized packages, including a Development Tools package.

Dude, where’s my keyboard?

It wasn’t long after installation that the first problem appeared, and it was a doozy: the keyboard didn’t work in the login screen.  I could mouse to my user name, click it, and then get the password prompt, but I could not get the keyboard to work. I was hoping it was just a keyboard driver issue, but changing the keyboard settings on the login page didn’t help.

Universal access preferences

The universal access preferences menu on the login page.

It was at that point I noticed the accessibility icon, and that it had an on-screen keyboard feature. I turned the on-screen keyboard on, but nothing happened. Or so I thought.  After a close inspection, I saw a small speck (which could have been confused with a dead pixel or smudge) that turned out to be the on-screen keyboard. For whatever reason, the default size of the keyboard was 1 pixel wide. I was able to drag it larger, and could then use the on-screen keyboard to log in just fine. After logging in, the real keyboard worked fine. I thought the problem was fixed; however, on the next reboot I found the real keyboard wasn’t working again.

The missing keyboard

There's a keyboard in the upper left. Really.

After a bit of research, I found an online posting about a default setup issue with the keyboard. The problem is in the /etc/default/console-setup file, due to an errant XKBVARIANT setting. Once I made the change mentioned in the post, the keyboard worked fine with the login screen. This problem was initially encountered in the Beta 2 release, but was still present in the initial final release. I’ve noticed a recent installation (about two weeks after the final release) does not exhibit this problem. Also, to clarify, all these problems occurred when running Ubuntu in VMWare Player.  I have not done a clean native install; the one standalone Ubuntu box I have was upgraded from 9.10 to 10.04 via the Synaptic Package Manager.

New themes

A new default theme is being released with Ubuntu 10.04, with some significant changes from prior versions. One of the biggest (and most annoying) is that the window action icons have moved from the upper right corner to the upper left. This was almost a dealbreaker for me, as nearly every OS I’ve ever used had a close box in the upper right, but fortunately, there’s a fix: use the Clearlooks theme.  Clearlooks puts the minimize/maximize and close buttons in their rightful place, the upper right part of the window.

themes

The Clearlooks and Radiance themes

Had the window controls not moved me away from the Radiance theme, the color scheme would have. I found the menu background a bit too dark, or the text without enough contrast against the background. Whatever it was, I found the menus hard, or at least annoying, to read.  The Clearlooks theme provides menus with dark text on a light background, which works just fine for me.

As I worked through several follow-up installations for my development box, I noticed that the Synaptic Package Manager was getting slower and slower in displaying the package list after an install. I was never able to resolve the problem, but I’ve never experienced that kind of delay after the initial installs. I’ll just chalk it up to the system being busy handling all those package installation requests over a short period of time.

Now for the really big stuff: the actual development environment.

Eclipsed

I’ve always had a love/hate relationship with Eclipse. To be truthful, it’s been more of an “I’ll tolerate you if I have to”/hate relationship. It is a very powerful IDE, and some of the refactoring, templating and typeahead features are great. The problem is that I have so much trouble getting it to work that any time those features buy me, I lose again to troubleshooting.  Hoping those problems would stay behind in the Windows world, I started to install Eclipse from Synaptic.

The Eclipse package in Synaptic is actually a meta-package, which includes the core Eclipse code, the Java developer tools, the Eclipse Plug-in Development kit, and the Rich Client Platform plug-in.  Once those are installed, you need to add the other features you want. For me, this was Subversive and M2Eclipse.  I started with Subversive, because that’s in the ‘Collaboration’ package of the main Galileo repository. Only one problem: I could not connect to the main Galileo repository. Nor could I connect to the external M2Eclipse repository. Or any repository. I plain could not get Eclipse to connect to any repository to add plug-ins.

So I used my smart friend, Google, to find answers. One suggestion was to use the Sun JDK instead of the OpenJDK. This involves adding the Sun repository to aptitude (the backend for the package managers like Synaptic), then running the update-java-alternatives command to use Sun’s JDK instead of the default OpenJDK.

First, to add the “partner” repository, which contains information on Sun’s JDK, execute the following in a terminal window:

sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
sudo aptitude update
sudo aptitude install sun-java6-jdk

Next, execute update-java-alternatives:

sudo update-java-alternatives -s java-6-sun

This command will recreate symbolic links to point to Sun’s Java tools instead of the default OpenJDK tools. Unfortunately, after going through all this, I still could not get Eclipse to load repositories.  Grasping at straws, and believing IPv6 to be an issue, I turned off IPv6 support, but still no luck.

Then the real long shot: Look at Eclipse configuration, specifically proxy settings, and ensure proxies aren’t being used. Specifically, in org.eclipse.core.net.prefs, ensure that both systemProxiesEnabled and proxiesEnabled are false.  However, even after doing that, I got the error: “An error occurred during the org.eclipse.equinox.internal.provisional.p2.engine.phases.CheckTrust phase.” My response: Screw this, I’m using NetBeans.
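For reference, the proxy settings mentioned above live in org.eclipse.core.net.prefs, which in my case was under the workspace’s .metadata/.plugins/org.eclipse.core.runtime/.settings directory. With proxies fully disabled, the relevant lines end up looking like this:

```
eclipse.preferences.version=1
proxiesEnabled=false
systemProxiesEnabled=false
```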

NetBeans to the rescue

Like Eclipse, the NetBeans install from Synaptic is straightforward. Fortunately, adding plug-ins to NetBeans is much simpler than in Eclipse. Even better, many of those ‘extras’ in Eclipse, like Maven support and Subversion, come right out of the box in NetBeans.  After a bit of tweaking, mainly with appearance, I was writing code without spending a lot of time getting the IDE set up. I may have a new favorite IDE.

The other software

One of the big wins with Linux is the availability of good, free software that will do just about anything you want. One of the aforementioned expensive, buggy Windows applications that I’ve tried hard to avoid is Photoshop. On the Linux side, GIMP (GNU Image Manipulation Program) is used in Photoshop’s place. While GIMP is no longer installed by default starting with 10.04, it is still available through any of the software package managers, and still supported; you just need to go through the extra step of installing it. One of the reasons it was not included by default was that it takes a lot of disk space, and may not have easily fit onto the Ubuntu live CD.  All of the original graphics and screenshots on this site have been either edited or touched up with GIMP.

File manager applications are usually nothing to write home (not to mention blog) about, but there’s a new feature in Nautilus, GNOME’s GUI file manager. The new ‘Extra Pane’ feature is very nice, especially when copying from one directory to another. This is very similar to a tool I use on Windows called xplorer2. I only wish it allowed you to sudo from within the application. Update: Why, yes, you can use Nautilus as a super user.

The office suite is where Linux systems let me down. The most popular office suite for Linux is OpenOffice, which has adequate tools except for the word processor, OpenOffice Writer. It just doesn’t measure up to Microsoft Word. The dealbreaker for me is the lack of an outlining option in Writer. I feel that Word’s outliner shines, and helps me compose my thoughts for a document before writing it. I’ve replaced Word on non-Office-enabled PCs with FreeMind, a mind mapping application. Mind mapping is much like outlining, except that it’s more free-form than Word’s outliner, which is a good thing. Fortunately, FreeMind is available in the ‘partner’ software repository, and works just like it does on Windows (as it should, since it’s a Java app).

As mentioned earlier, this release represents a big push to make Ubuntu more of a consumer operating system, including new integration with social media. However, neither that nor the new UbuntuOne music platform is what I’m looking to Linux for; I use Windows for that stuff.

Ubuntu uniqueness

Ubuntu unpacks many packages in ways that I’m not expecting. Again, as a long-time RedHat/Fedora user, I’ve become accustomed to seeing certain files in certain locations. As an example, early in my Ubuntu exposure, I spent quite some time looking for the Apache configuration file, httpd.conf. In fact, I couldn’t even find /etc/httpd, the directory that ‘normally’ contains those configuration files. It turns out that one of those Ubuntu uniquenesses is replacing ‘httpd‘ with ‘apache2‘. Thus, there is no ‘httpd‘ process, there is an apache2 process; there is no /etc/httpd, there is /etc/apache2. To make things even more confusing, there is a /etc/apache2/httpd.conf, but there’s nothing in it. The real configuration file is /etc/apache2/apache2.conf.

There’s a similar issue with Tomcat, but the confusion may be my fault because of the way I’ve always installed Tomcat: just uncompress the package and go! Ubuntu’s packaging of Tomcat is unique in that it uses a lot of symbolic links (which are hardly unusual in the world of Unix-based OSs).  The core of Tomcat, the binaries and the webapps directory, is in /usr/share/tomcat6. Or is it? Because there’s also a webapps directory in /var/lib/tomcat6, without a bin directory, and with symlinks for conf (pointing to /etc/tomcat6, to comply with the ‘all configuration in /etc‘ rule), logs (pointing to /var/log/tomcat6 to comply with the ‘all logs in /var/log‘ rule) and work (pointing to /var/cache/tomcat6). This is where the difference between CATALINA_HOME and CATALINA_BASE comes into play.
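Mapping those directories onto Tomcat’s own terminology makes the layout easier to digest. A summary, using the paths described above:

```
CATALINA_HOME=/usr/share/tomcat6   # the shared binaries
CATALINA_BASE=/var/lib/tomcat6     # this instance's files, mostly symlinks:
                                   #   conf -> /etc/tomcat6
                                   #   logs -> /var/log/tomcat6
                                   #   work -> /var/cache/tomcat6
```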

There’s no explicit ‘root’ login in Ubuntu. Instead of logging in as the super user, you’re expected to use sudo. The first user created (which happens during installation) has full sudo privileges by default, so that user doesn’t need to do anything to gain super user privileges, other than the usual password re-entry and prepending ‘sudo‘ to every command. However, in a pinch, you can still gain global super user powers by executing sudo su -. You cannot use sudo with some commands; cd, a shell builtin, is one example. This becomes troublesome when you need to view or edit a file deep in a hierarchy of directories that your normal user account cannot access. You end up doing iterative ls commands:

$ sudo ls dir1
$ sudo ls dir1/dir2
$ sudo ls dir1/dir2/dir3

In such cases I usually end up doing the sudo su - trick.
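Another way around the builtin limitation is to run the cd inside a child shell, so the cd happens in a process that sudo can actually start. A sketch (shown here without sudo so it runs anywhere; prepend sudo to reach protected directories):

```shell
# cd is a shell builtin with no executable for sudo to run,
# so run it inside a child shell instead.
sh -c 'cd /etc && pwd'
# With root privileges, the same pattern reaches protected directories:
#   sudo sh -c 'cd dir1/dir2/dir3 && ls -la'
```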

More to follow…

My trial migration isn’t something I can wrap up in the month it took to write this; I’m making a comparison to an operating system that I’ve been using for about 10 years, so there will certainly be more to come. As I run into roadblocks and triumphs, I’ll be posting it here…

Copying a VMWare Player virtual machine

Monday, December 14th, 2009

Over the past couple months I’ve been comparing VMWare Workstation against VMWare Player.  Workstation costs $200, while Player is free. While the “free” part is enough for me to keep using Player, there are some things I miss about Workstation.  One of the things I miss is the ability to make snapshots and copies of virtual machines.  However, with a little bit of work, it’s possible to create a copy of a virtual machine you created with Player. Here’s an example of how I did it with a minimalist installation of Fedora 12.

Copy the original “golden” virtual machine directory within the Virtual Machines directory.  The original virtual machine directory was named Fedora12Mini, and I renamed the copy Fedora12Firewall. The first step is to change all the file names in the copied directory to match the new directory name (which will become your virtual machine’s name in Player).  The virtual machine directory contains (at least) five files that have the same name as the virtual machine, differing only by their extension.  In the example below, all instances of Fedora12Mini need to be changed to Fedora12Firewall.

How the copied directory looks before changing file names

How the copied directory looks before changing file names

Once that’s complete, open the new Fedora12Firewall.vmx file.  The .vmx file contains most of the configuration settings for the virtual machine. Modify all instances of the original virtual machine name (Fedora12Mini) to the new name (Fedora12Firewall) in the .vmx file. While you have the .vmx file open, note two lines you’ll need to look for later.  These are the lines that start with ‘ethernet0.generatedAddress‘ and ‘uuid.location‘. These values are equivalent to the HWADDR (or MAC) and UUID values in your operating system’s configuration.  Both of those values are intended to uniquely identify your network card and computer, respectively. When you start the virtual machine, these values will be regenerated for your new virtual machine, and you will need to update your virtual machine’s configuration with those new values. Before starting the new virtual machine, you still need to make one more file name change, this one in the .vmxf file; change the original virtual machine name to the new name, just like you did in the .vmx file earlier.
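The copy-and-rename steps can be scripted. Here’s a hedged sketch as a small shell function (clone_vm is my own name, not a VMWare tool); it copies the directory, renames the files, and rewrites the old name inside the .vmx and .vmxf files:

```shell
# Copy a Player VM directory and rename its files and internal references.
# Usage: clone_vm Fedora12Mini Fedora12Firewall
clone_vm() {
    OLD="$1"; NEW="$2"
    cp -r "$OLD" "$NEW"                 # copy the whole VM directory
    ( cd "$NEW" || exit 1
      for f in "$OLD".*; do             # Fedora12Mini.vmx -> Fedora12Firewall.vmx, etc.
          mv "$f" "$NEW${f#"$OLD"}"
      done
      # rewrite the old name inside the two config files
      sed -i "s/$OLD/$NEW/g" "$NEW.vmx" "$NEW.vmxf" )
}
```

You would still open the new .vmx in Player and answer “copied” as described below.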

The generatedAddress and uuid lines in the .vmx file

The generatedAddress and uuid lines in the .vmx file

Start the VM by starting VMWare Player, and then clicking “Open a Virtual Machine”. Navigate to the new directory and open the .vmx file in that directory.  Now click “Play Virtual Machine.” Shortly after doing that, you will be asked if you moved or copied the virtual machine.  When asked, say that you “copied”.  The virtual machine will then start up.

VMWareCopy3

The network adapter will fail upon startup. You’ll know this because you will not be able to make any network connections.  Verify this by executing ifconfig and looking for the IP address of the eth0 controller:

Results of ifconfig

Results of ifconfig

Note there is no ethernet controller (there should be at least an eth0 entry). This is happening because Fedora’s configuration files do not match the changes made to the virtual hardware when you told Player that you copied the virtual machine.  To fix this, open the .vmx file on the host and note the new ethernet generatedAddress and UUID location values.  Open /etc/sysconfig/network-scripts/ifcfg-eth0 on the virtual machine and enter the contents of the ethernet0.generatedAddress line into the HWADDR line in ifcfg-eth0, and the uuid.location contents into the UUID line. Copying ethernet0.generatedAddress to ifcfg-eth0 is straightforward, but the UUID value isn’t formatted the same as in the .vmx file.  When updating ifcfg-eth0 with the new UUID, just ensure that it follows the same pattern of 4 bytes-2 bytes-2 bytes-2 bytes-6 bytes.  The easiest way I found to do this was to add the new UUID underneath the existing UUID, then delete the original UUID when finished:
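After the edits, the relevant parts of ifcfg-eth0 look something like the following; the HWADDR and UUID values here are made up for illustration, and yours come from the .vmx file:

```
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# Copied from ethernet0.generatedAddress in the .vmx (example value):
HWADDR=00:0C:29:AB:CD:EF
# From uuid.location, reformatted to the 4-2-2-2-6 byte pattern (example value):
UUID=564dc1e2-8f3a-4b5c-9d6e-7f8091a2b3c4
```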

ifcfg-eth0 after updating HWADDR and UUID, but before deleting original UUID

ifcfg-eth0 after updating HWADDR and UUID, but before deleting original UUID

After deleting the original UUID value and saving ifcfg-eth0, restart the virtual machine. (An aside: I’m not completely sure a full system restart is necessary here.  I tried to restart the networking service [service network restart], and the changes didn’t seem to take effect until after the full system restart.) You should now have an IP address, and all will be good with the world.

Attention software developers: Hands off my desktop!

Monday, November 30th, 2009

I returned from the Thanksgiving holiday to find my new PC with a black desktop. It wasn’t the Black Screen of Death; there were (a few) icons on the desktop, and the PC was functioning normally, it was just that my desktop appeared to be a photo of a very deep cave at midnight during a new moon. Perhaps a remnant of Black Friday?

Artist's rendering of my black desktop

Artist's rendering of my black desktop

At first, I thought it was an issue with Windows activation, since that can cause the desktop to go black if Windows hadn’t been properly activated during install.  That was not the case, since there was no activation warning in the lower right of the screen, plus I have system updates turned on, which requires the Windows Genuine Advantage tool (or the Windows 7 version of it, anyway). I also noticed that fonts didn’t look quite right.  The smoothness of the fonts in Windows Explorer was gone, and most other fonts looked jagged as well.

After I started poking around, I found there was a dialog that had been minimized telling me that my trial version of Norton AntiVirus had expired. Surely, they wouldn’t black out my desktop and screw with my fonts over that, would they?

Short answer: yes. I had planned on testing out Microsoft Security Essentials, so I uninstalled Norton, and lo, the desktop reappeared! After restarting the desktop came back, the fonts were smooth as a baby’s bumper cushions, and all was right with the world.

The chances of me extending my Norton trial? Very close to zero.  There are ways of communicating with your users other than messing with fonts and desktop backgrounds.  Not cool.

Issues with installing 64-bit MySQL

Monday, November 23rd, 2009

It had been a while since I needed to install MySQL (January), but a new desktop PC required a new install of MySQL.  To my delight, not only had a new version of MySQL been released, but there was a 64-bit version available as well.  Installing 32-bit code onto a 64-bit machine just seems wrong, so even though I probably don’t need the 64-bit speed for my development tasks on my PC, I went right ahead and started installing it.

Things went well up until I reached the configuration wizard. I had selected pretty much standard everything in the installer (for a development setup), but when I reached the configuration wizard, it hung just before creating the databases and configuration files. The wizard itself had to be forcibly killed.

After a bit of research, it appears this problem occurs because the wizard depends upon a 32-bit libmySQL even though it’s installing a 64-bit package, and the 64-bit installer doesn’t include the 32-bit library. Fortunately, the fix was easy: just put a 32-bit libmySQL.dll into the DLL search path (the %PATH% environment variable).

So where can you get a 32-bit libmySQL.dll? I already had MySQL tools installed, so I just borrowed it from there. The Windows MySQL tools are 32-bit only. You probably will need these tools, anyway, so just install them before installing the 64-bit database server.

When you start the installation, go up to the point where the “MySQL Server Instance Config Wizard” starts, and then cancel it.  Go to your 32-bit MySQL tools directory and copy the libmySQL.dll from that directory.

Next, go to your MySQL Server installation (for me, it was C:\Program Files\MySQL\MySQL Server 5.1\bin) and rename libmySQL.dll to something else, like 64libmySQL.dll, and then paste the 32-bit DLL you copied earlier.

Now go to your Start Menu, then navigate down to the MySQL Server 5.1 directory in the “All Programs” area. Inside “MySQL Server 5.1” is the “MySQL Server Instance Config Wizard” icon.  Select that, and wait for the configuration to end.

Once the configuration has completed, make sure the databases have been stopped, and then delete the 32-bit libmySQL.dll and rename 64libmySQL.dll back to libmySQL.dll.  Start up the databases again, and you’ll experience the 64-bit MySQL goodness.

Someone’s been eating my shared folders

Thursday, November 19th, 2009

You’re probably noticing a theme in the last couple of posts. There are weeks where tech is my friend. This week is not such a week. Today’s problem involves losing critical functionality on my home PC’s VMWare install.

I love VMWare.  I’ve been a user of their server products for a few years, and with the recent purchase of a beefy i7-based PC, I started looking into VMWare Workstation and Player.  One of the features of these desktop versions of VMWare is that you can share folders from the host operating system with the guests. In my specific case, the host operating system is Windows 7, and the guest OS is Fedora 11. To back up my Fedora installation, I use the shared folders feature to copy from Fedora onto the Windows 7 host, where it’s backed up onto an external USB drive using Acronis Home.

This system worked for a few days, but suddenly stopped working one day.  It occurred after doing a software update on the Fedora guest. I checked the usual suspects for when things go wrong on Linux: firewall settings on both the host and guest, selinux, disk space issues, but no problem was found.  I deleted the shared folder settings and restored them, and I still couldn’t access the shared folders. I even went so far as to reboot both the guest and the host, and the shared folders still wouldn’t work.

It was at that point I realized the significance of the shared folders disappearing right after a Fedora system update: among the updates was a security update to the Linux kernel. That’s when it clicked.

Shared folders will work with a guest OS only if VMWare Tools are installed on the guest.  On a Fedora guest, that involves building kernel modules, and when the kernel was replaced by the system update, those VMWare modules were lost, and thus no more shared folders.

To resolve the issue, I just reinstalled VMWare tools.  Since they were previously installed, all I needed to do was to go to where I had expanded the VMWare Tools tar file and run the vmware-install.pl script again.  I used all the default selections for the prompts during the re-install, and when it finished, shared folders reappeared as quickly as they had disappeared.

Someone’s been eating my CPU cycles

Monday, November 16th, 2009

Late last week I started noticing a problem with my laptop, a Dell Studio Core 2 Duo running Windows Vista. The problem first manifested itself by my laptop’s fan cycling on and off quite often, accompanied by a shorter-than-normal battery life (about two hours as opposed to the 3 ½ to four hours when the battery was new). The battery is less than a year old, and I use it only when necessary, so it shouldn’t be having charge retention issues just quite yet.

I looked at the task manager, and noticed that CPU usage was staying steady between 30 and 40 percent even though I didn’t have any applications running. The disk activity light wasn’t active.  When I looked at the process list for all users, I found that the System Idle process was around 90 to 95 percent. I would think that the CPU usage plus the idle percentage would come close to 100%, so this concerned me.

My first thought was a (insert evil laugh here) virus, but every scan I did, including when running in safe mode, didn’t show any problems.  There also weren’t any significant errors (or as NASA likes to say “no unexpected errors”) in any of the logs.  I was flummoxed.

My next step was to run msconfig and stop anything that was not absolutely necessary from starting at boot time.  I also went through all the scheduled tasks, and either disabled unnecessary tasks or set them to run only when on AC power. I also paid close attention to the applications I was using when the problem appeared.  There were a couple of close calls, but I was able to eliminate the apps I thought were causing the problem by repeating the steps that I thought caused the problem.

A co-worker suggested that I look at Process Monitor from Microsoft SysInternals. I had used a file monitor product from SysInternals when it was produced by an independent company, and found it very useful, but this was the first time I’d used Process Monitor.  After running it for less than a minute, the culprit became obvious: hardware interrupts are counted as part of the System Idle Process (who knew?), and they were consuming the 30-40 percent of my CPU.

But why? Is my laptop dying? No. A quick Google for “hardware interrupts high cpu dell” included a comment about an application called FastAccess that was causing a similar problem.  FastAccess provides facial recognition for login purposes; I don’t use it, nor do I use the camera integrated into the laptop’s screen, so the simple answer to my problem was to uninstall FastAccess.

It’s been several hours since then, and the issue has not reoccurred. The fan has not spun up once, and after using the laptop on battery for over an hour, it still showed at 77%. Case closed.

Recovering from a subversive corruption

Wednesday, September 16th, 2009

While performing a normal update from Subversion recently, I received an error stating that the download had failed, and the update stopped. This article shows how I diagnosed the problem and got svn back on track.

The error I was getting from the svn client (in this case, Tortoise SVN) was:

svn: REPORT request failed on '/repos/myproject/!svn/vcc/default'
svn: REPORT of '/repos/myproject/!svn/vcc/default': 200 OK

To ensure that this was not due to a problem in my working copy of the source, I used another PC to do a full checkout of the code. The error occurred in the same location. I then decided to look at the server logs and see if I could get a better idea of what was going wrong. Since I use svn with apache, all I needed to do was look at the apache error log, where I saw the following:

A failure occurred while driving the update report editor  [5000, #200002]
Can't read length line in file '/var/www/svn/myproject/db/revprops/733'

I opened the above file, and found that it was corrupted to the point of being unreadable; it appeared to be binary data.

I first tried to manually fix the properties file (revprops/733), but that had no effect other than changing the error message to something very arcane, but still reporting problems with rev 733.

svn: The REPORT request returned invalid XML in the response: XML parse error at line 79099: no element found (/repos/ASIOne_Prototype/!svn/vcc/default)

I do full backups of the subversion repositories every night using svnadmin dump. Since I wasn’t able to fix the repository manually, I had to turn to the full backup, but ran into a problem: The backup was also suffering from this same problem. Since it couldn’t retrieve rev 733 from the repository, it just stopped at revision 732.

Fortunately, I do a backup of each revision just after it’s committed. I do this with a svnadmin dump --incremental command in each repository’s post-commit hook. I looked at the backup file for rev 733, and it appeared to be fine; it was completely readable, since the corruption occurred some time after rev 733 had been committed.

I moved the original repository, and created a new myproject repository, then loaded the full backup (which went only to rev 732) using the svnadmin load utility. After the full backup had finished, I wrote a script to call svnadmin load on the per-revision backup files starting at revision 733, and going all the way to 763 (the most current rev at the time).
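The restore script itself was simple. Here’s a hedged reconstruction as a dry run that just prints each command (the SVN_<rev>.dump names follow the per-revision backup script shown below; remove the echo, or pipe the output to sh on the server, to actually load them):

```shell
# Print the svnadmin load commands for the per-revision dumps 733..763.
for REV in $(seq 733 763); do
    echo "svnadmin load /var/www/svn/myproject < SVN_${REV}.dump"
done
```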

Once the script completed, I exported a revision I knew I had fully backed up elsewhere, created a sha1 hash file for the directory containing the source, and then compared the hashes against the backup of those files. There were no checksum mismatches or missing files.

Ensure you have backups. It’s very easy to do. To create a nightly full backup, create a script similar to the following:

mv SVN_Full.dump SVN_Full`date +%Y%m%d`.dump
svnadmin dump /var/www/svn/myproject > SVN_Full.dump

Add an entry to your crontab to run this once a day, preferably late at night or early in the morning.
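The crontab entry might look like the following; the script path is an assumption, so point it wherever you saved the script above:

```
# min hour day-of-month month day-of-week  command
30 2 * * * /home/backup/scripts/backup_svn_full.sh
```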

Here’s what my per-revision backup script looks like:

NEW_REVISION=$1
svnadmin dump /var/www/svn/myproject -r $NEW_REVISION:$NEW_REVISION --incremental > SVN_$NEW_REVISION.dump

It takes a single parameter, the revision number of the commit just completed. Note the use of the --incremental switch. That makes svnadmin dump back up only the revisions in the given range. Since we’re backing up only one revision, the start and end revision numbers are the same. This script is invoked in the post-commit hook with the following code:

/bin/sh /home/backup/scripts/backup_svn_rev.sh "$REV"