Archive for the ‘VMware’ Category

A new version of VMware Server, 1.0.7, has been released. Sadly, it does not work with the Linux 2.6.26.x kernel series.

From the release notes, it seems the changes from 1.0.6 to 1.0.7 are primarily security fixes; they do not address the Linux kernel compatibility issue.

Fortunately, the patch files for 1.0.6 apply cleanly to 1.0.7. I managed to build the latest VMware Server after applying the patches. It seems to work nicely.


Finally! Someone has made patches that enable VMware Server 1.0.6 to work with kernel 2.6.26.x!

The site is currently down, but the links to the patches for vmmon and vmnet are still available:

I have successfully built VMware Server 1.0.6 on Linux after applying the patches. So far, it seems to work fine. However, the kernel option to export obsolete symbols still needs to be enabled.

The site is back up: VMware-server – paldo.

All thanks to the paldo developers for the patches.

VMware Server 1.0.6 was released a few days ago. I have downloaded and installed it on two systems; so far it seems to be working well, and it no longer needs any additional patching to build against the Linux 2.6.25.x kernel.

However, it is still using a deprecated or unused module API.


VMware Server 1.0.5 and Kernel 2.6.25

Linux kernel 2.6.25 has been released. As usual, I downloaded it and tried it out on my Dell Vostro 1400 laptop. The good news: this kernel supports my Intel HDA audio perfectly, so there is no need to download the latest driver from ALSA (as I previously had to). The bad news: VMware does not work with this kernel.

After some searching on Google, I managed to find a solution.

Step 1 – Download the Patches for VMware 1.0.5

Download the patches for VMware 1.0.5 from here (thanks to jondaley). There are two patches: one for vmnet and one for vmmon.

Step 2 – Apply the Patches

Apply the patches by first copying vmnet.tar and vmmon.tar from the /usr/lib/vmware/modules/source directory to an empty directory. Decompress both files by running tar xvf vmnet.tar and tar xvf vmmon.tar. Two new directories should appear: vmnet-only and vmmon-only. Copy both of the downloaded patch files into this directory as well.

From here, do:

$> cd vmnet-only
$> patch -p1 < ../vmnet-2.6.25.patch
$> cd ../vmmon-only
$> patch -p1 < ../vmmon-2.6.25.patch

Both patches should apply cleanly, without warnings. Recreate the tar files with “tar cvf vmnet.tar vmnet-only/” and “tar cvf vmmon.tar vmmon-only/”, then overwrite the original files in the /usr/lib/vmware/modules/source directory.
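The whole unpack–patch–repack dance of Step 2 can be wrapped in a small shell function. This is only a sketch of the steps above; the function name is my own, and it assumes you pass absolute paths for both directories:

```shell
#!/bin/sh
# Sketch: unpack vmnet.tar/vmmon.tar, apply the 2.6.25 patches, repack,
# and overwrite the originals.
#   $1 = directory holding the tars (normally /usr/lib/vmware/modules/source)
#   $2 = directory holding the downloaded patch files
# Both must be absolute paths. The body runs in a subshell so it does
# not change the caller's working directory.
patch_vmware_modules() (
    src=$1
    patches=$2
    work=$(mktemp -d)
    cp "$src/vmnet.tar" "$src/vmmon.tar" "$work"
    cd "$work"
    tar xf vmnet.tar
    tar xf vmmon.tar
    (cd vmnet-only && patch -p1 < "$patches/vmnet-2.6.25.patch")
    (cd vmmon-only && patch -p1 < "$patches/vmmon-2.6.25.patch")
    tar cf vmnet.tar vmnet-only/
    tar cf vmmon.tar vmmon-only/
    # Overwrite the originals, as in Step 2 above.
    cp vmnet.tar vmmon.tar "$src/"
)
```

For a real run (as root), that would be something like patch_vmware_modules /usr/lib/vmware/modules/source /path/to/patches — but do back up the original tar files first.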

Step 3 – Rebuild VMware

Rebuild VMware by running the vmware-config.pl script.


  • Make sure “Enable unused/obsolete exported symbols” is selected under “Kernel hacking” in the Linux kernel configuration.
  • This only works for a VMware installation without the any-any patch. If VMware has previously been patched with the any-any patch, it needs to be reinstalled first. Don’t worry about the error messages while reinstalling VMware; they are expected, since it is not patched yet. Just copy both tar files and follow the steps mentioned above.
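For reference, the “Enable unused/obsolete exported symbols” option under “Kernel hacking” corresponds to CONFIG_UNUSED_SYMBOLS, so the kernel .config should contain:

```
CONFIG_UNUSED_SYMBOLS=y
```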

VMware 1.0.5 – Excellent!

Posted: March 26, 2008 in VMware

There is a new version of VMware Server (1.0.5); so far it does not suffer from the problems I had with 1.0.4.

Migrating from 1.0.3 to 1.0.5 seems to be fairly painless.

Linux Software Raid + VMware

Posted: February 22, 2008 in Linux, VMware

In my previous entry, I wrote about the problem I faced with VMware using SCSI as the virtual disk emulation. After a few experiments, I noticed the problem only exists on my Linux hosts that use software RAID. All three of my machines that host VMware use software RAID mirroring. I tried one machine without software RAID, and there the default SCSI virtual disk works fine, even under high I/O load.

I had mentioned this in one of my responses to the comments I received. So far, I have found that in a software RAID setup, Linux guests run best with IDE as the virtual disk emulation. Below are the steps I took to convert a SCSI disk to an IDE disk. The steps are quite simple, and so far I have encountered no data loss or guest OS failure. However, I am using Slackware, which uses LILO as the boot loader, so this little guide is only applicable to Linux distributions that use LILO.

  • Step 1: Delete the virtual SCSI disk and remove SCSI from the system – This may sound like a very drastic measure, but removing the SCSI disk does not destroy the virtual disk images. This can be done via: Edit Virtual Machine Settings -> select the SCSI disk -> press the “Remove” button
  • Step 2: Edit the .vmx and .vmdk files of the guest OS – For example, my Linux guest OS has a Linux.vmx file containing the following entries:
    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"

    Change the “TRUE” to “FALSE” and remove the scsi0.virtualDev line. After this, edit the Linux.vmdk file and change:
    ddb.adapterType = "lsilogic" to ddb.adapterType = "ide"
  • Step 3: Add the modified virtual disk image – Go to Edit virtual machine settings again; this time use Add to add a new disk, making sure to select Use an existing disk and browse to the modified Linux.vmdk from step 2.
  • Step 4: Make the guest OS boot from hda instead of sda – First, boot from the Slackware installer CD or ISO image. After the system boots from the installer disk, mount the virtual disk partition at /mnt. From there, edit /etc/lilo.conf and /etc/fstab, changing every sda entry to hda. Once this is done, run ‘lilo -r /mnt’. Create the hda* device files in /mnt/dev if required.
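The hand edits in Step 2 can also be done with sed. A sketch, assuming the guest is named Linux and that GNU sed (with -i) is available; the helper name is just illustrative:

```shell
#!/bin/sh
# Sketch of Step 2: disable SCSI in the .vmx (and drop the
# scsi0.virtualDev line), then switch the .vmdk adapter type to IDE.
# Edits the files in place; runs in a subshell.
convert_to_ide() (
    vmx=$1    # e.g. Linux.vmx
    vmdk=$2   # e.g. Linux.vmdk
    sed -i -e 's/scsi0.present = "TRUE"/scsi0.present = "FALSE"/' \
           -e '/scsi0.virtualDev/d' "$vmx"
    sed -i 's/ddb.adapterType = "lsilogic"/ddb.adapterType = "ide"/' "$vmdk"
)
```

Back up both files before running anything like this against a real guest.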

Once we reach this point, we can remove the installer CD or ISO image from the guest OS and let it start as normal.
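Step 4’s sda-to-hda rewrite can likewise be scripted. A sketch (helper name mine), assuming the guest’s root filesystem is mounted at the given mount point and GNU sed is available:

```shell
#!/bin/sh
# Sketch of Step 4's edits: point lilo.conf and fstab at hda
# instead of sda. $1 = mount point of the guest's root filesystem.
retarget_to_hda() (
    root=$1
    sed -i 's/sda/hda/g' "$root/etc/lilo.conf" "$root/etc/fstab"
)
```

Afterwards, still run ‘lilo -r /mnt’ (and create the hda* device files if needed), as described in Step 4.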

I have converted many guest OS systems using the steps above; it works quite well, and each conversion took no longer than 10 minutes.

However, I am using Slackware, so these steps are specific to Slackware. For other distros, you will need to know how to manually make a partition bootable. This is very important! Otherwise, you won’t be able to boot the disk image you modified.

Lastly (though it may seem obvious to some), do remember to try these steps on a few test guest OSes before applying them to a production guest OS. Back up the production guest OS, just in case something goes wrong.

VMware is a marvelous piece of software, especially for those who (like me) need to set up test environments quickly and efficiently. Using it, there is no need for me to search for and prepare hardware, and no need to fix or troubleshoot potential hardware problems (e.g. disk failure). It really helps my work a lot.

I would consider myself a newcomer/newbie to VMware. Like anyone starting out with something new, we tend to select the default options, as we don’t really have enough experience to figure out which configurations are best. Sometimes, a default selection is even marked as “recommended” by the vendor.

This proved to be problematic for me…

The story started when I needed to upgrade from version 1.0.3 to 1.0.4.

To start, check out this thread:

Basically, in version 1.0.3, the default SCSI emulation uses BusLogic. However, once upgraded to 1.0.4, the guest generates a lot of errors while scanning the SCSI disk. It still boots eventually, but takes a very long time to finish scanning. The fix is to change the SCSI emulation from BusLogic to LSI Logic. Though that solves this problem, it introduces another.
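The switch amounts to one line in the guest’s .vmx file (scsi0 here assumes a single SCSI adapter):

```
scsi0.virtualDev = "lsilogic"
```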

My VMware host server has more than 10 guest OSes. I don’t run all of them concurrently, but there are situations where I need to run more than 5. This is when the problem starts to appear. From time to time, I would get the following messages in a guest OS:

mptscsih: ioc0: task abort: SUCCESS (sc=c411c640)
mptscsih: ioc0: attempting task abort! (sc=c411c780)
sd 0:0:0:0: [sda] CDB: cdb[0]=0x2a: 2a 00 00 b0 d5 af 00 00 08 00
mptscsih: ioc0: task abort: SUCCESS (sc=c411c780)
mptscsih: ioc0: attempting task abort! (sc=c411c8c0)
sd 0:0:0:0: [sda] CDB: cdb[0]=0x2a: 2a 00 00 b0 e7 1f 00 00 08 00
mptscsih: ioc0: task abort: SUCCESS (sc=c411c8c0)
mptscsih: ioc0: attempting task abort! (sc=c411ca00)
sd 0:0:0:0: [sda] CDB: cdb[0]=0x2a: 2a 00 00 b0 ee e7 00 00 08 00
mptscsih: ioc0: task abort: SUCCESS (sc=c411ca00)
mptscsih: ioc0: attempting task abort! (sc=cf8ae280)
sd 0:0:0:0: [sda] CDB: cdb[0]=0x2a: 2a 00 00 6c e6 57 00 00 08 00
mptscsih: ioc0: task abort: SUCCESS (sc=cf8ae280)
mptscsih: ioc0: attempting task abort! (sc=cf8aeb40)
sd 0:0:0:0: [sda] CDB: cdb[0]=0x2a: 2a 00 00 21 92 cf 00 00 10 00
mptscsih: ioc0: task abort: SUCCESS (sc=cf8aeb40)
mptscsih: ioc0: attempting task abort! (sc=cf8aea00)
sd 0:0:0:0: [sda] CDB: cdb[0]=0x2a: 2a 00 00 69 c0 d7 00 00 08 00
mptscsih: ioc0: task abort: SUCCESS (sc=cf8aea00)

Worse, in some situations, the entire guest OS would hang and give out filesystem errors from the disk I/O failures.

I believe these messages are due to heavy disk I/O. So, I did a complete “tar zcvf” of the entire filesystem to stress the system. Predictably, the warning messages appeared. Worse, even other guest OSes that were idling showed these messages. If I try to compress a filesystem containing a large file, say a few hundred megabytes in size, the system becomes unstable and sometimes generates a kernel panic with disk I/O errors.

I spent a lot of time checking the host and guest OS kernel configurations and trying out newer versions of the Linux kernel, which I thought might be the root cause of the problem. But after many attempts, the situation remained the same.

Thankfully, I know someone who has been using VMware since long before I started. According to him, he always selects IDE as the disk emulation, and he has never experienced such a problem. So I decided not to follow the default, “recommended” option for disk emulation, and manually changed my installations to IDE disk emulation.

I ran a few rounds of testing (as I did earlier); IDE emulation performs much more stably than SCSI.

I did a few Google searches, but I couldn’t find anyone facing the same problem I did. I started to wonder how likely it is that I am facing this issue alone. There may be something else that I am not aware of, but at the moment I don’t really have the time to continue the investigation. Project deadlines (emphasis on the plural) are coming soon; I guess I will revisit this later, when I have the time.