TOPIC: VIRTUAL MACHINE
Windows 11 virtualisation on Linux using KVM and QEMU
Windows 11 arrived in October 2021 with a requirement that posed a challenge to many virtualisation users: TPM 2.0 was mandatory, not optional. For anyone running Windows in a virtual machine, that meant their hypervisor needed to emulate a Trusted Platform Module convincingly enough to satisfy the installer.
VirtualBox, which had been my go-to choice for desktop virtualisation for years, could not do this in its 6.1.x series. Support arrived only with VirtualBox 7.0 in October 2022, meaning anyone who needed Windows 11 in a VM faced roughly a year with no straightforward path through their existing tool.
That gap prompted a look at KVM (Kernel-based Virtual Machine), which could handle the TPM requirement through software emulation. This article documents what that investigation found, what the rough edges were at the time, and how the situation has developed in the years since.
What KVM Actually Is
KVM is not a standalone application. It is a virtualisation infrastructure built directly into the Linux kernel, and has been since the module was merged in late 2006 and shipped with kernel 2.6.20 in February 2007. Rather than sitting on top of the operating system as a separate layer, it turns the Linux kernel itself into a hypervisor. This makes KVM a type-1 hypervisor in practice, even when running on a desktop machine, which is part of why its performance characteristics compare favourably with hosted solutions.
In use, KVM operates alongside QEMU for hardware emulation, libvirt for virtual machine management and virt-manager as a graphical front end. The distinction matters because problems and improvements tend to originate in different parts of that stack. KVM itself is rarely the issue; QEMU and libvirt are where the day-to-day configuration lives.
To confirm that the host CPU supports hardware virtualisation before beginning, the following command checks for the relevant flags:
egrep -c '(vmx|svm)' /proc/cpuinfo
Any result above zero means the hardware is capable. Intel processors expose the vmx flag and AMD processors expose svm.
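On capable hardware, the kvm kernel modules normally load automatically. A quick way to confirm that they are present, along with the /dev/kvm device that QEMU uses, is:
lsmod | grep kvm
ls -l /dev/kvm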
Installing the Required Packages
The installation is straightforward on any major distribution.
On Debian and Ubuntu:
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
On Fedora:
sudo dnf install @virtualization
On Arch Linux:
sudo pacman -S qemu libvirt virt-manager bridge-utils
After installation, the current user needs to be added to the libvirt and kvm groups before the tools will work without root privileges:
sudo usermod -aG libvirt,kvm $(whoami)
Logging out and back in instates the group membership.
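On distributions that do not enable the libvirt daemon automatically (Arch Linux, for instance), it needs to be started once:
sudo systemctl enable --now libvirtd
A quick query then confirms that the tools can reach the daemon; an empty list is the expected result at this stage:
virsh list --all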
Configuring Network Bridging
The default network configuration in libvirt uses NAT, which is sufficient for most purposes and requires no additional setup. The VM can reach the internet and the host, but the host cannot initiate connections to the VM. For a Windows 11 guest used primarily for application compatibility, NAT works without complaint.
A bridged network, which places the VM on the same network segment as the host, requires a wired Ethernet connection. Wireless interfaces do not support bridging in the standard Linux networking stack because of how 802.11 handles MAC addresses. For those on a wired connection, a libvirt network that attaches guests to a host bridge named br0 can be defined with a file named bridge.xml:
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
The network definition is then loaded, started and marked to start automatically:
sudo virsh net-define bridge.xml
sudo virsh net-start br0
sudo virsh net-autostart br0
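One thing the XML does not do is create the br0 device itself; that is the job of the host's networking tools. As a sketch for a NetworkManager-based system, with enp3s0 standing in for the actual wired interface name, the bridge can be created as follows:
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname enp3s0 master br0
nmcli connection up br0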
Installing Windows 11
Windows 11 requires TPM 2.0 and Secure Boot. Neither is present in a default KVM configuration, and both need to be added explicitly.
The swtpm package provides software TPM emulation:
sudo apt install swtpm swtpm-tools # Debian/Ubuntu
sudo dnf install swtpm swtpm-tools # Fedora
UEFI firmware is provided by the ovmf package, which supplies the firmware images that virt-manager needs, including the Secure Boot-capable variant:
sudo apt install ovmf # Debian/Ubuntu
sudo dnf install edk2-ovmf # Fedora
In virt-manager, when creating the VM, the firmware should be set to UEFI x86_64 rather than the default BIOS option. The plain OVMF_CODE.fd image boots UEFI but ships without Microsoft's keys enrolled, so for Secure Boot the pre-enrolled variant is the one to pick (OVMF_CODE_4M.ms.fd on Debian-based systems, OVMF_CODE.secboot.fd on Fedora). A TPM 2.0 device should be added in the hardware configuration before the VM is started. With those two elements in place, the Windows 11 installer proceeds without complaint about the hardware requirements.
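For reference, the TPM device that virt-manager adds corresponds to libvirt XML along these lines, with the CRB model being the usual choice for TPM 2.0 emulation through swtpm:
<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>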
The VirtIO drivers ISO, published by the Fedora virtio-win project, should be attached as a second virtual CD-ROM drive during installation. The installer will not find the storage device otherwise, because the VirtIO disk controller is not a device that Windows recognises without a driver. When the installer asks for an installation location and no disks appear, clicking "Load driver" and browsing to the VirtIO ISO resolves it.
During the out-of-box experience, Windows 11 requires a Microsoft account and an internet connection by default. To bypass this and create a local account instead, opening a command prompt with Shift+F10 and running the following works on the Home edition:
oobe\bypassnro
The machine restarts and presents an option to proceed without internet access.
Performance Considerations
KVM performance for a Windows 11 guest is generally good, but one factor specific to Windows 11 is worth understanding. Memory Integrity, also referred to as Hypervisor-Protected Code Integrity (HVCI), is a Windows security feature that uses virtualisation to protect the kernel. Running it inside a virtual machine creates nested virtualisation overhead because the guest is attempting to run its own virtualisation layer inside the host's. The performance impact is more pronounced on processors predating Intel Kaby Lake or AMD Zen 2, where the hardware support for nested virtualisation is less capable.
The CPU type selection in virt-manager also matters more than it might appear. Setting the CPU model to host-passthrough exposes the actual host CPU flags to the guest, which improves performance compared to emulated CPU models, at the cost of reduced portability if the VM image is ever moved to a different machine.
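In the libvirt XML, that selection amounts to a single element, shown here as a sketch:
<cpu mode='host-passthrough' check='none'/>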
Host File System Access and Clipboard Sharing
This was where the experience diverged most noticeably from VirtualBox. VirtualBox Guest Additions handle shared folders and clipboard integration as a single installation, and the result works reliably with minimal configuration. KVM requires separate solutions for each, and in 2022 neither was as seamless as it has since become.
Clipboard Sharing via SPICE
Clipboard sharing uses the SPICE display protocol rather than VNC. The VM needs a SPICE display and a virtio-serial controller, which virt-manager adds automatically when SPICE is selected. Within the Windows guest, the installer for SPICE guest tools provides the clipboard agent. Once installed, clipboard text passes between host and guest in both directions.
The critical dependency that caused problems in 2022 was the virtio-serial channel. Without a com.redhat.spice.0 character device present in the VM configuration, the clipboard agent installs successfully but does nothing. Virt-manager now adds this automatically when SPICE is selected, which removes one of the more common failure points.
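For anyone inspecting an older VM definition by hand, the channel in question looks like this in the libvirt XML:
<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
</channel>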
Host Directory Sharing via Virtiofs
At the time of this investigation, the practical option for sharing files between the Linux host and a Windows guest was WebDAV, which worked but felt like a workaround. The proper solution, virtiofs, existed but was not yet well-supported on Windows guests. The situation has since improved to the point where virtiofs is now the standard recommended approach.
It requires three components: the virtiofsd daemon on the host (included in recent QEMU packages), the virtiofs driver from the VirtIO Windows drivers package and WinFsp, which is the Windows equivalent of FUSE. Once configured through virt-manager's file system hardware settings, the shared directory appears as a mapped drive in Windows Explorer. The virtiofsd daemon was also rewritten in Rust in the intervening period, improving both its reliability and performance.
To configure a shared directory, shared memory must first be enabled in the VM's memory settings, then a file system device added with the driver set to virtiofs, a source path on the host and an arbitrary mount tag. The corresponding libvirt XML looks like this:
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <source dir='/home/user/shared'/>
  <target dir='host_share'/>
</filesystem>
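On the Windows side, once WinFsp and the driver are installed, the virtiofs service must be running before the mapped drive appears. Assuming the service name used by current virtio-win packages, the following commands in an elevated prompt set it to start automatically (sc.exe requires the space after start=):
sc.exe config VirtioFsSvc start= auto
sc.exe start VirtioFsSvc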
This was the area where VirtualBox held a clear practical advantage in 2022. The gap has since narrowed considerably.
Migrating from VirtualBox
Moving existing VirtualBox VMs to KVM is possible using qemu-img, which converts between disk image formats. The straightforward conversion from VDI to QCOW2 is:
qemu-img convert -f vdi -O qcow2 windows11.vdi windows11.qcow2
For large images or where reliability is a concern, converting via an intermediate RAW format reduces the risk of issues:
qemu-img convert -f vdi -O raw windows11.vdi windows11.raw
qemu-img convert -f raw -O qcow2 windows11.raw windows11.qcow2
The resulting QCOW2 file can then be used when creating a new VM in virt-manager, selecting "Import existing disk image" rather than creating a new one.
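Before importing, the conversion can be sanity-checked with qemu-img, which reports the format and virtual size of the new image:
qemu-img info windows11.qcow2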
How the Landscape Has Shifted Since
The investigation described here took place during a specific window: VirtualBox 6.1.x was the current release, Windows 11 had just launched, and KVM was the most practical route to TPM emulation on Linux. That context has changed in several ways worth noting for anyone reading this in 2026.
VirtualBox 7.0 arrived in October 2022 with TPM 1.2 and 2.0 support, Secure Boot and a number of additional improvements. The original reason for investigating KVM was resolved, and for those who had moved across during the gap period, returning to VirtualBox for Windows guests made sense given its more straightforward Guest Additions integration.
QEMU reached version 10.0 in April 2025, a significant milestone reflecting years of accumulated improvements to hardware emulation, storage performance and x86 guest support. Libvirt has kept pace, adding reliable internal snapshots for UEFI-based VMs, evdev input device hot plug and improved unprivileged user support. The virtiofs situation for Windows guests has moved from "technically possible but awkward" to "the recommended approach with good documentation and a rewritten daemon", which addresses the most significant practical shortcoming from 2022 directly.
The broader desktop virtualisation landscape shifted when VMware Workstation Pro became free for all users, including commercial ones, in November 2024. VMware Workstation Player was discontinued as a separate product at the same time, having become redundant once Workstation Pro was available at no cost. This gave desktop users a third credible option alongside VirtualBox and KVM, with VMware's historically strong Windows guest integration now accessible without a licence fee, though users of the free version are not entitled to support through the global support team.
The miniature PC market also expanded considerably from 2023 onwards, with Intel N100-based and AMD Ryzen Embedded systems offering enough performance to run Windows natively at modest cost. For many people, that proves a cleaner solution than any hypervisor, eliminating the integration limitations entirely by giving Windows its own dedicated hardware.
Final Assessment
KVM handled Windows 11 competently during a period when the alternatives could not, and the platform has continued to improve in the years since. The two areas that fell short in 2022, host file sharing and clipboard integration, have been addressed by developments in virtiofs and the SPICE tooling, and a new user starting today may find the experience noticeably smoother.
Whether KVM is the right choice in 2026 depends on the use case. For Linux-native workloads and server-style VM management, it remains the strongest option on Linux. For a Windows desktop guest where ease of integration matters most, VirtualBox 7.x and VMware Workstation Pro are both strong alternatives, with the latter now free to use for both commercial and personal purposes. The question that drove this investigation was answered by VirtualBox itself in October 2022. KVM provided a workable solution in the meantime, and the platform has only become more capable since then.
Additional Reading
How To Convert VirtualBox Disk Image (VDI) to Qcow2 format
How to enable TPM and secure boot on KVM?
Windows 11 on KVM – How to Install Step by Step?
Enable Virtualization-based Protection of Code Integrity in Microsoft Windows
Converting QEMU disk images to VirtualBox images on Linux Mint 21
Recently, VirtualBox gained fuller support for Windows 11, and I subsequently set up a new Windows 11 virtual machine that, I hope, will supplant a Windows 10 counterpart in time. While the setup itself was streamlined, I ran into such stability issues that I set the new VM aside until a new version of VirtualBox got released. That has happened with the appearance of version 7.0.2, but Windows 11 remains prone to freezing on my Linux Mint machine. Thankfully, that now happens much less frequently, yet the need for added stability remains outstanding.
While I was thinking about trying out VirtualBox 7.0.0, I remembered a QEMU machine that I had running Windows 11. Though QEMU proved more limited than VirtualBox for conveniences like moving data in and out of the virtual machine or sound support, there was no problem with TPM support or system stability. Since it did contain some useful data, I wondered about converting its virtual hard disk to VirtualBox format, and it turns out to be easy to do. First, you need to install qemu-img and the other utilities as follows:
sudo apt-get install qemu-utils
With that in place, executing a command like the following performs the required conversion. Here, the -O switch specifies the required output format, vdi in this case.
qemu-img convert -O vdi [virtual hard disk].qcow2 [virtual hard disk].vdi
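With hypothetical file names filled in, that becomes:
qemu-img convert -O vdi windows11.qcow2 windows11.vdi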
While I have yet to mount it on the new VirtualBox Windows 11 virtual machine, it is good to have the old virtual hard disk available for doing so. The thought of using it as a boot drive in VirtualBox did enter my mind, but the required change of drivers and other incompatibilities dissuaded me from doing so.
Getting rid of the Windows Resizing message from a Manjaro VirtualBox guest
Like Fedora, Manjaro also installs a package for VirtualBox Guest Additions when you install the Linux distro in a VirtualBox virtual machine. However, it does have certain expectations when doing this. On many systems, and my own is one of these, Linux guests are forced to use the VMSVGA virtual graphics controller while Windows guests are allowed to use the VBoxSVGA one. It is the latter that Manjaro expects, so you get a message like the following appearing when the desktop environment has loaded:
Windows Resizing
Set your VirtualBox Graphics Controller to enable windows resizing
After ensuring that gcc, make, perl and kernel headers are installed, I usually install VirtualBox Guest Additions myself from the included ISO image, and so I did the same with Manjaro. Doing that and restarting the virtual machine got me extra functionality like screen resizing and being able to copy and paste between the VM and elsewhere after choosing the Bidirectional setting in the menus under Devices > Shared Clipboard.
That still left an unwanted message popping up on startup. To get rid of it, I just needed to remove /etc/xdg/autostart/mhwd-vmsvga-alert.desktop. While it can be deleted outright, I moved it somewhere else instead, and a restart proved that the message was gone as needed.
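A move along these lines does the job, with the destination being arbitrary so long as it is outside the autostart directory:
sudo mv /etc/xdg/autostart/mhwd-vmsvga-alert.desktop ~/mhwd-vmsvga-alert.desktop.bak
Now everything is working as I wanted.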
Shared folders not automounting on an Ubuntu 18.04 guest in a VirtualBox virtual machine
Over the weekend, I finally got to resolve a problem that had affected an Ubuntu 18.04 virtual machine for quite a while. The usual checks on Guest Additions installation and vboxsf group membership were performed, but these were not the cause of the issue. Also, no other VM (Windows 7 and 10, and Linux Mint Debian Edition) on the same Linux Mint 19.2 machine was experiencing the same issue. The latter observation suggested that the problem was intrinsic to the Ubuntu VM itself.
Because I install the Guest Additions software from the included virtual CD, I executed the following command to open the relevant file for editing:
sudo systemctl edit --full vboxadd-service
If I had installed virtualbox-guest-dkms and virtualbox-guest-utils from the Ubuntu repositories instead, then this would have been the command that I needed to execute instead of the above.
sudo systemctl edit --full virtualbox-guest-utils
Whichever file gets opened, the line that needs attention is the one beginning with "Conflicts" (line 6 in the file on my system). The required edit removes systemd-timesyncd.service from the list following the equals sign. It is also worth checking that file paths in the unit include the correct version number for the Guest Additions software that is installed, in case this is not how things are. The only change that was needed on my Ubuntu VM was to the Conflicts line, and rebooting it got the shared folder automatically mounted under the /media directory as expected.
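For illustration, and assuming the unit file lists both targets on that line, the edit turns this:
Conflicts=shutdown.target systemd-timesyncd.service
into this:
Conflicts=shutdown.target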
Enlarging a VirtualBox VDI virtual disk
It is remarkable how the Windows folder manages to grow on your C drive, and one in a Windows 7 installation was the cause of my needing to expand the VirtualBox virtual machine VDI disk on which it was installed. After trying various ways to cut down the size, an enlargement could not be avoided. In all of this, it was handy that I had a recent backup for restoration after any damage.
The same thing meant that I could resort to enlarging the VDI file with more peace of mind than otherwise might have been the case. This needed use of the command line once the VM was shut down. The form of the command that I used was the following:
VBoxManage modifyhd <filepath/filename>.vdi --resize 102400
It appears that this also would work on a Windows host, but mine was Linux, and it did what I needed. The next step was to attach it to an Ubuntu VM and use GParted to expand the main partition to fill the newly available space. That does not mean that it takes up 100 GiB on my system just yet because these things can be left to grow over time and there is a way to shrink them too if you ever need to do just that. As ever, having a backup made before any such operation may have its uses if anything goes awry.
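A side note for anyone repeating this on a newer VirtualBox release: modifyhd has since been deprecated in favour of modifymedium, so the equivalent command would be the following:
VBoxManage modifymedium disk <filepath/filename>.vdi --resize 102400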
Migrating a virtual machine from VirtualBox to VMware Player on Linux
The progress of Windows 10 is something that I have been watching. Early signs have been promising, and the most recent live event certainly contained its share of excitement. The subsequent build that was released was another step in the journey, though the new Start Menu appears more of a work in progress than it did in previous builds. Keeping up with these advances sometimes steps ahead of VirtualBox support for them, and I discovered that again in the last few days. VMware Player seems unaffected, so I thought that I'd try a migration of the VirtualBox VM with Windows 10 onto there. In the past, I did something similar with a 32-bit instance of Windows 7 that subsequently got upgraded all the way up to 8.1, but that may not have been as slick as the latest effort, so I thought that I would share it here.
The first step was to export the virtual machine as an OVF appliance using File > Export Appliance..., only to make a poor choice of OVF version: I picked 2.0 and subsequently discovered that 1.0 was the better option. The equivalent command line would look like the following (there are two dashes before the ovf10 option below):
VBoxManage export [name of VM] -o [name of file].ova --ovf10
VMware has a tool for extracting virtual machines from OVF files that generates a set of files that work with Player and other similar products of theirs. It goes under the unsurprising name of OVF Tool and usefully works from a command line session. When I first tried it with an OVF 2.0 file, I got the following error, and it stopped doing anything as a result:
Line 2: Incorrect namespace http://schemas.dmtf.org/ovf/envelope/2 found.
The only solution was to create a version 1.0 file and use a command like the following:
ovftool --lax [name of file].ova [directory location of VM files]/[name of file].vmx
The --lax option is needed to ensure successful execution, even with an OVF 1.0 file as the input. Once I had done this on my Ubuntu GNOME system, the virtual machine could be opened up in VMware Player and I could use the latest build of Windows 10 at full screen, something that was not possible with VirtualBox. This may be how I survey the various builds of the operating system that appear before its final edition is launched later this year.
Migrating a Windows 7 Virtual Machine from VirtualBox to VMware Player
Seeing how well Windows 8 was running in a VMware Player virtual machine and that was without installing VMware Tools in the guest operating system, I was reminded about how sluggish my Windows 7 VirtualBox VM had become. Therefore, I decided to try a migration of the VM from VirtualBox to VMware. My hope was this: it would be as easy as exporting to an OVA file (File > Export Appliance... in VirtualBox) and importing that into VMware (File > Open a VM in Player). However, even selecting OVF compatibility was insufficient for achieving this, and the size of the virtual disks meant that the export took a while to run as well. The solution was to create a new VM in VirtualBox from the OVA file and use the newly created VMDK files with VMware. That worked successfully to give me a speedier, more responsive Windows 7 VM for my pains.
Access to host directories needed reinstatement using a combination of the VMware Shared Folders feature and updating drive mappings in Windows 7 itself to use what appeared to it as network drives in the Shared Folders directory on the \\vmware-host domain. For that to work, VMware Tools needed to be installed in the guest OS (go to Virtual Machine > Install VMware Tools to make available a virtual CD from which the installation can be done) as I discovered when trying the same thing with my Windows 8 VM, where I dare not instate VMware Tools due to their causing trouble when I last attempted it.
Moving virtual machine software brought about its side effects, though. Software like Windows 7 detects that it's on different hardware, so reactivation can be needed. While Windows 7 reactivation was a painless online affair, it wasn't the same for Photoshop CS5. That meant that I needed help from Adobe's technical support people to get past the number of PCs for which the software had already been activated. In hindsight, deactivation should have been done before the move, but that's a lesson that I know well now. Technical support sorted out my predicament politely and efficiently while reinforcing the aforementioned learning point. Moving virtual machine platforms is very much like moving from one PC to the next, and it hadn't clicked with me quite how real those virtual machines can be when it comes to software licensing.
Apart from that and figuring out how to do it, the move went smoothly. An upgrade to the graphics driver on the host system and getting Windows 7 to recheck the capabilities of the virtual machine even gained me a fuller Aero experience than I had before. Full-screen operation is quite reasonable too (CTRL + ALT + ENTER activates and deactivates it), and photo editing now feels less boxed in as well.
Installing VMware Player 4.04 on Linux Mint 13
Curiosity about the Release Preview of Windows 8 saw me running into bother when trying to see what it's like in a VirtualBox VM. While doing some investigations on the web, I saw VMware Player being suggested as an alternative. Before discovering VirtualBox, I had a licence for VMware Workstation and was interested in seeing what Player would have to offer. Then, it was limited to running virtual machines that were created using Workstation. Now, it can create and manage them itself, without any need to pay for the tool either. Registration on VMware's website is a must for downloading it, though that carries no monetary cost.
Once I had downloaded Player from the website, I needed to install it on my machine. There are Linux and Windows versions; it was the former that I needed, and there are 32-bit and 64-bit variants, so you need to know what your system is running. With the file downloaded, you need to set it as executable and the following command should do the trick once you are in the right directory:
chmod +x VMware-Player-4.0.4-744019.i386.bundle
Then, it needs execution as a superuser. With sudo access for my user account, it was a matter of issuing the following command and working through the installation screens to instate the Player software on the system:
sudo ./VMware-Player-4.0.4-744019.i386.bundle
Those screens proved easy for me to follow, so life would have been good if that were all that was needed to get Player working on my PC. Having Linux Mint 13 means that the kernel is of the 3.2 stock, and that means using a patch to finish off the Player installation, because the required VMware kernel modules silently fail to compile during the installation process. This only manifests itself when you attempt to start VMware Player afterwards and a module installation screen appears. That wouldn't be an issue in itself were it not for the compilation failure of the vmnet module and the subsequent inability to start VMware services on the machine. There is a prompt to peer into the log file for the operation, and that is a little uninformative for the non-specialist.
Rummaging around the web brought me to the requisite patch, which works for Player 4.0.3 and Workstation 8.0.2 by default. Some tweaking allowed me to make it work for Player 4.0.4 too. My first step was to extract the contents of the tarball to /tmp, where I could edit patch-modules_3.2.0.sh. Line 8 was changed to the following:
plreqver=4.0.4
With the amendment saved, it was time to execute the shell script as a superuser, having made it executable beforehand. This can be accomplished using the following command:
chmod +x patch-modules_3.2.0.sh && sudo ./patch-modules_3.2.0.sh
With that completed successfully, VMware Player ran as it should. An installation of Windows 8 into a new VM ran very smoothly, and I was impressed with the performance and responsiveness of the operating system within a Player VM. There are a few caveats, though. First, it doesn't run at all well with VMware Tools, so it's best to leave them uninstalled since it doesn't seem to need them either; it was possible to set the resolution to the same as my screen and use the CTRL+ALT+ENTER shortcut to drop in and out of full screen mode anyway. Second, the unattended Windows installation wasn't the way forward for setting up the VM, but it was no big deal to have that experiment thwarted. The feature remains an interesting one, though.
With Windows 8 running so well in Player, I was reminded of the sluggish nature of my Windows 7 VM and an issue with a Fedora 17 one too. The result was that I migrated the Windows 7 VM from VirtualBox to VMware, and all is so much more responsive. Getting it there took not a little tinkering, so that's a story for another entry. Based on my experiences so far, I reckon that VMware Player will remain useful to me for a little while yet. Resolving the installation difficulty was worth that extra effort.
Getting Gnome Shell going for Fedora 16 running in VirtualBox
There are a number of complaints out there about how hard it is to get GNOME Shell running for a Fedora 16 installation in a VirtualBox virtual machine. As with earlier versions of Fedora, preparation remains a matter of having make, gcc and kernel-devel (kernel headers, in other words). While I have got away with just those, adding dkms (dynamic kernel module support) to the list might be no bad idea either. To get all of those instated, it is a matter of running the following command as root or using sudo:
yum -y install make gcc kernel-devel dkms
The -y switch ensures that any Y/N prompts that usually appear are suppressed and that the installation is completed. Just leave it out if you are inclined to get second thoughts. Another item that has been needed with a previous release of Fedora is libgomp, but I haven't had to add this for Fedora 16 if I recall correctly.
Once those are in place, it is time to install the VirtualBox Guest Additions. Going to Devices > Install Guest Additions... mounts a virtual CD that can be used for the installation of the various drivers that are needed. To do the installation, first go to where the installer is located using the following command:
cd /media/VBOXADDITIONS_4.1.6_74713/
Note that this location will change according to the release and build numbers of VirtualBox, yet the process essentially will be the same aside from this. Once in there, issue the following command as root or using sudo:
./VBoxLinuxAdditions.run
Hopefully, this will complete without errors now with the precursor software that has been added beforehand. However, there is one more thing that needs doing, or you will get the GNOME 3 fallback desktop instead. It pertains to SELinux, an old adversary of mine that got in the way when I was setting up a web server on a machine running Fedora. It doesn't recognise the new VirtualBox drivers as it should, so the following command needs executing as root or using sudo:
restorecon -R -v /opt
Doing this restores the SELinux contexts for the /opt directories within which the VirtualBox software directories are found. The -R switch tells it to act recursively and -v makes it verbose. When it has done its work, hopefully successfully, it is time to reboot the virtual machine, and you should have a GNOME Shell desktop interface when you log in.
Getting VirtualBox working on Ubuntu after a kernel upgrade
In previous posts, I have talked about getting VMware Workstation back on its feet again after a kernel upgrade. It also seems that VirtualBox is prone to the same sort of affliction. However, while VMware Workstation fails to start at all, VirtualBox at least starts itself even if it cannot get a virtual machine going and generates errors instead.
My usual course of action is to fire up Synaptic and install the drivers for the relevant kernel. Looking for virtualbox-ose-modules-[kernel version and type] and installing that usually resolves the problem. For example, at the time of writing, the latest file available for my system would be virtualbox-ose-modules-2.6.24-19-generic. If you are a command line fan, the command for this would be:
sudo apt-get install virtualbox-ose-modules-2.6.24-19-generic
The next thing to do would be to issue the command to start the vboxdrv service, and you'd be all set:
sudo /etc/init.d/vboxdrv start
There is one point of weakness (an Achilles heel, if you like) with all of this: the relevant modules need to be available in the first place and I hit a glitch after updating the kernel to 2.6.24-20 when they weren't; I do wonder why Canonical fail to keep both in step with one another and why the new kernel modules don't come through the updates automatically either. However, there is a way around this too. That means installing virtualbox-ose-source via either Synaptic or the command line:
sudo apt-get install virtualbox-ose-source
The subsequent steps involve issuing more commands to perform a reinstallation from the source code:
sudo m-a prepare
sudo m-a auto-install virtualbox-ose
Once these are complete, the next step is to start the vboxdrv service as described earlier and to add yourself to the vboxusers group if you're still having trouble:
sudo adduser [your username] vboxusers
The source code installation option certainly got me up and running again, and I'll be keeping it on hand for use should the situation raise its head once more.