Technology Tales

Adventures & experiences in contemporary technology

Sorting out a system update failure for FreeBSD

3rd April 2014

With my tendency to apply Linux updates using the command line, I was happy to see that something similar was possible in FreeBSD too. The first step is to fire up a terminal session and drop into root using the su command. That needs the root superuser password in order to continue, and the next step is to update the local repositories using the following command:

pkg update

After that, it is time to download updated packages and install them by issuing this command:

pkg upgrade

Most of the time, that is sufficient, but I discovered that there are times when the above fails and additional interventions are needed. What I had uncovered were dependency error messages, so I set to looking around the web for remedies. One forum question that was similar to mine met with the suggestion of consulting the file called UPDATING in /usr/ports/. An answer like that would look unhelpful but for its inclusion of advice on where extra actions were needed. Also, there is a useful article on updating FreeBSD ports that gives more in the way of background knowledge, so you understand more about what needs doing.

Following both that and the UPDATING file resulted in my taking the following sequence of steps. The first act was to download and initialise the Ports Collection, a set of build instructions:

portsnap fetch extract

The above is a one-time-only action, so future updates are done as follows:

portsnap fetch update

With an up-to-date Ports Collection in place, it was time to install portmaster:

pkg install portmaster

A look through /usr/ports/UPDATING revealed the commands I needed for updating Python and Perl to address the dependency problem that I was having:

portmaster -o devel/py-setuptools27 devel/py-setuptools
portmaster -r py\*setuptools
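
Incidentally, perusing the UPDATING file itself is as simple as opening it in a pager such as less:

less /usr/ports/UPDATING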

With those completed, I re-ran pkg upgrade and all was well. The extra actions needed to get that result will not get forgotten, and I am sharing them on here so that I know where they are. If anyone else has a use for them, that would be even better.

A fallback method of installing Nightingale in Linux

3rd December 2013

When I upgraded to Ubuntu GNOME 13.10 and went for the 64-bit variant, I attempted a tried and tested approach for installing Nightingale that used a PPA, only for it not to work. At that point, the repository had not caught up with the latest Ubuntu release (it has by the time of writing) and other pre-compiled packages would not work either. However, there was one further possibility left: downloading a copy of the source code and compiling that. My previous experiences of doing that kind of thing have not been universally positive, so it was not my first choice, but I gave it a go anyway.

In order to get the source code, I first needed to install Git so that I could take a copy from the version-controlled repository, and the following command added the tool together with the other build prerequisites:

sudo apt-get install git autoconf g++ libgtk2.0-dev libdbus-glib-1-dev libtag1-dev libgstreamer-plugins-base0.10-dev zip unzip

With that lot installed, it was time to check out a copy of the latest source code, and I went with the following:

git clone https://github.com/nightingale-media-player/nightingale-hacking.git

The next step was to go into the nightingale-hacking sub-folder and issue the build command:

cd nightingale-hacking
./build.sh

That should produce a sub-directory named nightingale that contains the compiled executable files. If this exists, it can be copied into /opt. If not, then create a folder named nightingale under /opt and copy the files from ~/nightingale-hacking/compiled/dist into that location; there is a sketch of those copying steps near the end of this piece. Since Ubuntu GNOME 13.10 comes with GNOME Shell 3.8, the next step took a little fiddling before it was sorted: adding an icon to the application menu or dashboard. This involved adding a file called nightingale.desktop in /usr/share/applications/ with the following contents:

[Desktop Entry]
Name=Nightingale
Comment=Play music
TryExec=/opt/nightingale/nightingale
Exec=/opt/nightingale/nightingale
Icon=/usr/share/pixmaps/nightingale.xpm
Type=Application
X-GNOME-DocPath=nightingale/index.html
X-GNOME-Bugzilla-Bugzilla=Nightingale
X-GNOME-Bugzilla-Product=nightingale
X-GNOME-Bugzilla-Component=BugBuddyBugs
X-GNOME-Bugzilla-Version=1.1.2
Categories=GNOME;Audio;Music;Player;AudioVideo;
StartupNotify=true
OnlyShowIn=GNOME;Unity;
Keywords=Run;
Actions=New
X-Ubuntu-Gettext-Domain=nightingale

[Desktop Action New]
Name=Nightingale
Exec=/opt/nightingale/nightingale
OnlyShowIn=Unity

It was created from a copy of another *.desktop file, and the categories in there, together with the link to the icon, were as important as the title and took a little tinkering before all was in place. Also, you may find that /opt/nightingale/chrome/icons/default/default.xpm needs to become /usr/share/pixmaps/nightingale.xpm using the cp command before your new menu entry gains an icon to go with it. While the steps that I describe here worked for me, there is more information on the Nightingale wiki if you need it.
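
To pull the earlier file movements together, the copying steps look something like the following sketch; it assumes the build output location mentioned above, so adjust the paths to whatever the build actually produced:

# create the destination and copy in the build output
sudo mkdir -p /opt/nightingale
sudo cp -r ~/nightingale-hacking/compiled/dist/* /opt/nightingale/
# give the menu entry its icon
sudo cp /opt/nightingale/chrome/icons/default/default.xpm /usr/share/pixmaps/nightingale.xpm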

Wiping of hard drives with Linux

2nd December 2013

More than a decade of computer upgrades and rebuilds can leave obsolete kit in your hands, and the arrival of legislation controlling the dumping of electronic goods during this time can leave one wondering how anyone can dispose of them. Thankfully, I discovered that the local council refuse site only a few miles away from me accepts such things for recycling, and it saw me visiting a good few times over the last summer with obsolete and non-working gadgets that had stayed with me far too long. Some were as bulky as a computer monitor and a printer, but others were relatively diminutive.

Disposing of non-working and utterly obsolete equipment is an easy choice, but I find this harder when a device still works as intended and even might have a use yet. When you realise that computer motherboards still come with PS/2, floppy and IDE ports, things get trickier. My Gigabyte Z87-HD3 mainboard has just one PS/2 port where predecessors would have had two, the same applies to IDE sockets, and there still is a floppy drive socket on there too, a surprising sight for anyone used to thinking that such things are utterly outmoded these days. So, PC technology isn’t relinquishing backwards compatibility just yet, given that the mainboard is part of a system with an Intel Core i5-4670K CPU and 24 GB of RAM.

Even with the presence of an IDE port, I was not tempted to use leftover 10 GB and 20 GB hard drives that I have had for just over a decade. Ten years ago, that sort of capacity would have been respectable, were it not for our voracious appetite for data storage thanks to photography, video and music. Apart from the size constraints, the speed of those drives cannot compare well with what we have today either, and I quickly saw that when I replaced a Samsung 160 GB hard drive of a similar age with a Samsung SSD.

The result of this line of thought was that I was minded to recycle the drives, so I started to think about wiping, and Linux has a good tool for this in the form of the dd command. It can overwrite data on the disks so as to render the information virtually irretrievable. Also, Linux has a number of pseudo-devices that can supply junk data for overwriting purposes. They are cousins of /dev/null, which is used to discard the output of a command. The first is /dev/zero, which supplies a stream of zeros, and it is this that I have used. However, there also are /dev/random and /dev/urandom for those wanting a more random element to the overwriting.

To overwrite data on a disk with zeroes while having feedback on progress, the following command achieves the required result:

sudo dd if=/dev/zero | pv | sudo dd of=/dev/sdd bs=16M

The whole operation needs to be executed with root privileges. The if parameter of dd specifies the input data, and this is sent to a pv command that shows the progress bar that dd would not produce by itself, while sending the output on to another dd command with the disk to be overwritten specified using the of parameter. The bs parameter in that second dd command specifies the block size for the disk-writing job. Unfortunately, pv is not installed by default, so you need to add it yourself. On a Debian, Ubuntu or Linux Mint system, the command is the following:

sudo apt-get install pv
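
Going back to the overwriting itself, substituting /dev/urandom for /dev/zero gives the more random fill mentioned earlier; the /dev/sdd device name is carried over from the example above, so double-check the target disk before issuing anything like this:

sudo dd if=/dev/urandom | pv | sudo dd of=/dev/sdd bs=16M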

That pv sandwich also is invaluable for those times when dd is needed to copy partitions between different physical or virtual (in a virtual machine) disks. Without it, you might wonder what exactly is happening in the silence, and that is especially concerning when you are retrying an operation that failed previously and it takes a while to complete each time.
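
As a sketch of that partition-copying use case, the pattern stays the same with partitions in place of whole disks; the device names here are hypothetical, so confirm your own with a tool like fdisk before going anywhere near such a command:

# copy one partition to another, with progress feedback from pv
sudo dd if=/dev/sda1 | pv | sudo dd of=/dev/sdb1 bs=16M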

Installing Citrix Receiver 13.0 in Ubuntu GNOME 13.10 64-bit

28th November 2013

Installing the latest version of Citrix Receiver (13.0 at the time of writing) on 64-bit Ubuntu should be as simple as downloading the required DEB package and double-clicking on the file so that Ubuntu Software Centre can work its magic. Unfortunately, the 64-bit DEB file is faulty, so the Ubuntu community how-to guide for Citrix still is needed. In fact, any user of Linux Mint or another distro that uses Ubuntu as its base would do well to have a look at that Ubuntu link.

For the sake of completeness, I still am going to let you in on the process that worked for me. Once the DEB file has been downloaded, the first task is to create a temporary folder where the DEB file’s contents can be extracted:

mkdir ica_temp

With that in place, it then is time to do the extraction, and that needs two commands: the first extracts everything apart from the control file, while the second extracts the control file itself.

sudo dpkg-deb -x icaclient- ica_temp
sudo dpkg-deb --control icaclient- ica_temp/DEBIAN

It is the control file that has been the cause of all the bother because it refers to unavailable dependencies that it really doesn’t need anyway. To open the file for editing, issue the following command:

sudo gedit ica_temp/DEBIAN/control

Then change line 7 (it should begin with Depends:) to: Depends: libc6-i386 (>= 2.7-1), lib32z1, nspluginwrapper. There are other software packages listed in there that Ubuntu no longer supports, and they are not needed anyway. With the edit made and the file saved, the next step is to build a new DEB package with the corrected control file:

dpkg -b ica_temp icaclient-modified.deb
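
Incidentally, for anyone preferring a one-liner to an editor session, the same control file edit could be made with sed; this is a sketch of my own rather than something from the Ubuntu guide:

sudo sed -i 's/^Depends:.*/Depends: libc6-i386 (>= 2.7-1), lib32z1, nspluginwrapper/' ica_temp/DEBIAN/control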

Once you have the package, the next step is to install it using the following command:

sudo dpkg -i icaclient-modified.deb

If it fails, then you have missing dependencies, and the following command should sort these before a re-run of the above command:

sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386

With Citrix Receiver installed, there is one more thing that is needed before you can use it freely. This is to put Thawte security certificate files into /opt/Citrix/ICAClient/keystore/cacerts. What I had not realised until recently was that many of these already are in /usr/share/ca-certificates/mozilla, and linking to them with the following command makes them available to Citrix Receiver:

sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/

Another approach is to download the Thawte certificates and extract the archive to /tmp/. From there, they can be copied to /opt/Citrix/ICAClient/keystore/cacerts, and I copied the Thawte Personal Premium certificate as follows:

sudo cp "/tmp/Thawte Root Certificates/Thawte Personal Premium CA/Thawte Personal Premium CA.cer" /opt/Citrix/ICAClient/keystore/cacerts/

Until I found out about what was in the Mozilla folder, I simply picked out the certificate mentioned in the Citrix error message and copied it over like the above. Of course, all of this may seem like a lot of work to those who are non-tinkerers, so I have added a repaired 64-bit DEB package that incorporates all of the above; it should not need any further intervention aside from installing it using GDebi, Ubuntu’s Software Centre, dpkg or anything else that does what’s needed.

A reappraisal of Windows 8 and 8.1 licensing

15th November 2013

With the release of Windows 8 around this time last year, I thought that the full retail version that some of us got for fresh installations on PCs, real or virtual, had become a thing of the past. In fact, it did seem that every self-respecting technology news website and magazine was saying just that. The release that you would buy from Microsoft or from mainstream computer stores was labelled as an upgrade. That made it look as if you needed the OEM or System Builder edition for those PCs that needed a new Windows installation and that the licence that you bought was then attached to the machine onto which it got installed.

As is usual with Microsoft, the situation is less clear-cut than that. For instance, there was some back-pedalling to allow OEM editions of Windows to be licensed for personal use on real or virtual PCs. With Windows 7 and its predecessors, it even was possible to install afresh on a PC without Windows by first installing an inactivated copy on there and then upgrading that as if it were a previous version of Windows. Of course, an actual licence for the previous version of Windows was needed for full compliance, if not for the actual installation. At times, Microsoft muddies the waters so as to keep its support costs down.

Even with Microsoft’s track record in mind, it still did surprise me when I noticed that Amazon was selling what appeared to be full versions of both Windows 8.1 and Windows 8.1 Pro. Having set up a 64-bit VirtualBox virtual machine for Windows 8.1, I got to discover the same for software purchased from the Microsoft website. However, unlike the DVD versions, you do need an active Windows installation if you fancy a same-day installation of the downloaded software. For those without Windows on a machine, this can be as simple as downloading either the 32-bit or the 64-bit 90-day evaluation edition of Windows 8.1 Enterprise and using that as a springboard for the next steps. This need not be an actual in-situ installation either, because there are options to create an ISO or USB image of the installation disk for later use.

In my case, I created a 64-bit ISO image and used that to reboot the virtual machine that had Windows 8.1 Enterprise on there before continuing with the installation. By all appearances, there seemed to be little need for a pre-existing Windows instance for it to work, so it looks as if upgrades have fallen by the wayside and only full editions of Windows 8.1 are available now. The OEM version saves money so long as you are happy to stick with just one machine, and most users probably will do that. As for the portability of the full retail version, that is not something that I have tested, and I am unsure that I will go beyond what I have done already.

My main machine has seen a change of motherboard, CPU and memory, so that could have de-activated a pre-existing Windows licence. However, I run Linux as my main operating system and, apart possibly from one surmountable hiccup, this proves surprisingly resilient in the face of such major system changes. For running Windows, I turn to virtual machines, and there were no messages about licence activation during the changeover either. Microsoft is anything but forthcoming when it comes to declaring which hardware changes deactivate a licence. Changing a virtual machine from VirtualBox to VMware or vice versa definitely does it, so I tend to avoid doing that. One item that is fundamental to either a virtual or a real PC is the mainboard, and I have seen suggestions that this is the critical component for Windows licence activation; it would make sense if that were the case.

However, this rule is not hard and fast either, since there appears to be room for manoeuvre should your PC break. It might be worth calling Microsoft after a motherboard replacement to see if they can help you; from what I have seen, it is. All in all, Microsoft often makes what appear to be simple rules, only to override them when faced with what happens in the real world. Is that why they can be unclear about some matters at times? Do they still hanker after how they want things to be, even when they are impossible to keep like that?

A way to get Rigo working again in Sabayon

23rd October 2013

After having Sabayon running on a PC until it came to pieces following an attempted version upgrade, I went away from the Linux distro for a while, and Linux Mint now runs on the aforementioned machine. It was only a certain curiosity that got me installing it into a virtual machine on VirtualBox, to see whether my command-line method of keeping the system up to date was the cause or whether rolling or partially-rolling distros have a certain fragility that is not seen in their discrete-release counterparts.

Recently, that ran into a hitch: the Sabayon package manager Rigo failed to start up for me. After waiting to see if it sorted itself out on its own, I looked into returning to those command-line ways, and that line of enquiry led me to a method of restoring Rigo’s functionality from Sabayon’s wiki page on the underlying Entropy. The first step was to issue a command to become root:

su

That needed the appropriate password and the next command issued updated Sabayon’s repositories:

equo update

Once that had done its thing, it was time to install new versions of Entropy and Rigo:

equo install entropy rigo

With that complete, I exited the root session with the exit command. Then, it was time to try running Rigo, and it worked as expected. Any thoughts of adding in the superseded Sulfur (Rigo’s predecessor) were banished on seeing that success.

Surveying changes coming in GNOME 3.10

20th October 2013

GNOME 3.10 came out last month, but it took until its inclusion in the Arch and Antergos repositories for me to see it in the flesh. Apart from the risk of instability, this is the sort of thing at which rolling distributions excel: they can give you a chance to see the latest software before it is included anywhere else. For the GNOME desktop environment, it might otherwise have meant awaiting the next release of Fedora in order to glimpse what is coming. Waiting is not always a bad thing, because Ubuntu GNOME seems to be sticking with a release behind the latest version. With many GNOME Shell extension writers not updating their extensions until Fedora has caught up with the latest stable release of GNOME, this means that a version of the desktop environment has been well bedded in by the time it reaches the world of Ubuntu too. Debian takes this even further by using a stable version from a few years ago, and there is an argument in favour of that from a solidity perspective.

Being in the habit of kitting out GNOME Shell with extensions, I have a special interest in seeing which ones still work, or could work with a little tweaking, and which have fallen from favour. In the top panel, the major change has been to replace the sound and user menus with a single aggregate menu. The user menu in particular has been in receipt of the attentions of extension writers, and their efforts either need re-work or dropping after the latest development. The GNOME project seems to have picked up an annoying habit from WordPress in that the GNOME Shell API keeps changing and breaking extensions (plugins in the case of WordPress). There is one habit from the WordPress world that needs copying though, and that is documentation, especially of that API, for it is hardly anywhere to be found.

GNOME Shell theme developers don’t escape either: a large border appeared around the panel when I used Elementary Luna 3.4, so I turned to XGnome Enhanced (found via GNOME-Look.org) instead. The former no longer is being maintained, since the developer no longer uses GNOME Shell and has not got the same itch to scratch; maybe someone else could take it over, because it worked well enough until 3.8. So far, the new theme works for me, so that will be an option should there be a move to GNOME 3.10 on one of my PCs at some point in the future.

Returning to the subject of extensions, I had a go at seeing how the included Applications Menu extension works now, since it wasn’t the most stable of items before. That has improved, and it looks very usable too, so I am not awaiting the updating of the Frippery equivalent. That the GNOME Shell backstage view has not moved on that much from how it was in 3.8 could be seen as a disappointment, but the workaround will do just fine. Aside from the Frippery Applications Menu, there are other extensions that I use heavily that have yet to be updated for GNOME Shell 3.10. After a spot of success ahead of a possible upgrade to Ubuntu GNOME 13.10 and GNOME Shell 3.8 (though I remain with version 13.04 for now), I decided to see whether I could port a number of these to the latest version of the user interface. Below, you’ll find the results of my labours, so feel free to make use of these updated items if you need them before they are updated on the GNOME Shell Extensions website:

Frippery Bottom Panel

Frippery Move Clock

Remove App Menu

Show Desktop

There have been more changes coming in GNOME 3.10 than GNOME Shell, which essentially is a JavaScript construction. The consolidation of application title bars in GNOME applications continues, but a big exit button has appeared in the affected applications that wasn’t there before. Also, there remains the possibility of applying the previously shared modifications to Nautilus (also known as Files), and a number of these usefully extend themselves to other applications such as Gedit too. Speaking of Gedit, this gains a very useful x of y numbering for the string-searching functionality, with x being the number of the current occurrence of a certain piece of text in a file and y being its total number of occurrences. GNOME Tweak Tool has got an overhaul too and lost the setting that makes a folder path box appear in Nautilus in place of the breadcrumb-style path bar; opening Dconf-Editor, going to org > gnome > nautilus > preferences and completing the tick box for always-use-location-entry will do the needful.
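
For anyone who would rather skip Dconf-Editor, the same preference can be set from a terminal with gsettings, the command-line counterpart to the same settings database:

gsettings set org.gnome.nautilus.preferences always-use-location-entry true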

Essentially, the GNOME project is continuing along the path on which it set out a few years ago. Though I would rather that GNOME Shell were more mature, invasive changes are coming still, and it leaves me wondering if or when this might stop. Maybe that is the consequence of mounting a controversial experiment when users were happy with what was there in GNOME 2. The arrival of Fedora 20 should bring with it an increase in the number of GNOME Shell extensions that have been updated. So long as it remains stable, Antergos is a good way to have a look at the latest version of GNOME for now, and Cinnamon fans may be pleased that Cinnamon 2.0 is another desktop option for the Arch-based distribution. An opportunity to say more about that may arrive yet, once the Antergos installer stops failing at a troublesome package download; a separate VM is being set aside for a look at Cinnamon, because it destabilised GNOME during a previous look.

A look at Ubuntu GNOME 13.10

12th October 2013

With its final release near at hand, I decided to have a look at the beta release of Ubuntu GNOME 13.10 to get a sense of what might be coming. A misstep along the way had me inadvertently download and install the 64-bit edition of 13.04 into a VirtualBox virtual machine. The intention to update that to its soon-to-be-released successor was scuppered by instability, so I never did get to try out an in-situ upgrade to 13.10. What I had in mind was to issue the following command:

gksu update-manager -d

However, I found another one when considering how Ubuntu Server might be upgraded without the GUI application that is the Update Manager. To update to a development version, the following command is what you need:

sudo do-release-upgrade -d

To upgrade to a final release of a new version of Ubuntu, drop the -d switch from the above to use the following:

sudo do-release-upgrade

There is one further option that isn’t recommended for moving between Ubuntu versions, but I use it to get updates such as new kernel subversions as they are released:

sudo apt-get dist-upgrade

Rather than trying out the above, I downloaded the latest ISO image for the beta release of Ubuntu GNOME 13.10 and installed that onto a VM instead. Though it is the 32-bit version of the distro that is installed on my main home PC, it has been the 64-bit version that I have been trying. So far, that seems to be behaving itself, even if it feels a little sluggish, but that could be down to the four-year-old PC that hosts the virtual machine. For a while, I have been playing with the possibility of an upgrade involving an Intel Core i5 4670K CPU and 16 GB of RAM (useful for running multiple virtual machines at a time) along with any motherboard that supports those, so looking at a 64-bit operating system has its uses.

The Linux kernel may be 3.11, but that is not my biggest concern. Neither is the fact that LibreOffice 4.1.2.3 was included and GIMP wasn’t, especially when the latter could be added easily anyway, and it is version 2.8.6 that you get. The move to GNOME Shell 3.8 was what drew me to seeing what was coming, because I have been depending on a number of extensions. As with WordPress and plugins, GNOME Shell seems to have a tempestuous relationship with some of its extensions, and I wanted to see which ones still worked. There also has been a change to the backstage application view, in that you either get all installed applications displayed when you browse them, or you have to start typing the name of the one you want in order to select it. Losing the categorical view that was there until GNOME Shell 3.6 is a step backwards, and I hope that version 3.10 has seen some sort of a reinstatement. There is a way to add these categories, but the result is not as it once was either; also, it shouldn’t be necessary for anyone to dive into a system’s innards to address things like this. With all the constant change, it is little wonder that Cinnamon has become a standalone entity with the release of its version 2.0 and that Debian toyed with not going with GNOME for its latest version (7.1 at the time of writing, and it picked a good GNOME Shell version in 3.4).

Having had a look at other distributions that already have GNOME Shell 3.8, I knew that a few of my extensions worked with it. The list includes Frippery Bottom Panel, Frippery Move Clock, Places Status Indicator, Removable Drive Menu, Remove Rounded Corners (not really needed with the GNOME Shell theme that I use, Elementary Luna 3.4, but I retain it anyway), Show Desktop Button, User Themes and Ignore_Request_Hide_Titlebar. Because of the changes to the backstage view, I added Frippery Applications Menu in preference to Applications Menu, because I have found the latter to be unstable. Useful new discoveries have included Curtains Up and GNOME Shell Open Terminal, while Shell Restart User Menu Entry has made a return and found a use this time around too.

There have been some extensions that were not updated to work with GNOME Shell 3.8 that I have got working. In some cases, it was as simple as adding new version numbers such as 3.8 and 3.8.4 to the list associated with the shell-version property in an extension’s metadata.json file. All extensions are to be found in the .local/share/gnome-shell/extensions location in your home directory, and each has a dedicated folder containing the aforementioned file.
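
As an illustration, the relevant part of a metadata.json file looks something like this; the uuid, name and description are made-up placeholders, while shell-version is the list to extend:

{
  "uuid": "example@example.org",
  "name": "Example Extension",
  "description": "Placeholder entry",
  "shell-version": ["3.6", "3.8", "3.8.4"]
}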

With others, it was a matter of looking in the Looking Glass (execute lg in the box that ALT + F2 brings up on your screen to access this) and seeing what error messages were to be found in there before attempting to correct these in either the extensions’ extension.js files or whatever JavaScript (*.js) file was causing the problem. With either or both of these remedies, I managed to port the four extensions below to GNOME Shell 3.8. In fact, you can download these zip files and install them yourself to see how you get on with them; a manual installation sketch follows the list.

Advanced Settings in User Menu

Antisocial Menu

Remove App Menu

Restart Shell Entry
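
On the matter of installing any of these by hand, unzipping an archive into a folder whose name matches the uuid declared in its metadata.json is the usual approach; the archive and uuid names below are placeholders rather than the real ones:

# the target folder name must match the uuid in metadata.json
unzip example-extension.zip -d ~/.local/share/gnome-shell/extensions/example@example.org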

There is a Remove Panel App Menu that works with GNOME Shell 3.8, but I found that it got rid of the Places menu instead of the panel’s App Menu, so I tried porting the older extension to see if it behaved itself, and it does. With these in place, I have bent Ubuntu GNOME 13.10 to my will ahead of its final release next week, and that includes customising Nautilus too. Other than a new version of GNOME Shell, it looks as if it will come with less in the way of drama, and a breather like that is no bad thing, given that personal computing continues to remain in a state of flux these days.

Turning off seccomp sandbox in vsftpd

21st September 2013

Within the last week, I set up a virtual web server using Arch Linux to satisfy my own curiosity, since the DIY nature of Arch means that you can build up exactly what you need without having any real constraints put upon you. What didn’t surprise me about this was that it took more work than the virtual server that I created using Ubuntu Server, but I didn’t expect ProFTPD to be missing from the main repositories. The package can be found in the AUR, but I didn’t fancy the prospect of dragging more work onto myself, so I went with vsftpd (Very Secure FTP Daemon) instead. In contrast to ProFTPD, this is available in the standard repositories, and there is a guide to its use in the Arch user documentation.

However, while vsftpd worked well just after installation, connections to the virtual FTP server soon failed, with FileZilla issuing uninformative messages. In fact, it was the standard command-line FTP client on my Ubuntu machine that was more revealing. It issued the following message that led me to the cause after my engaging the services of Google:

500 OOPS: priv_sock_get_cmd

With version 3.0 of vsftpd, a new feature was introduced, and it appears that this has caused problems for a few people. That feature is seccomp sandboxing, and it can be turned off by adding the following line to /etc/vsftpd.conf:

seccomp_sandbox=NO
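
With the line added, the daemon needs restarting before the change takes effect. On a systemd-based Arch Linux system like the one described here, a command of this form should do it, assuming the service carries the usual vsftpd name:

sudo systemctl restart vsftpd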

That solved my problem, and version 3.0.2 of vsftpd should address the issue with seccomp sandboxing anyway. In case that solution isn’t as robust as it should be, because seccomp isn’t supported in the Linux kernel that you are using, turning off the new feature still needs to be an option though.

Creating a test web server using Ubuntu Server 13.04 and VirtualBox

1st September 2013

Having seen Linux Format cover tools like Vagrant and Puppet that manage virtual machines, I have been attracted by the prospect of a virtual web server running on my own PC. Certainly, having the LAMP software stack in a VM means that the corresponding tools don’t need to be added to a host system should its operating system need a fresh installation.

As intriguing as tools like Vagrant may be, I decided that I needed to learn a bit more about getting server instances set up in VirtualBox anyway. Thus, I went and downloaded the latest version of Ubuntu Server and gave that a go. One lesson that I learned was that Bridged Networking needs to be added to the VM before installation of the operating system, unless you fancy overcoming the challenge of getting Ubuntu Server to recognise an altered or additional network interface. In my case, I added an extra adapter for the Bridged Networking and left the original in place as NAT. The reason for having Bridged Networking set up is that it allows access to the virtual web server from the host once you know the IP address, and that information can be obtained by executing the ifconfig command on the virtual machine.
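
As an aside, the extra adapter can also be attached from the command line with VBoxManage while the VM is powered off; this is a sketch in which the VM name, the adapter slot and the host interface name (eth0 here) all need changing to suit your own system:

VBoxManage modifyvm "Ubuntu Server" --nic2 bridged --bridgeadapter2 eth0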

With the networking sorted, the next step was to install the 64-bit edition of Ubuntu Server. Unlike its desktop counterpart, this is all driven by text menus, but it remains fairly intuitive, and there is hardly anything there that you wouldn’t see with another Linux distribution. A useful touch is the menu for selecting the types of server services that you’d like to see installed. From this, I chose the web server and SSH options, and I seem to remember that there was a database server option too. If there had been an FTP server option, I would have chosen that as well, but it was no ordeal to add ProFTPD later on anyway.

All of this setup was done through the VirtualBox GUI just to keep life more straightforward. Even so, I only selected 12 MB of video memory and was tempted to cut the overall memory back from 512 MB, but left things be for now. However, what I have begun to do is start and stop the virtual machine from the command line, since servers are headless operations anyway. With SSH enabled, there is little need to have the VirtualBox GUI going. The command for starting the server is below:

VBoxManage startvm "Ubuntu Server" --type=headless

There is a VBoxHeadless command for the same end too, but VBoxManage does what I need. The startvm option is what tells VBoxManage to start the server, and the virtual machine’s name is enclosed in quotes. The --type=headless switch ensures that no window pops up. To stop the virtual web server cleanly, a command like the following is needed:

VBoxManage controlvm "Ubuntu Server" acpipowerbutton

Again, the VBoxManage command gets used, and the acpipowerbutton option ensures that a clean shutdown is performed; not doing so results in the server not fully starting up the next time, according to my experiences thus far. Getting the virtual web server to start and stop along with the host machine itself would be a further refinement, but this looks more complex, so I plan to leave things a while before trying that experiment.
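
As a quick check on whether the virtual machine actually is up before starting or stopping it, VBoxManage has a listing option too; this is standard VirtualBox usage rather than anything specific to this setup:

VBoxManage list runningvms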

  • All the views that you find expressed on here in postings and articles are mine alone and not those of any organisation with which I have any association, through work or otherwise. As regards editorial policy, whatever appears here is entirely of my own choice and not that of any other person or organisation.

  • Please note that everything you find here is copyrighted material. The content may be available to read without charge and without advertising but it is not to be reproduced without attribution. As it happens, a number of the images are sourced from stock libraries like iStockPhoto so they certainly are not for abstraction.

  • With regards to any comments left on the site, I expect them to be civil in tone of voice and reserve the right to reject any that are either inappropriate or irrelevant. Comment review is subject to automated processing as well as manual inspection but whatever is said is the sole responsibility of the individual contributor.