TOPIC: PACKAGE MANAGER
Setting up GNOME 3 on Arch Linux
22nd July 2011
It must have been my curiosity that drove me to explore Arch Linux a few weeks ago. Its inclusion on a Linux Format DVD and a few kind words about its being a cutting-edge distribution were enough to set me to installing it into a VirtualBox virtual machine for a spot of investigation. Despite warnings to the contrary, I took the path of least resistance with the installation, even though I did look among the packages to see if I could select a desktop environment to be added as well. Not finding anything like GNOME in there, I left everything at its defaults and ended up with a command-line interface, as I suspected. The next job was to use the pacman command to add the extras needed to set in place a fully functioning desktop.
For this, the Arch Linux wiki is a copious source of information, even if it didn't stop me doing things out of sequence. That I didn't go about perusing it linearly was part of the cause of this, but you have to know which place to start first as well. As a result, I have decided to draw everything together here so that it's all in one place and in a more sensible order, even if it wasn't the one that I followed.
The first thing to do is add X.org using the following command:
pacman -Syu xorg-server
The -Syu switch tells pacman to update the package list, upgrade any packages that require it, and add the listed package if it isn't in place already; that's X.org in this case. For my testing, I added xorg-xinit too, which puts the startx command in place. This is the command for adding it:
pacman -S xorg-xinit
With those in place, I would add the VirtualBox Guest Additions next. GNOME Shell requires 3D capability, so that needs to be enabled in the virtual machine's settings while the machine is powered off, or when setting it up in the first place. This command will add the required VirtualBox extensions:
pacman -Syu virtualbox-guest-additions
Once that's done, you need to edit /etc/rc.conf by adding vboxguest vboxsf vboxvideo within the brackets on the MODULES line and adding rc.vboxadd within the brackets on the DAEMONS line. On restarting, everything should be available to you, but the modprobe command is there for any troubleshooting.
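For illustration, and assuming the stock entries that the Arch installer wrote at the time (your own file may well differ), the two lines in /etc/rc.conf would then look something like this:
MODULES=(vboxguest vboxsf vboxvideo)
DAEMONS=(syslog-ng network netfs crond rc.vboxadd)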
With the above pre-work done, you can set to installing GNOME, and I added the basic desktop from the gnome package and the other GNOME applications from the gnome-extra one. GDM is the login screen manager, so that's needed too, and the GNOME Tweak Tool is a very handy thing to have for changing settings that you otherwise couldn't. Here are the commands that I used to add all of these:
pacman -Syu gnome
pacman -Syu gnome-extra
pacman -Syu gdm
pacman -Syu gnome-tweak-tool
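Because pacman happily takes more than one package name at a time, the four commands above can be rolled into one if you prefer; the result is the same either way:
pacman -Syu gnome gnome-extra gdm gnome-tweak-tool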
With those in place, some configuration files were edited so that a GUI was on show instead of a black screen with a command prompt, as useful as that can be. The first of these was /etc/rc.conf, where dbus was added within the brackets on the DAEMONS line and fuse was added between those on the MODULES one.
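Taken together with the earlier VirtualBox additions, and again assuming the default entries of the day, those lines might end up reading like the following:
MODULES=(fuse vboxguest vboxsf vboxvideo)
DAEMONS=(syslog-ng network netfs crond dbus rc.vboxadd)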
Creating a file named .xinitrc in the root home area and adding the following line to that file makes a GNOME session run whenever a startx command is issued:
exec ck-launch-session gnome-session
With all those in place, all that was needed to get a GNOME 3 login screen was a reboot. Arch is so pared back that I could log in as root, not the safest of things to be doing, so I added an account for more regular use. After that, it has been a matter of tweaking the GNOME desktop environment and adding missing applications. The bare-bones installation that I allowed to happen meant that there were a surprising number of them, but that isn't difficult to fix using pacman.
All of this emphasises that Arch Linux is for those who want to pick what they want from an operating system rather than having that decided for them by someone else, an approach that has something going for it, given some of the decisions that make their presence felt in computing environments from time to time. While there's no doubt that this isn't for everyone, the documentation is complete enough for the minimalism not to be a problem for experienced Linux users, and I certainly managed to make things work for me once I got them in the right order. Another thing in its favour is that Arch is a rolling distribution, so you don't have to go through the whole set-up routine every six months, unlike some others. So far, it does seem stable enough and has even set me to wondering if I could pop it on a real computer sometime.
You always can install things yourself...
26th November 2009
With Linux distributions offering you everything on a plate, there is a temptation to stick with what they offer rather than taking things into your own hands. For example, Debian's infrequent stable releases and the fact that they don't seem to change software versions throughout the lifetime of such a release mean that things such as browser versions are fixed for the purposes of stability; Lenny has stuck with Firefox 3.0.6 and called it IceWeasel for some unknown reason. However, I soon got to grab a tarball for 3.5 and popped its contents into /opt, where the self-contained package worked without a hitch. The same modus operandi was used to add Eclipse PDT, and that applied to Ubuntu too until buttons stopped working, forcing a jumping of ship to NetBeans.
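For what it's worth, that manual route amounts to little more than the following, run as root or with sudo; the archive name here is only a placeholder for whatever Mozilla was offering at the time, and the symbolic link is just one way of putting the new version on the path without disturbing the packaged one:
cd /opt
tar -xjf ~/firefox-3.5.tar.bz2
ln -s /opt/firefox/firefox /usr/local/bin/firefox35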
Of course, you could make a mess when veering away from what is in a distribution, but that should be good enough reason not to get carried away with software additions. With the availability of DEB packages for things like Adobe Reader, RealPlayer, VirtualBox, Google Chrome and Opera, keeping things clean isn't so hard. While your mileage may vary when it comes to how well things work out for you, I have only ever had the occasional problem anyway.
What reminded me of this was a recent irritation with the OpenOffice package included in Ubuntu 9.10 whereby spell checking wasn't working. While there were thoughts about in situ fixes like additional dictionary installations, I ended up plumping for what could be called the lazy option: grabbing a tarball full of DEB packages from the OpenOffice website, extracting its contents into /tmp and, once the URE package was in place, installing from there using the command:
dpkg -i o*
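In case it helps, the whole operation came down to something like the sequence below, with the archive name standing in for whatever release was current at the time and the folder inside the tarball possibly being named differently from one version to another:
cd /tmp
tar -xzf ~/OOo_3.1.1_LinuxIntel_install_en-US_deb.tar.gz
cd OOO*/DEBS
sudo dpkg -i o*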
To get application shortcuts added to the main menu, it was a matter of diving into the appropriate subfolder and installing from the GNOME desktop extension package. Of course, Ubuntu's OpenOffice variant was removed as part of all this but, if you wanted to live a little more dangerously, the external installation goes into /opt, which means that there shouldn't be too much of a conflict anyway. In any case, the DIY route got me the spell checking in OpenOffice Writer that I needed, so all was well and another Ubuntu rough edge eradicated from my life, for now anyway.
Converting from CGM to Postscript
24th November 2009
One thing that I recently had to investigate was the possibility of converting CGM vector graphics files into Postscript and from there into PDF. Having used ImageMagick for converting images before, that was an obvious option. However, that cannot process CGM files on its own and needs a delegate or helper application as well. This is the case with raw digital camera files too, with UFRaw being the program chosen. For CGM images, the more obscure RALCGM is what's needed, and tracking it down is a bit of an art. Though the history is that it was developed at the U.K.'s Rutherford Appleton Laboratory, it appears that it was left to go off into the wilderness rather than someone keeping an eye on things. With that in mind, here are the installation packages for Windows and Linux (RPM):
RALCGM is a handy command-line tool that can convert from CGM to Postscript on its own without any need for ImageMagick at all. From what I have seen, fonts on graphical output may look greyer than black, but it otherwise does its job well. However, considering that it is a freely available tool, one cannot complain too much. There are other packages for doing vector to raster conversion, and the ones that I have seen do have GUIs, but the freedom to look at paid-for software wasn't mine to have. The required command looks something like the following:
ralcgm -d PS -oL test.cgm test.ps
The switch -d PS uses the software's Postscript driver and -oL specifies landscape orientation. If you would like to find out more, here's a PDF rendition of the help file that comes with the thing:
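Since the starting point was a need for PDF output, Ghostscript's ps2pdf utility can take over where RALCGM leaves off; a minimal two-step pipeline using the same example file names would be:
ralcgm -d PS -oL test.cgm test.ps
ps2pdf test.ps test.pdf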
Adding Microsoft core fonts to Debian
18th June 2009
When setting up Ubuntu, I usually add in Microsoft's core fonts by installing the msttcorefonts package using either Synaptic or apt-get. Though I am not sure why I didn't try doing the same thing for Debian until now, it's equally feasible. Just pop over to System > Administration > Software Sources and ensure that the check-boxes for the contrib and non-free categories are checked.
You could also achieve the same end by editing /etc/apt/sources.list and adding the non-free and contrib keywords to make lines look like these, before issuing the command apt-get update as root:
deb http://ftp.debian.org/debian/ lenny main non-free contrib
deb-src http://ftp.debian.org/debian/ lenny main non-free contrib
All that you are doing with the manual editing route is performing the same operations that the more friendly front end would do for you anyway. After that, it's a case of going with the installation method of your choice and restarting Firefox or IceWeasel to see the results.
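If apt-get is the chosen method, the installation itself then comes down to a single command issued as root once the package lists have been refreshed; with Synaptic, searching for msttcorefonts and marking it for installation does the same thing:
apt-get install msttcorefonts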
Transferring data between SAS and R
5th June 2008
A question regarding the ability to transfer data between SAS and R set me off on a spot of investigation a while back, and I have always planned to share the results of my labours. Once I managed to locate the required documentation, things became clearer with further inspection. Functions from the foreign package seem to offer the most from the data import and export point of view, so they're what I'll be featuring in this posting.
Here, I am starting with importing, and using the read.ssd function makes life so much easier for getting SAS data into R. I discovered that the foreign package may not be loaded by default, but whether it is already loaded can be determined easily using the following command:
search()
If package:foreign isn't in the list, then you need to issue the following function call:
library(foreign)
Of course, if the foreign package isn't installed, none of this will work. It should live in the library sub-folder of the main R installation directory, but if it isn't there, then downloading the relevant binary package from CRAN is in order. Assuming that all is installed, then a command like the following will perform the needful:
read.ssd("c:/data","data1",sascmd="C:/Program Files/SAS Institute/SAS/V8/sas.exe")
This creates a temporary SAS program that converts the SAS data set into a transport file for reading by another R function that is called in the background, read.xport. From my experience, it all seems to work fairly seamlessly.
To get data out of R and into SAS is a multi-stage process, even with the foreign package. While there are other ways, using the write.foreign function seems more useful than most. Here is an example function call:
write.foreign(data1,"C:/test.txt","C:/test.sas",package="SAS",dataname="data1",validvarname="V7")
While no SAS data sets are created at this stage, a text file is generated along with a SAS program for converting it into a data set. Running the SAS program is a separate step that follows the creation of the two files. Even if it is less streamlined than read.ssd, write.foreign does make it easier to transfer data into SAS than having to write a program from scratch to read in write.table output.
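That last step is no more than a batch run of the generated program; on a Windows installation like the one in the earlier example, something along these lines at a command prompt should do it, with the path to the SAS executable being whatever applies on your own system:
"C:\Program Files\SAS Institute\SAS\V8\sas.exe" -sysin C:\test.sas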
In summary, R can neither read nor write SAS data sets by itself, so you need SAS installed to really make things happen. SAS gets called by read.ssd, and I feel that it would be better if it were called by write.foreign also, rather than a SAS program being generated for execution later on. Even so, it is good to see some custom functionality being provided that makes life easier. There's also the Hmisc package, but my experiences while working with that on S-Plus have been such that it compares less favourably with foreign on the reliability front. Saying that, things may have changed since I last tried it.
A collection of lessons learnt about web hosting
28th March 2008
Putting this blog back on its feet after a spot of web hosting bother caused me to learn a bit more about web hosting than I otherwise might have done. Here's a selection, and they are in no particular order:
- Store your passwords securely and where you can find them because you never know how a foul up of your own making can strike. For example, a faux pas with a configuration file is all that's needed to cause havoc for a database site such as a WordPress blog. After all, nobody's perfect and your hosting provider may not get you out of trouble as quickly as you might like.
- Get a MySQL database or equivalent as part of your package, rather than buying one separately. If your provider allows a trial period, then changing from one package to another could be cheaper and easier than if you bought a separate database and needed to jettison it because you changed from, say, a Windows package to a Linux one or vice versa.
- It might be an idea to avoid a reseller unless the service being offered is something special. Going for the sake of lower cost can be a false economy, and it might be better to cut out the middleman altogether and go direct to their provider. Being able to distinguish a reseller from a real web host would be nice, but I don't see that ever becoming a reality; it is hardly in the resellers' interests, after all.
- Should you stick with a provider that takes several days to resolve a serious outage? The previous host of this blog had a major MySQL server outage that lasted for up to three days, and seeing that was one of the factors that made me turn tail to go to a more trusted provider that I have used for a number of years. The smoothness of the account creation process might be another point worthy of consideration.
- Sluggish system support really can frustrate, especially if there is no telephone support provided and the online ticketing system seems to take forever to deliver solutions. I would advise strongly that a host who offers a helpline is a much better option than someone who doesn't. Saying all of that, I think that it's best to be patient and, when your website is offline, that might not be as easy as you'd hope it to be.
- Setting up hosting or changing from one provider to another can take a number of days because of all that needs doing. So, it's best to allow for this and plan ahead. Account creation can be quick but setting up the website can take time while domain name transfer can take up to 24 hours.
- It might not take the same amount of time to set up Windows hosting as its Linux equivalent. I don't know if my experience was typical, but I have found that the same provider set up Linux hosting far quicker (within 30 minutes) than it did for a Windows-based package (several hours).
- Be careful what package you select; it can be easy to pick the wrong one, depending on how your host's site is laid out and what they are promoting at the time.
- You can have a Perl/PHP/MySQL site working on Windows, even with IIS being used in place of Apache. The Linux/Apache/Perl/PHP/MySQL approach might still be better, though.
- The Windows option allows for .Net, ASP and other such Microsoft technologies to be used. I have to say that my experience and preference is for open-source technologies, so Linux is my mainstay, but learning about the other side can never hurt from a career point of view. After all, I am writing this on a Windows Vista powered laptop to see how the other half lives, as much as anything else.
- Domains serviced by hosting resellers can be visible to the systems of those from whom they buy their wholesale hosting. This frustrated my initial attempts to move this blog over because I couldn't get an account set up for technologytales.com because a reseller had it already on the same system. It was only when I got the reseller to delete the account with them that things began to run more smoothly.
- If things are not going as you would like them, getting your account deleted might be easier than you think, so don't procrastinate because you think it is a hard thing to do. Of course, it goes without saying that you should back things up beforehand.
A UNIX shell running on Windows
15th November 2007
Here's an idea that I got for a post before I spent that torrid weekend with Windows that caused me to jump ship to Linux. The idea of having a UNIX command line while still remaining on Windows did appeal to me at the time, and Cygwin seems to provide an intriguing way to do this. At its most basic, it is a set of DLLs that allow you to run standard UNIX commands in a shell. However, it is extensible with a good number of packages that you can choose to install. NEdit is just one that gets included, and I think that I spied Apache too. The standard installation is a web-based affair, with you downloading only the components that you need; it's worth trawling through the possibilities while you're at it.
Now that I am firmly ensconced in the world of Linux, this may be one possibility that I will park, for a while anyway. After all, I now do have the full power of the UNIX command line...
Setting up a test web server on Ubuntu
1st November 2007
Installing all the bits and pieces is painless enough so long as you know what's what; Synaptic does make it thus. Interestingly, Ubuntu's default installation is a lightweight affair, with the addition of any further components involving downloading the packages from the web. The whole process is all very well integrated and doesn't make you sweat every time you need to install additional software. In fact, it resolves any dependencies for you so that those packages can be put in place too; it lists them, you select them and Synaptic does the rest.
Returning to the job in hand, my shopping list included Apache, Perl, PHP and MySQL, the usual suspects in other words. Perl was already there, as it is on many UNIX systems, so installing the appropriate Apache module was all that was needed. PHP needed the base installation as well as the additional Apache module. MySQL needed the full treatment too, though its being split up into different pieces confounded things a little for my tired mind. Then, there were the MySQL modules for PHP to be set in place too.
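For anyone who prefers the command line to Synaptic, that shopping list could be satisfied with something like the following; the package names reflect the Ubuntu release of the day and may well have changed since:
sudo apt-get install apache2 libapache2-mod-perl2 php5 libapache2-mod-php5 php5-mysql mysql-server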
The addition of Apache preceded all of these, but I have left it until now to describe its configuration, something that took longer than for the others; the installation itself was as easy as it was for the others. However, what surprised me were the differences in its configuration set-up when compared with Windows. There are times when we get the same software but on different operating systems, which means that configuration files get set up differently. The first difference is that the main configuration file is called apache2.conf on Ubuntu rather than httpd.conf as on Windows. Like its Windows counterpart, Ubuntu's Apache does use subsidiary configuration files. However, there is an additional layer of configurability added courtesy of a standard feature of UNIX operating systems: symbolic links. Rather than having a single folder with all the configuration files stored therein, there are two pairs of folders, one pair for module configuration and another for site settings: mods-available/mods-enabled and sites-available/sites-enabled, respectively. In each pair, there is a folder with all the files and another containing symbolic links. It is the presence of a symbolic link for a given configuration file in the latter that activates it. I learned all this when trying to get mod_rewrite going and changing the web server folder from the default to somewhere less susceptible to wrecking during a re-installation or, heaven forbid, a destructive system crash. It's unusual, but it does work, even if it takes that little bit longer to get things sorted out when you first meet up with it.
Apart from the Apache set up and finding the right things to install, getting a test web server up and running was a fairly uneventful process. All's working well now, and I'll be taking things forward from here; making website Perl scripts compatible with their new world will be one of the next things that need to be done.