Technology Tales

Adventures & experiences in contemporary technology

Command Line Software Management

2nd December 2009

One of the nice things about a Debian-based Linux distribution is that it is easy to pull a piece of software onto your system from a repository using either apt-get or aptitude. Some may prefer to have a GUI, but I find that the command line offers a certain extra transparency that stops the “what’s it doing?” type of question. That’s not to say that the GUI-based approach hasn’t a place; I only go using it when seeking out a piece of software without knowing its aptitude-ready name. Interestingly, there are signs that Canonical may be playing with the idea of making Ubuntu’s Software Centre a full application management tool, with updates and upgrades getting added to the current searching, installation and removal facilities. That may well be, but it’s going to take a lot of effort to get me away from the command line altogether.
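
For anyone unfamiliar with these tools, the basic operations look something like the following; gimp is only a stand-in for whatever package you happen to be after, and sudo is assumed for gaining administrative rights:

sudo apt-get update
sudo apt-get install gimp
sudo aptitude search gimp
sudo apt-get remove gimp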

Fedora and openSUSE have their own software management commands too, in the shape of yum and zypper respectively. The recent flurry of new operating system releases has had me experimenting with both of those distros on a real test machine. As might be expected, the usual battery of installation, removal and update activities is well served, and I have been playing with software searching using yum too. What has yet to mature is in-situ distribution upgrading à la Ubuntu. In principle, it is possible, but I got a black screen when I tried moving from openSUSE 11.1 to 11.2 within VirtualBox using instructions on the openSUSE website. Not wanting to wait, I reached for a Live CD instead and that worked a treat on both virtual and real machines. Being in an experimental turn of mind, I attempted the same move from Fedora 11 to the beta release of its version 12. A spot of repository trouble got me using a Live CD in its place. You can perform an in-situ upgrade from a full Fedora DVD, but the only option is system replacement when you have a Live CD. Once installation is out of the way, YaST can be left to one side in favour of zypper, and yum is good enough that Fedora’s GUI alternative can be ignored. It’s nice to see good, transparent ideas taking hold elsewhere, and it may make migration between distros much easier too.
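
For the record, and only as a rough sketch with gimp again standing in for a real package of interest, the equivalent operations look like the following, run as root or with sudo as your setup demands:

yum install gimp
yum search gimp
zypper install gimp
zypper search gimp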

A certain lack of speed

2nd November 2009

A little while ago, I encountered a problem with ImageMagick processing DNG files in Ubuntu 9.04. Not realising that I could solve my own problem by editing a file named delegates.xml, I took to getting a Debian VM to do the legwork for me. That file is where you’ll find all the commands used when helper software is called upon by ImageMagick to help it on its way. On its own, ImageMagick cannot deal with DNG files, so the command line variant of UFRaw (itself a front end for DCRaw) is used to create a PNM file that ImageMagick can handle. The problem a few months back was that the command in delegates.xml wasn’t appropriate for a newer version of UFRaw, and I got it into my head that things like this were hard-wired into ImageMagick. Now, I know better and admit my error.
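
Should the delegate ever let you down, the conversion can be done by hand instead. What follows is only a sketch with made-up file names, and the exact ufraw-batch switches vary between versions, so a look at ufraw-batch --help is advised before depending on it:

ufraw-batch --out-type=ppm8 --output=photo.pnm photo.dng
convert photo.pnm photo.jpg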

With 9.10, it seems that the command in delegates.xml has been corrected, but another issue has raised its head. UFRaw 0.15, it seems, isn’t the speediest when it comes to creating PNM files and, while my raw file processing script works after a spot of modification to deal with changes in output from the identify command that it uses, it takes far too long to run. GIMP also uses UFRaw, so I wonder if the same problem has surfaced there too, but it has been noticed by the Debian team and they have a package for version 0.16 of the software in their unstable branch that looks as if it has sorted the speed issue. However, I am seeing that 0.15 is in the testing branch, so I’d be tempted to stick with Lenny (5.x) if any successor turns out to have slower DNG file handling with ImageMagick and UFRaw. In my estimation, 0.13 does what I need, so why go for a newer release if it turns out to be slower?
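
For what it’s worth, the part of a script most likely to be affected by changes like these is the reading of image dimensions, and a call along the following lines, with the file name only being an example, pins down exactly what gets returned and so is less prone to upsets when the default output changes:

identify -format "%w %h" photo.pnm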

Using the Windows Command Line for Security Administration

24th July 2009

While there are point and click tools for the job, being able to set up new user groups, attach them to folders and assign users to them using the command line has major advantages when there are a number to be set up, and logs of execution can be retained too. In light of this, it seems a shame that terse documentation, along with the difficulty of tracking down answers to any questions using Google, or whatever happens to be your search engine of choice, makes it less easy to discern what commands need to be run. This is where a book would help, but the whole experience is in direct contrast to the community of information providers that is the Linux user community, with Ubuntu being a particular shining example. Saying that, the Windows help system is not so bad once you can track down what you need. For instance, knowing that you need commands like CACLS and NET LOCALGROUP, the ones that have been doing the work for me, it offers useful information quickly enough. To illustrate the usefulness of the aforementioned commands, here are a few scenarios.

Creating a new group:

net localgroup [name of new group] /comment:"[more verbose description of new group]" /add

Add a group to a folder:

cacls [folder address] /t /e /p [name of group]

The /t switch gets cacls to apply changes to the ACL for the specified folder and all its subfolders, recursive action in other words, while the /e specifies ACL editing rather than its replacement and /p induces replacement of permissions for a given user or group. Using :n, :f, :c or :r directly after the name of a specified user or group assigns no, full, change (write) or read access, respectively. Replacing /p with /r revokes access and leaving off the :n/:f/:c/:r will remove the group or user from the folder.
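
As a worked example, and with the folder and group names being nothing more than stand-ins, granting a group change access and later revoking it again would look like this:

cacls D:\Projects /t /e /p Developers:c
cacls D:\Projects /t /e /r Developers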

Add a user to a group:

net localgroup [name of group] [user name (with domain name if on a network)] /add

In addition to NET LOCALGROUP, there is also NET GROUP for wider network operations, something that I don’t have cause to do. Casting the net even wider, I suspect that VB scripting and its ability to tap into Windows Management Instrumentation (WMI) might offer more functionality than what is above (PowerShell also comes to mind while we are on the subject), but I am sharing what has been helping me, and it can be hard to find if you don’t know where to look.
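
Putting the pieces together, the commands lend themselves to being collected into a batch file so that a log of what was done gets kept, which is part of the attraction in the first place. The group, folder and user names below are only examples:

net localgroup Analysts /comment:"Data analysis team" /add >> setup.log 2>&1
cacls D:\Data /t /e /p Analysts:c >> setup.log 2>&1
net localgroup Analysts MYDOMAIN\jbloggs /add >> setup.log 2>&1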

Finding out what kernel version is running

5th May 2008

Here’s the command that does the deed for me on Ubuntu:

uname -a

I usually only need it to find out what header files I need for any repeat VMware installations or reconfigurations.
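
Because it is only the release string that matters for matching header packages, uname -r does the job on its own, and it can even be embedded in the installation command, so something like the following should pull in the headers for the running kernel on Ubuntu:

sudo apt-get install linux-headers-$(uname -r)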

UNIX Process Management

1st June 2007

Here are a few UNIX commands that I have encountered recently that help with process management and are particularly useful when jobs are running in the background:

nohup

It’s short for no hang up and it stops termination of a job when a user logs off. Another result is that all console messages are directed to a file called nohup.out in the directory current to the job being run, or in the user’s home directory where write access to the current working directory is unavailable.
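
As a quick example, with the script name being purely hypothetical, running a job that survives logging off would look like the following, the trailing & being what sends it to the background:

nohup ./long_job.sh &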

ps

This returns a list of processes, their IDs and their statuses. By default, this is for your own processes, but you can look beyond that with the myriad of options that can be passed. For instance, the -U switch allows you to look at the jobs of other users, while the -f one shows more information than the standard call, even including the commands submitted to start the processes in question.
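
For instance, the following shows a full listing of another user’s processes, with the user name being just an example:

ps -f -U someuser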

kill

The name says it all and it’s far quicker than the rigmarole that you have to endure with the Windows task manager; I wonder if there is a command line approach to process termination on Windows.
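
To round things off, the usual routine is to find the process ID with ps and pass it to kill; the number below is only a placeholder and the -9 variant is there for when a process refuses to go quietly:

kill 12345
kill -9 12345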
