Trying out a new way to upgrade Linux Mint in situ while going from 17.3 to 18.1

There was a time when the only recommended way to upgrade Linux Mint from one version to another was a fresh installation, with backups of your data and a list of the installed applications created using a special tool.

Even so, that never stopped me from doing my own style of in situ upgrade, though some might see that as a risky option. More often than not, it actually worked without causing major problems, back in the days when Linux Mint releases were more tightly tied to Ubuntu’s six-monthly cycle.

In recent years, Linux Mint’s releases have tracked Ubuntu’s Long Term Support (LTS) editions instead. That means any major change comes only every two years, with minor releases in between. The latter are delivered through Linux Mint’s Update Manager, so the process is a simple one to carry out. Still, upgrades are not forced on you; it is left to your discretion as to when to upgrade, since all main and interim versions get the same extended level of support. In fact, the recommendation is not to upgrade at all unless something is broken on your own installation.

For a number of reasons, I followed that advice and stayed with Linux Mint 17.3 on my main machine instead of upgrading to Linux Mint 18. The fact that I broke things on another machine using an older method of upgrading provided even more encouragement.

However, I subsequently discovered another means of upgrading between major versions of Linux Mint that has some endorsement from the project. There still are warnings about testing a live DVD version of Linux Mint on your PC first and backing up your data beforehand. Another task is ensuring that you are upgrading from a fully up-to-date Linux Mint 17.3 installation.
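
If it is of use, getting everything current can be done either through the Update Manager or from a terminal with a pair of commands like these:

sudo apt-get update
sudo apt-get dist-upgrade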

When you are ready, you can install mintupgrade using the following command:

sudo apt-get install mintupgrade

When that is installed, there is a sequence of tasks to work through. The first of these is to simulate an upgrade, to test for the appearance of any untoward messages and resolve them; repeating the check until all is well is the recommendation. The command is as follows:

mintupgrade check

Once you are happy that the system is ready, the next step is to download the updated packages so they are on your machine ahead of their installation. Only then should you begin the upgrade process. The two commands that you need to execute are below:

mintupgrade download
mintupgrade upgrade

Once these have completed, you can restart your system. In my case, the whole process worked well, with only my PHP installation needing attention. A clash between different versions of the scripting interpreter was addressed by removing the older one, since PHP 7 was the version best kept for testing purposes. Beyond that, a reinstallation of VMware Player and the move from version 18 to version 18.1, there was hardly anything more to do and there was next to no real disruption. That is just as well since I depend heavily on my main PC these days. The fallback option of a full installation would have left me clearing things up for a few days afterwards since I use a bespoke selection of software.
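
In case it helps anyone facing the same PHP clash, the clean-up amounted to something like the following; the package name is an assumption on my part, so check what apt reports on your own system first:

sudo apt-get remove php5-common
sudo apt-get autoremove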

Changing file timestamps using Windows PowerShell

Recently, the timestamp on an otherwise unaltered file got changed on me and I needed to change it back. Luckily, I found an answer on the web that used PowerShell to do what I needed, and I am recording it here for future reference. The possible commands are below:

$(Get-Item temp.txt).CreationTime=$(Get-Date "27/10/2014 04:20 pm")
$(Get-Item temp.txt).LastWriteTime=$(Get-Date "27/10/2014 04:20 pm")
$(Get-Item temp.txt).LastAccessTime=$(Get-Date "27/10/2014 04:20 pm")

The first of these did not interest me since I wanted to leave the file creation date as it was. The last write and access times were another matter because these needed altering. The Get-Item cmdlet brings up the file so that its properties can be set; here, those include CreationTime, LastWriteTime and LastAccessTime. The Get-Date cmdlet reads in the provided date and time for use in the timestamp assignment. While PowerShell itself is case insensitive, I have opted to show the camel case that is produced when you tab through command options, for the sake of clarity.
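
As a quick check afterwards, the same properties can be listed to confirm that the new timestamps took hold; the file name is just the example from above:

Get-Item temp.txt | Format-List CreationTime, LastWriteTime, LastAccessTime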

Get-Item and Get-Date have the aliases gi and gd, respectively; the Get-Alias cmdlet will show you a full list of aliases, while Get-Command (gcm) gives you a list of cmdlets. Issuing the following gets you a formatted list that is sent to a text file:

gcm | Format-List > temp2.txt

There is some online help, but it is not quite as helpful as it ought to be, so I have popped over to TechNet whenever I needed extra enlightenment. Here is a command that pops the full help text for a cmdlet into a text file:

Get-Help Format-List -full > temp3.txt

In fact, getting a book might be the best way to find your way around PowerShell, because of all its cmdlets and available objects.

For now, other commands that I have found useful include the following:

Get-Service | Format-List
New-Item -Name test.txt -ItemType "file"

The first of these gets you a list of services, while the second creates a new blank text file for you; with an ItemType of "directory", it can create new folders too. Other useful cmdlets are below:

Get-Location (gl)
Set-Location (sl)
Copy-Item
Remove-Item
Move-Item
Rename-Item

The first of the above is like the pwd command that you may have seen elsewhere, in that it gives the current directory location. Then, the second will change your directory location for you. After that, there are cmdlets for copying, deleting, moving and renaming files. These too have aliases, so users of the legacy Windows command line or of a UNIX or Linux shell can use something that is familiar to them.
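
As a quick demonstration, here are some of those cmdlets being driven through their aliases; the file names are inventions of mine for illustration only:

cd C:\Users
pwd
copy temp.txt temp4.txt
ren temp4.txt temp5.txt
del temp5.txt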

Little fixes like the one with which I started this piece are all very good to know, but it is in scripting that PowerShell is really said to show its worth. Having seen the usefulness of such things in the world of Linux and UNIX, I cannot disagree with that, and PowerShell has its own IDE too. That may be just as well, given how much there is to learn. That is especially the case when you might need to issue the following command, in a PowerShell session opened using the Run as Administrator option, just to get script execution working as you need it:

Set-ExecutionPolicy RemoteSigned

Issuing Get-ExecutionPolicy will show you whether this is needed; it is when the response is Restricted. A response of RemoteSigned shows that all is in order, though you then need to check that any script you run has no nasty payload in it, which is why execution is restricted in the first place. This sort of thing is yet another lesson to be learnt with PowerShell.
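
By way of illustration, here is the check followed by the running of a saved script; myscript.ps1 is a made-up name, and the .\ prefix is needed because PowerShell will not run a script from the current directory without an explicit path:

Get-ExecutionPolicy
.\myscript.ps1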

Using the Windows Command Line for Security Administration

While there are point-and-click tools for the job, being able to set up new user groups, attach them to folders and assign users to them from the command line has major advantages when there are a number to be set up, and logs of execution can be retained too. In light of this, it seems a shame that terse documentation, along with the difficulty of tracking down answers to any questions using Google, or whatever happens to be your search engine of choice, makes it less easy to discern which commands need to be run. This is where a book would help, but the whole experience is in direct contrast to the community of information providers that is the Linux user community, with Ubuntu being a particular shining example. Saying that, the Windows help system is not so bad once you can track down what you need. For instance, once you know that you need commands like CACLS and NET LOCALGROUP, the ones that have been doing the real work for me, it offers useful information quickly enough. To illustrate the usefulness of the aforementioned commands, here are a few scenarios.

Creating a new group:

net localgroup [name of new group] /comment:"[more verbose description of new group]" /add

Adding a group to a folder:

cacls [folder address] /t /e /p [name of group]:[access level]

The /t switch gets cacls to apply changes to the ACL for the specified folder and all its subfolders, recursive action in other words, while the /e specifies ACL editing rather than its replacement and /p induces replacement of permissions for a given user or group. Using :n, :f, :c or :r directly after the name of a specified user or group assigns no, full, change (write) or read access, respectively. Replacing /p with /r revokes access and leaving off the :n/:f/:c/:r will remove the group or user from the folder.

Adding a user to a group:

net localgroup [name of group] [user name (with domain name if on a network)] /add
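
Pulling the three scenarios together, here is how a full sequence might look; the group name, folder and user are all inventions of mine, so substitute your own:

net localgroup reportreaders /comment:"Read-only access to the reports folder" /add
cacls D:\Reports /t /e /p reportreaders:r
net localgroup reportreaders MYDOMAIN\jbloggs /add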

In addition to NET LOCALGROUP, there is also NET GROUP for wider network operations, something that I have no cause to do. Casting the net even wider, I suspect that VBScript and its ability to tap into Windows Management Instrumentation might offer more functionality than the above (PowerShell also comes to mind while we are on the subject), but I am sharing what has been helping me, and it can be hard to find if you don’t know where to look.

ImageMagick and Ubuntu 9.04

Using a command line tool like ImageMagick for image processing may sound a really counterintuitive thing to do, but there’s no need to do everything on a case-by-case interactive basis. Image resizing and format conversion come to mind here. Helper programs are used behind the scenes too, with Ghostscript being used to handle PostScript files, for example.
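
To give a flavour of what I mean, here is the sort of thing that gets done, with made-up file names; convert writes out a new file, while mogrify works on files in place, creating new ones when -format is used:

convert holiday.png -resize 1024x768 holiday.jpg
mogrify -format jpg *.png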

The subject of helper programs brings me to an issue that has hampered me recently. While I am aware that there are tools like F-Spot available, I am also wont to use a combination of shell scripting (BASH & KSH), Perl and ImageMagick for organising my digital photos. My preference for raw camera files (DNG & CRW) means that ImageMagick cannot access these without a little helper; in the case of Ubuntu, that is UFRaw. However, Jaunty Jackalope appears to have seen UFRaw updated to a version that is incompatible with the included version of ImageMagick (6.4.5 as opposed to 3.5.2 at the time of writing). The result is that the command issued by ImageMagick to UFRaw (issue the command man ufraw-batch to see the details) is not accepted by the included version of the latter, 0.15 if you’re interested. It seems that an older release of UFRaw accepted the output device ppm16 (16-bit PPM files), but this should now be specified as ppm for the output device and 16 for the output depth. In a nutshell, where one output type parameter once did the lot, you now need to set both the output type and the output depth.
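
For anyone wanting to test the behaviour by hand, the change amounts to something like the following pair of commands; photo.dng is a made-up file name and the option spellings (--out-type and --out-depth, as I read them in man ufraw-batch) are worth double-checking on your own system:

ufraw-batch --out-type=ppm16 photo.dng                # accepted by older releases
ufraw-batch --out-type=ppm --out-depth=16 photo.dng   # the form that 0.15 expects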

I thought of decoupling things by using UFRaw to create 16-bit PPM files for processing by ImageMagick, but to no avail: the identify command wouldn’t return the date on which an image was taken. I even changed the output type to 8-bit JPEGs with added EXIF information, but no progress was made. In the end, a mad plan came to mind: creating a VirtualBox VM running Debian. The logic was that if Debian deserves its reputation for solidity, dependencies like ImageMagick and UFRaw shouldn’t be broken, and I wasn’t wrong. To make it fly, though, I needed to see if I could get Guest Additions installed on Debian. Out of the box, the supported kernel version must be at least 2.6.27 and Debian’s is 2.6.26, so additional work was on the cards. First, GCC, Make and the correct kernel header files need to be installed. Once those are in place, the installation works smoothly and a restart sets the goodies in motion. To make the necessary shared folder available, a command like the following was executed:

mount -t vboxsf [Shared Folder name] [mount point]
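
As a concrete version of all that, with Photos and /mnt/photos being invented names for the shared folder and mount point, the commands executed as root might run as follows:

apt-get install build-essential linux-headers-$(uname -r)
mount -t vboxsf Photos /mnt/photos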

Once that deed was done and ImageMagick installed, the processing that I had been doing for new DSLR images was reinstated. Ironically, Debian’s version of ImageMagick, 6.3.7, is even older than Ubuntu’s, but it works and that’s the main thing. There is an Ubuntu bug report for this on Launchpad, so I hope that it gets fixed at some point in the near future. However, that may mean awaiting 9.10, or Karmic Koala, so I’m glad to have the workaround in the meantime.