Technology Tales

Adventures & experiences in contemporary technology

Installing FreeBSD in a VirtualBox Virtual Machine

2nd March 2014

With UNIX being the basis of Linux, I have a soft spot for trying out any UNIX that can be installed on a PC. For a while, I had OpenSolaris on the go and even vaguely recall having a look at one of the BSDs. However, any recent attempt to install one of the latter, and there are quite a few around now, got stymied by some sort of kernel panic caused by using AMD CPUs. With the return to the Intel fold arising from the upgrade of my main home PC last year, it perhaps was time to try again.

The recent release of FreeBSD 10.0 was the cue and I downloaded a DVD image for a test installation in a VirtualBox virtual machine with 4 GB of memory and a 32 GB virtual hard drive attached (dynamically expanding storage was chosen, so not all the allocated space has been taken up so far). The variant of FreeBSD chosen was the 64-bit x86 one and I set to installing it in there. Though not as pretty in appearance as those in various Linux distros, the installer was not that unfriendly to use. Mind you, I have experience of installing Arch Linux, so that might have acclimatised me somewhat.

Those installation screens ask about the keyboard mapping that you want and I successfully chose one of the UK options. There was limited opportunity for adding extras, though there was a short list from which I made some selections. User account set-up also was on offer and I would have been better off knowing which groups to assign to my personal account so as to avoid needing to log in as root so often following system start-up later. Otherwise, all the default options were sufficient.

When the installation process was complete, it was time to boot into the new system and all that was on offer was a command-line login session. After logging in as root, it was time to press pkg into service in order to get a desktop environment in place. The first step was to install X:

pkg install xorg

Then, it was time to install a desktop environment. While XFCE and KDE were alternatives, I chose GNOME 2 due to familiarity and the more extensive instructions on the corresponding FreeBSD handbook page. Issuing the following command added GNOME and all its helper applications:

pkg install gnome2

So that GNOME starts up at the next reboot, some extra steps are needed. The first of these is to add the following line into /etc/fstab:

proc /proc procfs rw 0 0

Then, two lines were needed in /etc/rc.conf:

gdm_enable="YES"
gnome_enable="YES"

The first enables the GNOME display manager and the second activates other GNOME programs that are needed for a desktop session to start. With each of these in place, I got a graphical login screen at the next boot time.

With FreeBSD being a VirtualBox guest, it was time to consult the relevant FreeBSD manual page. Here, there are sections for a number of virtual machine tools, so a search was needed to find the one for VirtualBox. VirtualBox support for FreeBSD is incomplete in that there is no Guest Additions installation media for BSD systems, though Linux and Solaris are supported along with Windows. Therefore, it is over to the FreeBSD repositories for the required software:

pkg install virtualbox-ose-additions

Aside from the virtual machine session not capturing and releasing the mouse pointer automatically, that did everything that was needed, even if it was the open-source edition of the drivers rather than their proprietary equivalents. To resolve the mouse pointer issue, I needed to temporarily disable the GNOME desktop session in /etc/rc.conf to drop down to a console-only session where xorg.conf could be generated using the following commands:

Xorg -configure
cp xorg.conf.new /etc/xorg.conf
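For the record, the temporary disabling was just a matter of commenting out the display manager line in /etc/rc.conf before rebooting so that the next boot stopped at the console, along these lines:

#gdm_enable="YES"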

In the new xorg.conf file, the mouse section needs to be as follows:

Section "InputDevice"
        Identifier  "Mouse0"
        Driver      "vboxmouse"
EndSection

If it doesn’t look like the above, and it didn’t in my case, then it needs changing. Also, any extra lines from the default set-up need removing, or the mouse will not function as it should. The ALT+F1 (for accessing GNOME menus) and ALT+F2 (for running commands) keyboard shortcuts then become crucial when your mouse is not working as it should, and they could avert a panic too; knowing that adjusting a single configuration file will fix a problem when doing so is less accessible is not a good feeling, as I discovered to my own cost. The graphics settings were fine by default, but here’s what you should have in case they aren’t for you:

Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
        ### [arg]: arg optional
        Identifier  "Card0"
        Driver      "vboxvideo"
        VendorName  "InnoTek Systemberatung GmbH"
        BoardName   "VirtualBox Graphics Adapter"
        BusID       "PCI:0:2:0"
EndSection

The next step is to ensure that your HAL settings are as they should be. I needed to create a file in /usr/local/etc/hal/fdi/policy called 90-vboxguest.fdi that contains the following:

<?xml version="1.0" encoding="utf-8"?>
<!--
# Sun VirtualBox
# Hal driver description for the vboxmouse driver
# $Id: chapter.xml,v 1.33 2012-03-17 04:53:52 eadler Exp $
Copyright (C) 2008-2009 Sun Microsystems, Inc.
This file is part of VirtualBox Open Source Edition (OSE), as
available from http://www.virtualbox.org. This file is free software;
you can redistribute it and/or modify it under the terms of the GNU
General Public License (GPL) as published by the Free Software
Foundation, in version 2 as it comes in the "COPYING" file of the
VirtualBox OSE distribution. VirtualBox OSE is distributed in the
hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
Clara, CA 95054 USA or visit http://www.sun.com if you need
additional information or have any questions.
-->
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="pci">
      <match key="info.product" string="VirtualBox guest Service">
        <append key="info.capabilities" type="strlist">input</append>
        <append key="info.capabilities" type="strlist">input.mouse</append>
        <merge key="input.x11_driver" type="string">vboxmouse</merge>
        <merge key="input.device" type="string">/dev/vboxguest</merge>
      </match>
    </match>
  </device>
</deviceinfo>

With all that set, it is time to ensure that the custom user account is added to the wheel and operator groups using this command:

pw user mod [user name] -G wheel,operator

Executing the above as root means that the custom account can run the su command, so logging in as root at the start of a desktop session no longer is needed. That is what being in the wheel group allows, and anyone in the operator group can shut down or restart the system. Both are facilities readily available in Linux, so I fancied having them in FreeBSD too.
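By way of illustration, these are the sort of commands that the two group memberships unlock in a terminal session:

su -                # wheel membership allows switching to root
shutdown -p now     # operator membership allows powering off the system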

Being able to switch to root in a terminal session meant that I could go on to add software like Firefox, LibreOffice, GIMP, Emacs, Geany, NetBeans, Banshee and so on. There may be a line of opinion that FreeBSD is a server operating system, but all of these make it more than passable as a desktop one too. There may be no package management GUI as such, and the ones that come with GNOME do not work either, but anyone familiar with command-line working will get around that.

FreeBSD may be conservative, but that has its place too, and being able to build up a system one item at a time teaches far more than getting everything already sorted in one hit. There is enough documentation to get me going, and I hope to see where else things go. So far, the OS hasn’t been that intimidating, and that’s good to see.

The case of a wide open restriction

7th November 2007

The addition of IMAP capability to Gmail attracted a lot of attention in the blogosphere last week and I managed to flick the switch for the beast courtesy of the various instructions that were out there. However, when I pottered back to the settings, the IMAP settings had disappeared.

A brief look at the Official Gmail Blog confirmed why: the feature wasn’t to be available to those who hadn’t set their language as US English. My setting of UK English explained why I wasn’t seeing it again, a strange observation given that they are merely variants of the same language; I have no idea why I saw it the first time around.

My initial impression was that the language setting used was an operating system or browser one, but this is not how it is. In fact, it is the language that you set for Gmail itself in its settings; choosing US English was sufficient to make the IMAP settings reappear while choosing UK English made them disappear again.

Personally, I am not certain why the distinction was made in the first place, but I have Evolution merrily working away with Gmail’s IMAP interface without a bother. To get it going, I needed to specify that imap.gmail.com required an SSL connection while smtp.gmail.com needed a TLS one. After that, I was away and no port numbers needed to be supplied, unlike Outlook.
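For reference, these are the server details in question, with the standard port numbers noted here even though Evolution needed none supplied:

imap.gmail.com    SSL    port 993
smtp.gmail.com    TLS    port 587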

Uses for symbolic links

24th April 2007

UNIX (and Linux) does a wonderful trick with its file and folder shortcuts: it effectively treats them as transporters that associate a file or folder in one folder hierarchy with another, where it is treated as if it exists there too. For example, the images folder under /www/htdocs/blog can have a link under /www/htdocs/ that makes it appear that its contents exist in both places without any file duplication. Even the pwd command cannot tell a folder from a folder shortcut. To achieve this, I use what are called symbolic links and the following command achieves the outcome in the example:

ln -s /www/htdocs/blog/images /www/htdocs/images

The first file path is the target to which the link points, while the second one is the location of the link itself. I had a problem with Google Reader not showing images in its feed displays, so symbolic links rode to the rescue, as they did in resolving a similar conundrum that I was encountering when editing posts on my hillwalking blog.
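Listing the link afterwards is a quick way of confirming the result; output along the lines of this illustrative example shows where it points:

ls -l /www/htdocs/images
lrwxrwxrwx 1 user group 23 Apr 24 2007 /www/htdocs/images -> /www/htdocs/blog/images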

Free Software Foundation Approved Distros

2nd November 2023

Not all software in Linux distributions necessarily is free or libre software. After all, most of us want to play MP3 files and I am as guilty of this as many. Then, some proprietary drivers are included with some of them baked into Linux kernels as well. All of this may make Linux easier to use but it will not please some. Hence, the Free Software Foundation (FSF) has a list of distros satisfying their guidelines and some of these are below.

Dragora

Simplicity is the apparent hallmark here.

Dyne:bolic

This is a free-software-compliant multimedia distro that proves that such things can be done without the use of proprietary codecs.

Guix System

It appears that the GNU project now has its very own Linux distro built around the Guix (pronounced “geeks”) package manager and using the Guile programming language. The website and the screenshots look swish, so it might be worth trying this out for real, and there may be a version using the Hurd kernel yet, though Linux-libre is the only option for now.

Hyperbola

This project is working with two bases: Linux-libre and BSD. The first is a derivative of Arch Linux that roots out so many non-free packages that you wonder if they might go too far. It also takes the long-term support approach, so they do not have to adjust things every time something changes in Arch.

libreCMC

This results from the combination of two distinct projects that shared one common characteristic: use in embedded devices like routers, not for installing on PCs. That may seem like a minority interest to me, but we all have different needs.

Parabola

Here, Arch also is the basis with freedom as the byword. While the basis is a rolling distro, this is a long-term support offering.

ProteanOS

What we have here is another Linux distro that can be embedded on different devices and is kept lightweight to ensure universality.

PureOS

The social purpose hardware company Purism is involved in this effort, hence the naming. The distro itself is based on Debian and appears to be intended for a range of hardware, from phones to tablets to PCs. Naturally, ISO installation images can be downloaded as well.

Trisquel

Think of this as Ubuntu with only Free Software included and you have the point of this distro. Given that Richard Stallman of the FSF has been known to like it, meeting that goal seems to be assured now.

Ututo

This was the first distro that the FSF rated for software freedom and hails from Argentina. Unfortunately, there appears to have been a lull in activity since 2017, so it is difficult to know if this remains viable.

Relocating the Apache web server document root directory in Fedora 12

9th April 2010

So as not to deface anything that is available online on the web, I have a tendency to set up an offline Apache server on a home PC to do any tinkering away from the eyes of the unsuspecting public. Though Ubuntu is my mainstay for home computing, I do have a PC with Fedora installed and I have been trying to get an Apache instance starting automatically on there without success for a few months. While I can start it by running the following command as root, I’d rather not have more manual steps than necessary.

httpd -k start

The command used by the system when it starts is different and, even when manually run as root, it failed with messages saying that it couldn’t find the directory where the web server files are stored. Here it is:

service httpd start

The default document root location on any Linux distribution that I have seen is /var/www and all is very well with this, but it isn’t a safe place to leave things if ever a re-installation is needed. Having needed to wipe /var after having it on a separate disk or partition for the sake of one installation, it doesn’t look so persistent to me. In contrast, you can safeguard /home by having it on another disk or in a dedicated partition, and it can be retained even when you change the distro that you’re using. Thus, I have got into the habit of keeping the web server document root in my home area, and that is where I have been seeing the problem.

Because of the access message, I tried using chmod and chgrp but to no avail. The remedy has to do with reassigning the security contexts used by SELinux. In Fedora, Apache will not work with the context user_home_t that is usually associated with home directories but needs httpd_sys_content_t instead. To find out what contexts are associated with particular folders, issue the following command:

ls -Z

The final solution was to create a user account whose home directory hosts the root of the web server file system, called www in my case. Then, I executed the following command as root to get things going:

chcon -R -h -t httpd_sys_content_t /home/www

It seems that even the root of the home directory has to have an appropriate security context (/home has home_root_t, so that might do the needful too). Without that, nothing will work even if all is well at the next level down. The switches for the chcon command translate as follows:

-R : recursive; applies changes to all files and folders within a directory.

-h : changes apply only to symbolic links and not to where they refer in the file system.

-t : alters context type.
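One caveat is worth adding: contexts applied with chcon can be lost if the file system ever gets relabelled. Though it was not part of my original fix, a sketch like the following, using semanage and restorecon, would record the rule permanently before reapplying it:

semanage fcontext -a -t httpd_sys_content_t "/home/www(/.*)?"
restorecon -R /home/www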

It took a while for all of this stuff about SELinux security contexts to percolate through to the point where I was able to solve the problem. A spot of further inspiration was needed too, and that guided my search for the information that I needed. It’s well worth trying Linux Home Networking if you need more information. There are references to an earlier release of Fedora, but the content still applies to later versions right up to the current release if my experience is typical.

Free & Open-Source Software

21st July 2008

The terms Free Software and Open-Source Software often are used interchangeably, though a certain Richard Stallman could have a thing or two to say about that. The first of these refers to freedom in the sense that you can do whatever you want with the software. That even means reprogramming it if it doesn’t do what you need. The whole concept began in the mid-1980s and has grown since then.

The term Open-Source means that you can look at the source code, though that apparently doesn’t mean that you always can do what you want with it. For that, it needs to be in the software licence, and that’s where the GPL, the GNU General Public Licence, comes in, though there also are competing licences such as those from BSD, which are far more permissive and business-friendly.

What GPL and its counterparts do not restrict is the ability to earn money from the software. You only have to look at what Red Hat earns to see that it can be done. It also means that open-source software (even the copylefted GPL-licensed variety) need not be free of charge.

In practice though, it is amazing what you get without paying for it. Whole Linux distributions with a wide selection of software coming with the operating system are a big example and there are many different ones too. When you see what Microsoft offers for a fee, it could come as a fair shock.

With that in mind, I thought it useful to offer an insight into the world of open-source software, especially given how much choice there is. It’s good to have options, though they can confound when there are so many. However, recent instances of new software releases not being to users’ tastes make choice a more important attribute than ever before.

There used to be a single list of what I thought worth highlighting, but that has been divided now that it got very, very long. Here are the categories that I have used for dividing things up so that they might be more useful:

Operating Systems

While Ubuntu or Linux Mint are among the most prominent of the Linux bunch, there are a multitude of others. Then, there are UNIX counterparts and the ones that I have found largely are based on BSD UNIX, though there are OpenSolaris forks out there too. As if that weren’t enough, some Linux distros are looking at using BSD UNIX kernels in addition to the ones that they usually have, so that hybridisation cannot be ignored either.

Assorted Desktop Packages

This selection is all desktop software and then only a little sample of what there is to be found. Some make their way onto installation discs, but others have to be sought. All should do the work for which they’re designed, though.

Desktop Environments

While other operating systems typically offer you one interface at a time, Linux and UNIX have a tendency to offer plenty of choice. Sometimes, it’s the cause of controversy too, although major changes made to the two main players have meant fewer arguments between any advocates for either. In their stead, we have had moaning about what an open-source project has done to its users. Hopefully, that will subside, though a meeting of minds may be needed for that in one case.

Web Servers

Websites would not exist without web servers and it was a choice between Apache’s open source HTTPD and Microsoft’s proprietary IIS until the upstart Nginx made its appearance. It, too, is open source and has been popping up in all sorts of places so it was time to make a short list for the sake of reference.

Databases & Programming

UNIX and Linux tend to attract those with an interest in technical computing, so programming and scripting languages remain an integral part of those operating systems, as can databases too. Here are a few of each.

Tinkering with Textpattern

26th April 2011

Textpattern 5 may be on the way, but that isn’t to say that work on the 4.x branch has completely stopped, though it is less of a priority at the moment. After all, version 4.40 was slipped out not so long ago as a security release, a discovery that I made while giving a section of my outdoors website a spring refresh. During that activity, the TinyMCE plugin started to grate with its issuing of error messages in the form of dialogue boxes needing user input to get rid of them every time an article was opened or saved. Because of that nuisance, the guilty hak_tinymce plugin was ejected, with joh_admin_ckeditor replacing it and bringing CKEditor into use for editing my Textpattern articles. It is working well, though the narrow editing area causes the editor toolbars to take up too much vertical space; you can resize the editor to solve this, but it would be better if it could be made to remember those size settings.

Another find was atb_editarea, a plugin that colour codes (X)HTML, PHP and CSS by augmenting the standard text editing for pages and stylesheets in the Presentation part of the administration interface. If I had had this at the start of my redesign, it would have made doing the needful that bit more user-friendly than the basic editing facilities that Textpattern offers by default. Of course, the tinkering never stops, so there’s no such thing as finding something too late in the day for it to be useful.

Textpattern may not be getting the attention that some of its competitors are getting, but it isn’t being neglected either; its users and developer community see to that. Saying that, it needs to get better at announcing new versions of the CMS so they don’t slip by the likes of me who isn’t looking all the time. With a major change of version number involved, curiosity is aroused as to what is coming next. So far, Textpattern appears to be taking an evolutionary course and there’s a lot to be said for such an approach.

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation being applied to my web presences, guided by the results of GTmetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics, while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is the added sense of control, so Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot. That avoids the expense of using an SSL certificate provided through my shared hosting provider and a successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system, and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too and should mean that I have fewer calls on technical support, since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me, and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first was to make the MySQL instance accept calls from outside the server so that I could migrate data there from elsewhere, and I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the following line that it contained and restarted the database service afterwards using another command to stop the appearance of any error 111 messages:

bind-address = 127.0.0.1
service mysql restart
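Changing the bind address is only half of the remote access story because MySQL accounts are tied to hosts as well; a grant along these illustrative lines, with placeholder names throughout, completes the picture:

GRANT ALL PRIVILEGES ON exampledb.* TO 'exampleuser'@'%' IDENTIFIED BY 'examplepassword';
FLUSH PRIVILEGES;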

After that, things worked as required and I moved onto another matter: uploading the requisite files. That meant installing an FTP server so I chose proftpd since I knew that well from previous tinkering. Once that was in place, file transfer commenced.

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also instated some Apache modules like mod_rewrite using the a2enmod command, restarting Apache each time I enabled another module.
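For anyone unfamiliar with Debian-style Apache administration, enabling a module and restarting the web server goes like this, with mod_rewrite as the example:

a2enmod rewrite
systemctl restart apache2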

Then, I discovered that Textpattern needed php-7.2-xml installed, so the following command was executed to do this:

apt install php7.2-xml

Then, the following line was uncommented in the correct php.ini configuration file that I found using the same method as that described already for the my.cnf configuration and that was followed by yet another Apache restart:

extension=php_xmlrpc.dll

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on Certbot to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The general .htaccess only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all these achieve is to redirect insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating along with any forms so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle regarding the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix even if it probably is not necessary:

define('FS_METHOD', 'direct');
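The ownership change itself was a simple affair; www-data is the web server account on Ubuntu, and the path in this sketch is a placeholder rather than my actual one:

chown -R www-data:www-data /var/www/example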

There also was the non-operation of WP Cron and that was addressed using WP-CLI and a script from Bjorn Johansen. To make double sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP-Cron behaviour:

define('DISABLE_WP_CRON', true);
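The replacement scheduling can be as simple as a crontab entry calling WP-CLI; this sketch assumes a five-minute interval and a placeholder installation path:

*/5 * * * * cd /var/www/example && wp cron event run --due-now --quiet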

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigations are needed, but what I have now is faster than what was there before, so the effort has been rewarded already.

Creating empty text files and changing file timestamps using Windows Command Prompt & Powershell

17th May 2013

Linux and UNIX have the touch command for changing the modification dates and times of files. However, it will create empty text files for you as well. In fact, there are times when I feel the need to do this sort of thing on Windows too and the following command accomplishes the deed when run in a Command Prompt window:

type nul > command.bat

Essentially, null output is sent to a file that is created anew, command.bat in this case. Then, you can edit it in Notepad (or whatever is your choice of text editor) and add in what you need. This will not work in Powershell so you need another command for that:

New-Item command.bat -type file

This uses the New-Item command, which can be used to create folders as well if you so desire. Then, the command becomes the following:

New-Item c:\commands -type directory

Note that file in the previous example has become directory, and there is the -Force option should you need to overwrite what already exists for some reason…

That other use of the UNIX/Linux touch command can be performed from the Command Prompt too and here is an example command:

copy /b file.txt +,,

The /b switch turns on binary behaviour for the copy command, though that appears to be the default action anyway. The + operator triggers concatenation and ,, gets around not having a defined destination because you cannot copy a file over itself. If that were possible, then there would be no need for special syntax for changing the date and time for a file.

For doing the same thing with Powershell, try the following:

(Get-ChildItem test.txt).LastWriteTime = Get-Date

The Get-ChildItem command has aliases of gci, dir and ls, and the last two of these give away its essential purpose. Here, it is used to pick out the test.txt file so that its timestamp can be replaced with the current date and time returned by the Get-Date command. The syntax looks a little more complex even if it achieves the same end. Somehow, that touch command is easier to explain. Are Linux and UNIX that complicated after all?
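For anyone wanting a closer analogue of touch, the two behaviours can be wrapped in a small PowerShell function; this is a sketch of my own devising rather than anything built into the shell:

# Create the file if it is absent; otherwise, update its timestamp
function Touch-File([string]$Path) {
    if (Test-Path $Path) {
        (Get-Item $Path).LastWriteTime = Get-Date
    } else {
        New-Item -Path $Path -ItemType File | Out-Null
    }
}

Touch-File test.txt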

Rethinking photo editing

17th April 2018

Photo editing has been something that I have been doing since my first-ever photo scan in 1998 (I believe it was in June of that year but cannot be completely sure nearly twenty years later). Since then, I have been using a variety of tools for the job and wondered how other photos can look better than my own. What cannot be excluded is my preference for being active in the middle of the day when light is at its bluest as well as a penchant for using a higher ISO of 400. In other words, what I do when making photos affects how they look afterwards as much as the weather that I had encountered.

My reason for mentioning the above aspects of photographic craft is that they affect what you can do in photo editing afterwards, even with the benefits of technological advancement. My tastes have changed over time, so the appeal of re-editing old photos fades when you realise that you only are going around in circles and there always are new ones to share, so that may be a better way to improve.

When I started, I was a user of Paint Shop Pro but have gone over to Adobe since then. First, it was Photoshop Elements, but an offer in 2011 lured me into having Lightroom and the full version of Photoshop. Nowadays, I am a Creative Cloud photography plan subscriber so I get to see new developments much sooner than once was the case.

Even though I have had Lightroom for all that time, I never really made full use of it and preferred a Photoshop-based workflow. Lightroom was used to select photos for Photoshop editing, mainly using adjustments for such things as tones, exposure, levels, hue and saturation. Removal of dust spots, resizing and sharpening were other parts of a still minimalist approach.

What changed all this was a day spent pottering about the 2018 Photography Show at the Birmingham NEC during a cold snap in March. That was followed by checking out the Adobe YouTube Channel, where there were videos of the talks featured on each day of the four-day event. Here are some shortcuts if you want to do some catching up yourself: Day 1, Day 2, Day 3, and Day 4. Be warned though that these videos are long in that they feature the whole day, and there are enough gaps that you may wish to fast-forward through them. Even so, there is quite a bit of variety of things to see.

Of particular interest were the talks given by the landscape photographer David Noton who sensibly has a philosophy of doing as little to his images as possible. It helps that his starting points are so good that adjusting black and white points with a little tonal adjustment does most of what he needs. Vibrancy, clarity and sharpening adjustments are kept to a minimum while some work with graduated filters evens out exposure differences between skies and landscapes. It helps that all this can be done in Lightroom, so that set me thinking about trying it out for size and the trick of using the backslash (\) key to switch between raw and processed views is a bonus granted by non-destructive editing. Others may have demonstrated the creation of composite imagery, but simplicity is more like my way of working.

Confusingly, we now have the cloud-based Lightroom CC while the previous desktop counterpart is known as Lightroom Classic CC. Though the former may allow for easy dust spot removal among other things, it is the latter that I prefer because the idea of wholesale image library upload does not appeal to me for now and I already have other places for off-site image backup like Google Drive and Dropbox. The mobile app does look interesting since it allows capturing images on such a device in Adobe’s raw image format DNG. Still, my workflow is set to be more Lightroom-based than it once was and I quite fancy what new technology offers, especially since Adobe is progressing its Sensei artificial intelligence engine. The fact that it has access to many images on its systems due to Lightroom CC and its own stock library (Adobe Stock, formerly Fotolia) must mean that it has plenty of data for training this AI engine.
