Technology Tales

Adventures in consumer and enterprise technology

Removing redundant kernels from Ubuntu

29th October 2022

Recently, a message appeared on some web servers that I run, exhorting me to upgrade to Ubuntu 22.04.1 using the do-release-upgrade command. In the interests of remaining current, I did just that, only to be greeted by another message like the following:

The upgrade needs a total of [amount of space with units] free space on disk `/boot`.
Please free at least an additional [amount of space with units] of disk space on `/boot`.
Empty your trash and remove temporary packages of former installations
using `sudo apt-get clean`.

Using sudo apt-get clean did not resolve the problem, so the advice given was of no use. The actual problem was that too many old kernels were cluttering up /boot, a piece of wisdom that some searching around the web provided. What also came up was a single command for resolving the problem. However, removing the wrong kernel can wreck a system, so I took a more cautious approach. First, I listed the kernels to be removed and checked that they did not include the currently running one. This was done with the following command (broken up over several lines for clarity, using the backslash character to denote continuation), while running uname -r revealed the details of the running kernel:

dpkg -l linux-{image,headers}-"[0-9]*" \
| awk '/ii/{print $2}' \
| grep -ve "$(uname -r \
| sed -r 's/-[a-z]+//')"
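
To check what the exclusion pattern will be, the subcommand inside the grep call can be run on its own; the kernel release shown here is only a hypothetical example:

uname -r
# 5.15.0-52-generic (hypothetical running kernel)
uname -r | sed -r 's/-[a-z]+//'
# 5.15.0-52 (the flavour suffix is stripped, so this kernel stays off the removal list)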

The dpkg command listed the installed kernel image and header packages, with awk, grep and sed filtering out the unwanted parts of the text. The awk command takes the tabular output from dpkg and turns it into a simple list of package names. The -v switch on the grep command keeps the lines that do not match the search expression created by the sed command, while the -e switch makes grep treat its argument as a pattern. The sed command strips the flavour suffix (such as -generic) from the output of the uname command, where the -r switch produces the kernel release details, leaving only the version number of the running kernel. On being satisfied that nothing untoward would happen, the full command below (also broken up over several lines for clarity, using the backslash character to denote continuation) could be executed.

sudo apt purge $(dpkg -l linux-{image,headers}-"[0-9]*" \
| awk '/ii/{print $2}' \
| grep -ve "$(uname -r \
| sed -r 's/-[a-z]+//')")
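
With the purge complete, a quick look at the usage of /boot shows whether enough room has been freed; a standard df call does the job:

df -h /boot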

This used apt to purge the unwanted kernels, thus freeing up enough space for the upgrade to continue. That happened without significant incident, though some remediation was needed on the PHP side to get the website working smoothly again.

Using inventory files with Ansible

28th October 2022

This is the second post on Ansible following my main system's upgrade to Linux Mint 21. Then, I manually ran some Ansible playbooks, only to spot messages that I had not noticed before. Here, I discuss two messages issued because of an issue with an inventory file, which is where one defines the lists of servers against which playbooks are executed. The default is called hosts and is located at /etc/ansible, but the system upgrade had renamed the existing one, which meant that Ansible could not find it. The solution was to take a copy and put it somewhere safer. Then, I needed to add the location of the new file to the affected ansible-playbook commands using the following construct:

ansible-playbook [playbook path] -i [inventory file path]

Before I did this, I was seeing messages including the text "Could not match supplied host pattern" or others with the following text:

[WARNING]: No inventory was parsed, only implicit localhost is available

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

The cause was the same in each case, and attending to the inventory file got rid of the unwanted messages. The new file should also remain unaffected by system upgrades in the future.
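
For anyone unfamiliar with the format, an inventory file can be as simple as an INI-style list of host names arranged into groups; the names below are only placeholders:

[webservers]
web1.example.com
web2.example.com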

Fixing an Ansible warning about boolean type conversion

27th October 2022

My primary use for Ansible is doing system updates using the inbuilt apt module. Recently, I updated my main system to Linux Mint 21 and a few things like Ansible stopped working. Removing instances that I had added with pip3 sorted the problem, but I then ran playbooks manually, only for various warning messages to appear that I had not noticed before. What follows below is one of these.

[WARNING]: The value True (type bool) in a string field was converted to u'True' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

The message is not so clear in some ways, not least because it had me looking for a boolean value of True when it should have been yes. A search on the web revealed something about the apt module that surprised me: the value of the upgrade parameter is a string, whereas others like it take boolean values of yes or no. Thus, I had passed a bareword of yes when it should have been declared in quotes as "yes". To my mind, this is an inconsistency, but I have changed things anyway to get rid of the message.
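
For illustration, a minimal sketch of such a task is shown below; the task name is invented, but quoting the value of the upgrade parameter is the change that silences the warning:

- name: Upgrade installed packages
  become: true
  ansible.builtin.apt:
    update_cache: true
    upgrade: "yes"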

Removing a Julia package using REPL or script commands

5th October 2022

While I have been programming with SAS for a few decades, and it remains a linchpin in the world of clinical development in the pharmaceutical industry, other technologies like R and Python are gaining a foothold. Two years ago, I started to look at those languages, with personal projects being a great way of facilitating this. In addition, I heard of Julia and tried that too. That journey continues since I have put it to use for importing and backing up photos, and there are other possible uses too.

Recently, I updated Julia to version 1.8.2 but ran into a problem with the DataArrays package that I had installed, so I decided to remove it, since it had only been added during experimentation. Though the Pkg package that is used for package management is documented, I had not got to reading that, which meant that some web searching ensued. It turns out that there are two ways of doing this. One uses the REPL: after pressing the ] key to enter the package manager mode, the following command gets issued:

rm DataArrays

When all is done, pressing the delete or backspace key returns things to normal. The same can be done in a script as well as in the REPL, and the following line works in both instances:

using Pkg; Pkg.rm("DataArrays")

While the semicolon is used to separate two commands issued on the same line, they can be on different lines or issued separately just as well. Naturally, DataArrays is just an example here; replace that with the name of whatever package you need to remove. Since we can get carried away when downloading packages, there are times when a clean-up is needed to clear out redundant ones, so knowing how to remove any clutter is invaluable.
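
When deciding what to clear out, it can help to see what is installed first; the Pkg.status function lists the packages in the active environment and works from the REPL or a script in the same way:

using Pkg
Pkg.status()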

Accessing Julia REPL command history

4th October 2022

In the BASH shell used on Linux and UNIX, the history command calls up a list of recently used commands and has many uses. A .bash_history file in the user's home directory logs and provides all this information, so there are times when you need to exclude some commands from there, but that is another story.

The Julia REPL environment works similarly to many operating system command line interfaces, so I wondered if there was a way to recall or refer to the history of commands issued. So far, I have not come across an equivalent to the BASH history command for the REPL itself, but the command history is retained in a file much like .bash_history. The location varies between operating systems, though. On Linux, it is ~/.julia/logs/repl_history.jl, while it is %USERPROFILE%\.julia\logs\repl_history.jl on Windows. While I tend to use scripts that I have written in VSCode rather than entering pieces of code in the REPL, the history retains its uses, so I am sharing its location here for others. The location has changed in the past, but these are the paths for Julia 1.8.2, the version that I have at the time of writing.
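
On Linux, a quick look at the most recent entries needs nothing more than a standard shell command:

tail ~/.julia/logs/repl_history.jl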

Changing the Ansible Vault editor from Vi to Nano

15th August 2022

Recently, I got to experiment with Ansible after reading about the orchestration tool in a copy of Admin magazine. It came in handy for updating a few web servers that I have, as well as updating my main Linux workstation. For the former, automated entry of SSH passwords sufficed, but the same did not apply for sudo usage on my local machine. This meant that I needed to use Ansible Vault to store the administrator password, and doing so opened up a file in the Vi editor. Since I am not familiar with Vi and wanted to get things sorted quickly, I fancied using something more user-friendly like Nano.

Doing this meant adding the following line to .bashrc:

export EDITOR=nano

Saving and closing the file followed by reloading the session set me up for what was needed.
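
Reloading the session does not require logging out and back in; sourcing the file in the current shell is enough, after which ansible-vault picks up the EDITOR setting and opens files in Nano:

source ~/.bashrc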

Getting custom Python imports to work in Visual Studio Code

18th February 2022

While I continue to use Spyder as my preferred Python code editor, I also tried out Visual Studio Code. Handily, this Integrated Development Environment also has facilities for working with R and Julia code, as well as Markdown text editing, and adding the required extensions is enough for these applications; it helps that there is an unofficial Grammarly extension for content creation.

My Python code development makes use of the Pylance extension, and it works a little differently from Spyder when it comes to including files using import statements. Spyder will look into the folder where the base script is located, but the default behaviour of Pylance is that it looks in the root path of your workspace. This meant that any code that ran successfully in Spyder failed in Visual Studio Code.

To solve this issue, I added the location using the python.analysis.extraPaths setting for the workspace. I opened Settings by going to File > Preferences > Settings in the menu, typed python.analysis.extraPaths into the search box to find the correct section, clicked on Add Item, entered the required path and clicked OK. This resolved the problem, and everything worked properly afterwards.
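
The same setting can also be added directly to the workspace's settings.json file; the path here is only a placeholder for wherever the custom modules are located:

{
    "python.analysis.extraPaths": ["./scripts"]
}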

Automated entry of SSH passwords

17th February 2022

A useful feature for shell scripting is automated entry of passwords when logging into other servers. This often involves plain text files, which are not secure, but fortunately I found an alternative. The first step is to use the ssh-keygen tool included with SSH; the command is shown below. The -t switch defines the key type, RSA in this example. You can add a passphrase, but I chose not to for convenience. You should evaluate your security requirements before implementing this approach.

ssh-keygen -t rsa

The next step is to use the ssh-copy-id command to copy the public key to the server for a given set of login credentials. For this, it is better to use a user account with restricted access in order to keep as much server security as you can. Otherwise, the process is as simple as executing a command like the following and entering the password at the prompt:

ssh-copy-id [user ID]@[server address]

Getting this set up has been useful for running a file upload script to keep a web server synchronised, and it is better to have the credentials encrypted rather than kept in a plain text file.
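
Once the key is in place, logging in with ssh should no longer prompt for a password, and the same applies to anything scripted over SSH. The synchronisation mentioned above could be done with something as simple as the following rsync sketch, where the paths and server details are made up:

rsync -avz --delete /path/to/local/site/ [user ID]@[server address]:/path/to/web/root/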

Carrying colour coding across multi-line custom log messages in SAS

16th February 2022

While custom error messages are good to add to SAS macros, you can get inconsistent colouration of the message text in multi-line messages. That was something that I had overlooked until I recently came across a solution: using a hyphen at the end of the ERROR/WARNING/NOTE prefix instead of the more usual colon. Any prefix ending with a hyphen is not included in the log text, and the colouration ignores the carriage return that ordinarily would change the text colour to black. The simple macro below demonstrates the effect.

Macro Code:

%macro test;
%put ERROR: this is a test;
%put ERROR- this is another test;
%put WARNING: this is a test;
%put WARNING- this is another test;
%put NOTE: this is a test;
%put NOTE- this is another test;
%mend test;

%test

Log Output:

ERROR: this is a test
       this is another test
WARNING: this is a test
         this is another test
NOTE: this is a test
      this is another test

Controlling display of users on the logon screen in Linux Mint 20.3

15th February 2022

Recently, I tried using Commento with a static website that I was developing, and this needed PostgreSQL rather than MySQL or MariaDB, which many content management tools use. That meant a learning curve that made me buy a book, as well as the creation of a system account for administering PostgreSQL. An account like that is not the kind of thing that you want to be too visible on the logon screen, so I wanted to hide it.

Since Linux Mint uses AccountsService, you cannot use lightdm to do this (the comments in /etc/lightdm/users.conf suggest as much). Instead, you need to go to /var/lib/AccountsService/users and look for a file named after the username. If one exists, all that is needed is for you to add the following line under the [User] section:

SystemAccount=true

If there is no file present for the user in question, then you need to create one with the following lines in it:

[User]
SystemAccount=true
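
Since the directory belongs to root, creating the file needs elevated privileges; one way of doing it, with postgres standing in as a hypothetical username, is shown below:

sudo tee /var/lib/AccountsService/users/postgres > /dev/null <<'EOF'
[User]
SystemAccount=true
EOF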

Once the configuration files are set up as needed, AccountsService needs to be restarted and the following command does that deed:

sudo systemctl restart accounts-daemon.service

Logging out should reveal that the user in question is not listed on the logon screen as required.
