Technology Tales

Adventures in consumer and enterprise technology

TOPIC: INTER-PROCESS COMMUNICATION

File comparison using PowerShell

16th August 2025

In the past, I have compared files on the Linux/UNIX command line as well as the legacy Windows command line. Recently, I decided to try it using PowerShell. Here is the command structure:

Compare-Object (Get-Content ".\[name of one text file]") (Get-Content ".\[name of another text file]") > [path and name of output file]

Admittedly, this is more verbose than the others that I have mentioned above. Nevertheless, it does the job and sends everything to a text file for review. The Compare-Object piece does the comparison once the Get-Content portions have read in the content.
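
As a concrete illustration, here is the same structure with hypothetical file names; the SideIndicator column in the resulting output marks each differing line with <= or => according to which file it came from:

Compare-Object (Get-Content ".\old_list.txt") (Get-Content ".\new_list.txt") > .\differences.txt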

Keeping a graphical eye on CPU temperature and power consumption on the Linux command line

20th March 2025

After my main workstation upgrade in January, some extra monitoring has been needed. This follows on from the experience of building its predecessor more than three years ago.

Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like the one you see below, using a combination of sensors and nvidia-smi in the following command:

watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo '' | tee /dev/fd/2; nvidia-smi"

Everything is done within a watch command that refreshes the display every two seconds. Then, the panels are built up by a succession of commands separated with semicolons, one for each portion of the display. The grep command is used to pick out the desired output of the sensors command that is piped to it; each invocation contributes its matching lines to the display. The next command, echo '' | tee /dev/fd/2, adds a blank separator line, with tee duplicating it to standard error as well as standard output, before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.

However, I also came across a more graphical way to do things using commands like turbostat or sensors along with AWK programming and ttyplot. Taking the temperature output from the above and plotting it needs the following:

while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"

This is done in an infinite while loop to keep things refreshing; the watch command does not suit piping the output of sensors through awk and on to ttyplot on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two decimal places and prints it, flushing the output buffer afterwards. The ttyplot command then plots those numbers as seen in the screenshot below, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).

A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:

sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | awk '{ printf("%.2f\n", $1); fflush(); }' | ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"

Handily, turbostat can be made to update periodically (every second in the command above), avoiding the need for any infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though turbostat itself needs superuser privileges, unlike the sensors command earlier. Then, awk is used as before to process the wattage for plotting; the first field is what is being picked out here. After that, ttyplot displays the plot seen in the screenshot below with appropriate title, units and scaling. All works with output from one command acting as input to another using pipes.

All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.

Another way to supply the terminal output of one BASH command to another

26th April 2023

My usual way of sending the output of one command to another is to place one command after another, separated by the pipe (|) operator, adjusting the second command as needed. However, I recently found that this approach does not work well for docker pull commands, until I uncovered another option.

The solution is to enclose the input command in $( ) within the output command. Within the parentheses, any kind of command can be declared, including anything with piping as part of it. As long as text is being printed to the terminal, it can be fed to the second command and used as required. Thus, you can have something like the following:

docker pull $([command outputting name of image to download])
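
For instance, assuming a hypothetical build.conf file holding a line like image=myrepo/myimage:latest, a pipeline inside the parentheses can extract the image name for docker pull to use:

docker pull $(grep '^image=' build.conf | cut -d '=' -f 2)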

This approach has helped with other kinds of automation of docker image and container use and deployment because it is so general. There may be other uses found for the approach yet.

Killing those runaway processes that refuse to die

5th July 2008

I must admit that there have been times when I logged off from my main Ubuntu box at home to dispatch a runaway process that I couldn't kill, and then logged back in again. The standard signal sent to the process by the very useful kill command just wasn't the sort that the nefarious CPU-eating nuisance would heed. Thankfully, there is a way to control the signal being sent, and there is one that does what's needed:

kill -9 [ID of nuisance process]
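
If the process ID is not to hand, the usual ps and grep combination can supply it; the following sketch assumes a process named nuisanceprocess, with grep -v grep excluding the search command itself from the match:

kill -9 $(ps -ef | grep 'nuisanceprocess' | grep -v grep | awk '{ print $2 }')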

For Linux users, there is another option for terminating a process that doesn't need the ps and grep command combination: it's killall. On traditional UNIX systems, killall terminates all processes, and not even its own has immunity from its quest. Hence, it's an administrator-only tool there with a very definite and perhaps rarely required use. The Linux variant is more useful because it will terminate all instances of a named process at a stroke, and it has the same signal control as the kill command. It is used as follows:

killall -9 nuisanceprocess

I'll certainly be continuing to use both of the above; it appears that Wine needs termination like this at times, and VMware Workstation lapsed into the same sort of antisocial behaviour while running a VM with a development version of Ubuntu's Intrepid Ibex (or 8.10 if you prefer). Anything that keeps you from constantly needing to restart Linux sessions on your PC has to be good.

The power of pipes

12th July 2007

One of the great features of the UNIX shell is that you can send the output from one command to another for further processing. Take the following example for instance:

ls -l | grep "Jul 12"

This takes the long directory listing output and sends it to grep for subsetting (all files created today in this example) before it is returned to the screen. The | character is the pipe operator, and you can have as many pipes in your command as you want, though readability may dictate how far you want to go.
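
You can, for instance, bolt on a third stage to count today's files rather than list them; this is a hypothetical extension of the command above:

ls -l | grep "Jul 12" | wc -l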

Exploring AJAX

7th June 2007

My online photo gallery started out simply as a set of interlinked HTML pages. Over time, I discovered frames (yes, them!) and started to make use of JavaScript to make the slideshows slicker. In those days, I was working off free webspace provided by my ISP, and client-side scripting was the only tool that I had for enhancing functionality. Having tired of the vagaries of client-side scripting while the browser wars were in full swing and incompatibilities reigned supreme, I went with paid hosting to get access to tools like Perl and PHP for server-side processing. Because their flexibility compared to JavaScript was a breath of fresh air to me, I am still a fan of the server-side approach.

The journey that I have just described is one that I now know was followed by many website builders around the same time. Nevertheless, I have still held on to JavaScript for some things, particularly for updating the DOM as part of making the pages more responsive to user interaction. In the last few years, a hybrid approach has been gaining currency: AJAX. This offers the ability to modify parts of a page without needing to reload the whole thing, generating a considerable amount of interest among web application developers.

The world of AJAX is evidently a complex one, though the underlying principle can be explained in simple terms. The essential idea is that you use JavaScript to call a server-side script (PHP is as good an example as any) that returns either text or XML, which can then be used to update part of a web page in situ without reloading the whole thing as per the traditional way of working. It has opened up so many possibilities from the interface design point of view that AJAX became a hot topic that still receives much attention today. One bugbear is efficiency, because I have seen an AJAX application lock up a PC with a little help from IE6. There will always remain times when server-side processing is the best route, even if that needs to be balanced against the client-side approach.

Like its forebear DHTML, AJAX is really a development approach using a number of different technologies in combination. The DHTML elements such as (X)HTML, CSS, DOM and JavaScript are very much part of the AJAX world, but server-side elements such as HTTP, PHP, MySQL and XML are also very much part of the fabric of the landscape. In fact, while AJAX can use plain text as the transfer format, XML is the one implied by the AJAX acronym, with XSLT used to transform XML into HTML. However, AJAX is not limited to the aforementioned technologies; for instance, I cannot see why Perl could not play a role in place of PHP or ASP, both of which can be used for the same things.

Even in these standards-compliant days, browser support for AJAX remains diverse, to say the least, and it is akin to having MSIE in one corner and the rest in the other. Mind you, Microsoft did introduce the tools in the first place, only to implement them with ActiveX, while Mozilla created a new object type rather than continuing that method of operation. Given that ActiveX is a Windows-only technology, I can see why Mozilla did what they did, and it was a sensible decision. In fact, IE7 appears to have picked up the Mozilla way of doing things.
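
To make that split concrete, here is a minimal sketch of the era's cross-browser pattern; update.php and the target element ID are hypothetical names standing in for a real server-side script and page element:

var xhr;
if (window.XMLHttpRequest) {
    xhr = new XMLHttpRequest();                     // Mozilla and other standards-based browsers
} else if (window.ActiveXObject) {
    xhr = new ActiveXObject("Microsoft.XMLHTTP");   // older Internet Explorer
}
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        // update part of the page in situ with the returned text
        document.getElementById("target").innerHTML = xhr.responseText;
    }
};
xhr.open("GET", "update.php", true);
xhr.send(null);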

Even with the apparent convergence, there will continue to be a need for the AJAX JavaScript libraries that are currently out there. Incidentally, Adobe has included one called Spry with Dreamweaver CS3. Nevertheless, I still like to find out how things work at the basic level and feel somewhat obstructed when I cannot do this. I remember perusing Wrox’s Professional AJAX and finding the constant references to the associated function library rather grating; the writing style didn’t help either.

My taking a more granular approach has got me reading SAMS Teach Yourself AJAX in 10 Minutes as a means for getting my foot in the door. As with their Teach Yourself … in 24 Hours series, the title is a little misleading since there are 22 lessons of 10 minutes in duration (the 24 Hours moniker refers to there being 24 lessons, each of one hour in length). Anything composed of 10-minute lessons, even 22 of them, is never going to be comprehensive but, as a means for getting started, I have to say that the approach seems effective based on this volume. It has certainly whetted my appetite for giving AJAX a go, and it’ll be interesting to see how things progress from here.

Using the SAS FILENAME statement to extract directory file listings into SAS

30th May 2007

The filename statement's pipe option allows you to direct the output of operating system commands into SAS for further processing. Usefully, the Windows dir command (with its /s switch) and the UNIX and Linux equivalent ls allow you to get a file listing into SAS. For example, here's how you extract the list of files in your UNIX or Linux home directory into SAS:

filename dirlist pipe 'ls ~';                   /* route the command's output into SAS */
data dirlist;
    length filename $200;
    infile dirlist length=reclen;               /* reclen receives each record's length */
    input filename $varying200. reclen;         /* read a file name of varying length */
run;
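
For the Windows dir command mentioned earlier, only the filename statement needs to change; the same data step then reads the output. The path here is a hypothetical one, and adding the /b switch keeps the listing to bare file paths:

filename dirlist pipe 'dir /s /b "C:\Projects"';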

Using the ftp option on the filename statement allows you to get a list of the files in a directory on a remote server, even one with a different operating system to that used on the client (PC or server); this is very useful where cross-platform systems are involved. Here's some example code:

filename dirlist ftp ' ' ls user='user' host='host' prompt;
data _null_;
    length filename $200;
    infile dirlist length=reclen;
    input filename $varying200. reclen;
    put filename=;                              /* write each entry to the log */
run;

The PROMPT option will cause SAS to ask you for a password, and the null string is where you would otherwise specify the name of a file.
