An alternate approach to setting up a local Git repository with a remote GitHub connection
24th March 2025
For some reason, I ended up with two versions of this at the draft stage, forcing me to compare and contrast before merging them to produce what you find here. The inspiration was something that I encountered a while ago: getting a local repository set up in a perhaps unconventional manner.
The simpler way of working would be to set up a repo on GitHub and clone it to the local machine, yet other needs sometimes call for doing things differently. In contrast, this scheme starts with initialising the local directory first. After creating it with some content and navigating there, issue the following command:
git init
This marks the directory as a Git repository by creating a hidden .git directory, allowing you to track changes. Because Git's security measures often require verification of directory ownership when executing commands, it is best to configure a safe directory with the following command to avoid any issues:
git config --global --add safe.directory [path to directory]
In the above, replace the path with your specific project directory. This ensures that Git recognises your directory as safe and authorised for operations, avoiding warning messages whenever you work in there.
With that completed, it is time to add files to the staging area, which is where you review and choose the changes to be committed to the repository. Here are two commands that show different ways of accomplishing this:
git add README.md
git add .
The first command stages the README.md file, preparing it for the next commit, while the second stages all files in the directory (the . refers to the current directory, so everything in there gets included).
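If you want to review what has been staged before going any further, a quick check is available with this command:
git status
It lists staged, unstaged and untracked files, confirming that nothing unintended is about to be committed.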
Once your files are staged, you are ready to commit them. A commit is essentially a snapshot of your changes, marking a specific point in the project’s history. The following command will commit your staged changes:
git commit -m "first commit"
The -m flag allows you to add a descriptive message for context; here, it is “first commit”. This message helps with understanding the purpose of the commit when reviewing project history.
Before pushing your local files online, you will need to create an empty repository on GitHub using the GitHub website if you do not have one already. While still on the GitHub website, click on the Code button and copy the URL shown under the HTTPS tab that is displayed. This takes the form https://github.com/username/repository.git and is required for running the next command in your local terminal session:
git remote add origin https://github.com/username/repository.git
This command establishes a remote connection under the alias origin. By default, Git sets the branch name to ‘master’; however, recent conventions prefer ‘main’. To rename your branch, execute:
git branch -M main
This command will rename your current branch to ‘main’, aligning it with modern version control standards. Finally, you must push your changes from the local repository to the remote repository on GitHub, using the following command:
git push -u origin main
The -u flag sets the upstream reference, meaning future push and pull operations will default to this remote branch. This last step completes the process of setting up a local repository, linking it to a remote one on GitHub, staging any changes, committing these and pushing them to the remote repository.
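For quick reference, here is the whole sequence in one place; the safe directory path and the repository URL are placeholders to swap for your own:
git init
git config --global --add safe.directory /path/to/project
git add .
git commit -m "first commit"
git remote add origin https://github.com/username/repository.git
git branch -M main
git push -u origin main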
Enhancing focus and wellbeing by eliminating digital distractions while browsing the web
23rd March 2025
Such is the state of the world at the moment that I ration my news intake for the sake of my mental wellbeing. That also includes the content that websites present to me. Last November, I was none too pleased to see Perplexity showing me something unwanted on its home page. However, there appeared to be no way to turn this off, in contrast to the default page shown in a new browser tab. At the time, I decided to tolerate the intrusion, only for the practice to develop over time.
Then, I happened on uBlock Origin after finding that it can block unwanted parts of web pages. While it was a bit hit-and-miss to get things going on the Perplexity website, it did the job after some trial and error. Things can change, which means the blocking may need refinement; even so, I can handle that. YouTube then became another place where I needed to block distractions, such as previews of other videos appearing during a webinar.
Now, uBlock Origin has become the only ad blocker that I still use with Firefox. Others like Ghostery broke websites with their cookie blocking, especially that of the UK Met Office; the Ryanair one was another casualty, and it fell foul of Pi-hole too. Thus, they were left behind in favour of a single-tool approach. Though some websites may complain, anything that cuts out distractions has to help productivity and emotional wellbeing.
VirtualBox memory allocation error: Solving Linux Mint host issues after LLM usage
22nd March 2025
It happened to me today when I tried starting up Windows virtual machines in VirtualBox on my main Linux Mint workstation, acting as the host, after a long layoff for these. They failed to start, only for these messages to appear:
Out of memory condition when allocating memory with low physical backing. (VERR_NO_LOW_MEMORY).
Result Code: NS_ERROR_FAILURE (0x80004005)
Component: ConsoleWrap
Interface: IConsole {6ac83d89-6ee7-4e33-8ae6-b257b2e81be8}
Since the messages are cryptic in the circumstances, I had to seek out their meaning. The system has plenty of memory, so it could not simply be a shortage of that. Various suggestions came my way, like installing the VirtualBox Extension Pack or reinstalling the VirtualBox Extensions in the affected VM. The first had no effect, while the second was impossible.
However, there was one more suggestion: fragmentation of memory, much like file fragmentation on a disk drive. Thus, I opted for a reboot, which sorted things out, making it look as if that were the problem. If it comes up again, I might try compacting the memory with the following command, leaving it a while to complete because of any temporary system slowdown:
echo 1 > /proc/sys/vm/compact_memory
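One point to note is that /proc/sys/vm/compact_memory is owned by root, so plain shell redirection only works in a root shell; a small sketch using sudo with tee instead, together with an optional before-and-after look at free-memory fragmentation:
echo 1 | sudo tee /proc/sys/vm/compact_memory
cat /proc/buddyinfo
In the /proc/buddyinfo output, dwindling counts in the higher-order columns are the sign of a fragmented pool of free memory.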
Because there had been some on-machine usage of an LLM, I now reckon that caused the malaise. These can be as heavy on memory as they are on processors, so fragmentation can result. That is yet another likely lesson learned from experimenting with this much-hyped technology.
How to make Firefox vertical scrollbars more visible on Windows 11
21st March 2025
While some articles on the web have reading times added to them, the vertical scrollbar of a web browser can also act as a hint of the length of a piece. Unfortunately, scrollbars are being made less conspicuous for the sake of aesthetics and at the expense of utility. Since Firefox is the browser that I use most of the time, addressing the matter there became a priority for me. Here, then, is how you configure things on Windows 11.
The first step is to open a new tab before entering about:config in the URL bar and pressing the return key on your keyboard. If doing this for the first time, you will meet a warning screen that you can disable. Agreeing to the warning conveys you to the next screen where you can enter the string “scrollbar” and use the enter key to bring up a swathe of settings.
There are two that you need to set to false by double-clicking on the pre-existing value of true: widget.windows.overlay-scrollbars.enabled and widget.non-native-theme.win.scrollbar.use-system-size. There is one more setting that you need to tweak: widget.non-native-theme.scrollbar.size.override should be given a value greater than zero, its default. Using a value of ten did what I wanted once I restarted Firefox. After that, I have things as I want them to be, though you may want to refine the width setting for your needs.
Keeping a graphical eye on CPU temperature and power consumption on the Linux command line
20th March 2025
Following my main workstation upgrade in January, some extra monitoring has been needed. This follows on from the experience with building its predecessor more than three years ago.
Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like what you see below, using a combination of sensors and nvidia-smi in the following command:
watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo '' | tee /dev/fd/2; nvidia-smi"
Everything is done within a watch command that refreshes the display every two seconds. Then, the panels are built up by a succession of commands separated with semicolons, one for each portion of the display. The grep command is used to pick out the desired output of the sensors command that is piped to it; doing that twice gets us two lines. The next command, echo '' | tee /dev/fd/2, adds a blank separator line by echoing an empty string and duplicating it to STDERR before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.
However, I also came across a more graphical way to do things using commands like turbostat or sensors, along with AWK programming and ttyplot. Using the temperature output from the above and converting that needs the following:
while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"
This is done in an infinite while loop to keep things refreshing; the watch command does not work for piping output from the sensors command to both the awk and ttyplot commands in sequence and on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two places of decimals and prints it before flushing the output buffer afterwards. The ttyplot command then plots those numbers on the plot seen in the screenshot below, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).
A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:
sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | sudo awk '{ printf("%.2f\n", $1); fflush(); }' | sudo ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"
Handily, the turbostat command can be made to update every so often (every second in the command above), avoiding the need for any infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though everything needs to work using superuser privileges, unlike the sensors command earlier. Then, awk is used like before to process the wattage for plotting; the first field is what is being picked out here. After that, ttyplot displays the plot seen in the screenshot below with appropriate title, units and scaling. All works with output from one command acting as input to another using pipes.
All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.
Avoiding repeated token requests by installing the Git credential helper on Linux Mint
19th March 2025
On a new machine, I found Git asking for the same access token repeatedly. Since this is a long string, that is inconvenient and does not take long to become irritating. Thus, I sought a way to make things more streamlined. My initial attempt produced the following message:
git: 'credential-libsecret' is not a git command
The main cause of the above was the absence from my system of the libsecret credential helper, which is crucial for managing credentials securely in a keyring. The solution was to install the required packages from the command line:
sudo apt install libsecret-1-0 libsecret-1-dev
Following installation, the next step was to navigate to the appropriate directory and execute the make command to compile the files within the directory, transforming them into an executable credential helper:
cd /usr/share/doc/git/contrib/credential/libsecret; sudo make
With the credential helper fully built, Git needed to be configured to use it by executing the following:
git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
Since one error message is enough for any new activity, it made sense to confirm that the credential helper resided in the correct location. That was accomplished by issuing this command:
ls -l /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
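As an extra, optional check, the configured value can also be queried back from Git itself, which should echo the helper path set earlier:
git config --global --get credential.helper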
All was well in my case, saving the need to reinstall Git or repeat the manual compilation of the credential helper. When all was done, I was ready to automate things further.
Dealing with this Python error message on Windows: UnicodeEncodeError: ‘charmap’ codec can’t encode characters in position 56-57: character maps to <undefined>
14th March 2025
Recently, I got caught out by the above message when summarising some text using Python and OpenAI's API while working within VS Code. There was no problem on Linux or macOS, but it was triggered on the Windows command line from within VS Code. Unlike the Julia or R REPLs, everything in Python gets executed in the console like this:
& "C:/Program Files/Python313/python.exe" script.py
On Windows, Python defaults to the cp1252 character encoding rather than UTF-8, and that was tripping up code like the following:
with open("out.txt", "w") as file:
file.write(new_text)
The cure was to specify the encoding of the output text as utf-8:
with open("out.txt", "w", encoding='utf-8') as file:
file.write(new_text)
After that, all was well and text was written to a file just as on the other operating systems. One other thing to note is that the use of backslashes in file paths is another gotcha. Adding an r before the opening quote makes the path a raw string, so the backslashes are not treated as escape characters; using double backslashes or forward slashes are other options.
with open(r"c:\temp\out.txt", "w", encoding='utf-8') as file:
file.write(new_text)
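As an aside, Python's UTF-8 mode changes the default text encoding for the whole run instead of per call; it can be switched on with the -X utf8 option (or the PYTHONUTF8=1 environment variable), so an invocation like the earlier one becomes:
& "C:/Program Files/Python313/python.exe" -X utf8 script.py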
Finding human balance in an age of AI code generation
12th March 2025
Recently, I was asked how I felt about AI. Given that the other person was not an enthusiast, I drew on something that happened to me not so long ago. It involved both Perplexity and Google Gemini when I was trying to debug something: both produced too much code. The experience almost inspired a LinkedIn post, only for some of the thinking to end up online here for now instead. A spot of brainstorming using an LLM sounds like a useful exercise.
Going back to the original question, it happened during a meeting about potential freelance work. Thus, I tapped into experiences with code generators over several decades. The first one involved a metadata-driven tool that I developed; users reported that there was too much imperfect code to debug with the added complexity that dealing with clinical study data brings. That challenge resurfaced with another bespoke tool that someone else developed, and I opted to make things simpler: produce some boilerplate code and let users take things from there. Later, someone else again decided to have another go, seemingly with more success.
It is even more challenging when you are insufficiently familiar with the code that is being produced. That happened to me with shell scripting code from Google Gemini that was peppered with some Awk code. There was no alternative but to learn a bit more about the language from Tutorials Point and seek out an online book elsewhere. That did get me up to speed, and I will return to these when I am in need again.
Then, there was the time when I was trying to get a Julia script to deal with Google Drive needing permissions to be set. This set Google Gemini off adding more and more error-checking code with try/catch blocks. Since I did not have the issue at that point, I opted to halt and wait for its recurrence. When it did recur, I opted for a simpler approach, especially with the gdrive CLI tool starting up a web server to complete the process of reactivation. While there are times when shell scripting is better than Julia for these things, I added extra robustness and user-friendliness anyway.
During that second task, I was using VS Code with the GitHub Copilot plugin. There is a need to be careful, yet that can save time when it adds suggestions for you to include or reject. The latter may apply when it adds conditional logic that needs more checking, while simple code outputting useful text to the console can be approved. While that certainly is how I approach things for now, it brings up an increasingly relevant question for me.
How do we deal with all this code production? In an environment with myriads of unit tests and a great deal of automation, there may be more capacity for handling the output than mere human inspection and review, which can overwhelm the limitations of a human context window. A quick search revealed that there are automated tools for just this purpose, possibly with their own learning curves; otherwise, manual working could be a better option in some cases.
After all, we need to do our own thinking too. That was brought home to me during the Julia script editing. To come up with a solution, I had to step away from LLM output and think creatively to find something simpler. There was a tension between the two needs during the exercise, which highlighted how important it is to learn not to be distracted by all the new technology. Being an introvert, I need that solo space, only to find that I now have to step away from technology to get it, when technology was once a refuge in the first place.
Anyone with a programming hobby has to limit all this input to avoid being overwhelmed; learning a programming language could involve stripping AI extensions out of a code editor, for instance. LLM output has its place, yet it has to be at a human scale too. That perhaps is the genius of a chat interface, and we now have Agentic AI too. It is as if the technology curve never slackens, at least not until the current boom ends, possibly when things break because they go too far beyond us. All this acceleration is fine until we need to catch up with what is happening.
Incorporating tmux in a terminal workflow
11th March 2025
As part of a recent workstation upgrade and subsequent AI explorations to see what runs on a GPU, I got to use tmux to display two panes within a terminal session on Linux Mint, each with output from a different system monitoring command; one of these was top for monitoring system processes in a more in-depth way. Some of that need has passed, yet I retain tmux and have even set it to open in every new terminal session by adding the following code to my .bashrc file:
if command -v tmux &> /dev/null && [ -z "$TMUX" ]; then
    tmux new
fi
This tests that tmux is installed and that the shell is not already running inside an existing tmux session before opening a new one. You can also attach to an existing session, or create a default session when none exists, if you like. That changes the second line of the above code to this:
tmux attach -t default || tmux new -s default
Wanting to have everything fresh in a new session, I decided against that. While I have gone away from using tmux panes for the moment, there is a cheat sheet that could have its uses if I return to them, and another post elsewhere describes resizing the panes too, which came in very useful for that early dalliance with system monitoring.
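For completeness, the two-pane monitoring layout can be recreated with a few commands; a rough sketch, assuming top in one pane and a periodic nvidia-smi in the other, with an arbitrary session name of monitor:
tmux new-session -d -s monitor 'top'
tmux split-window -h -t monitor 'watch -n 2 nvidia-smi'
tmux attach -t monitor
Panes can then be adjusted with the tmux resize-pane command or the default prefix key bindings.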
From convex to concave: reflections on decades of computer monitor usage
10th March 2025
Within the last week, I changed my monitor and am without an Iiyama in my possession for the first time since 1997. The first one was a 17″ CRT screen that accompanied my transition from education into work. Those old screens were not long-lasting, though, especially since it replaced a 15″ Dell screen that had started to work less well than I needed; the larger size was an added attraction after I saw someone with a 21″ Iiyama at the university where I was pursuing a research degree.
Work saw me using a 21″ Philips screen myself for a time before Eizo flat screen displays were given to us as part of a migration to Windows 2000. That inspired me to get a 17″ Iiyama counterpart to what I had at work. Collecting that sent me on an errand to a courier’s depot on the outskirts of Macclesfield. The same effort may have been accompanied by my dropping my passport, which I was using for identification. That thankfully was handed into the police, so I could get it back from them, even if I was resigned to needing a new one. More care has been taken since then to avoid a repeat.
The screen worked well, though I kept the old one as a backup for perhaps far too long. It took some years to pass before I eventually hauled it to the recycling centre; these days, I might try a nearby charity shop before setting off on such a schlep. In those times, LCD screens lasted so well that they could accumulate if you were not careful. The 17″ Iiyama accompanied my migration from Windows to Linux and a period of successful and ill-fated PC upgrades, especially a run of poor luck in 2009.
2010 saw me change my place of work, and a 24″ Iiyama was acquired just before then. Again, its predecessor was retained in case anything went awry and eventually went to a charity shop, from where it could go into a new life. There was no issue with the new acquisition, and it went on to do nearly twelve years of work for me. A 34″ Iiyama replaced it a few years ago, yet I wonder if that decision was the best. Apart from more than a decade of muck on the screen, nothing else was amiss. Even a major workstation upgrade in 2021 did little to challenge it. Even so, it too went to a charity shop in search of a new home.
This year’s workstation overhaul did few favours to that 34″ successor. While it had always been sluggish to wake, it had never done anything like going into the cycles of non-responsiveness that it fell into on numerous occasions in the last few months. Compatibility with a Mac Mini could have been better, too. The result is that I am writing these words using a Philips B346C1 instead, and it has few of the issues that beset the Iiyama, save for needing to remove and insert an HDMI cable for a Mac Mini at times.
Screen responsiveness is a big improvement, especially when switching between machines using a KVMP switch. Wake-up times are noticeably shorter, and there is much better reliability. However, it did take a deal of time to optimise its settings to my liking. The OSD may be more convenient than the Iiyama’s, yet having Windows software that did the same thing made configuration a lot easier. While getting acceptable output across Windows, Linux and macOS has been a challenge, there is a feeling that things are nearly there.
Another matter is the fact that this is a curved screen. In some ways, that is akin to the move from a 24″ screen to a 34″ one, when fonts and other items needed enlarging for the bigger screen. After a burst of upheaval, things eventually settle down and acclimatisation ensues. Even though further tinkering cannot be ruled out, there is a sounder base for computing after the changeover.