Technology Tales

Notes drawn from experiences in consumer and enterprise technology

TOPIC: PROGRAMMING LANGUAGES

Getting custom Python imports to work in Visual Studio Code

18th February 2022

While I continue to use Spyder as my preferred Python code editor, I have also tried out Visual Studio Code. Handily, this Integrated Development Environment also has facilities for working with R and Julia code, as well as Markdown text editing, and adding the required extensions is enough to support all of these; it helps that there is an unofficial Grammarly extension for content creation.

My Python code development makes use of the Pylance extension, and it works a little differently from Spyder when it comes to including files using import statements. Spyder will look in the folder where the base script is located, but the default behaviour of Pylance is to look in the root path of your workspace. This meant that code which ran successfully in Spyder failed in Visual Studio Code.

To solve this issue, I added the location of the files to the python.analysis.extraPaths setting for the workspace. Opening Settings via File > Preferences > Settings and typing python.analysis.extraPaths in the search box brought up the correct section; clicking on Add Item, entering the required path and clicking OK resolved the problem, and everything worked properly afterwards.
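
For anyone preferring to edit configuration files directly, the same change can be made in the workspace's .vscode/settings.json file. Here is a minimal sketch, with the folder name being a placeholder for wherever the imported files actually live:

{
    "python.analysis.extraPaths": [
        "./scripts"
    ]
}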

Broadening data science horizons: Useful Python packages for working with data

14th October 2021

My response to changes in the technology stack used in clinical research is to develop some familiarity with programming and scripting platforms that complement and compete with SAS, a system with which I have been programming since 2000. While one of these has been R, Python is another that has taken up my attention, and I now have Julia in my sights as well. There may be others to assess in the fullness of time.

While I began to explore the Data Science world in the autumn of 2017, it was in the autumn of 2019 that I began to complete LinkedIn training courses on the subject. Good though they were, I find that I need to actually use a tool to understand it better. At that time, I did get to hear about Python packages like Pandas, NumPy, SciPy, Scikit-learn, Matplotlib, Seaborn and Beautiful Soup, though it took until the spring of this year for me to start gaining hands-on experience with any of these.

During the summer of 2020, I attended a BCS webinar on the CodeGrades initiative, a programming mentoring scheme inspired by the way classical musicianship is assessed. In fact, one of the main progenitors is a trained classical musician and teacher of classical music who turned to Python programming for a more stable income when starting a family. The approach is that a student selects a project and works their way through it, with mentoring and periodic assessments carried out in a gentle and discursive manner. Of course, the project has to be engaging for the learning experience to stay the course, and that point came through in the webinar.

That is one lesson that resonates with me, with subjects as diverse as web server performance and the ongoing pandemic supplying data, and there are other sources of public data to examine as well before looking through my own personal archive gathered over the decades. Though some subjects are uplifting while others are more foreboding, the key thing is that they sustain interest and offer opportunities for new learning. Without being able to dream up new things to try, my knowledge of R and Python would not be as extensive as it is, and I hope that it will help with learning Julia too.

In the main, my own learning has been a solo effort, with consultation of documentation along with web searches that have brought me to the likes of Real Python, Stack Abuse, Data Viz with Python and R and others for longer tutorials, as well as threads on Stack Overflow. Usually, the web searching begins when I need a steer on a particular task or a way to resolve a particular error or warning message, but books are always worth reading, even if that is the slower route. While those from the Dummies series or from O'Reilly have proved most useful so far, I do need to read them more completely than I have already; it is all too tempting to go with the "program and search for solutions as you go" approach instead.

To get going, many choose the Anaconda distribution to get Jupyter notebook functionality, but I prefer a more traditional editor, so Spyder has been my tool of choice for Python programming, and there are others like PyCharm as well. Because Spyder itself is written in Python, it can be installed using pip from PyPI like other Python packages. It has other dependencies like Pylint for code management activities, but these get installed behind the scenes.
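
Installing it needs nothing more than the usual pip invocation; the package name on PyPI is simply spyder:

pip install spyder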

The packages that I first met in 2019 may be the mainstays for doing data science, but I have discovered others since then. It also seems that there is porosity between the worlds of R and Python, so you get some Python packages aping R packages, while R has the Reticulate package for executing Python code. There are Python counterparts to such Tidyverse staples as dplyr and ggplot2 in the form of Siuba and Plotnine, respectively. Though the syntax of these packages is not a direct copy of what is executed in R, it is close enough to feel familiar, which adds user-friendliness compared to Pandas or Matplotlib. The interoperability does not stop there, for there is SQLAlchemy for connecting to MySQL and other databases (PyMySQL is needed as well), and there is also SASPy for interacting with SAS Viya.
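
To give a flavour of the database side, here is a minimal sketch of a SQLAlchemy connection that uses PyMySQL as the driver; the host, credentials and database name are placeholders for illustration only:

from sqlalchemy import create_engine, text

# PyMySQL sits behind SQLAlchemy's mysql+pymysql dialect
engine = create_engine("mysql+pymysql://user:password@localhost/testdb")

with engine.connect() as conn:
    result = conn.execute(text("SELECT VERSION()"))
    print(result.scalar())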

While Python may not have the speed of Julia, there are plenty of packages for working with larger workloads. Of these, Dask, Modin and RAPIDS all have their uses for dealing with data volumes that make Pandas code crawl. As if to prove that there are plenty of libraries for various forms of data analytics, data science, artificial intelligence and machine learning, there also are the likes of Keras, TensorFlow and NetworkX. These are just a selection of what is available, and there is always the possibility of checking out others. It may be tempting to stick with the most popular packages all the time, especially when they do so much, but it never hurts to keep an open mind either.
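
As a quick taste of how these work, below is a minimal sketch using Dask; the file pattern and column names are made up for illustration, and the point is that the Pandas-like expression only gets evaluated when compute is called:

import dask.dataframe as dd

# Lazily read a collection of CSV files that would overwhelm a single Pandas frame
df = dd.read_csv("logs-2021-*.csv")

# The familiar Pandas-style expression builds a task graph; compute() runs it
print(df.groupby("status")["bytes"].mean().compute())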

Getting Eclipse to start without incompatibility errors on Linux Mint 19.1

12th June 2019

Recent curiosity about Java programming and Groovy scripting got me trying to start up the Eclipse IDE that I had installed on my main machine. What I got instead of a successful application startup was a message that included the following:

!MESSAGE Exception launching the Eclipse Platform:
!STACK
java.lang.ClassNotFoundException: org.eclipse.core.runtime.adaptor.EclipseStarter
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:466)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:566)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:626)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
at org.eclipse.equinox.launcher.Main.main(Main.java:1414)

The cause was a mismatch between Eclipse and the installed version of Java that it needed to run. After all, the software itself is written in the Java language, and the version installed from the usual software repositories was too old to work with Java 11. The solution turned out to be installing a newer version as a Snap (Ubuntu's answer to Flatpak). The following command did the needful, since snapd was already running on my machine:

sudo snap install eclipse --classic

The only part of the command that warrants extra comment is the --classic switch, since that is needed for a tool like Eclipse that has to access the host file system. On execution, the software was downloaded from Snapcraft and then installed within its own bundle of dependencies. The latter adds a certain detachment from the underlying Linux installation and ensures that no messages appear because of incompatibilities like the one near the start of this post.

On Making PROC REPORT Work Harder

1st September 2010

In the early years of my SAS programming career, there seemed to be just one procedure to use if you wanted to create a summary table. That was TABULATE, and it was great for generating columns according to the value of a variable, such as the treatment received by a subject in a clinical study. To a point, it could generate statistics for you too, and I often used it to sum frequency and percentage variables. Since then, it seems to have been enhanced a little, and it surprised me with the statistics it could produce when I had a recent play. Here's the code:

proc tabulate data=sashelp.class;
    class sex;
    var age;
    table age*(n median*f=8. mean*f=8.1 std*f=8.1 min*f=8. max*f=8. lclm*f=8.1 uclm*f=8.1),sex
          / misstext="0";
run;

When you compare that with the idea of creating one variable per column and then defining them in PROC REPORT, as many do, it looks more elegant, and the results aren't bad either, though they can be tweaked further from the quick example that I generated. That last comment brings me to the point that PROC REPORT seems to have taken over from TABULATE wherever I care to look these days, and I do ask myself if it is the right tool for the jobs it is being given, or if it is being used in the best way.

While using a DATA step to create one variable per column in a PROC REPORT output doesn't strike me as the best way to write reusable code, there are ways to make PROC REPORT do more for you. For example, by defining GROUP, ACROSS and ANALYSIS columns in an output, you can persuade the procedure to do the summarising for you; there is some example code below, where the comma in the COLUMNS statement nests height under sex in the resulting table. Sums are created by default if you do this, and forgoing an analysis column definition means that you get a frequency table, not at all a useless thing in numerous instances.

proc report data=sashelp.class nowd missing;
    columns age sex,height;  /* the comma nests height under sex */
    define age / group "Age";
    define sex / across "Sex";
    define height / analysis mean format=8.1 "Mean Height";
run;

For those times when you need to create more heavily formatted statistics (summarising a range as min-max rather than showing min and max separately, for example), you might feel that the GROUP/ACROSS set-up's non-display of character values puts a stop to using that approach. However, I found that making every value combination unique and attaching a cell ID helps to work around the problem. Then, you can create a format control data set from the data, as in the code below, and build a format from that which you can apply to the cell IDs to display things as you need them. This method makes things more portable from situation to situation than adding or removing columns depending on the values of a classification variable.

proc sql noprint;
    create table cntlin as
        select distinct "cellfmt" as fmtname, cellid as start, cellid as end, decode as label
            from report;  /* cellfmt is the name given to the new format */
quit;

proc format lib=work cntlin=cntlin;
run;

Understanding Perl binding operators for pattern matching

20th May 2009

While this piece is as much an aide-mémoire for myself as anything else, putting it here seems worthwhile if it answers questions for others. The binding operators, =~ and !~, come in handy when you are framing conditional statements in Perl using regular expressions, for example, testing whether $x =~ /\d+/ is true or not. The =~ variant is also used for changing strings using the s/[pattern1]/[pattern2]/ regular expression construct (here, s stands for "substitute"). What has brought this to mind is that I wanted to ensure that something was done for strings that did not contain a certain pattern, and that's where the !~ binding operator came in useful; ^~ might have come to mind for some reason, but it wasn't what I needed.
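
A minimal sketch pulls all three together; the string and patterns are made up for illustration:

my $text = "Order 394 shipped";

if ($text =~ /\d+/) {
    print "Contains a number\n";    # =~ is true when the pattern matches
}

if ($text !~ /cancelled/) {
    print "Not cancelled\n";        # !~ is true when the pattern does not match
}

$text =~ s/\d+/###/;                # s/// substitutes the matched pattern in place
print "$text\n";                    # prints "Order ### shipped"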

When web hosting limits become too restrictive

27th April 2009

Fasthosts has, in their wisdom, decided to limit the execution time for ASP scripts to 15 seconds and to 10 seconds for any others. I haven't used Perl sufficiently in this shared hosting set-up to determine how that is affected. In contrast, I can share my experiences on the PHP side, and you may have noticed occasional glitches. They have also disabled the set_time_limit PHP function, so you cannot easily address the matter yourself when you need to do so. You almost get the feeling that they don't trust the abilities, actions and oversight of their users. Personally, I reckon that the ten-second limit is too short and that something of the order of 20 or 30 seconds would be better. If it all gets too restrictive, I suppose that there are other providers, though I think that I would avoid resellers after a previous less than glorious experience. There's the dedicated server option too if I were feeling flush, though that's not so likely given the economic times in which we live.

java.net.MalformedURLException: unknown protocol: j

15th December 2007

While I know that there are better things to call a blog post than part of an error message that I got from Saxonica's Saxon when converting XML files into PHP equivalents for the visitor information section of my main website, it is handy for anyone else needing to look up a solution when they encounter it. In my case, I use the open-source Saxon-B rather than the commercial Saxon-SA, and it fulfils all of my needs. Versions 8 and later (it has now reached 9.0.0.2) handle the XSLT 2.0 features that I need to make the transformations really clever.

Also, because Saxon is available as a jar file, it is cross-platform so long as you have Java on board. There are, however, some slight differences in behaviour. Now, I run the thing on Linux, where Windows-style file locations are not recognised. When I had a file path starting with J:\ in a DTD declaration, that was taken to be a protocol like file, http, https, ftp and so on because of the colon. Since there is no j protocol, Java gets confused and issues the rather obscure error that titles this post. Otherwise, the migration of the Perl script that creates XSLT files and fires off the required XML to PHP transformations was a fairly straightforward exercise once file locations and the shebang line were set right.
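
As a hypothetical illustration (the file names are made up), a declaration like the first line below triggers the error on Linux, while the URL form in the second does not:

<!DOCTYPE locations SYSTEM "J:\web\locations.dtd">
<!DOCTYPE locations SYSTEM "file:///home/user/web/locations.dtd">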

Setting up a test web server on Ubuntu

1st November 2007

Installing all the bits and pieces is painless enough so long as you know what's what; Synaptic does make it thus. Interestingly, Ubuntu's default installation is a lightweight affair, with any additional components being downloaded from the web. The whole process is all very well integrated and doesn't make you sweat every time you need to install additional software. In fact, it resolves any dependencies for you so that those packages can be put in place too; it lists them, you select them and Synaptic does the rest.

Returning to the job in hand, my shopping list included Apache, Perl, PHP and MySQL, the usual suspects in other words. Perl was already there, as it is on many UNIX systems, so installing the appropriate Apache module was all that was needed. PHP needed the base installation as well as the additional Apache module. MySQL needed the full treatment too, though its being split up into different pieces confounded things a little for my tired mind. Then, there were the MySQL modules for PHP to be set in place too.
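
For anyone preferring the command line, apt-get would have done the same job. Here is a rough sketch using the package names as they were around that time (php5 has long since given way to newer versions):

sudo apt-get install apache2 libapache2-mod-perl2
sudo apt-get install php5 libapache2-mod-php5
sudo apt-get install mysql-server php5-mysql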

The addition of Apache preceded all of these, but I have left it until now to describe its configuration, something that took longer than for the others; the installation itself was as easy as it was for the others. However, what surprised me were the differences in its configuration set-up when compared with Windows. There are times when we get the same software but on different operating systems, which means that configuration files get set up differently. The first difference is that the main configuration file is called apache2.conf on Ubuntu rather than httpd.conf as on Windows. Like its Windows counterpart, Ubuntu's Apache does use subsidiary configuration files. However, there is an additional layer of configurability added courtesy of a standard feature of UNIX operating systems: symbolic links. Rather than having a single folder with all the configuration files stored therein, there are two pairs of folders, one pair for module configuration and another for site settings: mods-available/mods-enabled and sites-available/sites-enabled, respectively. In each pair, there is a folder with all the files and another containing symbolic links; it is the presence of a symbolic link for a given configuration file in the latter that activates it. I learned all this when trying to get mod_rewrite going and changing the web server folder from the default to somewhere less susceptible to wrecking during a re-installation or, heaven forbid, a destructive system crash. It's unusual, but it does work, even if it takes that little bit longer to get things sorted out when you first meet up with it.
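
Ubuntu supplies helper scripts that create or remove those symbolic links for you, so enabling mod_rewrite and a site definition can be as simple as the commands below, followed by a reload of Apache; the site name is a placeholder:

sudo a2enmod rewrite
sudo a2ensite mysite
sudo /etc/init.d/apache2 reload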

Apart from the Apache set up and finding the right things to install, getting a test web server up and running was a fairly uneventful process. All's working well now, and I'll be taking things forward from here; making website Perl scripts compatible with their new world will be one of the next things that need to be done.

Tidying dynamic URLs

15th June 2007

A few years back, I came across a very nice article discussing how you would make a dynamic URL more palatable to a search engine, and I made good use of its content for my online photo gallery. The premise was that URLs like the one below are no help to search engines indexing a website. Though this is received wisdom in some quarters, it doesn't seem to have done much to stall the rise of WordPress as a blogging platform.

http://www.mywebsite.com/serversidescript.php?id=394

That said, WordPress does offer a friendlier URL display option too, which you can see in use on this blog; those URLs look a little like the example below, and the approach is equally valid for both Perl and PHP. Since I have been using it for the Perl scripts powering my online photo gallery, I now want to apply the same thinking to a gallery written in PHP:

http://www.mywebsite.com/serversidescript.pl/id/394

The way that both expressions work is that a web server will chop pieces from a URL until it reaches a physical file. For a query URL, the extra information after the question mark is retained in its QUERY_STRING variable, while extraneous directory path information is passed in the variable PATH_INFO. For both Perl and PHP, these are extracted from the entries in an associative array: for Perl, this is the %ENV hash, while $_SERVER is the PHP equivalent. Thus, $ENV{QUERY_STRING} and $_SERVER['QUERY_STRING'] trap what comes after the ?, while $ENV{PATH_INFO} and $_SERVER['PATH_INFO'] pick up the extra information following the file name (/id/394 in the example). From there on, the usual rules apply regarding cleaning of any input, but changing from one to the other should not be too arduous.
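
A minimal PHP sketch of the idea follows; the parameter layout and variable names are made up for illustration:

<?php
// For a request like /serversidescript.php/id/394, PATH_INFO holds "/id/394"
$path = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '';
$parts = explode('/', trim($path, '/'));

if (count($parts) >= 2 && $parts[0] == 'id') {
    $id = (int) $parts[1];    // casting to integer doubles as basic input cleaning
    echo "Requested item: $id";
}
?>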

Perl vs. PHP: A Personal Experience

11th June 2007

Ever since I converted it from a client-side JavaScript-powered affair, my online photo gallery has been written in Perl. There have been some challenges along the way, with figuring out how to use hash tables being one, but everything has worked as expected. However, I am now wondering if it would be better to write things in PHP for the sake of consistency with the rest of the website. I had a go at rewriting the random photo page and, unless I have been missing something in the Perl world, things do seem more succinct with PHP. For instance, actions that formerly involved several lines of code can now be achieved in one. Reading the contents of a file into an array and stripping HTML/XML tags from a string fall into this category, and seeing the number of lines of code halve is a striking observation. I am not going to abandon Perl completely, since it is a very nice language, but I do rather suspect that there is now an increased chance of my having a website whose server-side processing needs are served entirely by PHP.
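
For what it's worth, the two one-liners mentioned above look like this in PHP; the file name is a placeholder:

<?php
$lines = file('gallery-list.txt');               // whole file into an array, one line per element
$clean = strip_tags('<p>Photo <b>394</b></p>');  // strips HTML/XML tags, leaving "Photo 394"
?>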
