Technology Tales

Adventures & experiences in contemporary technology

A quick look at the 7G Firewall

17th October 2021

There is a simple principle to the 7G Firewall from Perishable Press: it is a set of mod_rewrite rules for the Apache web server that can be added to a .htaccess file, and there is a version for the Nginx web server as well. These check query strings, request Uniform Resource Identifiers (URIs), user agents, remote hosts, HTTP referrers and request methods for any anomalies, and block those that appear dubious.
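
To give a flavour of the approach without quoting 7G itself, a rule of this kind inspects the query string and forbids anything matching a dubious pattern; the pattern below is a simplified stand-in of my own devising rather than one of the actual 7G rules:

<IfModule mod_rewrite.c>
RewriteEngine On
# Illustrative only: forbid any request whose query string contains "eval("
RewriteCond %{QUERY_STRING} eval\( [NC]
RewriteRule .* - [F,L]
</IfModule>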

Unfortunately, I found that the rules heavily slowed down a website with which I tried them, so I am going to have to wait until that is moved to a faster system before I really can give them a go. This can be a problem with security tools, as I also found when adding a modsec jail to a Fail2Ban instance. As it happens, both sets of observations were made using the GTmetrix tool, so it seems that there is a trade-off between security and speed that needs to be assessed before adding anything to block unwanted web visitors.

Running cron jobs using the www-data system account

22nd December 2018

When you set up your own web server or use a private server (virtual or physical), you will find that web servers run using the www-data account. That means that website files need to be accessible to that system account if not owned by it. The latter is mandatory if you want WordPress to be able to update itself without needing FTP details.

It also means that you probably need scheduled jobs to be executed using the privileges possessed by the www-data account. For instance, I use WP-CLI to automate spam removal and updates to plugins, themes and WordPress itself. Spam removal can be done without the www-data account, but the updates need file access and cannot be completed without it. Therefore, I got interested in setting up cron jobs to run under that account and the following command helps to address this:

sudo -u www-data crontab -e

For that to work, your own account needs to be listed in /etc/sudoers or be assigned to the sudo group in /etc/group. If either of those is the case, then entering your own password will open the cron file for www-data and it can be edited as for any other account. Closing and saving the session will update cron with the new job details.
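
As an example, a job updating plugins nightly at 02:30 might look like the following entry in the www-data cron file; the WordPress path here is a placeholder for wherever the installation lives on your server:

30 2 * * * cd /var/www/html && wp plugin update --all --quiet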

In fact, the same approach can be taken for a variety of commands where files can only be accessed using www-data. This includes copying, pasting and deleting files, as well as executing WP-CLI commands. The latter issues a striking warning if you run a command using the root account, a pervasive temptation given what that allows. Any alternative to the root route has to be better from a security standpoint.
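
The same prefix works for one-off commands too; this hypothetical example updates all themes while acting as www-data, with the path again being a placeholder:

sudo -u www-data wp theme update --all --path=/var/www/html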

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation being applied to my web presences, guided by the results of GTmetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics, while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is the added sense of control, so Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot. That avoids the expense of using an SSL certificate provided through my shared hosting provider and a successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system, and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too and should mean that I have fewer calls on technical support, since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me, and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first was to make the MySQL instance accept calls from outside the server so that I could migrate data there from elsewhere; I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the following line that it contained and restarted the database service afterwards using another command to stop the appearance of any error 111 messages:

bind-address = 127.0.0.1
service mysql restart

After that, things worked as required and I moved on to another matter: uploading the requisite files. That meant installing an FTP server, so I chose proftpd, since I knew it well from previous tinkering. Once that was in place, file transfer commenced.

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also enabled some Apache modules like mod_rewrite using the a2enmod command, restarting Apache each time I enabled another module.
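
For example, enabling mod_rewrite and restarting Apache on Ubuntu 18.04 looks like this:

sudo a2enmod rewrite
sudo systemctl restart apache2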

Then, I discovered that Textpattern needed the php7.2-xml package installed, so the following command was executed to do this:

apt install php7.2-xml

Then, the following line was uncommented in the correct php.ini configuration file, which I found using the same method as that described already for the my.cnf configuration, and that was followed by yet another Apache restart:

extension=xmlrpc
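
To confirm that the extension actually loaded after the restart, a quick check like the following one can be used:

php -m | grep -i xml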

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on Certbot to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The general .htaccess only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all of these achieve is the redirection of insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating, along with any forms, so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle regarding the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix even if it probably is not necessary:

define('FS_METHOD', 'direct');

There also was the non-operation of WP Cron, and that was addressed using WP-CLI and a script from Bjørn Johansen. To make double sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP-Cron behaviour:

define('DISABLE_WP_CRON', true);
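
For illustration, a replacement scheduled job along the lines of that script could look like the following crontab entry for the www-data account; the path and the five-minute interval are assumptions for the sketch rather than anything prescribed:

*/5 * * * * cd /var/www/html && wp cron event run --due-now >/dev/null 2>&1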

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigations are needed, but what I have now is faster than what was there before, so the effort has been rewarded already.
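
For the record, changing the hostname on Ubuntu 18.04 can be done with a command like the one below, with the name itself being a placeholder:

sudo hostnamectl set-hostname vps.example.com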

Installing Firefox Developer Edition in Linux Mint

22nd April 2018

Having moved beyond the slow response and larger memory footprint of Firefox ESR, I am using Firefox Developer Edition in its place even if it means living without a status bar at the bottom of the window. Hopefully, someone will create an equivalent of the old add-on bar extensions that worked before the release of Firefox Quantum.

Firefox Developer Edition may be pre-release software with some extras for web developers, like being able to drill into an HTML element and see its properties, but I am finding it stable enough for everyday use. It is speedy too, which helps, and it has its own profile, so it can co-exist on the same machine as regular releases of Firefox like its ESR and Quantum variants.

Installation takes a little added effort though and there are various options available. My chosen method involved Ubuntu Make. Installing this involves setting up a new PPA as the first step and the following commands added the software to my system:

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make

With the above completed, it was simple to install Firefox Developer Edition using the following command:

umake web firefox-dev

Where things got a bit more complicated was getting entries added to the Cinnamon Menu and Docky. The former was sorted using the cinnamon-menu-editor command, but the latter needed some tinkering with my firefox-developer.desktop file, found in .local/share/applications/ within my user area, to get the right icon shown. Discovering this took me into .gconf/apps/docky-2/Docky/Interface/DockPreferences/%gconf.xml, where I found the location of the firefox-developer.desktop file that needed changing. Once this was completed, there was nothing else to do from the operating system side.
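
For anyone attempting the same, a minimal firefox-developer.desktop file looks something like the following; the Exec and Icon paths are placeholders, since the actual location depends on where Ubuntu Make put things on your system:

[Desktop Entry]
Type=Application
Name=Firefox Developer Edition
Exec=/home/[user]/.local/share/umake/web/firefox-dev/firefox
Icon=/home/[user]/.local/share/umake/web/firefox-dev/browser/chrome/icons/default/default128.png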

Within Firefox itself, I opted to turn off warnings about password logins on non-https websites by going to about:config using the address bar, then looking for security.insecure_field_warning.contextual.enabled and changing its value from true to false. Some may decry this, but there are some local websites on my machine that need attention at times. Otherwise, Firefox is installed with user access, so I can update it as if it were a Windows or macOS application, and that is useful given that there are frequent new releases. All is going as I want it so far.

More thinking on travelling without a laptop

10th December 2016

When it comes to the technology that I carry with me on trips away, I have begun weighing devices on my kitchen scales. The results are a little revealing. The HP Pavilion dm5 that has gone with me to Ireland and other places weighs between 2.5 and 3 kg, while my Apple iPad Mini 2 comes in at 764 grams. My 12.9″ iPad Pro with its Logitech keyboard weighs between these at 1100 to 1200 grams. The idea of consolidating computing devices for travel has been discussed on here before now, and the main thing stopping me just going with the iPad Pro was the viewing of photos without filling up its 32 GB of storage space.

Since then, I just may have found a workaround, and it is another gadget, this time weighing only a few hundred grams: a 1 TB WD My Passport Wireless portable hard drive. Aside from having an SD card slot that allows the automatic backup of photos, it also can connect with tablets and phones over Wi-Fi.

WD My Passport Wireless

It is the WD My Cloud app that makes the connections to mobile devices useful, and it works smoothly on iOS and Android devices too. Nevertheless, there is more functionality on the latter, such as DNG file support and an added slide show feature that works with JPEG files. Both of these are invaluable for viewing photos, and I feel a little short-changed that they are not available on iOS. Hopefully, that will get resolved sooner rather than later.

Thankfully, my Pentax K5 II DSLR camera can be persuaded to save DNG and JPEG files simultaneously, so that they can be viewed full screen on both types of device without having to transfer them onto the tablet first, as you would with Apple’s SD card reader. Usefully, that gets around my oversight in buying iPads with only 32 GB of storage each, which now looks like a false economy given what I am trying.

Such is the weight difference that just taking along my Apple iPad Pro and the WD device will save around 1 kg, and there is less fuss at airport security screening too. While my HTC phone would suffice for seeing photos as slide shows, I am wondering if my battered Google Nexus 9 could come too. The only dilemma then would be how to pack things, since I am not sure how a large iPad screen would seem to cabin crew or other passengers during take-off and landing. That makes using the Nexus 9 onboard more of a proposition, and the iPad might go into the hold luggage to make life a little easier. Still, that choice is a minor concern now that I can try travelling overseas without a laptop to see how I get along.

More thoughts on Windows 10

11th August 2015

Now that I have left Windows 8.x behind me and a number of my machines are running Windows 10, I have decided to revisit my impressions of the operating system. The first Technical Preview was something that I installed in a virtual machine, and I have been keeping an eye on how things have developed since then; I intend to retain a Windows Insider installation to see what might be heading our way as Windows 10 evolves as now expected.

After elaborating on the all-important upgrade process earlier, I am now moving on to other topics. The Start Menu is a big item, but there are others, as you will see below.

Start Menu

Let’s start with an admission: the prototype Start Menu that we got in the initial Windows 10 Technical Preview was more to my liking. Unpinning all the tiles allowed the menu to collapse back to the sort of width that anyone familiar with Windows 7 would have liked. If there had been a setting to expunge all tiles at once and produce this state, I would have been well happy.

It was later that we got to learn that Microsoft was not about to consign the Windows 8 Modern interface entirely to history, as many would have wanted. Some elements remain with us, such as a Start Menu with a mandatory area for tiles and the ability to have it display full screen. Some tiles are live, but this can be turned off on a tile-by-tile basis, and unneeded ones can be removed altogether. It is even possible to uninstall most apps by right-clicking on a tile or other Start Menu entry and selecting the required option from the resulting context menu. For others, there is a command line alternative that uses PowerShell to do removals, as shown below. After this pruning, things were left in such a state that I have not been moved to restore Classic Shell so far.
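
For anyone curious about the PowerShell route, removals are done with commands along these lines; the wildcard is a made-up example, so check what Get-AppxPackage lists on your own system first:

Get-AppxPackage *bingweather* | Remove-AppxPackage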

The Start Menu settings used to be in the same place as those for the taskbar, but they are now found in the new Settings tool. Some are in the Personalisation section, which has its own Start subsection for setting full screen mode or highlighting of new apps, among other things. The equivalent Colours subsection is where you find other settings, like assigning background colours based on those in a desktop background image, which itself is assigned in its own subsection of the Personalisation area.

Virtual Desktops

Initially, I failed to see the point of how Microsoft implemented these and favoured VirtuaWin instead. My main complaint was that the taskbar showed buttons for all open apps, regardless of the desktop on which they were opened. However, that has changed so that your taskbar shows different buttons for each virtual desktop, just like the way that Linux and UNIX do things. Switching between desktops may not be as smooth as on those systems yet, but the default setting is a move in the right direction, and you can change it if you want.

Cortana

This was presented to the world as a voice-operated personal assistant like Apple’s Siri, but I cannot say that I am keen on such things, so I decided to work as I usually do instead. Keyboard interaction works fine, and I have neutered things by turning off web searches on Bing so as to use the thing in much the same way as the search box on the Windows 7 Start Menu. It may be able to do more than that, but I am more than happy to keep my workflow unchanged for now. Cortana’s settings are available via its pop-up menu. Collapsing the search box to an icon, to save space for your pinned and open applications, is available from the Search section of the taskbar context menu (right-clicking the taskbar produces this).

Settings

In Windows 8.x, the Control Panel was not the only area for settings, yet it remained feature complete. The same is not the case for Windows 10, where the new Settings panel is starting to take over from it. The two co-exist for now, but it seems clear that Settings is where everything is headed.

The Personalisation section of the tool has been mentioned in relation to the Start Menu, but there are plenty of others. For instance, the Privacy section definitely needs reviewing, and I found myself changing a lot of the default settings in there. Naturally, there are some other sections in Settings that need hardly any attention from most of us, and these include Ease of Access (accessibility), Time & language, Devices and Network & Internet. The System section has a few settings, like tablet mode, that may need review, and the Update & security section has backup and recovery subsections that may be of interest. The latter of these is where you find the tools for refreshing the state of the system following instability, or for returning to a previous Windows version (7 or 8.x) within thirty days of the upgrade.

Migrating to Windows 10

10th August 2015

While I have had preview builds of Windows 10 in various virtual machines for most of twelve months, actually upgrading the physical and virtual devices that you use for more critical work is a very different matter. Also, Windows 10 is set to be a rolling release with enhancements coming on an occasional basis, so I would like to see what comes before it hits the actual machines that I need to use. That means that a VirtualBox instance of the preview build is being retained to see what happens to it over time.

Some might call it incautious, but I have taken the plunge and completely moved from Windows 8.1 to Windows 10. The first machine that I upgraded was more expendable, and success with that encouraged me to move on to others, even including a Windows 7 machine to see how that went. The 30-day restoration period allows an added degree of comfort when doing all this. The machines that I upgraded were a VMware VM with 32-bit Windows 8.1 Pro (itself part of a 32-bit upgrade cascade involving Windows 7 Home and Windows 8 Pro), a VirtualBox VM with 64-bit Windows 8.1, a physical PC that dual-booted Linux Mint 17.2 and 64-bit Windows 8.1, and an HP Pavilion dm4 laptop (Intel Core i3 with 8 GB RAM and a 1 TB SSHD) with Windows 7.

The main issue that I uncovered with the virtual machines is that the Windows 10 update tool that is downloaded onto Windows 7 and 8.x does not accept the graphics capability found there. This looks like a bug, because the functionality works fine in the Windows Insider builds. The solution was to download the appropriate Windows 10 ISO image for use in the ensuing upgrade. There are 32-bit and 64-bit disk images, with Windows 10 and Windows 10 Pro installation files on each. My own actions used both disk images.

During the virtual machine upgrades, most of the applications that I considered important were carried over from Windows 8.1 to Windows 10 without a bother. Anyone would expect Microsoft’s own software like Word, Excel and others to make the transition, but others like Adobe’s Photoshop and Lightroom made it too, as did Mozilla’s Firefox, albeit requiring a trip to Settings to set it as the default option for opening web pages. Less well-known desktop applications like Zinio (digital magazines) or Mapyx Quo (maps for cycling, walking and the like) were the same. Classic Shell was an exception, but the Windows 10 Start Menu suffices for now anyway. Also, there was a need to reinstate Bitdefender Antivirus Plus using its new Windows 10 compatible installation file. Still, the experience was a big change from the days when you had to reinstall nearly all your software following a Windows upgrade.

The Windows 10 update tool worked well for the Windows 8.1 PC, so no installation disks were needed. The bootloader was not overwritten either, so the Windows option needed selecting from GRUB every time there was a system reboot as part of the installation process, a temporary nuisance that was tolerated since booting into Linux Mint was preserved. Again, no critical software was lost in the process, apart from Kaspersky Internet Security, which needed the Windows 10 compatible version installed, much like Bitdefender, and Epson scanning software that I found was easy to reinstall anyway. Usefully, Anquet’s Outdoor Map Navigator (again used for working with walking and cycling maps) continued to function properly after the changeover.

For the Windows 7 laptop, it was much the same story, albeit with the upgrade being delivered using Windows Update. Then, the main Windows account could be connected to my Outlook account to get everything tied up with the other machines for the first time. Before the obligatory change of background picture, the browns in the one that I had been using were causing interface items to appear in red, not exactly my favourite colour for application menus and the like; now they are in blue. All the upheaval surrounding the operating system upgrade had no effect on the Dropbox or Kaspersky installations that I had in place before it all started. If there is any irritation, it is that unpinning application tiles from the Start Menu or turning off live tiles is not always as instantaneous as I would have liked, and that is all done now anyway.

While writing the above, I could not help thinking that more observations on Windows 10 may follow, but these will do for now. Microsoft had to get this upgrade process right, and it does appear that they have, so credit is due to them for that. So far, I have found Windows 10 to be stable and will be seeing how things develop from here, especially when those new features arrive occasionally, as is the promise that has been made to us users. Hopefully, that will be as painless as it needs to be to ensure trust is retained.

Setting up a WD My Book Live NAS on Ubuntu GNOME 13.10

1st December 2013

The official line from Western Digital is that they do not support the use of their My Book Live NAS drives with Linux or UNIX. However, what that means is that they only develop tools for accessing their products for Windows and maybe OS X. It still doesn’t mean that you cannot access the drive’s configuration settings by pointing your web browser at http://mybooklive.local/. In fact, not having those extra tools is no drawback at all, since the drive can be accessed through your file manager of choice under the Network section; the default name is MyBookLive too, so you easily can find the thing once it is connected to a router or switch anyway.

Once you are in the server’s web configuration area, you can do things like changing its name, updating its firmware, finding out what network address has been assigned to it, creating and deleting file shares, and password-protecting file shares, among other things. These are the kinds of things that come in handy if you are going to have a more permanent connection to the NAS from a PC that runs Linux. The steps that I describe have worked on Ubuntu 12.04 and 13.10 with the GNOME desktop environment.

What surprised me was to discover that you cannot just set up a symbolic link that points to a file share. Instead, it needs to be mounted, and this can be done from the command line using mount or at start-up with /etc/fstab. For this to happen, you need the Common Internet File System utilities, and these are added as follows if you need them (check in the Software Centre or in Synaptic):

sudo apt-get install cifs-utils

Once these are added, you can add a line like the following to /etc/fstab:

//[NAS IP address]/[file share name] /[file system mount point] cifs
credentials=[full file location]/.creds,
iocharset=utf8,
sec=ntlm,
gid=1000,
uid=1000,
file_mode=0775,
dir_mode=0775
0 0

Though I have broken it over several lines above, this is one unwrapped line in /etc/fstab, with all the fields in square brackets populated for your system and no brackets around them. Though there are other ways to specify the server, using its IP address is what has given me the most success, and this is found under Settings > Network on the web console. Next up is the actual file share name on the NAS, and I have used a custom one instead of the default of Public. The NAS file share needs to be mounted to an actual directory in your file system, like /media/nas or whatever you like, and you will need to create this beforehand. After that, you have to specify the file system, and it is cifs instead of more conventional alternatives like ext4 or swap. After this, and before the final two space-delimited zeroes in the line, comes the chunk that deals with the security of the mount point.
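
To make that concrete, a populated line might look like the following, with every value being a made-up placeholder rather than anything from my own setup:

//192.168.1.50/MyFiles /media/nas cifs credentials=/home/[user]/.creds,iocharset=utf8,sec=ntlm,gid=1000,uid=1000,file_mode=0775,dir_mode=0775 0 0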

What I have done in my case is to have a password-protected file share, with the user ID and password placed in a file in my home area where only the owner has read and write permissions (600 in chmod-speak). Preceding the filename with a “.” also affords extra invisibility. That file then is populated with the user ID and password like the following. Of course, the bracketed values have to be replaced with what you have in your case.

username=[NAS file share user ID]
password=[NAS file share password]
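
Assuming that the credentials file has been saved as .creds in your home area, restricting it as described is a one-liner:

chmod 600 ~/.creds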

With the credentials file created, the remaining mount options have to be set. First, there is the character set of the file (usually UTF-8; I got error code 79 when I mistyped this) and the security that is to be applied to the credentials (ntlm in this case). To save having no write access to the mounted file share, the uid and gid for your user need specification, with 1000 being the values for the first non-root user created on a Linux system. After that, it does no harm to set the file and directory permissions, because they only can be set at mount time; using chmod, chown and chgrp later on has no effect whatsoever. Here, I have set permissions to read, write and execute for the owner and the user group, while only allowing read and execute access for everyone else (that’s 775 in the world of chmod).
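
Once the /etc/fstab entry is in place, the mount point can be created and the share brought up without a reboot using the following pair of commands:

sudo mkdir -p /media/nas
sudo mount -a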

All of what I have described here worked for me and had to be gleaned from disparate sources, like Mount Windows Shares Permanently from the Ubuntu Wiki, another blog entry regarding the permissions settings for a CIFS mount point and an Ubuntu forum posting on mounting CIFS with UTF-8 support. Because of the scattering of information, I just felt that it needed to be all together in one place for others to use, and I hope that it fulfils someone else’s needs in a similar way to mine.

Turning off seccomp sandbox in vsftpd

21st September 2013

Within the last week, I set up a virtual web server using Arch Linux to satisfy my own curiosity, since the DIY nature of Arch means that you can build up exactly what you need without having any real constraints put upon you. What didn’t surprise me about this was that it took me more work than the virtual server that I created using Ubuntu Server, but I didn’t expect ProFTPD to be missing from the main repositories. The package can be found in the AUR, but I didn’t fancy the prospect of dragging more work on myself, so I went with vsftpd (Very Secure FTP Daemon) instead. In contrast to ProFTPD, this is available in the standard repositories, and there is a guide to its use in the Arch user documentation.

However, while vsftpd worked well just after installation, connections to the virtual FTP server soon failed, with FileZilla issuing uninformative messages. In fact, it was the standard command line FTP client on my Ubuntu machine that was more revealing. It issued the following message that led me to the cause after my engaging the services of Google:

500 OOPS: priv_sock_get_cmd

With version 3.0 of vsftpd, a new feature was introduced, and it appears that this has caused problems for a few people. That feature is seccomp sandboxing, and it can be turned off by adding the following line in /etc/vsftpd.conf:

seccomp_sandbox=NO
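
For the change to take effect, the daemon needs restarting; with systemd on Arch Linux, that is done like this:

sudo systemctl restart vsftpd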

That solved my problem, and version 3.0.2 of vsftpd should address the issue with seccomp sandboxing anyway. In case that fix isn’t as robust as it should be, perhaps because seccomp isn’t supported by the Linux kernel that you are using, turning off the new feature still needs to remain an option.

Piggybacking an Android Wi-Fi device off your Windows PC’s internet connection

16th March 2013

One of the disadvantages of my Google/Asus Nexus 7 is that it needs a Wi-Fi connection to be of much use. Most of the time this is not a problem, since I also have a Huawei mobile Wi-Fi hub from T-Mobile, and this seems to work just about anywhere in the U.K. Away from the U.K. though, it won’t work because roaming is not switched on for it, and that may be no bad thing given the fees that could introduce. My HTC Desire S could deputise, but I need to watch costs with that too.

There’s also the factor of download caps, and those apply both to the Huawei and to the HTC. Recently, I added Anquet‘s Outdoor Map Navigator (OMN) to my Nexus 7 through the Google Play store for a fee of £7, and that allows access to any walking maps that I have bought from Anquet. However, those are large downloads, so the caps start to come into play. Frugality would help, but I began to look at other possibilities that make use of a laptop’s Wi-Fi functionality.

Looking on the web, I found two options for this that work on Windows 7 (8 should be OK too): Connectify Hotspot and Virtual Router Manager. The first of these is commercial software, but there is a Lite edition for those wanting to try it out; I cannot confirm that it is not a time-limited demo, though that did not seem to be the case, since it looked as if the only things missing from it were features that you’d get by paying for the Pro variant. The second option is an open source one and is free of charge, apart from an invitation to donate to the project.

Though online tutorials show the usage of either of these to be straightforward, my experiences were not all that positive at the outset. In fact, there was something that I needed to do, and that is why this post has come to exist at all. That happened even after the restart that Connectify Hotspot needed as part of its installation; it runs as a system service, so that’s why the restart was needed. In fact, it was Virtual Router Manager that told me what the issue was, and it needed no reboot. Neither did it disconnect the laptop from the network like the Connectify offering did to me, which was the cause of its ejection from that system; limitations in favour of its paid edition aside, Connectify may have the snazzier interface, but I’ll take effective simplicity any day.

Using Virtual Router Manager turns out to be simple enough. It needs a network name (also known as an SSID), a password to restrict who accesses the network and the internet connection to be shared. In my case, the last of these was Local Area Connection on the drop-down list. With all the required information entered, I was ready to start the router using the Start Network Router button. The text on this changes to Stop Network Router when the hub is operational, or at least it should have done for me the first time that I ran it. What I got instead was the following message:

The group or resource is not in the correct state to perform the requested operation.

The above may not say all that much, but it becomes more than ample information if you enter it into the likes of Google. Behind the scenes, Virtual Router Manager is using native Windows functionality to create a Wi-Fi hub from a PC, and it appears to be the Microsoft Virtual WiFi Miniport Adapter from what I have seen. When I tried setting up an ad hoc Wi-Fi network from the laptop to the Nexus 7 using Windows’ own network set-up capability via its Control Panel, it didn’t do what I needed, so there must be something extra that third-party software can do. So, the interesting thing about the solution to my Virtual Router Manager problem was that it needed me to delve into the innards of Windows a little.

Firstly, there’s running Command Prompt (All Programs > Accessories) from the Start Menu with Administrator privileges. It helps here if the account with which you log into Windows is in the Administrators group, since all you have to do then is right-click on the Start Menu entry and choose the Run as administrator entry in the pop-up context menu. With a command line window now open, you then need to issue the following command:

netsh wlan set hostednetwork mode=allow ssid=[network name] key=[password] keyUsage=persistent
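
Incidentally, for those doing everything by hand rather than leaving it to Virtual Router Manager, the hosted network is started and stopped with the following commands, again from an administrator command prompt:

netsh wlan start hostednetwork
netsh wlan stop hostednetwork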

When that had done its thing, Virtual Router Manager worked without a hitch, though it did turn itself off after a while, and that may be no bad thing from the security standpoint. On the Android side, it was a matter of going into Settings > Wi-Fi and choosing the new network that had been created on the laptop. This sort of thing may apply to other types of tablet (dare I mention iPads?), so you could connect anything to the hub without needing to do any more on the Windows side.

For those wanting to know what’s going on behind the scenes on Windows, there’s a useful tutorial on Instructables that shows what third-party software is saving you from having to do. Even if I never go down the more DIY route, I probably have saved myself having to buy a mobile Wi-Fi hub for any trips to Éire. For now, the Irish 3G dongle that I already have should be enough.
