Technology Tales

Adventures & experiences in contemporary technology

Removing advertisements from uTorrent

12th July 2014

BitTorrent may have got some bad press due to its use for downloading copyrighted material such as music and movies, but it has its legitimate uses too. In my case, many a Linux distro has been downloaded in this way, and it takes the weight off servers by distributing the load across users instead.

Speaking of Linux, my general choice of client has been Transmission, though there are others. In the Windows world, there is a selection that includes one from BitTorrent, Inc. themselves. However, many favour uTorrent (or μTorrent), so that is the one that I tried, and there are free and subscription-based options. To me, the latter feels like overkill when a perpetual licence could be made available as an easy way to dispatch the advertisements on display in the free version.

As much as I appreciate the need for ads to provide revenue to a provider of otherwise free software, they do need to be tasteful, and those in uTorrent often were for dating websites that had no scruples about exposing folk to images unsuitable for a work setting. Those for gaming websites were more tolerable in comparison. With no perpetual licence option available, I was left pondering alternatives like qBittorrent instead. That is Free Software too, so it has that added advantage.

However, I uncovered an article on Lifehacker that sorted out my problem with uTorrent. The trick is to go to Options > Preferences via the menus and then to the Advanced section in the dialogue box that appears. In there, go looking for each of the following options and set each one to false in turn:

  • offers.left_rail_offer_enabled/left_rail_offer
  • gui.show_plus_upsell
  • offers.sponsored_torrent_offer_enabled/sponsored_torrent_offer_enabled
  • bt.enable_pulse
  • gui.show_notorrents_node
  • offers.content_offer_autoexec

In practice, I found some of the above already set to false and another missing altogether, but setting those that remained from true to false cleaned up the interface, so I hope never to glimpse those unsuitable ads again. The maker of uTorrent needs to look at the issue, or revenue could get lost and prospective users could see the operation as being cheapened by what is displayed. As for me, I am happy to have gained something in the way of control.

Setting up a WD My Book Live NAS on Ubuntu GNOME 13.10

1st December 2013

The official line from Western Digital is that they do not support the use of their My Book Live NAS drives with Linux or UNIX. However, what that means is that they only develop tools for accessing their products for Windows and maybe OS X. It still doesn’t mean that you cannot access the drive’s configuration settings by pointing your web browser at http://mybooklive.local/. In fact, not having those extra tools is no drawback at all, since the drive can be accessed through your file manager of choice under the Network section; the default name is MyBookLive too, so you easily can find the thing once it is connected to a router or switch.
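For example, entering a location like the following into the location bar of GNOME’s Nautilus (Ctrl + L reveals it) should take you straight to the file shares; the name here assumes that the default has been kept:

smb://mybooklive/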

Once you are in the server’s web configuration area, you can do things like changing its name, updating its firmware, finding out what network address has been assigned to it, creating and deleting file shares, password protecting file shares and other things. These are the kinds of things that come in handy if you are going to have a more permanent connection to the NAS from a PC that runs Linux. The steps that I describe have worked on Ubuntu 12.04 and 13.10 with the GNOME desktop environment.

What I was surprised to discover was that you cannot just set up a symbolic link that points to a file share. Instead, it needs to be mounted, and this can be done from the command line using mount or at start-up with /etc/fstab. For this to happen, you need the Common Internet File System utilities, and these are added as follows if you need them (check in the Software Centre or in Synaptic):

sudo apt-get install cifs-utils

Once these are added, you can add a line like the following to /etc/fstab:

//[NAS IP address]/[file share name] /[file system mount point] cifs
credentials=[full file location]/.creds,
iocharset=utf8,
sec=ntlm,
gid=1000,
uid=1000,
file_mode=0775,
dir_mode=0775
0 0

Though I have broken it over several lines above, this is one unwrapped line in /etc/fstab, with all the fields in square brackets populated for your system and no brackets left around the values. Though there are other ways to specify the server, using its IP address is what has given me the most success; this is found under Settings > Network on the web console. Next up is the actual file share name on the NAS; I have used a custom one instead of the default of Public. The NAS file share needs to be mounted onto an actual directory in your file system like /media/nas or whatever you like, and you will need to create this beforehand. After that, you have to specify the file system, and it is cifs instead of more conventional alternatives like ext4 or swap. After this and before the final two space-delimited zeroes in the line comes the chunk that deals with the security of the mount point.
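To make the layout more concrete, here is a hypothetical filled-in example; the IP address, share name, mount point and credentials file location are all made-up ones for illustration:

//192.168.1.5/MyFiles /media/nas cifs credentials=/home/user/.creds,iocharset=utf8,sec=ntlm,gid=1000,uid=1000,file_mode=0775,dir_mode=0775 0 0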

What I have done in my case is to have a password-protected file share, with the user ID and password placed in a file in my home area that only the owner can read and write (600 in chmod-speak). Preceding the filename with a “.” also affords extra invisibility. That file then is populated with the user ID and password like the following; of course, the bracketed values have to be replaced with what you have in your case.

username=[NAS file share user ID]
password=[NAS file share password]
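Restricting access so that only the owner can read or write the file is a single command; a quick sketch, assuming that the file lives at ~/.creds:

chmod 600 ~/.creds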

With the credentials file created, the remaining mount options have to be set. First, there is the character set for the mount (usually UTF-8, and I got error code 79 when I mistyped this) and the security protocol used for authentication (ntlm in this case). To save having no write access to the mounted file share, the uid and gid for your user need specifying, with 1000 being the values for the first non-root user created on a Linux system. After that, it does no harm to set the file and directory permissions because they can only be set at mount time; using chmod, chown and chgrp later on has no effect whatsoever. Here, I have set permissions to read, write and execute for the owner and the user group, while only allowing read and execute access for everyone else (that’s 775 in the world of chmod).
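Incidentally, there is no need to reboot to test a new /etc/fstab entry, because asking mount to process everything listed in there reveals any mistakes straight away:

sudo mount -a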

All of what I have described here worked for me and had to be gleaned from disparate sources like Mount Windows Shares Permanently from the Ubuntu Wiki, another blog entry regarding the permissions settings for a CIFS mount point and an Ubuntu forum posting on mounting CIFS with UTF-8 support. Because of the scattering of information, I just felt that it needed to be all together in one place for others to use, and I hope that it fulfils someone else’s needs in a similar way to mine.

Turning off seccomp sandbox in vsftpd

21st September 2013

Within the last week, I set up a virtual web server using Arch Linux to satisfy my own curiosity, since the DIY nature of Arch means that you can build up exactly what you need without having any real constraints put upon you. What didn’t surprise me about this was that it took more work than the virtual server that I created using Ubuntu Server, but I didn’t expect ProFTPD to be missing from the main repositories. The package can be found in the AUR, but I didn’t fancy the prospect of dragging more work upon myself, so I went with vsftpd (Very Secure FTP Daemon) instead. In contrast to ProFTPD, this is available in the standard repositories, and there is a guide to its use in the Arch user documentation.

However, while vsftpd worked well just after installation, connections to the virtual FTP server soon failed, with FileZilla issuing uninformative messages. In fact, it was the standard command line FTP client on my Ubuntu machine that was more revealing. It issued the following message that led me to the cause after my engaging the services of Google:

500 OOPS: priv_sock_get_cmd

With version 3.0 of vsftpd, a new feature was introduced, and it appears that this has caused problems for a few people. That feature is seccomp sandboxing, and it can be turned off by adding the following line to /etc/vsftpd.conf:

seccomp_sandbox=NO
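The change only takes effect once the daemon has been restarted; on a systemd-based distro like Arch, that would be something along these lines:

sudo systemctl restart vsftpd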

That solved my problem, and version 3.0.2 of vsftpd should address the issue with seccomp sandboxing anyway. In case that fix isn’t as robust as it should be, perhaps because seccomp isn’t supported by the Linux kernel that you are using, turning off the new feature still needs to remain an option.

Installing Nightingale music player on Ubuntu 13.04

25th June 2013

Ever since the Songbird project concentrated its efforts on supporting only Windows and OS X, the Firefox-based music player has been absent from a Linux user’s world. However, the project is open source, and a fork called Nightingale now fulfils the same needs. Intriguingly, it too is available for Windows and OS X users, so I am left wondering why that overlap has happened. However, Songbird also is available as a web app and as an app on both Android and iOS, while Nightingale sticks to being a desktop application.

To add it to Ubuntu, you need to set up a new repository. That can be done using the Software Centre but issuing a command in a terminal can be so much quicker and cleaner so here it is:

sudo add-apt-repository ppa:nightingaleteam/nightingale-release

Apart from entering your password, there will be a prompt to continue by pressing the carriage return key or to cancel with CTRL + C. For our purposes, it is the first action that’s needed, and once that has done the needful, you can execute the following command:

sudo apt-get update && sudo apt-get install nightingale

This is in two parts: the first updates the repositories on your system and the second actually installs the software. When that is complete, you are ready to run Nightingale and, with the repository in place, staying up to date is no chore either. In fact, using the above commands brings another advantage: they should work in any Ubuntu derivative, such as Linux Mint.

Piggybacking an Android Wi-Fi device off your Windows PC’s internet connection

16th March 2013

One of the disadvantages of my Google/Asus Nexus 7 is that it needs a Wi-Fi connection for online use. Most of the time, this is not a problem, since I also have a Huawei mobile Wi-Fi hub from T-Mobile, and this seems to work just about anywhere in the U.K. Away from the U.K. though, it won’t work because roaming is not switched on for it, and that may be no bad thing given the fees that could introduce. My HTC Desire S could deputise, but I need to watch costs with that too.

There’s also the factor of download caps, and those apply both to the Huawei and to the HTC. Recently, I added Anquet’s Outdoor Map Navigator (OMN) to my Nexus 7 through the Google Play store for a fee of £7, and that allows access to any walking maps that I have bought from Anquet. However, those are large downloads, so the caps start to come into play. Frugality would help, but I began to look at other possibilities that make use of a laptop’s Wi-Fi functionality.

Looking on the web, I found two options for this that work on Windows 7 (8 should be OK too): Connectify Hotspot and Virtual Router Manager. The first of these is commercial software, but there is a Lite edition for those wanting to try it out; I cannot confirm that this is not a time-limited demo, though that did not seem to be the case, since it looked as if the Lite edition merely lacks features that you would get if you paid for the Pro variant. The second option is an open source one and is free of charge, apart from an invitation to donate to the project.

Though online tutorials show the usage of either of these to be straightforward, my experiences were not all that positive at the outset. In fact, there was something extra that I needed to do, and that is why this post has come to exist at all. That applied even after the restart that Connectify Hotspot needed as part of its installation; it runs as a system service, so that’s why the restart was needed. In fact, it was Virtual Router Manager that told me what the issue was, and it needed no reboot. Neither did it cause network disconnection of the laptop like the Connectify offering did on me, and that was the cause of the latter’s ejection from that system; limitations in favour of its paid edition aside, it may have the snazzier interface, but I’ll take effective simplicity any day.

Using Virtual Router Manager turns out to be simple enough. It needs a network name (also known as an SSID), a password to restrict who accesses the network and the internet connection to be shared. In my case, the last of these was Local Area Connection on the drop-down list. With all the required information entered, I was ready to start the router using the Start Network Router button. The text on this changes to Stop Network Router when the hub is operational, or at least it should have done for me on the first time that I ran it. What I got instead was the following message:

The group or resource is not in the correct state to perform the requested operation.

The above may not say all that much, but it becomes more than ample information if you enter it into the likes of Google. Behind the scenes, Virtual Router Manager uses native Windows functionality to create a Wi-Fi hub from a PC, and it appears to be the Microsoft Virtual Wi-Fi Miniport Adapter from what I have seen. When I tried setting up an ad hoc Wi-Fi network from a laptop to the Nexus 7 using Windows’ own network set-up capability via its Control Panel, it didn’t do what I needed, so there evidently is something that third party software can add. So, the interesting thing about the solution to my Virtual Router Manager problem was that it needed me to delve into the innards of Windows a little.

Firstly, there’s running Command Prompt (All Programs > Accessories) from the Start Menu with Administrator privileges. It helps here if the account with which you log into Windows is in the Administrators group, since all you have to do then is right click on the Start Menu entry and choose the Run as administrator entry in the pop-up context menu. With a command line window now open, you then need to issue the following command:

netsh wlan set hostednetwork mode=allow ssid=[network name] key=[password] keyUsage=persistent

When that had done its thing, Virtual Router Manager worked without a hitch, though it did turn itself off after a while, and that may be no bad thing from the security standpoint. On the Android side, it was a matter of going into Settings > Wi-Fi and choosing the new network that had been created on the laptop. This sort of thing may apply to other types of tablet (dare I mention iPads?), so you could connect anything to the hub without needing to do any more on the Windows side.

For those wanting to know what’s going on behind the scenes on Windows, there’s a useful tutorial on Instructables that shows what third party software is saving you from having to do. Even if I never go down the more DIY route, I probably have saved myself having to buy a mobile Wi-Fi hub for any trips to Éire. For now, the Irish 3G dongle that I already have should be enough.
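As a footnote for anyone who does go fully manual, the hosted network defined by the netsh command above is started, inspected and stopped with commands like these, again issued from an administrator command prompt:

netsh wlan start hostednetwork
netsh wlan show hostednetwork
netsh wlan stop hostednetwork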

Setting up MySQL on Sabayon Linux

27th September 2012

For quite a while now, I have had offline web servers for doing a spot of tweaking and tinkering away from the gaze of web users that visit what I have on there. Therefore, one of the tests that I apply to any prospective main Linux distro is the ability to set up a web server on there. This is more straightforward for some than for others. For Ubuntu and Linux Mint, it is a matter of installing the required software and doing a small bit of configuration. My experience with Sabayon is that it needs a little more effort than this, and I am sharing it here for the installation of MySQL.

The first step is to install the software using the commands that you find below. The first pops the software onto the system, while the second completes the set-up. The --basedir option is needed with the latter because it won’t find things without it. It specifies the base location on the system, and it’s /usr in my case.

sudo equo install dev-db/mysql
sudo /usr/bin/mysql_install_db --basedir=/usr

With the above complete, it’s time to start the database server and set the password for the root user. That’s what the two following commands achieve. Once your root password is set, you can go about creating databases and adding other users using the MySQL command line.

sudo /etc/init.d/mysql start
mysqladmin -u root password 'password'
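From there, creating a database and a user with access to it might look like the following after logging in with mysql -u root -p; the database and user names here are hypothetical ones for illustration:

CREATE DATABASE mydb;
CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'anotherpassword';
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'localhost';
FLUSH PRIVILEGES;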

The last step is to set the database server to start every time you start your Sabayon system. The first command adds an entry for MySQL to the default run level so that this happens. The purpose of the second command is to check that the entry has been added before restarting your computer to see it in action. This procedure also is needed for having an Apache web server behave in the same way, so the commands are worth knowing and may even have a use for other services on your system. ProFTP is another that comes to mind, for instance.

sudo rc-update add mysql default
sudo rc-update show | grep mysql

Changing to web fonts

12th February 2012

While you can add Windows fonts to Linux installations, I have found that their display can be flaky, to say the least. Linux Mint and Ubuntu display them as sharply as I’d like, but I have struggled to get the same sort of results from Arch Linux, and I am not so sure about Fedora or openSUSE either.

That has caused me to look at web fonts for my websites, with Google Web Fonts doing what I need so far via Open Sans and Arimo. There have been others with which I have dallied, such as Droid Sans, but these are the ones on which I have settled for now. Both are in use on this website, and I added calls for them to the web page headers using the following code (lines are wrapping due to space constraints):

<link href="http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,400,300,600,700" rel="stylesheet" type="text/css">
<link href='http://fonts.googleapis.com/css?family=Arimo:400,400italic,700,700italic' rel='stylesheet' type='text/css'>

With those lines in place, it then is a matter of updating font-family and font declarations in CSS style sheets with “Open Sans” or “Arimo” as needed, while keeping alternatives defined in case the Google font service goes down for whatever reason. A look at a development release of the WordPress Twenty Twelve theme caused me to come across Open Sans, and I like it for its clean lines; Arimo, which was found by looking through the growing Google Web Fonts catalogue, is not far behind. Looking through that catalogue now causes a round of indecision for me, since there is so much choice. For that reason, I think it better to be open to the recommendations of others.
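For illustration, the declarations in question take this general shape, with the fallback fonts being example choices rather than recommendations:

body { font-family: "Open Sans", Helvetica, Arial, sans-serif; }
h1, h2, h3 { font-family: "Arimo", Verdana, sans-serif; }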

Sorting out MySQL on Arch Linux

5th November 2011

Seeing Arch Linux running so solidly in a VirtualBox virtual machine has me contemplating whether I should have it installed on a real PC. Saying that, recent announcements regarding the implementation of GNOME 3 in Linux Mint have caught my interest, even if the idea of using a rolling distribution as my main home operating system still has a lot of appeal for me. Having an upheaval come my way every six months when a new version of Linux Mint is released is the main cause of that.

While remaining undecided, I continue to evaluate the idea of Arch Linux acting as my main OS for day-to-day home computing. Towards that end, I have set up a working web server instance on there using the usual combination of Apache, Perl, PHP and MySQL. Of these, it was MySQL that went the least smoothly of all because the daemon wouldn’t start for me.

It was then that I started to turn to Google for inspiration, and a range of actions resulted that combined to give the result that I wanted. One problem was a lack of disk space caused by months of software upgrades. Since package managers in other Linux distros allow you to clear some disk space of obsolete installation files, I decided to see if it was possible to do the same with pacman, the Arch Linux command line package manager. The following command, executed as root, cleared about 2 GB of cruft for me:

pacman -Sc

The S in the switch tells pacman to perform package database synchronisation, while the c instructs it to clear its cache of obsolete packages. In fact, using the following command as root every time an update is performed both updates software and removes redundant or outmoded packages:

pacman -Syuc

So I don’t forget the needful housekeeping, this will be what I use in future, with the y being the switch for a refresh and the u triggering a system upgrade. It’s nice to have everything happen together without too much effort.

To do the required debugging that led me to the above along with other things, I issued the following command:

mysqld_safe --datadir=/var/lib/mysql/ &

This starts up the MySQL daemon in safe mode if all is working properly, which it wasn’t in my case. Nevertheless, it creates a useful log file called myhost.err in /var/lib/mysql/. This gave me the messages that allowed the debugging of what was happening. It led me to installing net-tools and inetutils using pacman; it was the latter of these that put hostname on my system and got the MySQL server start-up a little further along. Other actions included unlocking the ibdata1 data file and removing the ib_logfile0 and ib_logfile1 files so as to gain something of a clean sheet. The kill command was used to shut down any lingering mysqld sessions too. To ensure that the ibdata1 file was unlocked, I executed the following commands:

mv ibdata1 ibdata1.bad
cp -a ibdata1.bad ibdata1

These renamed the original and then created a new duplicate of it, with the -a switch on the cp command forcing copying with greater integrity than normal. Along with the various file operations, I also created a link to my.cnf, the MySQL configuration file on Linux systems, in /etc using the following command executed by root:

ln -s /etc/mysql/my.cnf /etc/my.cnf

While I am unsure if it made a real difference, I also uncommented the lines in that same file that pertained to InnoDB tables. What directed me to these were complaints from mysqld_safe in the myhost.err log file. All I did was to uncomment the lines beginning with “innodb”; these were 116-118, 121-122 and 124-127 in my configuration file, but it may be different in yours.
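For a flavour of what is involved, the lines in question look something like the following; the values here are only the sort found in a stock my.cnf rather than a recommendation for your system:

innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
innodb_buffer_pool_size = 16M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M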

After all the above, the MySQL daemon ran happily and, more importantly, started when I rebooted the virtual machine. Thinking about it now, I believe that it was a lack of disk space, the locking of a data file and the lack of InnoDB support that were stopping the MySQL service from running. Running commands like mysqld start wasn’t yielding useful messages, so a lot of digging was needed to get the result that I needed. In fact, that’s one of the reasons why I am sharing my experiences here.

In the end, creating databases and loading them with data was all that was needed for me to start seeing functioning websites on my (virtual) Arch Linux system. It turned out to be another step on the way to making it workable as a potential replacement for the Linux distributions that I use most often (Linux Mint, Fedora and Ubuntu).

A waiting game

20th August 2011

Having been away every weekend in July, I was looking forward to a quiet one at home to start August. However, there was a problem with one of my websites hosted by Fasthosts that was set to occupy me for the weekend and a few weekday evenings afterwards.

The issue appeared to be slow site response so I followed advice given to me by second line support when this website displayed the same type of behaviour: upgrade from Apache 1.3 to 2.2 using the control panel. Unfortunately for me, that didn’t work smoothly at all and there seemed to be serious file loss as a result. Raising a ticket with the support desk only got me the answer that I had to wait for completion and I now have come to the conclusion that the migration process may have got stuck somewhere along the way. Maybe another ticket is in order.

There were a number of causes of the waiting that gave rise to the title of this post. Firstly, support for low-cost hosting isn’t exactly timely, and I do wonder if it’s any better for more prominent websites. Restoration of websites by FTP is another activity that takes up plenty of time, as does rebuilding databases and populating them with data. Lastly, there’s changing the DNS details for a website. In hindsight, there may be ways of reducing the time demands of these. For instance, contacting a support team by telephone may be quicker, unless there is a massive queue awaiting attention; there was a wait of several hours one night when a security changeover affected a multitude of Fasthosts users. Of course, the telephone is not a panacea at the best of times, as we have known since all those stories began to do the rounds in the middle of the 1990s. Doing regular backups would help the second, though the ones that I was using for the restoration weren’t too bad at all. Nevertheless, they weren’t complete, so there was unfinished business that required resolution later. The last of these is helped along by more regular PC restarts, so that unexpected discovery will remain a lesson for the future, though I don’t plan on moving websites around for a while. After all, getting DNS details propagated more quickly really is a big help.

While awaiting a response from Fasthosts, I began to ponder the idea of using an alternative provider. Perusal of the latest digital edition of .Net (I now subscribe to the non-paper edition so as to cut down on the clutter caused by having paper copies about the place) ensued before I decided to investigate the option of using Webfusion. Having decided to stick with shared hosting, I gave their Unlimited Linux option a go. For someone accustomed to monthly billing, it was unusual to see annual, biannual and triannual payment schemes too. The first of these appears to be the default option, so a little care and attention is needed if you want something else. To encourage you to stay with Webfusion longer, the monthly cost is on a sliding scale: the longer the period you buy, the lower the cost of a month’s hosting.

Once the account was set up, I added a database and set to the long process of uploading files from my local development site using FileZilla. Having got a MySQL backup from the Fasthosts site, I used the provided phpMyAdmin interface to upload the data in pieces not exceeding the 8 MB file size limitation. It isn’t possible to connect remotely to the MySQL server using the likes of MySQL Administrator, so I had to bear with this not so smooth process. SSH is another connection option that isn’t available, but I never used it much on Fasthosts sites anyway. There were some questions to the support people along the way; the first of these got a timely answer, though later ones took longer. Still, getting advice on the address of the test website was a big help while I was sorting out the DNS changeover.

Speaking of the latter, it took a little doing and no little poking around Webfusion’s FAQs before I made it happen. First, I tried using name servers that I found listed in one of the articles, but this didn’t seem to achieve the end that I needed. Mind you, I would have seen the effects of this change a little earlier if I had rebooted my PC sooner, but that didn’t occur to me at the time. In the end, I switched to using my domain provider’s name servers and added the required information to them to get things going. It was then that my website was back online in some fashion, so I could tidy up any outstanding loose ends.

With the site essentially operating again, it was time to iron out the rough edges. The biggest of these was that mod_rewrite doesn’t seem to work the same on the Webfusion server as it does on the Fasthosts ones. This meant that I needed to use the SCRIPT_URI CGI variable instead of PATH_INFO in order to keep using clean URLs for a PHP-powered photo gallery that I have. It took me a while to figure that out, and I felt much better when I managed to get the results that I needed. However, I also took the chance to tidy up site addresses with redirections in my .htaccess file in an attempt to ensure that I lost no regular readers, something that I seem to have achieved with some success because one such visitor later commented on a new entry in the outdoors blog.
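Those tidying redirections were simple one-liners of the following general form, with the paths being hypothetical ones for illustration:

Redirect 301 /old-section/page.html http://www.example.com/new-section/page.html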

Once any remaining missing images were instated or references to them removed, it was then time to do a full backup for the sake of safety. The first of these activities was yet another time consumer, while the second didn’t take so long; I need to do this more often in case anything happens. Hopefully though, the relocated site’s performance continues to be as solid as it is now.

The question as to what to do with the Fasthosts webspace remains outstanding. Currently, they are offering free upgrades to existing hosting packages so long as you commit for a year. After my recent experience, I cannot say that I’m so sure about doing that kind of thing. In fact, the observation leaves me wondering if instating that very upgrade was the cause of breaking my site; it appears that the migration from Apache 1.3 to 2.2 got stuck for whatever reason. Maybe another ticket should be raised, but I am not decided on that yet. All in all, what happened to that Fasthosts website wasn’t the greatest of experiences, but the service offered by Webfusion has been rock solid thus far. While wondering if the service from Fasthosts isn’t as good as it once was, I’ll keep an open mind and wait to see if my impressions change over time.

Tinkering with Textpattern

26th April 2011

Textpattern 5 may be on the way, but that isn’t to say that work on the 4.x branch has completely stopped, though it is less of a priority at the moment. After all, version 4.4.0 was slipped out not so long ago as a security release, a discovery that I made while giving a section of my outdoors website a spring refresh. During that activity, the TinyMCE plugin started to grate with its issuing of error messages, in the form of dialogue boxes needing user input to get rid of them, every time an article was opened or saved. Because of that nuisance, the guilty hak_tinymce plugin was ejected, with joh_admin_ckeditor replacing it and bringing CKEditor into use for editing my Textpattern articles. It is working well, though the narrow editing area is causing the editor toolbars to take up too much vertical space; you can resize the editor to solve this, though it would be better if it could be made to remember those size settings.

Another find was atb_editarea, a plugin that colour codes (X)HTML, PHP and CSS by augmenting the standard text editing for pages and stylesheets in the Presentation part of the administration interface. If I had had this at the start of my redesign, it would have made doing the needful that bit more user-friendly than the basic editing facilities that Textpattern offers by default. Of course, the tinkering never stops, so there’s no such thing as finding something too late in the day for it to be useful.

Textpattern may not be getting the attention that some of its competitors are enjoying, but it isn’t being neglected either; its users and developer community see to that. Saying that, it needs to get better at announcing new versions of the CMS so that they don’t slip by the likes of me, who isn’t looking all the time. With a major change of version number involved, curiosity is aroused as to what is coming next. So far, Textpattern appears to be taking an evolutionary course, and there’s a lot to be said for such an approach.
