Technology Tales

Adventures & experiences in contemporary technology

Creating a test web server using Ubuntu Server 13.04 and VirtualBox

1st September 2013

Having seen Linux Format cover tools like Vagrant and Puppet that manage virtual machines, I have been attracted by the prospect of a virtual web server running on my own PC. Certainly, having the LAMP software stack in a VM means that the corresponding tools don’t need to be added to a host system should its operating system need a fresh installation.

As intriguing as tools like Vagrant may be, I decided that I needed to learn a bit more about getting server instances set up in VirtualBox anyway. Thus, I went and downloaded the latest version of Ubuntu Server and gave that a go. One lesson that I learned was that Bridged Networking needs to be added to the VM before installation of the operating system unless you fancy overcoming the challenge of getting Ubuntu Server to recognise an altered or additional network interface. In my case, I added an extra adapter for the Bridged Networking and left the original in place as NAT. The reason for having Bridged Networking set up is that it allows access to the virtual web server from the host once you know the IP address and that information can be obtained by executing the ifconfig command on the virtual machine.
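As an aside, the extra adapter can also be added from the command line while the virtual machine is powered off. Here is a sketch, assuming the VM is named Ubuntu Server and that the host network interface is eth0; both names will vary by set-up:

VBoxManage modifyvm "Ubuntu Server" --nic2 bridged --bridgeadapter2 eth0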

With the networking sorted, the next step was to install the 64-bit edition of Ubuntu Server. Unlike its desktop counterpart, this is all driven by text menus but remains fairly intuitive and there is hardly anything there that you wouldn’t see with another Linux distribution. A useful addition is a menu for selecting the types of server services that you’d like to see installed. From this, I chose the web server and SSH options, and I seem to remember that there was a database server option too. If there had been an FTP server option, I would have chosen that too, but it was no ordeal to add ProFTPd later on anyway.

All of this setup was done through the VirtualBox GUI just to keep life more straightforward. Even so, I only selected 12 MB of video memory and was tempted to cut the overall memory back from 512 MB, but left things be for now. However, what I have begun to do is start and stop the virtual machine from the command line since servers are headless operations anyway. With SSH enabled, there is little need to have the VirtualBox GUI going. The command for starting the server is below:

VBoxManage startvm "Ubuntu Server" --type=headless

There is a VBoxHeadless command for the same end too but VBoxManage does what I need. The startvm option is what tells VBoxManage to start the server and the virtual machine’s name is enclosed in quotes. The --type=headless ensures that no window pops up. To stop the virtual web server cleanly, a command like the following is needed:

VBoxManage controlvm "Ubuntu Server" acpipowerbutton

Again, the VBoxManage command gets used and the acpipowerbutton option ensures that a clean shutdown is performed. Not doing so results in the server not fully starting up the next time around, according to my experiences thus far. Getting the virtual web server to start and stop automatically along with the host machine itself would be a worthwhile next step, but this looks more complex, so I plan to leave things a while before trying that experiment.
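As a quick check that a shutdown has completed, the list of running virtual machines can be consulted; the server should disappear from it once it has stopped:

VBoxManage list runningvms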

Setting up MySQL on Sabayon Linux

27th September 2012

For quite a while now, I have had offline web servers for doing a spot of tweaking and tinkering away from the gaze of the web users who visit what I have online. Therefore, one of the tests that I apply to any prospective main Linux distro is the ability to set up a web server on there. This is more straightforward for some than for others. For Ubuntu and Linux Mint, it is a matter of installing the required software and doing a small bit of configuration. My experience with Sabayon is that it needs a little more effort than this, so I am sharing the steps for the installation of MySQL here.

The first step is to install the software using the commands that you find below. The first pops the software onto the system while the second completes the set-up. The --basedir option is needed with the latter because it won’t find things without it. It specifies the base location on the system, /usr in my case.

sudo equo install dev-db/mysql
sudo /usr/bin/mysql_install_db --basedir=/usr

With the above complete, it’s time to start the database server and set the password for the root user. That’s what the two following commands achieve. Once your root password is set, you can go about creating databases and adding other users using the MySQL command line.

sudo /etc/init.d/mysql start
mysqladmin -u root password 'password'
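Once the root password is in place, creating a database and a dedicated user from the MySQL command line looks something like the following; the database, user and password names here are hypothetical stand-ins:

mysql -u root -p
create database testdb;
grant all on testdb.* to 'testuser'@'localhost' identified by 'testpassword';
flush privileges;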

The last step is to set the database server to start every time you start your Sabayon system. The first command adds an entry for MySQL to the default run level so that this happens. The purpose of the second command is to check that the entry was added before you restart your computer to discover whether it really works. This procedure also is needed for having an Apache web server behave in the same way, so the commands are worth having and may even have a use for other services on your system. ProFTPd is another that comes to mind, for instance.

sudo rc-update add mysql default
sudo rc-update show | grep mysql
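For instance, and assuming the apache2 service name that Sabayon inherits from Gentoo, the Apache equivalents would be these:

sudo rc-update add apache2 default
sudo rc-update show | grep apache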

Getting Gnome Shell going for Fedora 16 running in VirtualBox

5th December 2011

There are a number of complaints out there about how hard it is to get GNOME Shell running for a Fedora 16 installation in a VirtualBox virtual machine. As with earlier versions of Fedora, preparation remains a matter of having make, gcc and kernel-devel (kernel headers, in other words). While I have got away with just those, adding dkms (dynamic kernel module support) to the list might be no bad idea either. To get all of those instated, it is a matter of running the following command as root or using sudo:

yum -y install make gcc kernel-devel dkms

The -y switch ensures that any Y/N prompts that usually appear are suppressed and that the installation is completed. Just leave it out if you are inclined to get second thoughts. Another item that has been needed with a previous release of Fedora is libgomp but I haven’t had to add this for Fedora 16 if I remember correctly.

Once those are in place, it is time to install the VirtualBox Guest Additions.  Going to Devices > Install Guest Additions… mounts a virtual CD that can be used for the installation of the various drivers that are needed. To do the installation, first go to where the installer is located using the following command:

cd /media/VBOXADDITIONS_4.1.6_74713/

Note that this location will change according to the release and build numbers of VirtualBox but the process essentially will be the same other than this. Once in there, issue the following command as root or using sudo:

./VBoxLinuxAdditions.run

Hopefully, this will complete without errors now that the precursor software has been added beforehand. However, there is one more thing that needs doing or you will get the GNOME 3 fallback desktop instead. It pertains to SELinux, an old adversary of mine that got in the way when I was setting up a web server on a machine running Fedora. It doesn’t recognise the new VirtualBox drivers as it should, so the following command needs executing as root or using sudo:

restorecon -R -v /opt

Doing this restores the SELinux contexts for the /opt directories within which the VirtualBox software directories are found. The -R switch tells it to act recursively and -v makes it verbose. When it has done its work, hopefully successfully, it is time to reboot the virtual machine and you should have a GNOME Shell desktop interface when you log in.
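Should you wish to confirm the change before rebooting, listing the contexts for the Guest Additions installation directory is a quick check; the version number in the path will vary with your release of VirtualBox:

ls -dZ /opt/VBoxGuestAdditions-*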

Sorting out MySQL on Arch Linux

5th November 2011

Seeing Arch Linux running so solidly in a VirtualBox virtual machine has me contemplating whether I should have it installed on a real PC. Saying that, recent announcements regarding the implementation of GNOME 3 in Linux Mint have caught my interest, even if the idea of using a rolling distribution as my main home operating system still has a lot of appeal for me. Having an upheaval come my way every six months when a new version of Linux Mint is released is the main cause of that.

While remaining undecided, I continue to evaluate the idea of Arch Linux acting as my main OS for day-to-day home computing. Towards that end, I have set up a working web server instance on there using the usual combination of Apache, Perl, PHP and MySQL. Of these, it was MySQL that went the least smoothly of all because the daemon wouldn’t start for me.

It was then that I started to turn to Google for inspiration and a range of actions resulted that combined to give the result that I wanted. One problem was a lack of disk space caused by months of software upgrades. Since package managers in other Linux distros allow you to clear obsolete installation files from disk, I decided to see if it was possible to do the same with pacman, the Arch Linux command line package manager. The following command, executed as root, cleared about 2 GB of cruft for me:

pacman -Sc

The S in the switch tells pacman to perform package database synchronization while the c instructs it to clear its cache of obsolete packages. In fact, using the following command as root every time an update is performed both updates software and removes redundant or outmoded packages:

pacman -Syuc

So that I don’t forget the needful housekeeping, this will be what I use in future, with the y being the switch for a refresh and the u triggering a system upgrade. It’s nice to have everything happen together without too much effort.

To do the required debugging that led me to the above along with other things, I issued the following command:

mysqld_safe --datadir=/var/lib/mysql/ &

This starts up the MySQL daemon in safe mode if all is working properly, which it wasn’t in my case. Nevertheless, it creates a useful log file called myhost.err in /var/lib/mysql/. This gave me the messages that allowed the debugging of what was happening. It led me to installing net-tools and inetutils using pacman; it was the latter of these that put hostname on my system and got the MySQL server start-up a little further along. Other actions included unlocking the ibdata1 data file and removing the ib_logfile0 and ib_logfile1 files so as to gain something of a clean sheet. The kill command was used to shut down any lingering mysqld sessions too. To ensure that the ibdata1 file was unlocked, I executed the following commands:

mv ibdata1 ibdata1.bad
cp -a ibdata1.bad ibdata1

These renamed the original and then created a new duplicate of it, with the -a switch on the cp command preserving the file’s attributes and permissions in the copy. Along with the various file operations, I also created a link to my.cnf, the MySQL configuration file on Linux systems, in /etc using the following command executed by root:

ln -s /etc/mysql/my.cnf /etc/my.cnf

While I am unsure if this made a real difference, I also uncommented the lines in the same file that pertained to InnoDB tables. What directed me to these were complaints from mysqld_safe in the myhost.err log file. All I did was to uncomment the lines beginning with “innodb”; these were 116-118, 121-122 and 124-127 in my configuration file, but it may be different in yours.
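For reference, the uncommented entries looked something like the following, though the exact values in your file may differ:

innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
innodb_buffer_pool_size = 16M
innodb_log_file_size = 5M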

After all the above, the MySQL daemon ran happily and, more importantly, started when I rebooted the virtual machine. Thinking about it now, I believe that it was a lack of disk space, the locking of a data file and the lack of InnoDB support that were stopping the MySQL service from running. Running commands like mysqld start weren’t yielding useful messages, so a lot of digging was needed to get the result that I needed. In fact, that’s one of the reasons why I am sharing my experiences here.

In the end, creating databases and loading them with data was all that was needed for me to start seeing functioning websites on my (virtual) Arch Linux system. It turned out to be another step on the way to making it workable as a potential replacement for the Linux distributions that I use most often (Linux Mint, Fedora and Ubuntu).

Moving from Ubuntu 10.10 to Linux Mint 10

23rd April 2011

With a long Easter weekend available to me and with thoughts of forthcoming changes in the world of Ubuntu, I got to wondering about the merits of moving my main home PC to Linux Mint instead. Though there is a rolling variant based on Debian, I went for the more usual one based on Ubuntu that uses GNOME. For the record, Linux Mint isn’t just about the GNOME desktop: you can have it with Xfce, LXDE and KDE desktops as well. While I have been known to use Lubuntu and like its LXDE implementation, I stuck with the option of which I have most experience.

Once I selected the right disk for the boot loader, the main installation of Mint went smoothly. By default, Ubuntu seems to take care of this but Mint leaves it to you. When you have your operating system files on sdc, installation on the default of sda isn’t going to produce a booting system. Instead, I ended up with GRUB errors and, while I suppose that I could have resolved these, the lazier option of repeating the install with the right boot loader location was the one that I chose. It produced the result that I wanted: a working and booting operating system.

However, there was something not right about the way that the windows were displayed on the desktop, with title bars and window management not working as they should. Creating a new account showed that it was the settings carried over from Ubuntu in my home area that were the cause. Again, I opted for the less strenuous option and moved things from the old account to the new one. One outcome of that decision was that there was a lot of use of the chown command in order to get file and folder permissions set for the new account. In order to make this all happen, the new account needed to be made into an Administrator just like its predecessor; by default, more restrictive desktop accounts are created using the Users and Groups application from the Administration submenu. Once I was happy that the migration was complete, I backed up any remaining files from the old user folder and removed it from the system. Some of the old configuration files were to find a new life with Linux Mint.
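The sort of chown command that did the heavy lifting was along these lines, with newuser standing in for whatever the new account is called:

sudo chown -R newuser:newuser /home/newuser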

In the middle of the above, I also got to customising my desktop to get the feel that suits me. For example, I do like a panel at the top and another at the bottom. By default, Linux Mint only comes with the latter. The main menu was moved to the top because I have become used to having it there, and switchers for windows and desktops were added at the bottom. They were only a few from what has turned out not to be a short list of things that I fancied having: clock, bin, clearance of desktop, application launchers, broken application killer, user switcher, off button for PC, run command and notification area. It all was gentle tinkering but still is the sort of thing that you wouldn’t want to have to do over and over again. Let’s hope that is the case for Linux Mint upgrades in the future. That the configuration files for all of these are stored in the home area hopefully should make life easier, especially when an in-situ upgrade like that for Ubuntu isn’t recommended by the Mint team.

With the desktop arranged to my liking, the longer job of adding to the collection of software on there, while pruning a few unwanted items too, was next. Having had Apache, PHP and MySQL on the system before I popped in that Linux Format magazine cover disk for the installation, I wanted to restore them. To get the offline websites back, I had made copies of the old Apache settings that simply were copied over the defaults in /etc/apache (in fact, I simply overwrote the apache directory in /etc but the effect was the same). MySQL Administrator had been used to take a backup of the old databases too. In the interests of spring cleaning, I only migrated a few of the old databases from the old system to the new one. In fact, there was an element of such tidying in my mind when I decided to change Linux distribution in the first place; Ubuntu hadn’t been installed afresh onto the system for a while anyway and some undesirable messages were appearing at update time, though they were far from being critical errors.
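For anyone without MySQL Administrator to hand, the command line equivalent of taking such a backup would be something like this:

mysqldump -u root -p --all-databases > mysql_backup.sql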

The web server reinstatement was only part of the software configuration that I was doing and there was a lot of use of apt-get while this was in progress. A rather diverse selection was added: Emacs, NEdit, ClamAV, Shotwell (just make sure that your permissions are sorted before getting this to use older settings because anything inaccessible just gets cleared out; F-Spot was never there in the first place in my case but it may differ for you), UFRaw, Chrome, Evolution (I never have been a user of Mozilla Thunderbird, the default email client on Mint), Dropbox, FileZilla, MySQL Administrator, MySQL Query Browser, NetBeans, POEdit, Banshee (Rhythmbox is what comes with Mint but I replaced it with this), VirtualBox and GParted. This is quite a list and, while I maybe should have engaged the services of dpkg to help automate things, I didn’t on this occasion, though Mint seems to have a front end for it that does the same sort of thing. Given that the community favours clean installations, it’s little surprise that something like this is on offer in the suite of tools in the standard installation. This is the type of rigmarole that one would not want to draw on oneself too often.
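Much of that list could have been added in one go with a command along these lines, though the package names are approximate and the likes of Chrome and Dropbox come from their own repositories rather than the standard ones:

sudo apt-get install emacs nedit clamav shotwell ufraw filezilla virtualbox-ose gparted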

With desktop tinkering and software installations complete, it was time to do a little more configuration. In order to get my HP laser printer going, I ran hp-setup to download the (proprietary, RMS will not be happy…) driver for it because it otherwise wouldn’t work for me. Fortune was removed from the terminal sessions because I like them to be without such things. To accomplish this, I edited /etc/bash.bashrc and commented out the /usr/games/fortune line before using apt-get to clear the software from my system. Being able to migrate my old Firefox and Evolution profiles, albeit manually, was another boon. Without doubt, there are more adjustments that I could be making but I am happy to do these as and when I get to them. So far, I have a more than usable system, even if I engaged in more customisation than many users would.
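For what it’s worth, the fortune removal can be scripted too; this sketch assumes that the relevant line in /etc/bash.bashrc begins with /usr/games/fortune and that the package carries its usual name of fortune-mod:

sudo sed -i 's|^/usr/games/fortune|# /usr/games/fortune|' /etc/bash.bashrc
sudo apt-get remove fortune-mod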

It probably is useful to finish this by sharing my impressions of Linux Mint. What goes without saying is that some things are done differently and that is to be expected. Distribution upgrades are just one example but there are tools available to make clean installations that little bit easier. To my eyes, the desktop looks very clean and font display is carried over from Ubuntu, not at all a bad thing. That may sound a small matter but it does appear to me that Fedora and openSUSE could learn a thing or two about how to display fonts on screen on their systems. It is the sort of thing that adds the spot of polish that leaves a much better impression. So far, it hasn’t been any hardship to find my way around and I can make the system fit my wants and needs. That it looks set to stay that way is another bonus. We have a lot of change coming in the Linux world with GNOME 3 on the way and Ubuntu’s decision to use Unity as their main desktop environment. While watching both of these developments mature, it looks as if I’ll be happily using Mint. Change can refresh but a bit of stability is good too.

Worth the attention?

21st July 2010

The latest edition of Web Designer has features and tutorials on new ways to use fonts and typography in websites. One thing that’s at the heart of the attention is the @font-face CSS rule. It’s what allows you to break away from the limitations of whatever fonts your visitors might have on their PCs to use something available remotely.
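For illustration, a minimal usage sketch follows, with the font name and file location being hypothetical; at the time of writing, different browsers favour different font file formats (EOT for Internet Explorer, with TTF/OTF or WOFF elsewhere), so more than one src entry may be needed in practice:

@font-face {
    font-family: 'ExampleFont';
    src: url('/fonts/examplefont.woff') format('woff');
}

body {
    font-family: 'ExampleFont', sans-serif;
}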

In principle, that sounds a great idea but there are caveats. The first of these is browser support for the @font-face rule in the first place, though the modern browsers that I have tried seem to do reasonably OK on this score. These include the latest versions of Firefox, Internet Explorer, Opera and Chrome. The new fonts may render OK but there’s a short delay in the full loading of a web page because the fonts have to be downloaded first. With Firefox, the rendering seems to treat the process like an interleaved image, so you may see fonts from your own PC before the remote ones come into place, a not too ideal situation in my opinion. Also, I have found that this is more noticeable on the Linux variant of the browser than on its Windows counterpart. Loading a page that is predominantly text is another scenario where you’ll see the behaviour more clearly; having a sizeable image file loading seems to make things less noticeable. Opera is a particular offender here, with IE8 loading things quite quickly and Chrome not being too bad either.

In the main, I have been using Google’s Font Directory but, in the interests of supposedly getting a better response, I tried using font files stored on a test web server, only to discover that there was more of a lag with the fonts on the web server. While I do not know what Google has done with their set-up, using their font delivery service appears to deliver better performance in my testing, so it’ll be my choice for now. There’s Typekit too but I’ll be hanging onto my money in the light of my recent experiences.
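For reference, using the Google service is just a matter of adding a stylesheet link to the page header and then referring to the font as usual in CSS; Droid Sans stands in as an example family here:

<link href="http://fonts.googleapis.com/css?family=Droid+Sans" rel="stylesheet" type="text/css">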

After my brush with remote font loading, I am inclined to wonder if the current hype about fonts applied using the @font-face directive is deserved until browsers get better and faster at loading them. As things stand, they may be better than before but the jury’s still out for me with Firefox’s rendering being a particular irritant. Of course, things can get better…

Basic string searching in MySQL table columns

29th April 2010

Last weekend, I ended up doing a spot of file structure reorganisation on the web server for my Assorted Explorations website and needed to correct some file pointers in entries on my outdoors blog. Rather than grabbing a plugin from somewhere, I decided to edit the posts table directly, which meant picking out the affected rows and editing them in MySQL Query Browser. To accomplish that, I needed basic string searching, so I opened up my MySQL e-book from Apress and constructed something like the following:

select * from posts_table where post_text like '%some_text%';

The % wildcard characters are required to pick out a search string in any part of a piece of text. There may be a more sophisticated method, but this did what I needed in a quick and dirty manner without further ado.
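As it happens, the corrections themselves can be made in a similar vein using MySQL’s replace function; the table, column and path names below are placeholders like those above:

update posts_table set post_text = replace(post_text, 'old/file/path', 'new/file/path') where post_text like '%old/file/path%';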

Relocating the Apache web server document root directory in Fedora 12

9th April 2010

So as not to deface anything that is available online on the web, I have a tendency to set up an offline Apache server on a home PC to do any tinkering away from the eyes of the unsuspecting public. Though Ubuntu is my mainstay for home computing, I do have a PC with Fedora installed and I have been trying to get an Apache instance starting automatically on there without success for a few months. While I can start it by running the following command as root, I’d rather not have more manual steps than necessary.

httpd -k start

The command used by the system when it starts is different and, even when manually run as root, it failed with messages saying that it couldn’t find the directory where the web server files are stored. Here it is:

service httpd start

The default document root location on any Linux distribution that I have seen is /var/www and all is very well with this, but it isn’t a safe place to leave things if ever a re-installation is needed. Having needed to wipe /var after having it on a separate disk or partition for the sake of one installation, it doesn’t look so persistent to me. In contrast, you can safeguard /home by having it on another disk or in a dedicated partition and it can be retained even when you change the distro that you’re using. Thus, I have got into the habit of keeping the web server document root in my home area and that is where I have been seeing the problem.
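For completeness, relocating the document root itself is a matter of editing the DocumentRoot directive and its associated Directory block in /etc/httpd/conf/httpd.conf. A sketch in the Apache 2.2 syntax of the time follows, with /home/www standing in for wherever the files end up:

DocumentRoot "/home/www"

<Directory "/home/www">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>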

Because of the access message, I tried using chmod and chgrp but to no avail. The remedy has to do with reassigning the security contexts used by SELinux. In Fedora, Apache will not work with the context user_home_t that is usually associated with home directories but needs httpd_sys_content_t instead. To find out what contexts are associated with particular folders, issue the following command:

ls -Z

The final solution was to create a user account whose home directory hosts the root of the web server file system, called www in my case. Then, I executed the following command as root to get things going:

chcon -R -h -t httpd_sys_content_t /home/www

It seems that even the root of the home directory has to have an appropriate security context (/home has home_root_t so that might do the needful too). Without that, nothing will work even if all is well at the next level down. The switches for the chcon command translate as follows:

-R : recursive; applies changes to all files and folders within a directory.

-h : changes apply only to symbolic links and not to where they refer in the file system.

-t : alters context type.
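One thing worth adding is that contexts applied with chcon can be undone by a later filesystem relabel. If the semanage tool is available (from the policycoreutils-python package, if memory serves), the mapping can be made permanent instead and restorecon then applies it:

semanage fcontext -a -t httpd_sys_content_t "/home/www(/.*)?"
restorecon -R -v /home/www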

It took a while for all of this stuff about SELinux security contexts to percolate through to the point where I was able to solve the problem. A spot of further inspiration was needed too and even guided my search for the information that I needed. It’s well worth trying Linux Home Networking if you need more information. There are references to an earlier release of Fedora but the content still applies to later versions of Fedora right up to the current release if my experience is typical.

Ubuntu upgrades: do a clean installation or use Update Manager?

9th April 2009

Part of some recent “fooling” brought on by the investigation of what turned out to be a duff DVD writer was a fresh installation of Ubuntu 8.10 on my main home PC. It might have brought on a certain amount of upheaval but it was nowhere near as severe as that following the same sort of thing with a Windows system. A few hours was all that was needed, but the question remains as to whether it is better to do an upgrade every time a new Ubuntu release is unleashed on the world or to go for a complete virgin installation instead. With Ubuntu 9.04 in the offing, that question takes on a more immediate significance than it otherwise might do.

Various tricks make the whole reinstallation idea more palatable. For instance, many years of Windows usage have taught me the benefits of separating system and user files. The result is that my home directory lives on a different disk to my operating system files. Add to that the experience of being able to reuse that home drive across different Linux distros and even swapping from one distro to another becomes feasible. From various changes to my secondary machine, I can vouch that this works for Ubuntu, Fedora and Debian; the latter is what currently powers the said PC. You might have to use superuser powers to attend to ownership and access issues, but the portability is certainly there and it applies to anything kept on other disks too.

Naturally, there’s always the possibility of losing programs that you have had installed but losing the clutter can be liberating too. However, assembling a script made up of one or more apt-get install commands can allow you to get many things back at a stroke. For example, I have a test web server (Apache/MySQL/PHP/Perl) set up, so this would be how I’d get everything back in place before beginning further configuration. It might be no bad idea to back up your collection of software sources either; I have yet to add all of the ones that I have been using back into Synaptic. Then there are closed source packages such as VirtualBox (yes, I know that there is an open source edition) and Adobe Reader. After reinstating the former, all my virtual machines were available for me to use again without further ado. Restoring the latter allowed me to grab version 9.1 (probably more secure anyway) and it inveigles itself into Firefox now too, so the number of times that I need to go through the download shuffle before seeing the contents of a PDF is much reduced, though not completely eliminated, by the Windows-like ability to see a PDF loaded in a browser tab. Moving from software to hardware for a moment, it looks like any bespoke actions such as my activating an Epson Perfection 4490 Photo scanner need to be repeated, but that was all that I needed to do. Getting things back into order is not so bad but you need to allow a modicum of time for this.
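To return to the script idea for a moment, it needn’t be anything elaborate. A hypothetical example for my web server needs, using package names from the Ubuntu repositories of the time, might look like this:

#!/bin/sh
sudo apt-get update
sudo apt-get install -y apache2 mysql-server php5 libapache2-mod-php5 perl libapache2-mod-perl2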

What I have discussed so far are what might be categorised as the common or garden aspects of a clean installation, but I have seen some behaviours that make me wonder if the usual Ubuntu upgrade path is sufficiently complete in its refresh of your system. The counterpoint to all of this is that I may not have been looking for some of these things before now. That may apply to my noticing that DSLR support seems to be better, with my Canon and Pentax cameras both being picked up and mounted for me as soon as they are connected to a PC, the caveat being that they must themselves be powered on for this to happen. Another surprise that may be new is that the BBC iPlayer’s Listen Again works without further work from the user, a very useful development. It very clearly wasn’t that way before I carried out the invasive means. My previous tweaking might have prevented the in situ upgrade from doing its thing, but I do see the point of not upsetting people’s systems with an overly aggressive update process, even if it means that some advances do not make themselves known.

So what’s my answer regarding which way to go once Ubuntu Jaunty Jackalope appears? For the sake of avoiding initial disruption, I’d be inclined to go down the Update Manager route first while reserving the right to do a fresh installation later on. All in all, I am left with the gut feeling that the jury is still out on this one.

Work locally, update remotely

4th December 2008

Here’s a trick that might have its uses: using a local WordPress instance to update your online blog (yes, there are plenty of applications that promise to edit your online blog but these need file permissions to the likes of xmlrpc.php to be opened up). Along with the right database access credentials and the ability to log in remotely, adding the following two lines to wp-config.php does the trick:

define('WP_SITEURL', 'http://localhost/blog');

define('WP_HOME', 'http://localhost/blog');

These two constants override what is in the database and allow you to update the online database from your own PC using WordPress running on a local web server (Apache or otherwise). One thing to remember here is that the online and offline directory structures need to match. For example, if your online WordPress files are in blog in the root of the online web server file system (typically htdocs for Linux), then they need to be contained in the same directory in the root of the offline server too. Otherwise, things could get confusing and perhaps messy. Another thing to consider is that you are modifying your online blog, so the usual rules about care and attention apply, particularly with respect to using the same version of WordPress both locally and remotely. This is especially a concern if you, like me, run development versions of WordPress to see if there are any upheavals ahead of us like the overhaul that is coming in with WordPress 2.7.

An alternative use of this same trick is to keep a local copy of your online database in case of any problems while using a local WordPress instance to work with it. I used to have to edit the database backup directly (on my main Ubuntu system), first with GEdit but then using a sed command like the following:

sed -e 's/www\.onlinewebsite\.com/localhost/g' backup.sql > backup_l.sql

The -e switch applies the regular expression substitution that follows it to the input, with the output being directed to a new file. It’s slicker than the interactive GEdit route but has been made redundant by defining constants for a local WordPress installation as described above.
