The story of a failed archival utility
Around the time I started writing this blog, I had more than 800 directories in my computer's home directory. Projects of others and projects of mine were mixed merrily together. Most of it was undocumented. Most of it was chaff.
For about half a year I had been naming directory entries with their creation date in them. This turned out troublesome because much of my work was the kind I'd return to later.
I got the idea that I should start archiving my work, but with hundreds of directories to work through, I decided to write a tool to help me archive it all.
This tool collected a date-sorted list of entries in my home directory that hadn't been touched during the last 7 days and wrote a message-of-the-day summary such as:
    55 files to archive
    run 'archive list' to see details.
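I no longer have the script, but the scan step could have been a small shell sketch along these lines. The 7-day threshold and the software/archive/motd path come from the text; the list and ignore file names are assumptions of mine, and GNU find is assumed:

    #!/bin/sh
    # scan: collect home directory entries untouched for the last
    # 7 days, sorted by modification date, and write a motd summary.
    cd "$HOME" || exit 1
    touch software/archive/ignore
    find . -mindepth 1 -maxdepth 1 -mtime +7 -printf '%T@ %P\n' \
        | sort -n \
        | cut -d' ' -f2- \
        | grep -vxFf software/archive/ignore > software/archive/list
    count=$(wc -l < software/archive/list)
    {
        echo "$count files to archive"
        echo "run 'archive list' to see details."
    } > software/archive/motd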
I put it into my crontab to run periodically once a week. To print the motd, I put cat software/archive/motd into my .bashrc, which is a startup and configuration script for a Linux terminal.
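Concretely, the wiring was just two one-liners. Here is a sketch of the shape they could have taken; the Monday-morning schedule and the archive script's path are my guesses, the rest comes from the text:

    # crontab entry: run the weekly scan, e.g. Mondays at 7:00
    0 7 * * 1 $HOME/software/archive/archive scan

    # line added to ~/.bashrc: print the summary when a shell starts
    cat software/archive/motd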
The archive script had 'list' and 'ignore' subcommands. The list command would show the output of the last periodic run of the scan command. The ignore command would insert a file into an ignore list, so the archiver would skip files that shouldn't be archived.
An item would disappear from 'archive list' if it was moved or ignored with the 'ignore' command. This let me document the files and move them into a/yyyy-mm-dd/ subdirectories based on their age.
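The subcommand dispatch could have been a thin shell wrapper like this sketch, with the same assumed file locations as above:

    #!/bin/sh
    # archive: run a scan, show the last scan, or ignore an entry.
    dir="$HOME/software/archive"
    case "$1" in
        scan)
            # Run the periodic scan (the sketch shown earlier).
            "$dir/scan"
            ;;
        list)
            # Show the output of the last periodic scan.
            cat "$dir/list"
            ;;
        ignore)
            # Remember this entry so future scans skip it.
            echo "$2" >> "$dir/ignore"
            ;;
        *)
            echo "usage: archive {scan|list|ignore ENTRY}" >&2
            exit 1
            ;;
    esac

Note that moving an entry needs no bookkeeping at all: the next scan simply no longer finds it, so it drops off the list by itself. Only ignoring needs explicit state.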
The plan was that I would archive a handful of files once a week and write a description of what each one was. I stopped doing the archival work shortly after I had made the tool because I wasn't motivated to do it. There ended up being a total of 32 entries in the a/ directory.
Despite how it turned out, this illustrates the beauty of a *nix system. The principles underlying Unix design allow you to piece together thousands of independently developed programs into a well-functioning operating system.
The Unix lessons about designing and maintaining extremely large projects are ignored by the professional amateurs of the software industry. People still attempt to solve the problems of large monolithic software by throwing more resources at them.
At their core, the ideas of Unix are based on the divide and conquer paradigm: break large programs and problems into small pieces that have limited forms of communication with each other.
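The archive tool is this paradigm in miniature. In a pipeline like the one below, each program does one small job and the pipe between them is the only channel of communication:

    # find enumerates, sort orders, wc counts; none of the three
    # programs knows anything about the others.
    find "$HOME" -mindepth 1 -maxdepth 1 -mtime +7 | sort | wc -l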
There's always someone who claims this won't work on their project with thousands of participants and a million lines of source code. I have yet to see such a program.