File system encryption on Debian systems using 'encfs'


How-to taken from

Goal: protect one or more directories with encryption against a physical attack on the server.

Required packages: fuse-source, module-assistant, encfs.
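With those packages installed, the basic setup looks roughly like this. This is a minimal sketch: the directory names are examples, and on older Debian releases the fuse kernel module must first be built with module-assistant (run the commands as root).

```shell
# Build and load the fuse kernel module (needed on older Debian releases)
module-assistant prepare
module-assistant auto-install fuse-source
modprobe fuse

# First run creates the encrypted store and asks for a password;
# later runs just mount it (paths are examples)
encfs ~/.encrypted ~/clear

# Unmount the cleartext view when you are done
fusermount -u ~/clear
```

Files written under ~/clear are stored encrypted in ~/.encrypted, so only the mounted view exposes the plaintext.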


MySQL database replication


The following procedure has been taken from the official MySQL web site:

First of all, the replication method explained here is also called "asynchronous", because write queries are executed first by a primary database server, called the master, and then (though almost immediately in most cases) by a secondary database server, called the slave. Synchronous replication, by contrast, is a feature of MySQL Cluster, which will soon be the subject of the next node... I hope...
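A classic master/slave setup boils down to a couple of config lines and three statements. This is a configuration sketch, not a complete procedure: the hostnames, account names and passwords below are examples.

```
-- /etc/mysql/my.cnf on the master (server-id must be unique):
--   [mysqld]
--   server-id = 1
--   log-bin   = mysql-bin
-- /etc/mysql/my.cnf on the slave:
--   [mysqld]
--   server-id = 2

-- On the master, create a dedicated replication account:
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

-- On the slave, point it at the master and start the replication threads:
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret';
START SLAVE;
```

After START SLAVE, the slave pulls the master's binary log and replays every write query, which is exactly the asynchronous behaviour described above.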

The main goals are:

  • data availability - if the master fails, you can easily switch all your clients to the slave
  • load balancing - SELECT queries can be sent to the slave to reduce the query-processing load on the master
  • backup - you can perform database backups from a slave without disturbing the master, which keeps processing updates while the backup is being made

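The backup point above is a one-liner in practice. A sketch, assuming a reachable slave and a 'backup' account with the needed privileges (hostname and account are examples):

```shell
# Dump all databases from the slave; the master keeps serving writes meanwhile
mysqldump -h slave.example.com -u backup -p \
  --all-databases --single-transaction > backup-$(date +%F).sql
```

--single-transaction takes a consistent snapshot of InnoDB tables without locking them for the whole dump.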
Free PDF creator


To save documents as PDF (for FREE, of course!) you can either use OpenOffice, which can export open documents as PDF, or, if you need a simpler tool to print to PDF from any program, FreePDF is what you are looking for.

Like other PDF generation tools, it needs Ghostscript:

Run cron.php without crontab: now it's possible.


Dear Drupal webmaster newbies, this is for you!

Now it's possible to run cron.php regularly without any access to the server and without any knowledge of servers or crontab!

Just put these simple PHP lines in a block of your Drupal site (ideally one shown on most pages), or create a new block for them:

$pntr = fopen('lastcron.txt', 'w');
$tmstmp = time();                      // timestamp of this run
if ($pntr && fwrite($pntr, $tmstmp)) {
    mail("dest", "cron OK", "OK");     // notify that cron was triggered
    include("cron.php");               // run Drupal's cron tasks
}
if ($pntr) { fclose($pntr); }

WGET example (for Windows too!)


wget example:

wget -F -c -w 1 -r -L -k -l 1 <URL>

Options meaning:
-F treat input files as HTML; this enables you to retrieve relative links from existing HTML files on your local disk, by adding <base href="url"> to the HTML or by using the --base command-line option
-c continue getting a partially downloaded file; otherwise, if the destination file already exists, the newly downloaded file is renamed [FILENAME.EXT].1
-w XX wait the specified number (XX) of seconds between retrievals, to avoid overloading the server
-r retrieve recursively
-L follow relative links only
-k after the download is complete, convert the links in the documents to make them suitable for local viewing
-l specify the maximum recursion depth; the default is 5

You may want to add -b to execute the command in background mode.

Windows version:
Manual: wget --help