Eclipse or SUNrise...
...JAVA for sure

Tuesday, May 17, 2011

Bitcoin!


Yes, you have probably heard about this stuff. If not, you should take a look at http://bitcoin.org. This project is spreading amazingly fast! It's not only about money, but also about a philosophy of using money.

Bitcoin is also about computing - mining for coins - and this is something I will take a closer look at. I really like that they are using GPUs for the calculations (which can be something like 10 times faster than a standard desktop CPU).

I'll write more about bitcoin and its technology later...

And in the meantime, if you found the information on my blog helpful, please drop some bitcoins :-)
Thanks!

127YiYGxFgtWzeSdhpXtxNnvMdjGQfpcMM

Monday, May 9, 2011

WebSphere logs archiving solution

WebSphere Application Server, just like other app servers, has a nice feature for managing its logs - the SystemOut and SystemErr logs, as well as traces and JVM native logs. It allows you to keep the logs in a given number of files of a given maximum size, so you get a nice rolling log out of the box.

But what if you need to keep a history of more than 99 files, which is the maximum number of files for WAS? There is also a maximum file size of 50 MB (personally I would recommend a much smaller log size, around 20 MB). So there are two problems - the number of files and their size. First let's deal with the number of files. To extend it we can simply use a cron mechanism which will move the files to another directory every 5 minutes. This simple script will do the trick:


#!/bin/sh
# Move rolled-over trace files to the archive directory
cd /ibm/logs/Srv01/ || exit 1
mv trace_* archive


It assumes that you have a separate directory /ibm/logs/Srv01/archive - ideally it would be on another file system. WAS will keep producing new logs in trace_ files (an example file name is trace_11.05.09_11.15.33.log). Remember that the current logging activity always goes to trace.log with no additional suffix, so it is important not to move this file. When trace.log gets full (it reaches the maximum size), WAS renames the file by appending a time stamp suffix (as in the example above) and creates a new, empty trace.log file.
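
To illustrate, the layout looks roughly like this (the archived file name is taken from the example above, the rest is just illustrative):

/ibm/logs/Srv01/trace.log - the current log, never moved
/ibm/logs/Srv01/trace_11.05.09_11.15.33.log - a rolled-over file, picked up by the cron script
/ibm/logs/Srv01/archive/ - where the rolled-over files end up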

OK, so we managed to have more than 99 files, but what about their size? If we want to handle more than 99 files of about 20 MB each per day, that adds up to roughly 2 GB of space per application server every single day (99 x 20 MB is almost 2 GB)! We have to compress them. It would also be nice to keep them in one archive per day, so if we want to inspect what happened on a given day we only need to copy a single bundle. The good news is that text files compress very nicely.

To do this we will also use a script, but a slightly more complex one. Here's what I did:


#!/bin/sh
# This script will compress all trace logs from yesterday in its working directory
cd /ibm/logs/Srv01/archive || exit 1

echo '########## Started on ' $(date +%Y%m%d) >> logger.log

# The TZ=aaa24 trick makes date print the date 24 hours back, i.e. yesterday
YESTERDAY=`TZ=aaa24 date +%y.%m.%d`
echo $YESTERDAY

YESTERDAY=trace_$YESTERDAY

# Append a wildcard so the pattern matches all of yesterday's files, e.g. trace_11.05.09*
YESTERDAY=$YESTERDAY*
echo $YESTERDAY

if ls $YESTERDAY
then
    # Bundle yesterday's files into a single gzipped archive, e.g. trace_20110507.tar.gz
    tar cvf - $YESTERDAY | gzip > trace_`TZ=aaa24 date +%Y%m%d`.tar.gz

    echo 'Logs successfully compressed, removing those files:' >> logger.log
    ls $YESTERDAY >> logger.log
    rm $YESTERDAY
else
    echo 'Failed compressing the logs!' >> logger.log
fi


OK, a quick explanation of this script: it enters the archive directory and then prepares a $YESTERDAY variable holding yesterday's date (the script will be triggered every day for the previous day's logs). Note that I also appended a * sign, so this simple wildcard pattern lets me match all files from a given day (trace_11.05.09*). You could shorten my script, but I wrote it this way to give you a step-by-step view of what it does. Once we know which files we want to archive, we simply try to list them with the ls command; if that succeeds, there are files to be archived, so the script goes on to tar them and gzip them into a single file. The last part is the deletion of the compressed files. You can also wrap the tar command in another if statement, so that the plain log files are removed only when the compression succeeds - see the sketch below.
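
Here is a minimal sketch of that extra check (not part of the original script; note that with a pipeline like this the if actually tests the exit status of gzip, the last command in the pipe):

if tar cvf - $YESTERDAY | gzip > trace_`TZ=aaa24 date +%Y%m%d`.tar.gz
then
    # the archive was written, so it is now safe to delete the plain files
    echo 'Logs successfully compressed, removing those files:' >> logger.log
    ls $YESTERDAY >> logger.log
    rm $YESTERDAY
else
    echo 'Failed compressing the logs!' >> logger.log
fi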

The resulting archive will be named like this: trace_20110507.tar.gz

The script also appends to a logger.log file to keep a history of what happened, in case you want to check it later - remember, this will run automatically!
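
Based on the echo statements in the script, a typical entry in logger.log would look roughly like this (the file names are only illustrative):

########## Started on  20110510
Logs successfully compressed, removing those files:
trace_11.05.09_11.15.33.log
trace_11.05.09_18.42.01.log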

The last task is to add two cron lines: the first one runs the first script every 5 minutes (the minutes are listed explicitly below, because a plain 5 in the minute field would run it only once an hour, and the */5 shortcut may not be supported by AIX cron), and the second one runs every day at 2 AM (night time is usually the best time to utilize the servers).

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /first_script.sh
0 2 * * * /second_script.sh
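
If you have not edited cron before, one way to install the entries (assuming the scripts really live at /first_script.sh and /second_script.sh as above) is:

crontab -l > /tmp/mycron        # dump the current crontab (may warn if it is empty)
echo '0,5,10,15,20,25,30,35,40,45,50,55 * * * * /first_script.sh' >> /tmp/mycron
echo '0 2 * * * /second_script.sh' >> /tmp/mycron
crontab /tmp/mycron             # load the new crontab

Or simply run crontab -e and paste the two lines in.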

Hope you'll find it useful. I created these scripts on an AIX system, but they should also run on most Linux distributions with little or no change.