The business with warning letters on the Internet

Anyone who participates in the Internet today, whether with their own website, a blog, or a forum, or even just through contributions to this Web 2.0, runs an ever greater risk of receiving a warning letter. Some law firms have discovered copyright warning letters as a business model, since it is a very lucrative business. In addition to warnings for uploading copyrighted material to file-sharing networks, more and more bloggers are being warned for using protected images. In this area, too, copyright law applies: it prohibits publishing a copyrighted work without the permission of the rights holder. Even if there is no commercial intent behind the publication of an image, there is a risk of receiving a warning letter.

Upon receipt of such a warning letter, it is important to observe the often very short deadlines set in it. For this reason, you should not wait long before seeking the advice of a specialized lawyer. The cease-and-desist declaration enclosed with the warning letter in most cases contains excessive demands and should therefore not be signed without prior consultation with a lawyer.

In general, the recipient should sign only a so-called modified cease-and-desist declaration, limited to the legally required minimum. If the copyright infringement was committed not by the recipient but by a third party using the recipient's Internet connection, and all measures had been taken to secure the WLAN, the claims arising from the warning letter can often be greatly reduced or even excluded entirely. It therefore remains to be noted that a warning letter is not to be taken lightly and, above all, should not be ignored. Anyone who is active on the Internet should always keep in mind whether their actions might constitute a copyright infringement and thus carry the risk of receiving a warning letter.

MySQL master / slave replication

There are tons of tutorials about setting up master / slave replication for MySQL. Here are my own quick notes:
1. Master: /etc/mysql/my.cnf

server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 1
max_binlog_size = 100M

2. Slave: /etc/mysql/my.cnf

server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
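
After changing my.cnf, MySQL has to be restarted on both machines so the server-id settings take effect. On a Debian-style system (which the configuration paths above suggest), something like:

/etc/init.d/mysql restart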

3. Master: granting privileges for slave user on database master

GRANT REPLICATION SLAVE ON *.* TO '…'@'…' IDENTIFIED BY '…';

4. Master: creating database dump

Start the mysql console as database root and enter the following command:
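
-- lock all tables to get a consistent dump; this is the table lock referred to below
FLUSH TABLES WITH READ LOCK;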


DON’T shut down the mysql client, otherwise the table lock is lost. Open a second shell to the database master and enter the following command on the command line:

mysqldump -u root -p… --databases … --opt > masterdump.sql

Next, switch back to your mysql console and enter the following command:
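
SHOW MASTER STATUS;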


The output will look something like:

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 | 40140874 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)


Write down “File” and “Position” … you will need them later for starting replication.

Now you can unlock the tables:
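
UNLOCK TABLES;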


5. Slave: import database dump

Copy masterdump.sql to the slave server and import the database:

mysql -u root -p… < masterdump.sql

This may take quite some time …
6. Slave: start replication

Start mysql client on slave and enter the following commands:
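
Filling in the “File” and “Position” values noted above; the master host and the credentials of the replication user from step #3 are placeholders:

CHANGE MASTER TO
    MASTER_HOST='…',
    MASTER_USER='…',
    MASTER_PASSWORD='…',
    MASTER_LOG_FILE='mysql-bin.000004',
    MASTER_LOG_POS=40140874;
START SLAVE;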



Setting up Master-Slave replication using xtrabackup

In a previous blog entry I described a method for setting up master-slave replication with MySQL. In steps #4 and #5 I used mysqldump and the mysql client for creating a database dump on the master and importing it on the slave. The problem with this approach is that the database tables are locked as long as the dump is running. For small databases this might not be a problem, but as the data grows, creating a dump takes longer and longer. At work we apparently reached some critical level: mysqldump ran for hours and hours and would probably still be running if I had not stopped it.

Luckily there are more suitable tools for large databases: InnoDB Hot Backup and xtrabackup. I’ve decided to go with xtrabackup, because it’s open source, free, and actively developed. InnoDB Hot Backup is closed source and not free ;-).

The following steps are meant to replace steps #4 and #5 of my previous blog post.
1. building xtrabackup

For Linux I had to build xtrabackup from the source package, because there was no binary package available for my architecture. It’s very easy, though:

harald@master:~/xtrabackup-0.9.5rc$ automake -a -c

harald@master:~/xtrabackup-0.9.5rc$ ./configure

harald@master:~/xtrabackup-0.9.5rc$ make

harald@master:~/xtrabackup-0.9.5rc$ cd innobase/xtrabackup
harald@master:~/xtrabackup-0.9.5rc/innobase/xtrabackup$ make

harald@master:~/xtrabackup-0.9.5rc/innobase/xtrabackup$ sudo cp \
innobackupex-1.5.1 /usr/local/bin
harald@master:~/xtrabackup-0.9.5rc/innobase/xtrabackup$ sudo cp \
xtrabackup /usr/local/bin

Needless to say, xtrabackup needs to be deployed on every database server.
2. creating a database dump

After successfully building and installing xtrabackup, taking a database dump is very easy:

root@master:~# innobackupex-1.5.1 --user=… --password=… \
--defaults-file=… --databases="…" .

The command innobackupex-1.5.1 takes the following parameters:

--user: username to use for the database connection
--password: password to use for the database connection
--defaults-file: required if the my.cnf configuration file is not located at /etc/my.cnf
--databases: space-separated list of databases to back up

The last argument is the destination directory to save the dump to (“.”, the current directory, in the example above).

Dumping the database with xtrabackup is incredibly fast compared to mysqldump. With xtrabackup it’s just a matter of minutes:

real 4m15.614s
user 0m11.710s
sys 0m14.960s

If xtrabackup was successful, it should have created a subdirectory whose name is the current date/time, with all required files in it. This directory can now be copied to the slave:

root@master:~# scp -r 2010-03-02_15-02-24 root@xx.xx.xx.xx:~

3. Setting up the slave

The first thing to do on the slave is to apply the binary log files to the database dump:

root@dbslave1:~# innobackupex-1.5.1 --apply-log 2010-03-02_15-02-24

100302 14:29:56 innobackupex: innobackup completed OK!

Innobackupex will show the message above if everything was OK. The next task is to copy the database dump to its new location on the slave. innobackupex does everything for you:

root@dbslave1:~# innobackupex-1.5.1 --copy-back 2010-03-02_15-02-24

100302 14:29:56 innobackupex: innobackup completed OK!

xtrabackup should now have copied the dump to the mysql data directory. It’s a good idea to check the owner and group of the copied files and adjust them if needed.
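
The MySQL data directory usually belongs to the mysql user; assuming the default location /var/lib/mysql, adjusting the ownership would look something like this:

root@dbslave1:~# chown -R mysql:mysql /var/lib/mysql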

The last step is to start the replication. All information required to do so is stored in the file xtrabackup_binlog_info:

root@dbslave1:~# cat 2010-03-02_15-02-24/xtrabackup_binlog_info
mysql-bin.000331 54249842

With this information available the replication can be set up as described in step #6 of my previous blog post.
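
For reference, the same commands as in step #6 of the previous post, with the values from xtrabackup_binlog_info filled in; master host and replication credentials remain placeholders:

CHANGE MASTER TO
    MASTER_HOST='…',
    MASTER_USER='…',
    MASTER_PASSWORD='…',
    MASTER_LOG_FILE='mysql-bin.000331',
    MASTER_LOG_POS=54249842;
START SLAVE;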

proftpd + mod_sql: solving “slow login” problem

I had a very annoying problem with proftpd which seems a common one at first sight: slow logins, combined with the fact that a lot of ftp clients out there have a low timeout configured. The problem is that googling “slow connection” or “slow login” in combination with “proftpd” led me in a totally wrong direction. A lot of people seem to have a problem with DNS lookups, which can easily be fixed by adding …

UseReverseDNS off
IdentLookups off

… to the configuration file to turn off any DNS lookups. But this did not change anything for me. Running an ftp client in debug mode, it turned out that the authorization itself took a very long time, which led to a timeout with most ftp clients:

air:~ harald$ ftp -d

Connected to
220 xxxxxxxxxx FTP Server
ftp_login: user `' pass `' host `'
Name ( 
---> USER harald
331 Password required for harald

The password was sent, and then the ftp client had to wait 10 seconds and longer for a response. Lots of ftp clients have a timeout of less than 10 seconds, so such a long response time results in a timed-out connection.

After googling for quite some time without finding anything useful on this topic besides the DNS lookup problem, I delved deeper into the proftpd documentation and found a howto which gave me some hints on how to speed up the ftp login.

As it turned out, the problem was my SQLAuthenticate directive, which I had just copied from the example configuration file of mod_sql. The configuration was set to:

SQLAuthenticate users userset

The problem with this configuration is that the userset switch seems to be very, very expensive. I still don’t know why this switch is set in the example configuration; the documentation contains no useful examples of when to use or when to avoid it. But eventually I found a forum post by a proftpd maintainer saying that the userset switch does not need to be configured. After changing the above configuration to …

SQLAuthenticate users

… login is fast as hell. I’m still curious why the switch was there …
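
For context, a minimal mod_sql authentication setup with the fixed directive might look like the following; the connection data and table layout are placeholders, not my actual configuration:

SQLConnectInfo ftpdb@localhost ftpuser ftppass
SQLAuthTypes Crypt
SQLAuthenticate users
SQLUserInfo users userid passwd uid gid homedir shell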

AnyJ: The Cross-Platform Java IDE & Source Code Engineering Solution

  • 100% Java
  • Win32, Solaris, Linux, Mac OS X, and any other platform supporting JDK 1.3
  • supports Java 1 and Java 2

What is AnyJ?

AnyJ is a Java IDE written entirely in Java.
Developers of software systems today face a variety of challenges, such as managing large and complex class libraries, team development, communication, and effectively reusing and re-engineering existing source code.
In order to deliver complex applications within tight time-to-market deadlines, developers need flexible and intelligent integrated software engineering tools.

AnyJ has been specifically designed for Java and helps in analysing and understanding complex class libraries. Thorough knowledge of a project’s structure improves productivity when writing, re-engineering, reusing, or maintaining source code. AnyJ includes a powerful customizable editor, library-aware and parser-backed code completion, various class browsers, a Swing GUI builder, JavaBeans support, servlet support, a debugger, and application templates.

long time — no update

I’m currently preparing to switch my blog software again. After using WordPress and Serendipity for quite some time, I came to the conclusion that I will only be satisfied with my own blog software. Therefore I’m currently developing something based on the PHP5 framework I developed for work. I also decided to switch language … now I can practice my English and increase the audience of people who won’t be interested in what I am writing ;-).

cookie based redirect with nginx

A while ago I described a tip for configuring a cookie-based redirect with LightTPD. The problem with that approach, for me, is that the LightTPD module used for this purpose, mod_proxy, is not SSL-capable in the development branch I use (1.4.x), so HTTPS connections fail.

For quite some time I have had my eye on the web and (reverse) proxy server nginx. With this server, too, it is possible to set up a redirect the way I need it. And: nginx supports SSL here!

server {
    listen 80;
    server_name *;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        # send each developer to his own dev backend
        if ($http_cookie ~ "(; )?devredirect=harald") {
            proxy_pass http://…;
        }
        if ($http_cookie ~ "(; )?devredirect=markus") {
            proxy_pass http://…;
        }
    }
}


server {
    listen 443;
    server_name …;

    ssl on;
    ssl_certificate /etc/nginx/….crt;
    ssl_certificate_key /etc/nginx/….key;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        # same cookie-based routing for HTTPS
        if ($http_cookie ~ "(; )?devredirect=harald") {
            proxy_pass http://…;
        }
        if ($http_cookie ~ "(; )?devredirect=markus") {
            proxy_pass http://…;
        }
    }
}
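
To check that the redirect kicks in, the cookie can be sent by hand, for example with curl (the hostname is a placeholder):

curl -H "Cookie: devredirect=harald" http://…/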