Upgrading m68k from etch-m68k to unstable

After being dropped from Debian, the m68k port has been stalled for some time now. There was no real upgrade path, so my machines are still running etch-m68k. Thanks to Thorsten Glaser the port is slowly catching up, with NPTL now ported to the kernel and glibc for m68k. He took care of porting and compiling a lot of packages that are needed for upgrading from etch-m68k. Big thanks to Thorsten for that!

Anyway, I'm in the process of upgrading my m68k machines and buildds with the help and tips from Thorsten, and this is how I'm doing it: 

  1. Change your /etc/apt/sources.list to include this:

    deb unstable main contrib
    deb etch-m68k main contrib non-free
  2. Get libuuid-perl and install it by hand:

    dpkg -i libuuid-perl_0.02-1_m68k.deb

  3. Get kernel & linux-base from unstable
    You need to install a recent kernel like linux-image-2.6.39-2-amiga in my case. Either download it by hand or use apt: 

    apt-get -d install linux-image-2.6.39-2-amiga linux-base
    cd /var/cache/apt/archives
    dpkg --force-depends -i linux-image-2.6.39-2-amiga_2.6.39-3_m68k.deb linux-base_3.3_all.deb

  4. If needed, remove linux-base's postinst when you get this kind of error:

    syntax error at /var/lib/dpkg/info/linux-base.postinst line 1275, near "# UUIDs under /dev"
    Can't use global $_ in "my" at /var/lib/dpkg/info/linux-base.postinst line 1289, near "{$_"

    rm /var/lib/dpkg/info/linux-base.postinst
    dpkg --configure --pending --force-depends

  5. If everything installed fine you should be ready to boot into your new kernel.
    On Amigas you most likely need to edit your boot script and copy your kernel to your AmigaOS partition. This is what my boot command looks like: 

    amiboot-5.6 -k vmlinux-2.6.39 "debug=mem root=/dev/sda4 video=pal-lace devtmpfs.mount=1"

    You can omit debug=mem; it is just for the case that the kernel crashes, so that you can collect the dmesg output under AmigaOS with the dmesg tool. The devtmpfs.mount=1 parameter is needed because we don't want udev. video=pal-lace is necessary because the 2.6.39 kernel crashes when initializing my PicassoII graphics card; I've unplugged the card until the problem is solved.
  6. Kernel 2.6.39 runs fine, but you can't ssh into the machine.
    Because we don't want udevd, there's now a problem when trying to log in via SSH:

    pty allocation request failed on channel 0
    stdin is not a tty

    Most websites recommend installing udev to fix this, because on Xen that is the recommended solution. But as we are on m68k and not under Xen, it's better to run with a static /dev. So you need to create /dev/pts and add the following to your /etc/fstab:

    mkdir /dev/pts

    devpts          /dev/pts        devpts  rw,noexec,nosuid,gid=5,mode=620 0 0
  7. After booting into the 2.6.39 kernel you can dist-upgrade to unstable.
    When you have successfully booted into your new kernel, you should be safe to proceed with upgrading to unstable. First, let apt install the missing or broken dependencies of linux-image and linux-base:

    apt-get -f install

    This should install the dependencies that are missing because of the dpkg --force-depends calls above. After that I upgraded dpkg, apt and apt-utils:

    apt-get install dpkg apt apt-utils

    When this succeeds, you should be safe to fully dist-upgrade to unstable:

    apt-get -u dist-upgrade

    If you get errors during apt-get dist-upgrade, you might run dpkg --configure --pending or apt-get -f install before proceeding with apt-get -u dist-upgrade. Another problem can occur with apt itself. When you see this error: 

    E: Could not perform immediate configuration on 'perl-modules'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)

    you should add "-o APT::Immediate-Configure=false" to your apt-get command, for example:

    apt-get -o APT::Immediate-Configure=false -f install

    Another pitfall might be exim4-daemon-heavy, which currently segfaults. Replace it with exim4-daemon-light in that case, which works.
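The repair-and-retry cycle from step 7 can be sketched as a small script. Everything apt/dpkg-related is shown commented out; try_upgrade is a hypothetical stand-in so the control flow can be followed (and run) without touching a real system:

```shell
#!/bin/sh
# Sketch of the dist-upgrade retry cycle from step 7. The real commands are
# commented out; try_upgrade is a hypothetical stand-in that "succeeds" on
# the third attempt so the loop can be exercised safely.
attempts=0
try_upgrade() {
    attempts=$((attempts + 1))
    # apt-get -u dist-upgrade           # the real upgrade step
    [ "$attempts" -ge 3 ]               # pretend it works on the 3rd run
}
until try_upgrade; do
    # dpkg --configure --pending        # finish half-configured packages
    # apt-get -f install                # pull in missing dependencies
    echo "dist-upgrade failed, repairing and retrying (attempt $attempts)"
done
echo "dist-upgrade finished after $attempts attempts"
```

In practice you rarely need more than two or three rounds of this before the dist-upgrade completes.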

As stated above, the PicassoII in my A3000 doesn't seem to work under 2.6.39, whereas the PicassoIV in my A4000T does not crash the kernel.

Please don't hesitate to add additions, corrections or other kinds of feedback by commenting below!

Wouter and Thorsten are currently at DebConf in Banja Luka, working on the m68k port. Wouter has just finished a first version of a new debian-installer image and asks for it to be tested on real hardware. Please volunteer if you can! It's available at:


Apache and SNI - problems with some clients

Never change a running system: an old but true saying, but sometimes there's no other choice. Until a few days ago I was happy with my SSL vhosts running on a single SSL certificate. Then I needed to add another SSL certificate for another site with several subdomains. With Apache2 running on Squeeze it's possible to use the Server Name Indication (SNI) mechanism in order to use multiple SSL certs in a single IP-based vhost setup.

Well, it works for some client software, but apparently it does not work well with korganizer or Firefox Sync plugin nor with Cyberduck on OS X. Here's an example config: 

SSLEngine on
SSLCertificateFile  /etc/apache2/ssl/site-A-cert.pem
SSLCertificateKeyFile  /etc/apache2/ssl/site-A-key.pem
SSLOptions StrictRequire
SSLProtocol -all +TLSv1 +SSLv3
SSLVerifyClient none
SSLProxyEngine off

This is identical for all SSL vhosts on my system. The funny thing is that it works for two sites (site A and site B) while it doesn't work for site C. In the Firefox Sync plugin I get an error that the connection couldn't be established, while in Cyberduck (a WebDAV client for OS X) I get a dialog stating that I got the cert for site A on site C. Pointing the browser to the appropriate URL, I get the correct cert for site C on site C.

Is there anything I'm missing in the SNI setup in Apache?
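One way to narrow this down is to check which certificate the server actually hands out with and without SNI; hostname and IP below are placeholders. Against the live server you would run s_client as shown in the comments; to demonstrate the x509 inspection step locally, the sketch generates a throwaway self-signed cert first:

```shell
# Hedged sketch: which cert does the server present for a given SNI name?
# Against the live server (placeholder host/IP):
#
#   echo | openssl s_client -connect 192.0.2.1:443 -servername www.site-c.example 2>/dev/null \
#     | openssl x509 -noout -subject
#
# Omitting -servername simulates a non-SNI client; Apache then falls back to
# the first SSL vhost defined for that IP, which would explain Cyberduck
# showing site A's cert on site C.
#
# Local demo of the x509 step with a throwaway self-signed cert:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/sni-test.key \
    -out /tmp/sni-test.pem -days 1 -subj "/CN=www.site-c.example" 2>/dev/null
openssl x509 -noout -subject < /tmp/sni-test.pem
```

If the non-SNI invocation returns site A's certificate, the server side is fine and the failing clients simply don't send SNI.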


Updated: Automatically restore files from lost+found

Today, in an IRC channel near you, the discussion turned to recovering files from lost+found. Two years ago I wrote some scripts to automatically recover files from lost+found, so this is some sort of a repost. There are two scripts: one that generates some kind of ls-lR file holding all the information needed by the second script, which restores the files in lost+found to their original folders. Here is the information from the original blog post:

The first script is to be called regularly (from cron) to create the needed files, which are stored in /root/. Of course you can easily alter the location and exclude other directories from being scanned.

The second script is to be run when your fsck managed to mess up your files and stored them in the lost+found directory. It takes three arguments: 1) the source directory where your messed-up lost+found directory is, 2) the target directory to which the data will be saved, and 3) a switch to actually make it happen instead of a dry run.

You can find both files attached at the end of this blog post.

I've chosen to copy the files to a different place instead of moving them within the same filesystem to their original place, for safety reasons. The primary goal is to retrieve the files from lost+found, not to replace a full-featured backup and restore application. Because of this the script doesn't handle hardlinks or symlinks correctly; it just copies files.

Of course there's still room for improvement, like handling hard-/symlinks correctly or using inode numbers instead of md5sums to move data back to its prior location. But it works for me[tm] well enough this way, so I'm satisfied so far. You're welcome, though, to improve this piece of ugliness if you like.
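The idea behind the two scripts can be sketched in a few lines of shell. All paths and file names here belong to a throwaway demo tree, so the logic can be followed end to end; the attached scripts are the real, more complete versions:

```shell
#!/bin/sh
# Sketch of the manifest/restore idea: (1) record an md5sum -> path manifest
# regularly; (2) after fsck, match lost+found entries against it and COPY
# them back (copy, not move, for safety). Demo tree under mktemp -d.
BASE=$(mktemp -d)
mkdir -p "$BASE/orig/etc" "$BASE/lost+found" "$BASE/rescue"
echo "hello" > "$BASE/orig/etc/motd"

# Step 1 (normally run from cron): checksum -> path manifest.
(cd "$BASE/orig" && find . -type f -exec md5sum {} +) > "$BASE/manifest"

# Simulate fsck dumping the file into lost+found under an inode name.
cp "$BASE/orig/etc/motd" "$BASE/lost+found/#12345"

# Step 2: look each lost+found entry up in the manifest and copy it back.
for f in "$BASE/lost+found"/*; do
    sum=$(md5sum "$f" | awk '{print $1}')
    path=$(awk -v s="$sum" '$1 == s {print $2; exit}' "$BASE/manifest")
    [ -n "$path" ] || continue
    mkdir -p "$BASE/rescue/$(dirname "$path")"
    cp -p "$f" "$BASE/rescue/$path"   # copy, don't move, for safety
done
cat "$BASE/rescue/etc/motd"           # prints: hello
```

Like the real scripts, this matches by checksum, so file names with spaces and identical-content files would need extra care.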

Maybe someone else finds this useful as well. Use it at your own risk, of course. :)


KDE - Login Problems with kdm on Unstable

Some days ago I upgraded my Sid system, and when I restarted my X session the other day, I wasn't able to log in to KDE via kdm anymore. I'm getting some errors in ~/.xsession-errors: 

kdeinit4: preparing to launch /usr/bin/knotify4
Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)
knotify(16474) KNotify::event: 1  ref= 0
QMetaObject::invokeMethod: No such method KUniqueApplication::loadCommandLineOptionsForNewInstance()
kdeinit4: preparing to launch /usr/bin/plasma-desktop
kded4: Fatal IO error: client killed
kdeinit4: Fatal IO error: client killed
kdeinit4: sending SIGHUP to children.
klauncher: Exiting on signal 1

At the user/password prompt of kdm I can log in, the KDE splash screen appears and then, suddenly, the session dies and I'm back at the kdm login again.

I tried to look for existing bug reports, but KDE is quite large, with many programs. Are there any pointers to a bug report or even a solution/fix for this problem, dear LazyWeb?

UPDATE 21:51:
Sune suggests in the comments that it might be an Xorg problem. I've attached an xorg.txt logfile to this post. As you can see, there's a backtrace caused by a sig11 error. Funnily enough, when I connect via remote X from my OS X machine I can log into KDE, although there are visual errors. Xorg is working fine on the local server with GNOME, though. So, to me it seems related to either KDE or maybe Compiz.

UPDATE 2011-03-30:
Problem solved! LL hinted in the comments to reinstall the Nvidia driver and now it works again! Thanks! :-)

Attachment: xorg.txt (18.14 KB)

Changing/Renewing GPG key procedure?

I know I'm a little late with this, but I want to renew my GPG key and change it from DSA to RSA. The length of my ElGamal key is 1024 bits, which is not that good by today's standards. When searching on Planet Debian, I found a few HowTos, especially the one by Ana Guerrero.

Are there any other tips or caveats beyond those mentioned in her blog? Is an RSA key with a length of 4096 bits state of the art at the moment? Is it acceptable to send the new key, signed (and maybe encrypted), to all those who already signed my old key, to get the new key signed?
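For reference, and purely as an unverified sketch: GnuPG can generate such a key non-interactively from a batch parameter file (name, e-mail and expiry below are placeholders); running gpg --gen-key without arguments walks through the same choices interactively. The actual gpg call is commented out here:

```shell
# Hypothetical batch parameter file for a 4096-bit RSA key with an RSA
# subkey (all values are placeholders; adjust before use):
cat > gen-key-params <<'EOF'
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: John Doe
Name-Email: john@example.org
Expire-Date: 0
EOF
# gpg --batch --gen-key gen-key-params   # run this on a trusted machine
```

Whether 4096 bits is overkill compared to 2048 is exactly the kind of question I'd like answered in the comments.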

Comments are welcome, dear Lazy Web!


Happy New Year - Frohes Neues Jahr 2011

Happy New Year to all my readers! Frohes Neues Jahr allen meinen Lesern!


WebDAV as webdrive on OSX

I've been using WebDAV on Lenny, and now on Squeeze, for some time for syncing my bookmarks and calendars, and it's working just fine. But now I want to extend my WebDAV setup to use it as external storage. The only problem is: it doesn't work on OS X! D'oh!

Basically I followed several HowTos on the Net, so I ended with this configuration so far: 

     DavLockDB /path/to/DAVLockDB/DAVLockDB
     <Directory /path/to/webdav/>
        DAV On
        AuthType Digest
        AuthName "realm"
        AuthUserFile /path/to/.htdigest
        Require valid-user
        Options +Indexes
        AllowOverride None
        Order allow,deny
        Allow from all
        RewriteEngine Off
     </Directory>

I can connect to and browse the WebDAV directory, but I can't upload new files, neither with Finder on OS X nor with cp on the command line. When using Finder I get the following errors: 

For non-Germans: it says that the process couldn't be completed because the object is still in use.

Error -36 seems to be a generic I/O error on OS X, and you'll find many hits when you search for it in your favorite search engine. The Apache logs report lots of lines like these: 

==> /var/log/apache2/domain-ssl-error.log <==
[Thu Dec 09 21:23:35 2010] [error] [client 2001:6f8:90e:900::2] client denied by server configuration: /var/www/net/domain/path/to/Files/._Cam-EG_20101113015900_MD 3.avi

==> /var/log/apache2/domain-ssl-access.log <==
[09/Dec/2010:21:23:35 +0100] 2001:6f8:90e:900::2 - "GET /path/to/Files/._Cam-EG_20101113015900_MD%203.avi HTTP/1.1" 403 250 "-" "WebDAVFS/1.8.1 (01818000) Darwin/10.5.0 (i386)"

When copying some files with cp on OS X's command line I get these kinds of errors: 

$ cp -r Desktop/AIDAluna_KameraArchiv_Geiranger /Volumes/ij/Files/
cp: /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD: Operation not permitted
cp: Desktop/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD: unable to copy extended attributes to /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD: Operation not permitted
cp: /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD/.DS_Store: No such file or directory
cp: /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD/AVCHDTN: No such file or directory

Funnily enough, directories were created and some files were copied, although OS X complains about "Operation not permitted": 

$ du -sch  /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/*
 10M    /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/DCIM
2,5K    /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/MISC
2,0K    /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE
 10M    total

Of course the directory on the webserver has sufficient permissions, and copying files to it works just fine from Windows as well as from Debian Sid. But anyway, is there something I'm missing in the WebDAV configuration, or can I do something on OS X to make it work? Using a third-party application on OS X is something I would like to avoid, but if nothing else helps, I'm open to suggestions.
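The "client denied by server configuration" lines point at the "._*" entries: for every file it uploads, the OS X WebDAV client also writes an AppleDouble companion named ._<file> carrying the extended attributes, and the log shows exactly those requests getting 403s. A hedged guess, assuming some global rule elsewhere in the config denies dotfiles (a common hardening snippet): relaxing that rule for the DAV tree might already fix the uploads:

```apache
# Hedged sketch, assuming a global dotfile deny rule exists elsewhere in
# the config. It would 403 Finder's AppleDouble ("._*") companions, which
# matches the error log. Allow them again just for the DAV directory:
<Directory /path/to/webdav/>
    <FilesMatch "^\._">
        Order allow,deny
        Allow from all
    </FilesMatch>
</Directory>
```

That would also explain why Windows and Debian clients work: they don't create AppleDouble files in the first place.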


Frozen Squeeze, broken packages and bug reports

Squeeze has been frozen for some time now and will be released when it's ready, of course. Currently, I have at least two packages with reported bugs that seem to be making no progress. The first package is BackupPC. One of the bugs is #600654 and the other is #601843. Whereas the first is just a cosmetic bug in displaying the filesystem usage of the backup volume, the other seems to have a functional impact: the backup pool doesn't get cleaned up properly each night.

Sadly the maintainer seems a little absent, so maybe someone else can do an NMU to get BackupPC into shape for the release? Would be nice, though... ;-) 

The other package is spamassassin, or more exactly libnetaddr-ip-perl, and bug #601601. Whereas the BackupPC maintainer is rather quiet, there's lots of activity on the other bug, but I still get these errors when running SpamAssassin from cron: 

netset: cannot include 0:0:0:0:0:0:0:1/128 as it has already been included
netset: cannot include 0:0:0:0:0:0:0:1/128 as it has already been included

Although I'm confident that #601601 will be solved soon, I don't really know what to do about the BackupPC bugs, except writing additional info mails to those bugs with an "add me!" comment.

UPDATE 17.11.2010:
libnetaddr-ip-perl (#601601) seems to be fixed now.


Grub on RAID and Configuration

I've been running grub on my machine for quite a long time, but when I rebooted the other day, I noticed that there's currently a problem with the grub installation on my system: it doesn't boot anymore! ;)

My machine has 3 drives, with an LVM for the data on a RAID5. Then there is another RAID1 for /boot. This has worked reliably with grub for years. Now grub complains that it can't find the kernel anymore. The reason seems to be (from /boot/grub/grub.cfg):

echo    'Loading Linux 2.6.32-5-amd64 ...'
linux   /boot/vmlinuz-2.6.32-5-amd64 root=UUID=36213d56-67cf-428d-b801-4171fd9d6943 ro  vga=775
echo    'Loading initial ramdisk ...'
initrd  /boot/initrd.img-2.6.32-5-amd64

For some reason unknown to me there's a /boot in front of /vmlinuz..., which prevents loading the kernel. There's a "set root='(md0)'" line in the config as well, but I assume that this is correct, because /dev/md0 is my /boot RAID. The rootfs is on /dev/md2, another RAID array. So I can't set root='md2', because there's no /boot/grub directory there in the first place.

When I edit the linux and initrd lines at the boot prompt and remove the /boot prefix, everything is fine and my system boots up just fine.

Was there an intended change in the grub-pc package that causes this behaviour, or is it just a plain bug?


Upgrading to Squeeze - and suddenly CGI doesn't work anymore

Before I upgraded from Lenny to Squeeze, all my CGI scripts were working properly: they were executed by Apache and resulted in rendered webpages. After the upgrade to Squeeze all those CGI scripts stopped being executed and instead started to be displayed as plain text.

Common to all those CGI scripts is that they have .phtml as suffix, but the shebang line is "#!/usr/bin/python". As an example you can have a look at Buildd.Net and its scripts like this one. The section in Apache's config looks like this: 

        <Directory /home/builddnet/unstable/WWW/cgi/>
                Options -Indexes +ExecCGI +FollowSymLinks
                AddHandler cgi-script .cgi .sh .pl .py .phtml
                Order allow,deny
                Allow from all
        </Directory>

So I would expect the CGI script handler to be executed when loading a *.phtml file, and the shebang to be honoured. Funny enough: when I rename such a script to *.cgi, it works.

I haven't figured out yet what causes this behaviour, what changed during the upgrade, or how to revert to the old behaviour. So, dear lazyweb, can you give me some hints and pointers?
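One hedged guess about the cause (worth checking on the box, not verified here): on Squeeze, the php5 module's default configuration claims .phtml too, with roughly the stanza below, and a SetHandler inside a FilesMatch takes precedence over the AddHandler in the directory section. mod_php then "runs" the Python source, which, containing no PHP tags, is emitted verbatim as plain text:

```apache
# Suspected stanza (approximate) from /etc/apache2/mods-available/php5.conf:
<IfModule mod_php5.c>
    <FilesMatch "\.ph(p3?|tml)$">
        SetHandler application/x-httpd-php
    </FilesMatch>
</IfModule>
# Narrowing the pattern to "\.php$" (and restarting Apache) should hand
# .phtml back to the cgi-script handler; renaming the scripts to .cgi, as
# observed, sidesteps the conflict the same way.
```

A quick `grep -rn phtml /etc/apache2/` would confirm or rule out this theory.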
