100% CPU load due to Leap Second

This morning Gregor Samsa woke up... oh, pardon! This morning I woke up and found myself puzzled, because my home server was eating up all the CPU cycles of its four cores. Especially mysqld was high on CPU load: 100% for the mysql-server instance and another 100% for akonadiserver's own mysqld instance. Restarting KDE and mysql-server on my Debian unstable machine didn't help. The next step was upgrading the system. Sometimes that indeed helps, but not today.

Looking at bugs.debian.org for mysql-server didn't reveal anything helpful either. So my next logical step was to ask on #debian-devel on IRC, and my question was answered very quickly: 

11:28 < ij> since tonight I've got two mysqld processes running at 100% CPU, one spawned by akonadi and
            the other is the mysqld from mysql-server (unstable that is). is this an already known issue?
            haven't found anything on b.d.o for mysql-server, though
11:29 < mrvn> ij: topic
11:29 < mrvn> you need to set the time
11:30 < ij> waaaaah!
11:30 < mrvn> ij: indeed.

The channel topic at that time was: 

 100% CPU? Reset leap second http://xrl.us/bnde4w

So, it was caused by the leap second. Although you might suspect MySQL of doing some nasty things (which, IMHO, is always a good guess ;)), this time the issue lies within the Linux kernel itself, as a commit on git.kernel.org clarifies.

To fix this issue you need to set the time manually using the following command or just reboot: 

date -s "`date`"
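
To spot which processes are affected, a generic diagnostic like this helps (standard procps usage, nothing specific to this bug):

ps -eo pid,pcpu,comm --sort=-pcpu | head   # top CPU consumers first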

So far I found these applications being hit by this kernel bug: 

  • mysql-server
  • akonadi (as it uses own mysql instances)
  • Firefox
  • Openfire Jabber server (because it's using Java, which seems to trigger the problem as mysql does)
  • Virtualbox' VBoxSVC process
  • puppetmaster from package puppet, reported by Michael
  • mythfrontend, reported by pos on #debian-devel
  • Jetty, Hudson, Puppet agent and master, reported by Christian
  • milter-greylist, reported by E. Recio
  • dovecot, reported by Diogo Resende
  • Google Chrome, reported by Erik B. Andersen
  • if you find more apps, please comment and I'll include them here...

So, I hope this helps, and many thanks to mrvn and infinity on #debian-devel!


Confusion about mkfs.xfs and log stripe size being too big

Recently I bought some new disks, placed them into my computer, and built a RAID5 on these 3x 4 TB disks. Creating a physical volume (PV) with pvcreate, a volume group (VG) with vgcreate and some logical volumes (LV) with lvcreate was as easy and familiar as creating an XFS filesystem on the LVs... but something was strange! I had never seen this message before when creating XFS filesystems with mkfs.xfs: 

log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB

Usually I don't mess around with the parameters of mkfs.xfs, because mkfs.xfs is smart enough to find near-optimal parameters for your filesystem. But apparently mkfs.xfs wanted to use a log stripe unit of 512 kiB, although its maximum for this is 256 kiB. Why? So I started googling and in parallel asked on #xfs@freenode. Eric Sandeen, one of the core developers of XFS, suggested that I write about the issue to the mailing list. He had already faced this issue himself, but couldn't remember the details.

So I collected some more information about my setup and wrote to the XFS ML. Of course I included information about my RAID5 setup:

muaddib:/home/ij# mdadm --detail /dev/md7
/dev/md7:
Version : 1.2
Creation Time : Sun Jun 24 14:58:21 2012
Raid Level : raid5
Array Size : 7811261440 (7449.40 GiB 7998.73 GB)
Used Dev Size : 3905630720 (3724.70 GiB 3999.37 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Tue Jun 26 05:13:03 2012
State : active, resyncing
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Resync Status : 98% complete

Name : muaddib:7 (local to host muaddib)
UUID : b56a714c:d193231e:365e6297:2ca61b65
Events : 16

Number Major Minor RaidDevice State
0 8 52 0 active sync /dev/sdd4
1 8 68 1 active sync /dev/sde4
2 8 84 2 active sync /dev/sdf4

Apparently, mkfs.xfs takes the chunk size of the RAID5 and wants to use it for its log stripe unit setting. So that explains why mkfs.xfs wants to use 512 kiB, but why is the chunk size 512 kiB at all? I didn't mess around with chunk sizes when creating the RAID5 either, and all of my other RAIDs use chunk sizes of 64 kiB. The reason was quickly found: the new RAID5 has a 1.2 format superblock, whereas the older ones have a 0.90 format superblock.

So it seems that at some point the default superblock (metadata) format in mdadm was changed. I asked on #debian.de@ircnet and someone answered that this was changed in Debian after the release of Squeeze. Even in Squeeze the 0.90 format superblock was already obsolete and only kept for backward compatibility. Well, OK. There actually was a change of defaults, which explains why mkfs.xfs now wants to set the log stripe unit to 512 kiB.
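
To see which metadata format and chunk size an existing array uses, and how both could be set explicitly, here is a sketch (standard mdadm options; creating an array is destructive, so the second command only applies to fresh disks):

mdadm --detail /dev/md7 | grep -E 'Version|Chunk Size'

# Hypothetical re-creation with the old defaults (0.90 metadata, 64 kiB chunks):
mdadm --create /dev/md7 --level=5 --raid-devices=3 \
      --metadata=0.90 --chunk=64 /dev/sdd4 /dev/sde4 /dev/sdf4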

But what is the impact of falling back to a 32 kiB log stripe unit? Dave Chinner, another XFS developer, explains: 

Best thing in general is to align all log writes to the
underlying stripe unit of the array. That way as multiple frequent
log writes occur, it is guaranteed to form full stripe writes and
basically have no RMW overhead. 32k is chosen by default because
that's the default log buffer size and hence the typical size of
log writes.

If you increase the log stripe unit, you also increase the minimum
log buffer size that the filesystem supports. The filesystem can
support up to 256k log buffers, and hence the limit on maximum log
stripe alignment.

And in another mail, when asked whether it's possible to raise the 256 kiB limit to 512 kiB, because mdadm now defaults to 512 kiB as well: 

You can't, simple as that. The maximum supported is 256k. As it is,
a default chunk size of 512k is probably harmful to most workloads -
large chunk sizes mean that just about every write will trigger a
RMW cycle in the RAID because it is pretty much impossible to issue
full stripe writes. Writeback doesn't do any alignment of IO (the
generic page cache writeback path is the problem here), so we will
almost always be doing unaligned IO to the RAID, and there will be
little opportunity for sequential IOs to merge and form full stripe
writes (24 disks @ 512k each on RAID6 is an 11MB full stripe write).

IOWs, every time you do a small isolated write, the MD RAID volume
will do a RMW cycle, reading 11MB and writing 12MB of data to disk.
Given that most workloads are not doing lots and lots of large
sequential writes this is, IMO, a pretty bad default given typical
RAID5/6 volume configurations we see....

So, reducing the log stripe unit is in fact a Good Thing[TM]. Anyone who would benefit from larger log stripe sizes is likely knowledgeable enough to play around with the mkfs.xfs parameters and tune them to the needs of the workload.
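
For those who do want to tune it, a sketch of setting the geometry explicitly instead of relying on autodetection (the options are documented in the mkfs.xfs man page; the device path is a placeholder):

# Match the data section to the RAID5 (512 kiB chunk, 2 data disks)
# and cap the log stripe unit at 32 kiB:
mkfs.xfs -d su=512k,sw=2 -l su=32k /dev/yourvg/yourlv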

Eric Sandeen suggested, though, to remove the warning from mkfs.xfs. Dave objected, and maybe a good compromise would be to extend the warning with a URL to an FAQ entry explaining the issue in more depth than a short warning can?

Maybe someone else is facing the same issue, searching for information, and finds this blog entry helpful in the meantime...


DaviCal and Addressbook Sync

After exchanging my rusty Nokia N97 for an iPhone I needed to set up calendar and addressbook syncing again. Addressbook syncing wasn't possible with the N97 anyway, or at least I never found out how to do it. Previously I synced my N97 using iSync, but iSync doesn't work with the iPhone; the iPhone syncs with iTunes instead. Weird? Yes. But that's how it works. The iPhone now syncs via WLAN instead of Bluetooth, which is an improvement, but I don't really want to fire up iTunes every time I want to sync my calendar or addressbook. And using iCloud is not an option either, because of privacy concerns. I'm a big fan of selfhosting and already have a DaviCal instance running on my server. DaviCal is a great piece of software from Debian maintainer Andrew McMillan (who is currently doing a survey on DaviCal), so there is, of course, a Debian package for it.

Anyway, one problem with OS X and addressbook syncing via CardDAV is that it doesn't work out of the box with Addressbook.app, although the documentation in the DaviCal wiki is quite useful. When you try to enter a new account in Addressbook.app, the sync will not work. The solution can be found on the private blog of Harald Nikolisin. He writes (translated from German):

Connecting the Mac OS X addressbook
Oh yes - there are problems when accessing it via SSL.
In Addressbook.app you can add a CardDAV account and enter the authentication data and the complete server path (see above), but you'll always run into an error message.
The solution is to click "Create" twice in order to create the faulty account anyway.

Then you manually edit the following file:

~/Library/Application Support/AddressBook/Sources/UNIQUE-ID/Configuration.plist

There you enter the complete URL under Server String:
https://SERVERNAME/davical/caldav.php/USERNAME/contacts
It's best to also set the field HaveWriteAccess to the value "1".
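
The plist edit can also be done from a terminal with Apple's PlistBuddy. This is only a sketch: the key name "servername" is an assumption derived from the description above and may differ between OS X versions.

# Hypothetical; verify the actual key name in your Configuration.plist first
/usr/libexec/PlistBuddy \
  -c 'Set :servername https://SERVERNAME/davical/caldav.php/USERNAME/contacts' \
  "$HOME/Library/Application Support/AddressBook/Sources/UNIQUE-ID/Configuration.plist"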

After following this advice, my Addressbook.app successfully stored the contacts in DaviCal's CardDAV, from where I can sync with my iPhone. Maybe Andrew wants to include this in the DaviCal wiki, or maybe I'll do it myself by registering in the wiki for that purpose...

Oh, and I forgot: the Roundcube plugin from graviox works nicely with DaviCal's CardDAV as well!


About Gallery3 in Debian and MySQL

Years ago, when I started using a cheap Kodak DX3600 digital camera to take some digital photos, I used Gallery from Menalto to collect these pictures in a gallery. Gallery (version 1) used plain text files to keep its information about galleries and photos, and the more photos I put into the gallery, the slower it got. Then Gallery2 was released, which used a database, either MySQL or PostgreSQL, and was a huge improvement in speed. My main galleries have about 10,000-20,000 pictures each. But Gallery2 is aged nowadays and the next logical step would be to migrate to Gallery3. But what a mess!

Gallery3 has some drawbacks: 

  1. there is currently no gallery3 package in Debian, although it has been released upstream for some time now.
  2. there is an open bug (#511715) stating that there are license issues with Gallery3 and some SWF files.
  3. it's been said that Gallery3 no longer supports per-picture permissions; only per-album permissions are possible. That gives me a headache, because I changed the permissions of some personal pictures in the past for privacy reasons, and it would either make whole albums unavailable to the public or require splitting albums into a public and a private section, which breaks the chronological order of the pictures.
  4. whereas G2 supported both MySQL and PostgreSQL as database backends, G3 only supports MySQL. That's a real pity, because I prefer PostgreSQL over MySQL for its stability and simplicity. It has already happened several times that MySQL databases were gone after a kernel crash or the like. Even the mysql.user table was gone more than once, whereas PostgreSQL has never shown such behaviour to me. It just works.

I'm really upset about the last point! Why is there such a strong belief in MySQL? In my eyes, MySQL is utter crap. It's more like MS Access than the real thing when it comes to DBMS/SQL. And my impression is that, after Oracle bought MySQL, Oracle did a good job of scaring its customers off. PostgreSQL, on the other hand, has gained good momentum since the Oracle/MySQL deal. So it's a total mystery to me how a big software package with dozens of developers can decide not to support PostgreSQL, or even drop PostgreSQL support for a new release! It's driving me nuts, again and again.

Other big software packages like Drupal are doing the right thing: while PostgreSQL support in Drupal 6 was weak and buggy because of all the awful MySQLisms around, Drupal 7 now uses a database abstraction layer that even allows SQLite or Oracle to be used as the underlying database. That's the way to go, and it's totally awkward to drop PostgreSQL support as Gallery3 did.

So, is there a way out of my dilemma? Will gallery3 become a package in Debian soon (no offense to Michael Schultheiss; I think he's doing a great job and needs assistance from upstream in this case)? Is there any good replacement for Gallery3 that can deal with tens of thousands of images and dozens of users, supports PostgreSQL, and has some kind of import tool for Gallery2 data?


Grub2 fails booting from second VG

I just got a new harddisk, because my old RAID of 3x 1 TB disks is filling up and I'm running out of space. In the end I'll have replaced the old disks with 3x 4 TB disks. Currently the setup is like this: 

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x38d543a1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      514079      257008+  83  Linux
/dev/sda2          514080     4707044     2096482+  82  Linux swap / Solaris
/dev/sda3         4707045    13092974     4192965   83  Linux
/dev/sda4        13092975  1953520064   970213545    5  Extended
/dev/sda5        13093038    15197489     1052226   83  Linux
/dev/sda6        15197553  1953520064   969161256   83  Linux

The same applies to the other two disks, of course. So I have (for x=[abc]): 

  • sdx1: RAID1 for /boot
  • sdx2: RAID1 for swap
  • sdx3: RAID1 for a rescue system
  • sdx5: RAID1 for /
  • sdx6: RAID5 for LVM for /usr, /var, /home & everything else

For the new disks I want a simpler layout, maybe like this:

  • sdx1: RAID1 for /boot
  • sdx2: RAID1 for swap
  • sdx3: RAID5 for LVM for /, /usr, /var, /home & everything else

If it works, I'd be fine without a dedicated /boot partition anyway, but there's another problem: on the old disk set I have LVM with a Volume Group named "vg", and on the new disk there is a Volume Group named "lv": 

PV         VG   Fmt  Attr PSize PFree 
/dev/md4   vg   lvm2 a--  1.81t 129.53g
/dev/md6   lv   lvm2 a--  3.64t   3.58t

I made some Logical Volumes on "lv", copied over my system, ran update-grub and grub-install /dev/sdX, and rebooted. In the grub menu I can select my desired new root partition, but when I try to boot from it, grub is not able to find the root device. The config lines within grub look like this (from memory): 

root='(lv-root)'
search --no-floppy  --fs-uuid --set-root=0d54bd89-6628-499f-b835-e146a6fd895f

The UUID within grub matches the UUID of /dev/lv/root, but grub states that it can't find the disk. The needed modules for LVM and RAID are loaded, so I assume a problem with multiple Volume Groups, because a simple "ls" from within grub only shows Logical Volumes from Volume Group "vg", but not a single one from the second Volume Group "lv".

Is there a limitation in Grub2 for the number of supported Volume Groups? Does it only find the first created Volume Group? Is there a way to work around this limitation or is it a bug?
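
For reference, this is roughly how I probe it from the Grub2 command line (standard Grub2 commands; the volume names follow my setup above):

insmod lvm
ls              # only shows (vg-...) volumes here, nothing from "lv"
ls (lv-root)/   # fails, grub claims the disk cannot be found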

UPDATE:

  • The mainboard is an Asus P5P43TD PRO with latest BIOS firmware (version 0710) from Asus website.
  • The 4 TB disk is a Hitachi HDS724040ALE640 and is recognized in the BIOS as 4 TB.
  • The disk is labeled with a GPT partition table.
  • The filesystem on /boot is ext3, with XFS on all other partitions/LVs.

Goodbye Google, hello Diaspora!

Well, at least in Germany there was a lot of press coverage about Google changing its data protection policies and merging the user profiles of its services like Youtube, GMail, Google+ and so on. There were even HowTos for deleting some tracking-relevant data and history before March 1st, like the one in the German news magazine Spiegel Online. Because I'm no big fan of Google and am a strong privacy advocate, I tried to follow the steps mentioned there and got some surprises.

First of all, I discovered in Google's Dashboard that Google had already joined my private account and email address with the one I used at work when I was testing some Android phones. The mail address from work was considered the primary address, and I couldn't change this. I could delete my private mail address from that account, but not the primary address from work.

So, time was pressing, because the 1st of March was coming near and only one hour was left. But what to do now? I've already been using plugins like Ghostery and AdBlockPlus for some time to minimize the chance of being tracked. I had registered with Google+ to have a look at it and to reserve my name and account there, but for privacy reasons I was not actively using it, nor do I use Facebook. Therefore the decision was simple and easy: deleting my Google account was probably the best idea I had that day. It was easy and a quick win for my privacy concerns!

On the other hand, having some sort of social network can be nice. And I'm a fan of decentralized solutions like Jabber, which I prefer over AIM/ICQ. I've been running my own Jabber server for some years now, so it was a natural thought to take the same step for a social network as well.

Diaspora* started as a distributed social network in 2010, after some students listened to a speech by Eben Moglen about "Freedom in the Cloud", and it is currently still in the alpha stage of development. But it works, and it runs on free software. That alone is a fairly good reason to prefer, support and use Diaspora* over proprietary social networks like G+ or Facebook, isn't it? So give it a try!

Everyone can run a Diaspora* node, called a "pod", on their own server. There are some good installation guides available in the Github.com wiki, even covering installation on Debian! Although this installation HowTo is quite good, there are some pitfalls left. For example, you'll need a proper SSL cert from a CA that is widely installed on all systems. CAcert seems not to be supported, and self-signed certs and CAs don't work either. Without a good SSL cert you won't be able to interconnect with other Diaspora* pods. Another pitfall on Debian seems to be the installation of Ruby. First I used the Debian Ruby packages, but got some errors when starting the server, telling me some CSS files couldn't be found. After using the RVM installation mentioned in the installation guide these problems were solved (please note that the described way of using RVM didn't work for me either, and I got help from some really helpful people on #diaspora-de@freenode).
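
A quick way to check whether a pod's certificate chain verifies against the system CA store is standard openssl usage (the hostname is a placeholder):

openssl s_client -connect pod.example.org:443 -CApath /etc/ssl/certs </dev/null
# look for "Verify return code: 0 (ok)" at the end of the output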

But anyway, I managed to get Diaspora* up and running on my own server and to interconnect it with other pods. Although installing Diaspora* from source is currently a little painful compared with the ease and comfort of pre-built Debian packages, it's worth the effort! Everyone who considers Google evil and Facebook bad should consider switching to Diaspora*! The more people join, the better the social network will get! Help fight the AOL-ism of the Internet by using open, non-proprietary APIs and software like Jabber and Diaspora*!

You can find my Diaspora* pod at: http://nerdwind.de/ where my account is "ij".


Roundcube doesn't work anymore because of suhosin

Well, yesterday, out of nowhere, my webmailer Roundcube started to refuse to work. At least that's how I remember it. For some reason, reloading the Inbox just showed the "Loading..." message on the screen, but no list of mails anymore. Funnily enough, other folders still work as before. Anyway, doing an update didn't help or improve anything. (I really don't know whether I updated before or after the first occurrence of this issue.)

There's an entry in syslog when loading the Inbox folder: 

Oct 26 07:24:59 muaddib suhosin[32432]: ALERT - Include filename ('http://www.gnu.org/s/hello/manual/automake/ ?.php') is an URL that is not allowed (attacker '127.0.0.1', file '/usr/share/roundcube/program/include/iniset.php', line 110

This led to bug #1488086 in the Roundcube issue tracker, which states: 

This message made me wonder why suhosin thinks there's an include going on. Line 111 of iniset.php shows:

include_once("$filename.php");

It seems like roundcube wants to include what is displayed in the subject, which happens to be a url - and suhosin legitimately blocks this attempt.

In short, I can send an email to a user on a suhosin protected mail server and make his inbox unavailable. Needless to say, the user cannot delete this email himself via RoundCube. In my case, I had to delete the email file on the server to make roundcube show the inbox again.

In Debian there's bug #619411, which is related to the PATH setting in iniset.php, but I'm not sure whether it is really related to #1488086 in the Roundcube issue tracker and to my problem. However, disabling suhosin doesn't seem to be the right way to "solve" this issue, and the trac issue tracker suggests a security-related problem.

Anyway, I filed this in Debian as bug #646675. But if someone else knows a quick fix or something I can try, please speak up! :-) 
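
Until a proper fix is available, a crude server-side workaround is to locate and delete the triggering message by hand, since its subject contains the URL from the syslog entry above (a sketch; the Maildir path is an assumption about your mail setup):

# Maildir layout assumed; adjust the path to your setup
grep -rl 'http://www.gnu.org/s/hello/manual/automake/' ~/Maildir/cur/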

UPDATE: It seems some mail triggered this issue, as reported in the Roundcube ticket. After filtering my mails with Icedove, I'm able to read my Inbox again.


Upgrading m68k from etch-m68k to unstable

After being dropped from Debian, the m68k port has been stalled for some time now. There was no real upgrade path, so my machines are still running etch-m68k. Thanks to Thorsten Glaser the port is slowly catching up, with NPTL now ported to the kernel and glibc for m68k. He took care of porting and compiling lots of packages that are needed for upgrading from etch-m68k. Big thanks to Thorsten for that!

Anyway, I'm in the process of upgrading my m68k machines and buildds with the help and tips from Thorsten, and this is how I'm doing it: 

  1. Change your /etc/apt/sources.list to include this:

    deb http://ftp.debian-ports.org/debian/ unstable main contrib
    deb http://archive.debian.org/debian etch-m68k main contrib non-free
  2. Get libuuid-perl from snapshot.debian.org:

    wget http://snapshot.debian.org/archive/debian/20070128T000000Z/pool/main/libu/libuuid-perl/libuuid-perl_0.02-1_m68k.deb
    dpkg -i libuuid-perl_0.02-1_m68k.deb

     
  3. Get kernel & linux-base from unstable
    You need to install a recent kernel like linux-image-2.6.39-2-amiga in my case. Either download it by hand or use apt: 

    apt-get -d install linux-image-2.6.39-2-amiga linux-base
    cd /var/cache/apt/archives
    dpkg --force-depends -i linux-image-2.6.39-2-amiga_2.6.39-3_m68k.deb linux-base_3.3_all.deb

     
  4. If needed, remove linux-base's postinst when you get this kind of error:

    syntax error at /var/lib/dpkg/info/linux-base.postinst line 1275, near "# UUIDs under /dev"
    Can't use global $_ in "my" at /var/lib/dpkg/info/linux-base.postinst line 1289, near "{$_"

    rm /var/lib/dpkg/info/linux-base.postinst
    dpkg --configure --pending --force-depends

  5. If everything installed fine, you should be ready to boot into your new kernel.
    On Amigas you most likely need to edit your boot script and copy your kernel and System.map to your AmigaOS partition. This is what my boot line looks like: 

    amiboot-5.6 -k vmlinux-2.6.39 "debug=mem root=/dev/sda4 video=pal-lace devtmpfs.mount=1"


    You can omit debug=mem; it's only there in case the kernel crashes, so that you can collect the dmesg output under AmigaOS with the dmesg tool. The other parameter, devtmpfs.mount=1, is needed because we don't want udev. Using video=pal-lace is necessary because the 2.6.39 kernel crashes while initializing my PicassoII graphics card, so I've unplugged the card until that problem is solved.
     
  6. Kernel 2.6.39 runs fine, but you can't ssh into the machine.
    Because we don't want udevd, there's now a problem when trying to log in via SSH:

    pty allocation request failed on channel 0
    stdin is not a tty


    You can fix this by installing udev, which most websites recommend when you look up this error (it's the recommended solution on Xen), but as we are on m68k and not under Xen, it's better to run with a static /dev. So you need to create /dev/pts and add the following line to your /etc/fstab:

    mkdir /dev/pts

    devpts          /dev/pts        devpts  rw,noexec,nosuid,gid=5,mode=620 0 0
     
  7. After the kernel boots into 2.6.39 you can dist-upgrade to unstable.
    When you have successfully booted into your new kernel, you should be safe to proceed with upgrading to unstable. For that, first let the missing or broken dependencies of linux-image and linux-base be installed:

    apt-get -f install


    This should install some dependencies that were missing because of the dpkg --force-depends usage above. After that I upgraded dpkg, apt and apt-utils:

    apt-get install dpkg apt apt-utils


    When this has succeeded, you should be safe to fully dist-upgrade to unstable:

    apt-get -u dist-upgrade

    When you get errors during apt-get dist-upgrade, you might run dpkg --configure --pending or apt-get -f install before proceeding with apt-get -u dist-upgrade. Another problem can occur with apt. When you see this error: 

    E: Could not perform immediate configuration on 'perl-modules'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)

    you should add "-o APT::Immediate-Configure=false" to your apt-get command, for example:

    apt-get -o APT::Immediate-Configure=false -f install

    Another pitfall might be exim4-daemon-heavy, which currently segfaults. Replace it with exim4-daemon-light in that case, which works.

As stated above, my PicassoII in my A3000 doesn't seem to work under 2.6.39, whereas the PicassoIV in my A4000T does not crash the kernel.

Please don't hesitate to add additions, corrections or other kinds of feedback by commenting below!

P.S.:
Wouter and Thorsten are currently at DebConf in Banja Luka, working on the m68k port. Wouter has just finished a first version of a new debian-installer image and asks for it to be tested on real hardware. Please volunteer if you can! It's available at: http://people.debian.org/~wouter/di-m68k/


Apache and SNI - problems with some clients

Never change a running system. An old but true saying, but sometimes there's no other choice. Until a few days ago I was happy with my SSL vhosts running on a single SSL certificate. Then I needed to add another SSL certificate for another site with several subdomains like svn.site-A.de, trac.site-A.de and www.site-A.de. With the Apache2 shipped in Squeeze it's possible to make use of the Server Name Indication (SNI) mechanism in order to use multiple SSL certs with name-based vhosts on a single IP.

Well, it works for some client software, but apparently it does not work well with KOrganizer or the Firefox Sync plugin, nor with Cyberduck on OS X. Here's an example config: 

SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile  /etc/apache2/ssl/site-A-cert.pem
SSLCertificateKeyFile  /etc/apache2/ssl/site-A-key.pem
SSLOptions StrictRequire
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
SSLVerifyClient none
SSLProxyEngine off

This part is identical for all SSL vhosts on my system. The funny thing is that it works for two sites (site A and site B) while it doesn't work for site C. In the Firefox Sync plugin I get an error that the connection couldn't be established, while Cyberduck (a WebDAV client for OS X) shows a dialog stating that I got the cert of site A on site C. Pointing a browser at the appropriate URL, I get the correct cert for site C.
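
For comparison, this is the general shape of an SNI setup (a sketch with placeholder hostnames; the directives are standard Apache 2.2 as shipped in Squeeze):

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.site-A.de
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/site-A-cert.pem
    SSLCertificateKeyFile /etc/apache2/ssl/site-A-key.pem
</VirtualHost>

<VirtualHost *:443>
    ServerName www.site-C.de
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/site-C-cert.pem
    SSLCertificateKeyFile /etc/apache2/ssl/site-C-key.pem
</VirtualHost>

One thing worth noting: a client that doesn't send the SNI extension always gets the certificate of the first matching SSL vhost, which would explain seeing site A's cert on site C with some clients.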

Is there anything I'm missing in my SNI setup with Apache?


Updated: Automatically restore files from lost+found

Today, in an IRC channel near you, the discussion turned to somehow recovering files from lost+found. Two years ago I wrote some scripts to do exactly that, so this is some sort of repost. There are two scripts: one generates a kind of ls-LR file holding all the information needed by the second script, which restores the files in lost+found to their original folders. Here is the information from the original blog post: 

make-lsLR.sh - call this regularly (e.g. via cron) to create the needed files, which are stored in /root/. Of course you can easily change that location and exclude other directories from being scanned.

check_lost+found.py - the second script is to be run when your fsck has managed to mess up your files and stored them in the lost+found directory. It takes three arguments: 1) the source directory where your messed-up lost+found directory is, 2) the target directory to which the data will be saved, and 3) a switch to actually make it happen instead of doing a dry run.

You can find both files as attachments at the end of this blog post.

I've chosen to copy the files to a different place instead of moving them within the same filesystem to their original place, for safety reasons. The primary goal is to retrieve the files from lost+found, not to replace a full-featured backup and restore application. Because of this, the script doesn't handle hardlinks or symlinks correctly. It just copies files.

Of course there's still room for improvement, like handling hard-/symlinks correctly or using inode numbers instead of md5sums to move data back to its prior location. But it works for me[tm] well enough this way, so I'm satisfied so far. You're welcome, though, to improve this piece of ugliness if you like.
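
The underlying idea is simple enough to sketch in a few lines of shell (this shows the approach only, not the attached scripts; paths are placeholders):

# Regularly record an md5 -> path index of the filesystem:
find / -xdev -type f -print0 | xargs -0 md5sum > /root/md5-index.txt

# After fsck has dumped files into lost+found, match them by checksum
# (echo keeps this a dry run, like the script's default mode):
cd /mnt/broken/lost+found
md5sum \#* | while read sum name; do
    orig=$(grep -m1 "^$sum " /root/md5-index.txt | cut -c35-)
    [ -n "$orig" ] && echo cp -a "$name" "/mnt/restore$orig"
done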

Maybe someone else finds this useful as well. Use it at your own risk, of course. :)

Attachments:
make-lsLR.sh.txt (2.23 KB)
check_lostfound.py.txt (2.9 KB)
