Grub2 fails booting from second VG

I just got a new hard disk, because my old RAID of 3x 1 TB disks is filling up and I'm running out of space. In the end I'll have replaced my old disks with 3x 4 TB disks. Currently the setup looks like this: 

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x38d543a1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      514079      257008+  83  Linux
/dev/sda2          514080     4707044     2096482+  82  Linux swap / Solaris
/dev/sda3         4707045    13092974     4192965   83  Linux
/dev/sda4        13092975  1953520064   970213545    5  Extended
/dev/sda5        13093038    15197489     1052226   83  Linux
/dev/sda6        15197553  1953520064   969161256   83  Linux

The same applies to the other two disks, of course. So, I have (for x=[abc]): 

  • sdx1: RAID1 for /boot
  • sdx2: RAID1 for swap
  • sdx3: RAID1 for a rescue system
  • sdx5: RAID1 for /
  • sdx6: RAID5 for LVM for /usr, /var, /home & everything else

For the new disks I want a simpler layout, maybe like this:

  • sdx1: RAID1 for /boot
  • sdx2: RAID1 for swap
  • sdx3: RAID5 for LVM for /, /usr, /var, /home & everything else
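
For reference, setting up that layout would look roughly like this (only a sketch; device names, md numbers and the LV size are assumptions, not my actual commands):

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2   # swap
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3   # LVM
pvcreate /dev/md2
vgcreate lv /dev/md2
lvcreate -L 20G -n root lv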

If it works, I would even be fine without a separate /boot partition, but there's another problem: the old disk set has a Volume Group named "vg", while the new disk has a Volume Group named "lv": 

PV         VG   Fmt  Attr PSize PFree 
/dev/md4   vg   lvm2 a--  1.81t 129.53g
/dev/md6   lv   lvm2 a--  3.64t   3.58t

I created some Logical Volumes on "lv", copied over my system, ran update-grub and grub-install /dev/sdX, and rebooted. In the grub menu I can select my desired new root partition, but when I try to boot from it, grub is not able to find the root device. The config lines within grub look like this (from memory): 

root='(lv-root)'
search --no-floppy  --fs-uuid --set-root=0d54bd89-6628-499f-b835-e146a6fd895f

The UUID within grub matches the UUID of /dev/lv/root, but grub states that it can't find the disk. The needed modules for LVM and RAID are loaded, so I assume a problem with multiple Volume Groups, because a simple "ls" from within grub only shows Logical Volumes of Volume Group "vg", but not a single one of the second Volume Group "lv".

Is there a limitation in Grub2 for the number of supported Volume Groups? Does it only find the first created Volume Group? Is there a way to work around this limitation or is it a bug?
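
For what it's worth, this is roughly how one can check from the GRUB command line (press "c" in the menu) what GRUB actually sees; just a sketch, with the generic module names for GPT, md RAID and LVM:

insmod part_gpt
insmod mdraid1x
insmod lvm
ls
ls (lv-root)/

If (lv-root) doesn't show up in the ls output at all, GRUB's LVM scan is simply not picking up the second Volume Group, which would confirm the suspicion above.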

UPDATE:

  • The mainboard is an Asus P5P43TD PRO with the latest BIOS firmware (version 0710) from the Asus website.
  • The 4 TB disk is a Hitachi HDS724040ALE640, and it is recognized in the BIOS as 4 TB.
  • The disk is labeled with a GPT partition table.
  • The filesystem on /boot is ext3; XFS is used on all other partitions/LVs.

Goodbye Google, hello Diaspora!

Well, at least in Germany there was a lot of press coverage about Google changing its data protection policies and merging the user profiles of its services like YouTube, GMail, Google+ and so on. There were even HowTos for deleting some tracking-relevant data and history before March 1st, like the one in the German news magazine Spiegel Online. Because I'm no big fan of Google and a strong privacy advocate, I tried to follow the steps mentioned there and got some surprises.

First of all, I discovered in Google's Dashboard that Google had already joined my private account/email address with the one I used at work when I was testing some Android phones. The mail address from work was considered the primary address and I couldn't change this. I could delete my private mail address from that account, but not the primary address from work.

So, time was pressing because March 1st was coming near, only one hour left, but what to do now? I've already been using plugins like Ghostery and AdBlock Plus for some time to minimize the chance of being tracked. I had registered with Google+ to have a look at it and to reserve my name and account there, but for privacy reasons I was not actively using it, nor do I use Facebook. Therefore the decision was simple and easy: deleting my Google account was probably the best idea I had that day. It was easy and a quick win for my privacy concerns!

On the other hand, having some sort of social network can be nice. And I'm a fan of decentralized solutions like Jabber, which I prefer over AIM/ICQ. I've been running my own Jabber server for some years now, so it was a natural thought for me to make this step for a social network as well.

Diaspora* started as a distributed social network in 2010, after some students listened to a speech by Eben Moglen about "Freedom in the Cloud", and is currently still in the alpha stage of development. But it is working, and it runs on free software. That alone is a fairly good reason to prefer, support and use Diaspora* over proprietary social networks like G+ or Facebook, isn't it? So, give it a try!

Everyone can run a Diaspora* node, called a "pod", on their own server. There are some good installation guides available in the GitHub.com wiki, even covering installation on Debian! Although this installation HowTo is quite good, there are some pitfalls left. For example, you'll need a good SSL cert from a CA that is widely installed on all systems. CAcert seems not to be supported, and self-signed certs and CAs don't work either. Without a good SSL cert you won't be able to interconnect with other Diaspora* pods. Another pitfall on Debian seems to be the installation of Ruby. First I used the Debian Ruby packages, but got errors when starting the server about CSS files that couldn't be found. After using the RVM installation mentioned in the installation guide these problems were solved (please note that the described way of using RVM didn't work for me either, and I got help from some really helpful people in #diaspora-de on Freenode).
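
If you're unsure whether your certificate chain is one that other systems will accept, a quick check against your pod from another machine could look like this (the host name is just a placeholder):

openssl s_client -connect pod.example.org:443 -showcerts

If the chain doesn't verify against a commonly installed root CA, other pods will most likely refuse to federate with yours.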

But anyway, I managed to get Diaspora* up and running on my own server and to interconnect it with other pods. Although installing Diaspora* from source is currently a little painful compared to the ease and comfort of pre-built Debian packages, it's worth the effort! Everyone who considers Google evil and Facebook bad should consider switching to Diaspora*! The more people join, the better the social network will get! Help fight the AOL-ism of the Internet by using open and non-proprietary APIs and software like Jabber and Diaspora*!

You can find my Diaspora* pod at http://nerdwind.de/, where my account is "ij".


Roundcube doesn't work anymore because of suhosin

Well, yesterday, out of nothing, my webmailer Roundcube started to refuse to work. At least as far as I remember. For some reason reloading the Inbox just showed the "Loading..." message on the screen, but there was no list of mails anymore. Funnily enough, other folders still work as before. Anyway, doing an update did not help or improve anything. (I really don't remember whether I updated before or after the first occurrence of this issue.)

There's an entry in syslog when loading the Inbox folder: 

Oct 26 07:24:59 muaddib suhosin[32432]: ALERT - Include filename ('http://www.gnu.org/s/hello/manual/automake/ ?.php') is an URL that is not allowed (attacker '127.0.0.1', file '/usr/share/roundcube/program/include/iniset.php', line 110

This led to bug #1488086 in the Roundcube issue tracker, which states: 

This message made me wonder why suhosin thinks there's an include going on. Line 111 of iniset.php shows:

include_once("$filename.php");

It seems like roundcube wants to include what is displayed in the subject, which happens to be a url - and suhosin legitimately blocks this attempt.

In short, I can send an email to a user on a suhosin protected mail server and make his inbox unavailable. Needless to say, the user cannot delete this email himself via RoundCube. In my case, I had to delete the email file on the server to make roundcube show the inbox again.
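
Until there's a proper fix, the workaround from the ticket is to delete the offending mail directly on the server. With a Maildir setup that might look roughly like this (the mail location and the search string are assumptions based on the log entry above):

grep -rl 'http://www.gnu.org/s/hello/manual/automake/' ~username/Maildir/cur/
# then move the matching file out of the Maildir and reload Roundcube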

In Debian there's bug #619411, which is related to the PATH setting in iniset.php, but I'm not sure whether this is really related to #1488086 in the Roundcube issue tracker and to my problem. However, disabling suhosin doesn't seem like the right way to "solve" this issue, and the Trac issue tracker suggests a security-related problem.

Anyway, I filed this in Debian, where it is now bug #646675. If someone knows a quick fix or something else I can try, please speak up! :-) 

UPDATE: It seems as if some mail triggered this issue, as reported in the Roundcube ticket. After filtering my mails with Iceweasel, I'm able to read my Inbox again now.


Upgrading m68k from etch-m68k to unstable

After being dropped from Debian, the m68k port was stalled for some time. There was no real upgrade path, so my machines were still running etch-m68k. Thanks to Thorsten Glaser the port is slowly catching up, with NPTL now ported to the kernel and glibc for m68k. He took care of porting and compiling a lot of packages that are needed for upgrading from etch-m68k. Big thanks to Thorsten for that!

Anyway, I'm in the process of upgrading my m68k machines and buildds with help and tips from Thorsten, and this is how I'm doing it: 

  1. Change your /etc/apt/sources.list to include this:

    deb http://ftp.debian-ports.org/debian/ unstable main contrib
    deb http://archive.debian.org/debian etch-m68k main contrib non-free
  2. Get libuuid-perl from snapshot.debian.org:

    wget http://snapshot.debian.org/archive/debian/20070128T000000Z/pool/main/libu/libuuid-perl/libuuid-perl_0.02-1_m68k.deb
    dpkg -i libuuid-perl_0.02-1_m68k.deb

     
  3. Get kernel & linux-base from unstable
    You need to install a recent kernel, linux-image-2.6.39-2-amiga in my case. Either download it by hand or use apt: 

    apt-get -d install linux-image-2.6.39-2-amiga linux-base
    cd /var/cache/apt/archives
    dpkg --force-depends -i linux-image-2.6.39-2-amiga_2.6.39-3_m68k.deb linux-base_3.3_all.deb

     
  4. If needed, remove linux-base's postinst when you get this kind of error:

    syntax error at /var/lib/dpkg/info/linux-base.postinst line 1275, near "# UUIDs under /dev"
    Can't use global $_ in "my" at /var/lib/dpkg/info/linux-base.postinst line 1289, near "{$_"

    Then remove the postinst and let dpkg finish the pending configuration:

    rm /var/lib/dpkg/info/linux-base.postinst
    dpkg --configure --pending --force-depends

  5. If everything installed fine, you should be ready to boot into your new kernel.
    On Amigas you most likely need to edit your boot script and copy your kernel and System.map to your AmigaOS partition. This is what my boot command looks like: 

    amiboot-5.6 -k vmlinux-2.6.39 "debug=mem root=/dev/sda4 video=pal-lace devtmpfs.mount=1"


    You can omit debug=mem; it is only there in case the kernel crashes, so that you can collect the dmesg output under AmigaOS with the dmesg tool afterwards. The other parameter, devtmpfs.mount=1, is needed because we don't want udev. Using video=pal-lace is necessary because the 2.6.39 kernel crashes when initializing my PicassoII graphics card, and I've unplugged the card until that problem is solved.
     
  6. Kernel 2.6.39 runs fine, but you can't SSH into the machine.
    Because we don't want udevd, there's now a problem when trying to log in via SSH:

    pty allocation request failed on channel 0
    stdin is not a tty


    You can fix this by installing udev, which most websites recommend when you search for this error, because that is the recommended solution on Xen. But as we are on m68k and not under Xen, it's better to run with a static /dev. So create /dev/pts and add the following line to your /etc/fstab:

    mkdir /dev/pts

    devpts          /dev/pts        devpts  rw,noexec,nosuid,gid=5,mode=620 0 0
     
  7. After the kernel boots into 2.6.39, you can dist-upgrade to unstable.
    When you have successfully booted into your new kernel, you should be safe to proceed with upgrading to unstable. First, let the missing or broken dependencies of linux-image and linux-base be installed:

    apt-get -f install


    This should install some dependencies that were missing because we used dpkg --force-depends above. After that I upgraded dpkg, apt and apt-utils:

    apt-get install dpkg apt apt-utils


    When this succeeded, you should be safe to fully dist-upgrade to unstable:

    apt-get -u dist-upgrade

    If you get errors during apt-get dist-upgrade, you might need to run dpkg --configure --pending or apt-get -f install before proceeding with apt-get -u dist-upgrade. Another problem can occur with apt. When you see this error: 

    E: Could not perform immediate configuration on 'perl-modules'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)

    you should add "-o APT::Immediate-Configure=false" to your apt-get command, for example:

    apt-get -o APT::Immediate-Configure=false -f install

    Another pitfall might be exim4-daemon-heavy, which currently segfaults. In that case replace it with exim4-daemon-light, which works; see the command below.
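
    Swapping the daemon should be as simple as this (installing the light daemon removes the conflicting heavy one):

    apt-get install exim4-daemon-light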

As stated above, my PicassoII in my A3000 doesn't seem to work under 2.6.39, whereas the PicassoIV in my A4000T does not crash the kernel.

Please don't hesitate to add additions, corrections or other kind of feedback by commenting below!

P.S.:
Wouter and Thorsten are currently at DebConf in Banja Luka, working on the m68k port. Wouter just finished a first version of a new debian-installer image and asks for it to be tested on real hardware. Please volunteer if you can! It's available at: http://people.debian.org/~wouter/di-m68k/


Apache and SNI - problems with some clients

Never change a running system. An old but true saying, but sometimes there's no other choice. Until a few days ago I was happy with SSL vhosts running with a single SSL certificate. Then I needed to add another SSL certificate for another site with several subdomains like svn.site-A.de, trac.site-A.de and www.site-A.de. With Apache2 running on Squeeze it's possible to use the Server Name Indication (SNI) mechanism to serve multiple SSL certs on a single IP-based vhost setup.

Well, it works for some client software, but apparently it does not work well with KOrganizer, the Firefox Sync plugin, or Cyberduck on OS X. Here's an example config: 

SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile  /etc/apache2/ssl/site-A-cert.pem
SSLCertificateKeyFile  /etc/apache2/ssl/site-A-key.pem
SSLOptions StrictRequire
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
SSLVerifyClient none
SSLProxyEngine off

This is identical for all SSL vhosts on my system. The odd thing is that it works for two sites (site A and site B) while it doesn't work for site C. In the Firefox Sync plugin I get an error that the connection couldn't be established, while in Cyberduck (a WebDAV client for OS X) I get a dialog stating that I'm getting the cert for site A when connecting to site C. Pointing the browser at the appropriate URL, I get the correct cert for site C on site C.
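
For comparison, a minimal SNI vhost skeleton on Squeeze's Apache 2.2 looks roughly like this (host names and cert paths are placeholders; the per-vhost ServerName/ServerAlias values are what SNI matches against):

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.site-A.de
    ServerAlias svn.site-A.de trac.site-A.de
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/site-A-cert.pem
    SSLCertificateKeyFile /etc/apache2/ssl/site-A-key.pem
</VirtualHost>

<VirtualHost *:443>
    ServerName www.site-C.de
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/site-C-cert.pem
    SSLCertificateKeyFile /etc/apache2/ssl/site-C-key.pem
</VirtualHost>

Clients that don't send SNI (or send an unexpected host name) simply get the certificate of the first matching vhost, which would explain seeing the site-A cert when connecting to site C.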

Is there anything I'm missing in my SNI setup in Apache?


Updated: Automatically restore files from lost+found

Today, in an IRC channel near you, the discussion turned to recovering files from lost+found. Two years ago I wrote some scripts to automatically recover files from lost+found, so this is some sort of repost. There are two scripts: one generates a kind of ls-lR file with all the information needed by the second script, which restores the files in lost+found to their original folders. Here is the information from the original blog post: 

make-lsLR.sh - call this regularly (cron) to create the needed files that are stored in /root/. Of course you can alter the location easily and exclude other directories from being scanned.

check_lost+found.py - The second script is to be run when fsck has messed up your files and stored them in the lost+found directory. It takes 3 arguments: 1) the source directory where your messed-up lost+found directory is, 2) the target directory to which the data will be saved, and 3) a switch to actually make it happen instead of doing a dry run.

You can find both files as attachments at the end of this blog post.

For safety reasons I've chosen to copy the files to a different place instead of moving them within the same filesystem to their original location. The primary goal is to retrieve the files from lost+found, not to replace a full-featured backup and restore application. Because of this the script doesn't handle hardlinks or symlinks correctly. It just copies files.

Of course there's still room for improvement, like handling hard-/symlinks correctly or using inode numbers instead of md5sums to move data back to its prior location. But it works for me[tm] well enough this way, so I'm satisfied so far. You're welcome, though, to improve this piece of ugliness if you like.
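
Just to illustrate the md5sum approach, here is a sketch of the idea (not the attached scripts themselves; all paths are assumptions):

# beforehand (cron): record a checksum index of all files
find / -xdev -type f -exec md5sum {} + > /root/lsLR.md5

# after fsck: look up each regular file in lost+found by its checksum
for f in /mnt/lost+found/*; do
    [ -f "$f" ] || continue
    sum=$(md5sum "$f" | cut -d' ' -f1)
    orig=$(grep -m1 "^$sum " /root/lsLR.md5 | cut -c35-)
    [ -n "$orig" ] || continue
    mkdir -p "/mnt/restore$(dirname "$orig")"
    cp -a "$f" "/mnt/restore$orig"
done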

Maybe someone else finds this useful as well. Use it at your own risk, of course. :)

Attachments:
make-lsLR.sh.txt (2.23 KB)
check_lostfound.py.txt (2.9 KB)

KDE - Login Problems with kdm on Unstable

Some days ago I upgraded my Sid system, and when I restarted my X session the other day, I wasn't able to log in to KDE via kdm anymore. I'm getting these errors in ~/.xsession-errors: 

kdeinit4: preparing to launch /usr/bin/knotify4
Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)knotify(16474) KNotify::event: 1  ref= 0
QMetaObject::invokeMethod: No such method KUniqueApplication::loadCommandLineOptionsForNewInstance()kdeinit4: preparing to launch /usr/bin/plasma-desktop
kded4: Fatal IO error: client killedkdeinit4: Fatal IO error: client killed
kdeinit4: sending SIGHUP to children.
klauncher: Exiting on signal 1

At the user/password prompt of kdm I can log in, the KDE splash screen appears and then, suddenly, the connection fails and I'm back at the kdm login again.

I tried to look for existing bug reports, but KDE is quite large, with many programs. Are there any pointers to a bug report or even a solution/fix for the problem, dear LazyWeb?

UPDATE 21:51:
Sune suggests in the comments that it might be an Xorg problem. I've attached an xorg.txt logfile to this post. As you can see, there's a backtrace because of a sig11 error. Funnily enough, when I connect via remote X from my OS X machine I can log in to KDE, although there are visual errors. Xorg is working fine on the local server with GNOME, though. So, to me it seems related to either KDE or maybe Compiz.

UPDATE 2011-03-30:
Problem solved! LL hinted in the comments to reinstall the Nvidia driver and now it works again! Thanks! :-)

Attachment:
xorg.txt (18.14 KB)

Changing/Renewing GPG key procedure?

I know I'm a little late with this, but I want to renew my GPG key and change it from DSA to RSA. The length of my ElGamal key is 1024 bits, which is not that good by today's standards. When searching on Planet Debian, I found a few HowTos, especially the one by Ana Guerrero.

Are there any other tips or caveats besides those mentioned in her blog? Is an RSA key with a length of 4096 bits state of the art at the moment? Is it acceptable to send the new key, signed (and maybe encrypted), to all those who already signed my old key, in order to get the new key signed?
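
For reference, the basic commands involved would look roughly like this (the key IDs are placeholders):

gpg --gen-key                                      # choose "RSA and RSA", 4096 bits
gpg --default-key OLDKEYID --sign-key NEWKEYID     # sign the new key with the old one
gpg --send-keys NEWKEYID                           # publish the new key to the keyservers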

Comments are welcome, dear Lazy Web!


Happy New Year - Frohes Neues Jahr 2011

Happy New Year to all my readers! Frohes Neues Jahr allen meinen Lesern!

 

WebDAV as webdrive on OSX

I've been using WebDAV on Lenny and on Squeeze for some time now for syncing my bookmarks and calendars, which works just fine. But now I want to extend my WebDAV setup in order to use it as external storage. The only problem is: it doesn't work on OS X! D'oh!

Basically I followed several HowTos on the net and ended up with this configuration so far: 

     DavLockDB /path/to/DAVLockDB/DAVLockDB
     <Directory /path/to/webdav/>
        DAV On
        AuthType Digest
        AuthName "realm"
        AuthUserFile /path/to/.htdigest
        Require valid-user
        Options +Indexes
        AllowOverride None
        Order allow,deny
        Allow from all
        <LimitExcept GET PUT POST OPTIONS DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                RewriteEngine Off
        </LimitExcept>
     </Directory>

I can connect to and browse the WebDAV directory, but I can't upload new files - neither with Finder on OS X nor with cp on the command line. When using Finder I get the following error: 

For non-Germans: it says that the process couldn't be completed because the object is still in use.

Error -36 seems to be a generic I/O error in OS X, and you'll find many hits when you search for it in your favorite search engine. The Apache logs contain lots of these lines: 

==> /var/log/apache2/domain-ssl-error.log <==
[Thu Dec 09 21:23:35 2010] [error] [client 2001:6f8:90e:900::2] client denied by server configuration: /var/www/net/domain/path/to/Files/._Cam-EG_20101113015900_MD 3.avi

==> /var/log/apache2/domain-ssl-access.log <==
[09/Dec/2010:21:23:35 +0100] rostock.ip6.windfluechter.net 2001:6f8:90e:900::2 - "GET /path/to/Files/._Cam-EG_20101113015900_MD%203.avi HTTP/1.1" 403 250 "-" "WebDAVFS/1.8.1 (01818000) Darwin/10.5.0 (i386)"

When copying some files with cp on OS X's command line I get these kinds of errors: 

$ cp -r Desktop/AIDAluna_KameraArchiv_Geiranger /Volumes/ij/Files/
cp: /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD: Operation not permitted
cp: Desktop/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD: unable to copy extended attributes to /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD: Operation not permitted
cp: /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD/.DS_Store: No such file or directory
cp: /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE/AVCHD/AVCHDTN: No such file or directory

Funnily enough, directories were created and some files were copied, although OS X complains about "Operation not permitted": 

$ du -sch  /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/*
 10M    /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/DCIM
2,5K    /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/MISC
2,0K    /Volumes/ij/Files/AIDAluna_KameraArchiv_Geiranger/PRIVATE
 10M    total

Of course the directory on the webserver has sufficient permissions, and copying files to it works just fine from Windows as well as from Debian Sid. But anyway, is there something I'm missing in the WebDAV configuration, or can I do something in OS X to make it work? Using a third-party application on OS X is something I would like to avoid, but if nothing else helps, I'm open to suggestions.
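
Judging from the 403s for the "._*" AppleDouble files in the log above, one thing worth checking is whether some other part of the configuration denies dot-files. If so, an exception inside the DAV directory might look like this (purely a sketch under that assumption, not a confirmed fix):

     <Directory /path/to/webdav/>
        # allow the ._* and .DS_Store helper files the OS X WebDAV client creates
        <FilesMatch "^\.(_.*|DS_Store)$">
            Order allow,deny
            Allow from all
        </FilesMatch>
     </Directory>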

