

LVM on RAID5 broken - how to fix?

Some time ago one of my disks in my software RAID5 failed. No big problem, as I had two spare A08U-C2412 available to replace that single 1 TB SATA disk. I can't remember the details, but something went wrong and I ended up with a non-booting system. I think I tried to add the HW RAID as a physical volume to the LVM, with the idea of migrating the SW RAID to the HW RAID, or doing mirroring, or some such. Anyway: I booted into my rescue system, which lives on a RAID1 partition on those disks, but LVM didn't come up anymore, because the SW RAID5 wasn't recognized during boot. So I re-created the md device and discovered that my PV was gone as well. =:-0

No big deal, I thought, because I have a backup of that machine on another host. I restored /etc/lvm and, after re-creating the PV with pvcreate, tried a vgcfgrestore. At first I didn't use the old UUID, so vgcfgrestore complained. After creating the PV with the proper UUID, LVM recognized the PV, VG and LVs. Unfortunately I can't mount any of the LVs. Something seems to be broken:
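For reference, the restore sequence looks roughly like this - a sketch, not a transcript: the volume group name "vg" is taken from the lvs output below, while the UUID shown is a placeholder that must be replaced with the real one from the pv0 section of the LVM backup file:

```shell
# Placeholder UUID - the real one is the "id" field of pv0 in /etc/lvm/backup/vg.
pvcreate --uuid "XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX" \
         --restorefile /etc/lvm/backup/vg /dev/md3
vgcfgrestore -f /etc/lvm/backup/vg vg    # restore the VG metadata onto the new PV
vgchange -ay vg                          # activate the LVs again
```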

hahn-rescue:~# mount /dev/vg/sys /mnt
mount: you must specify the filesystem type

Feb 19 07:50:02 hahn-rescue kernel: [748288.740949] XFS (dm-0): bad magic number
Feb 19 07:50:02 hahn-rescue kernel: [748288.741009] XFS (dm-0): SB validate failed

Running a gpart scan on my SW RAID5 gave me some results:

hahn-rescue:~# gpart /dev/md3

Begin scan...
Possible partition(SGI XFS filesystem), size(20470mb), offset(5120mb)
Possible partition(SGI XFS filesystem), size(51175mb), offset(28160mb)
Possible partition(SGI XFS filesystem), size(1048476mb), offset(117760mb)
Possible partition(SGI XFS filesystem), size(204787mb), offset(1168640mb)
Possible partition(SGI XFS filesystem), size(204787mb), offset(1418240mb)
Possible partition(SGI XFS filesystem), size(1048476mb), offset(1626112mb)

*** Fatal error: dev(/dev/md3): seek failure.

This is not the complete list of LVs, as a comparison with the output of lvs shows:

hahn-rescue:~# lvs
  LV                 VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  storage1           lv   -wi-ao--   1.00t                                          
  AmigaSeagateElite3 vg   -wi-a---   3.00g                                          
  audio              vg   -wi-a---  70.00g                                          
  backup             vg   -wi-a---   1.00t                                          
  data               vg   -wi-a---  50.00g                                          
  hochzeit           vg   -wi-a---  40.00g                                          
  home               vg   -wi-a---   5.00g                                          
  pics               vg   -wi-a--- 200.00g                                          
  sys                vg   -wi-a---  20.00g                                          
  video              vg   -wi-a--- 100.00g                                          
  windata            vg   -wi-a--- 100.00g   

Please note that /dev/lv/storage1 is my HW RAID, where I stored images of the /dev/vg/* LVs to run xfs_repair and such on. Anyway, the sizes of the XFS partitions recognized by gpart are mostly correct, but some are missing, and xfs_repair cannot do anything useful with the backup images on storage1. Everything ends up in /lost+found, because the blocks seem to be mixed up somehow.
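To inspect one of the gpart hits without writing anything, the reported offset can be fed to a read-only loop device; a sketch (the offset is taken from the first hit above, the loop device name is an assumption):

```shell
# gpart reports sizes/offsets in MB; losetup -o expects bytes.
offset_mb=5120
offset_bytes=$((offset_mb * 1024 * 1024))
echo "$offset_bytes"    # prints 5368709120
# Then, read-only, and only once the array itself is assembled correctly:
#   losetup -r -f --show -o "$offset_bytes" /dev/md3
#   mount -o ro /dev/loopN /mnt
```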

What I figured out is that my old RAID5 device used metadata format 1.2, whereas the new one uses format 0.90. My best guess now is to re-create the RAID5 device with format 1.2, do a vgcfgrestore on it, and (hopefully!) get back a working LVM with working LVs that I can mount again. If there's anything else I could try, dear Lazyweb, please tell me. Please see the attached config files/tarballs for a complete overview.
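A sketch of that plan, with heavy caveats: the member devices, their order, the chunk size and the data offset must all match the original array exactly, otherwise the data gets scrambled further, and --assume-clean prevents an initial resync from rewriting parity. The device names here are placeholders:

```shell
mdadm --stop /dev/md3
# Re-create the array in place with the old metadata format. DANGEROUS:
# level, metadata version, chunk size and device order must match the original.
mdadm --create /dev/md3 --metadata=1.2 --level=5 --raid-devices=3 \
      --assume-clean /dev/sdX3 /dev/sdY3 /dev/sdZ3
# Then repeat the pvcreate (with the old UUID) and vgcfgrestore steps.
```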

Side note: except for AmigaSeagateElite3, which is a dd image of an old Amiga SCSI disk, I should have a fairly complete backup at my second backup location, so not much is lost, but it would be a real timesaver if I were able to recover the lost LVs. Both systems are behind DSL/cable with a 10 Mbps upstream limit. It would take weeks to transfer the data; sending a USB disk would be faster.

Attachments:
mdadm.conf_.txt (954 bytes)
hahn-lvm-restore.tar.gz (14.7 KB)

Status of m68k port

Just another short status update on the m68k port and the autobuilders. According to Buildd.Net we currently have 6425 packages installed: 

wanna-build statistics - Sat Feb 16 22:52:37 CET 2013
-----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  6425 (buildd_m68k-ara5: 1011, buildd_m68k-arrakis: 86,
                         buildd_m68k-elgar: 217, buildd_m68k-kullervo: 52,
                         buildd_m68k-vivaldi: 123, tg: 3234, unknown: 1701,
                         wouter: 1)
Needs-Build     :  1375
Building        :    35 (buildd_m68k-ara5: 1, buildd_m68k-arrakis: 1,
                         buildd_m68k-elgar: 1, buildd_m68k-kullervo: 1,
                         buildd_m68k-vivaldi: 1, tg: 30)
Built           :     3 (buildd_m68k-arrakis: 3)
Uploaded        :    14 (tg: 14)
Failed          :    80 (buildd_m68k-ara5: 31, tg: 49)
Dep-Wait        :     3 (tg: 3)
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
BD-Uninstallable:  1750
Auto-Not-For-Us :   187
Not-For-Us      :    21
total           :  9978

64.39% (6425) up-to-date, 64.53% (6439) including uploaded
13.78% (1375) need building
0.35% ( 35) currently building
0.03% ( 3) already built, but not uploaded
0.83% ( 83) failed/dep-wait
0.00% ( 0) old failed/dep-wait
17.54% (1750) not installable, because of missing build-dep
1.87% (187) need porting or cause the buildd serious grief (Auto)
0.21% ( 21) need porting or cause the buildd serious grief

Considering where we came from, we're doing well - especially given that some buildds still aren't working properly because of the missing SCSI driver for NCR53C9XF (espfast) chips. A working driver would bring 4 additional buildds online, plus at least one machine that currently has to use its slower IDE interface. 

Oh, even Kullervo is working again! Now it just needs to get relocated to the datacenter again... :-)


right2water.eu - Water is a Human Right

I don't know how the water supply is organised in your country, whether it is public or private, but if you live in the European Union it may soon change to a private water supply. The EU Commission wants to liberalise water supply and sanitation in the EU, but this would mean higher prices and lower quality for citizens.

There is a citizens' initiative against this plan, because water and sanitation are a human right and not a commodity that can be (ab)used by private companies to make money. Please sign the petition on right2water.eu:

The petition has already reached its goal of 1 million signatures, but unfortunately the rules are somewhat more complex: a certain quorum must be reached in each member country of the EU. At the moment most signatures come from Germany, so the quorum for Germany has been reached. But according to a statistic posted on Twitter, the quorum needs to be reached in at least 7 countries, and only Germany, Belgium and Austria have done so. So please sign the petition on right2water.eu and spread the word in your country!

But why is privatization a bad idea, especially when it is done as a Public Private Partnership (PPP)? As said, water and sanitation are a human right and must not be an object of profit. What happens when water supply is run by a private corporation can be seen in the documentary "Water Makes Money", which aired on the Franco-German TV station Arte last week. You can watch it online in German and French.

The petition runs until September. There's enough time to sign it and - even more importantly - to contact your Members of the European Parliament and ask them to say "No!" to the privatization of water and sanitation!


Progress of m68k port

A few weeks ago, at Christmas, Wouter and I blogged about the successful reinstallation of m68k buildds after many years of inactivity. This even got us mentioned on Slashdot. It's now been roughly 3 weeks since then, and we've made some progress: 

Debian-ports.org now shows that we've gone from about 20% keeping up to about 60%. The number of installed packages went from ~1900 to about 3800, and we even moved 200 packages from BD-Uninstallable to Needs-Build:

  wanna-build statistics - Fri Jan 18 06:52:36 CET 2013
  -----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  3868 (buildd_m68k-ara5: 488, buildd_m68k-arrakis: 20,
                         buildd_m68k-elgar: 106, buildd_m68k-vivaldi: 80,
                         tg: 1412, unknown: 1761, wouter: 1)
Needs-Build     :  3500
Building        :    26 (buildd_m68k-ara5: 1, buildd_m68k-elgar: 1,
                         buildd_m68k-vivaldi: 1, tg: 23)
Built           :     0
Uploaded        :     1 (tg: 1)
Failed          :    34 (buildd_m68k-ara5: 17, tg: 17)
Dep-Wait        :     4 (tg: 4)
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
BD-Uninstallable:  2320
Auto-Not-For-Us :   188
Not-For-Us      :     9
total           :  9975

 38.78% (3868) up-to-date,  38.79% (3869) including uploaded
 35.09% (3500) need building
  0.26% ( 26) currently building
  0.00% (  0) already built, but not uploaded
  0.38% ( 38) failed/dep-wait
  0.00% (  0) old failed/dep-wait
 23.26% (2320) not installable, because of missing build-dep
  1.88% (188) need porting or cause the buildd serious grief (Auto)
  0.09% (  9) need porting or cause the buildd serious grief

So, overall we're performing fine. The mention on Slashdot even brought up new hardware donors. Someone offered SCSI/SCA disks of up to 73 GB, and another person even offered several Amigas, of which we'll use an Amiga 2000 with a Blizzard 2060 accelerator card as a new buildd.

This leads me to a medium-sized drawback: we now have several Amigas with a Blizzard 2060 intended as buildds, but unfortunately current kernels have no SCSI driver for that kind of hardware. This means we can't use as many machines as we otherwise could. Currently we are running 3 active buildds plus some Aranym VMs on Thorsten Glaser's hosts. We could add 4 more buildds if there were a working SCSI driver.

So, if anyone would like to contribute to the m68k port and loves kernel hacking, this would be a great way to help us. :-) 


Resurrecting m68k - We're on track again!

In mid-November I already wrote about "Resurrecting m68k" - and went on holiday right after writing it, so nothing really happened until December. But then things happened rather quickly, one after another. First, I got Elgar up and running. Then I upgraded Arrakis and Vivaldi again. And then, by lucky coincidence, my parents made a short trip to Nuremberg. Back in the day there was another buildd located in that city: Akire, operated by Matthias "smurf" Urlichs. So I mailed him and asked whether Akire still existed, and he answered surprisingly quickly that it did - but that he had planned to take it to the garbage soon.

I asked Smurf whether my parents could pick it up, and we exchanged contact addresses and phone numbers. To everyone's surprise, the hotel where my parents were staying was just 180 m from Smurf's home! So it was really easy for my parents to pick up the machine before they continued their trip to visit me in Rostock. That way I had yet another machine to upgrade! Whoohoo!

I spent most of December upgrading the machines, migrating to larger disks and setting everything up, when someone popped up on the debian-68k list to offer a hosting facility in Berlin. That was really perfect timing! I had taken Elgar from NMMN in Hamburg, where it had been hosted until August, and now had a second machine, Akire, with no idea where to host it. So the offer made the decision easy: Elgar & Akire will go to Berlin, whereas Kullervo & Crest will move back to NMMN once those two boxes are upgraded. That way we have some kind of redundancy. Perfect!

Except that we would still need a running buildd on those machines. Over the last few years - 4 or 5, I think - the sbuild/buildd suite changed considerably. Nothing worked as it used to. So I concentrated on getting sbuild to pick up a source package and build it, but I ran into segfaults in various places. It turned out that a somewhat broken kernel caused all the problems. After upgrading the kernel, schroot suddenly worked and I could continue setting up sbuild. After a few days things became clearer and finally it worked: 6tunnel was the first package newly built by sbuild on m68k, on 20 December 2012!

Over the next days I tried to get a larger disk (18G) working in Spice, another machine, so I could use the big disk (36G) for Akire instead of its old 2 & 4G disks, and tried to deploy the sbuild config to Arrakis and Vivaldi. That was about two days ago. The missing piece was an updated buildd config. Wouter addressed this today (well, yesterday by now), and for the first time in years we have a working buildd again! Hooray! :-))

Now we are back on track with the m68k port and will add more buildds, both native and emulated, to bring down that "Needs-Build : 5261" number.

So, very big thanks to all that made this possible: 

  • Wouter for configuring the buildd setup on Arrakis
  • Aurelien for adding the m68k buildd back to debian-ports.org.
  • John Paul Adrian Glaubitz for offering the hosting
  • Matthias "smurf" Urlichs for taking care of Akire all these years
  • NMMN in Hamburg for their willingness to continue hosting Kullervo & Crest
  • adb@#debian-68k for donating 4x 32MB PS/2 RAM

and finally, last but not least, a very, very BIG THANKS to Thorsten Glaser, who acted as a human buildd all these years, solved the TLS problem on m68k, and kept the port alive in a kind of one-man show!


Resurrecting m68k

In August I picked up Elgar, an m68k machine, from NMMN in Hamburg, where it was supposed to run as a buildd (NMMN donated space and network). Unfortunately it was in a somewhat bad state: the operating system was out of date, expansion cards were coming loose, and NMMN wasn't happy about the CRT monitor in its datacenter either.

Elgar is an Amiga 4000 Desktop built into a custom tower case. It took some weeks and months until I found a little time to care for Elgar, but now it's up and running again: 

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.2.0-3-amiga (Debian 3.2.23-1) (debian-kernel@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-8+m68k.1) ) #1 Wed Jul 25 13:02:31 UTC 2012
[    0.000000] Enabling workaround for errata I14
[    0.000000] console [debug0] enabled
[    0.000000] Amiga hardware found: [A4000] VIDEO BLITTER AUDIO FLOPPY A4000_IDE KEYBOARD MOUSE SERIAL PARALLEL A3000_CLK CHIP_RAM PAULA LISA ALICE_PAL ZORRO3

That's the stock Debian m68k kernel; it already runs without any problem on Arrakis and Vivaldi, two other buildds. The only problem at the moment is the missing SCSI driver for the CyberStorm Mk1 accelerator card. There were some changes in the kernel that need to be dealt with by someone knowledgeable.

The other problem was upgrading from etch-m68k to unstable. I already blogged about this last year. It's not as easy anymore, and nowadays you'll need to deal with lots of dependency problems and such. But anyway: 

elgar:~# cat /etc/debian_version
wheezy/sid
elgar:~# dpkg -l libc6
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                 Version         Architecture    Description
+++-====================-===============-===============-==============================================
ii  libc6:m68k           2.13-35         m68k            Embedded GNU C Library: Shared libraries

It's amazing that the m68k port is in such good condition after more than 4 years. That's thanks to the really great work of Thorsten Glaser, who is doing much of the porter work for the toolchain. But m68k currently has 4833 packages in state Needs-Build:

wanna-build statistics - Mon Nov 12 06:51:13 CET 2012
-----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  1321
Needs-Build     :  4833
Building        :     0
Uploaded        :     0
Failed          :     0
Dep-Wait        :     0
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
Not-For-Us      :     0
total           :  9906

13.34% (1321) up-to-date, 13.34% (1321) including uploaded
48.79% (4833) need building
0.00% ( 0) currently building
0.00% ( 0) failed/dep-wait
0.00% ( 0) old failed/dep-wait
0.00% ( 0) need porting or cause the buildd serious grief

So, there's a lot of work to do, but it's apparent that m68k won't keep up with the 10,000 packages in unstable. When I started running an autobuilder back in 2000 there were 2400 packages for m68k; on 15 August 2005 we had a total of 5949 packages in the archive. For m68k this means we will have to start adding lots of packages to Not-For-Us. I think m68k will/should end up with approx. 4000 packages at most.

In the end, m68k is in fairly good shape now. It only needs to get some packages built... Let's see when the buildds are operational again... ;-)


100% CPU load due to Leap Second

This morning Gregor Samsa woke up... oh, pardon! This morning I woke up and found myself puzzled, because my home server was eating up all the CPU cycles of my 4 cores. mysqld in particular was high on CPU load: 100% for the mysql-server instance and another 100% for akonadiserver's own mysqld instance. Restarting KDE and mysql-server didn't help on my Debian unstable machine. The next step was upgrading the system. Sometimes that actually helps, but not today.

Looking at bugs.debian.org for mysql-server didn't reveal anything helpful either. So my next logical step was to ask on #debian-devel on IRC, and my question was answered very quickly: 

11:28 < ij> since tonight I've got two mysqld processes running at 100% CPU, one spawned by akonadi and
            the other is the mysqld from mysql-server (unstable that is). is this an already known issue?
            haven't found anything on b.d.o for mysql-server, though
11:29 < mrvn> ij: topic
11:29 < mrvn> you need to set the time
11:30 < ij> waaaaah!
11:30 < mrvn> ij: indeed.

The topic was at that time: 

 100% CPU? Reset leap second http://xrl.us/bnde4w

So, it was caused by the leap second. Although you might suspect mysql of doing some nasty things (which, IMHO, is always a good guess ;)), this time the issue is within the Linux kernel itself, as a commit on git.kernel.org clarifies.

To fix this issue you need to set the time manually using the following command, or just reboot: 

date -s "`date`"
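A quick way to see whether a box is affected (and whether the reset helped) is to look at the top CPU consumers; a small sketch:

```shell
# List the five busiest processes; on an affected box mysqld, java, etc.
# sit at ~100% CPU even though they are idle.
ps -eo pcpu,comm --sort=-pcpu | head -n 5
# The actual fix (as root) - setting the clock clears the kernel's leap-second state:
#   date -s "$(date)"
```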

So far I found these applications being hit by this kernel bug: 

  • mysql-server
  • akonadi (as it uses its own mysql instances)
  • Firefox
  • Openfire Jabber server (because it's using Java, which seems to trigger the problem as mysql does)
  • Virtualbox' VBoxSVC process
  • puppetmaster from package puppet, reported by Michael
  • mythfrontend, reported by pos on #debian-devel
  • Jetty, Hudson, Puppet agent and master, reported by Christian
  • milter-greylist, reported by E. Recio
  • dovecot, reported by Diogo Resende
  • Google Chrome, reported by Erik B. Andersen
  • if you find more apps, please comment and I'll include them here...

So, I hope this helps, and many thanks to mrvn and infinity on #debian-devel for the help!


Confusion about mkfs.xfs and log stripe size being too big

Recently I bought some new disks, put them into my computer, and built a RAID5 on these 3x 4 TB disks. Creating a physical volume (PV) with pvcreate, a volume group (VG) with vgcreate and some logical volumes (LVs) with lvcreate was as easy and familiar as creating an XFS filesystem on the LVs... but something was strange! I had never seen this message before when creating XFS filesystems with mkfs.xfs: 

log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB

Usually I don't mess around with the parameters of mkfs.xfs, because mkfs.xfs is smart enough to find near-optimal parameters for your filesystem. But apparently mkfs.xfs wanted to use a log stripe unit of 512 kiB, although the maximum for this is 256 kiB. Why? So I started to google and, in parallel, asked on #xfs@freenode. Eric Sandeen, one of the core developers of XFS, suggested that I write the issue up for the mailing list. He had already faced this issue himself, but couldn't remember the details.

So I collected some more information about my setup and wrote to the XFS ML. Of course I included information about my RAID5 setup:

muaddib:/home/ij# mdadm --detail /dev/md7
/dev/md7:
        Version : 1.2
  Creation Time : Sun Jun 24 14:58:21 2012
     Raid Level : raid5
     Array Size : 7811261440 (7449.40 GiB 7998.73 GB)
  Used Dev Size : 3905630720 (3724.70 GiB 3999.37 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jun 26 05:13:03 2012
          State : active, resyncing
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 98% complete

           Name : muaddib:7  (local to host muaddib)
           UUID : b56a714c:d193231e:365e6297:2ca61b65
         Events : 16

    Number   Major   Minor   RaidDevice State
       0       8       52        0      active sync   /dev/sdd4
       1       8       68        1      active sync   /dev/sde4
       2       8       84        2      active sync   /dev/sdf4

Apparently, mkfs.xfs takes the chunk size of the RAID5 and wants to use it as its log stripe unit. So that explains why mkfs.xfs wants to use 512 kiB - but why is the chunk size 512 kiB at all? I hadn't messed around with chunk sizes when creating the RAID5 either, and all of my other RAIDs use chunk sizes of 64 kiB. The reason was quickly found: the new RAID5 has a 1.2 format superblock, whereas the older ones have a 0.90 format superblock.

So it seems that at some point the default superblock (metadata) format in mdadm changed. I asked on #debian.de@ircnet, and someone answered that this was changed in Debian after the release of Squeeze; even in Squeeze the 0.90 format was already obsolete and kept only for backward compatibility. Well, OK. There actually was a change of defaults, which explains why mkfs.xfs now wants to set the log stripe unit to 512 kiB.
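For completeness, both defaults can be overridden explicitly; a sketch (the device paths are placeholders, and the su/sw values assume this 3-disk RAID5 with its 512 kiB chunk):

```shell
# Create an array with the old 0.90 superblock and 64 kiB chunks, if desired:
#   mdadm --create /dev/mdN --metadata=0.90 --chunk=64 --level=5 --raid-devices=3 ...
# Or keep the 512 kiB chunk and state the geometry explicitly to mkfs.xfs;
# -l su=32k requests up front the fallback value it would pick anyway:
mkfs.xfs -d su=512k,sw=2 -l su=32k /dev/vg/somelv
```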

But what is the impact of falling back to a 32 kiB log stripe unit? Dave Chinner, another XFS developer, explains: 

Best thing in general is to align all log writes to the
underlying stripe unit of the array. That way as multiple frequent
log writes occur, it is guaranteed to form full stripe writes and
basically have no RMW overhead. 32k is chosen by default because
that's the default log buffer size and hence the typical size of
log writes.

If you increase the log stripe unit, you also increase the minimum
log buffer size that the filesystem supports. The filesystem can
support up to 256k log buffers, and hence the limit on maximum log
stripe alignment.

And in another mail, when asked whether the 256 kiB limit could be raised to 512 kiB, since mdadm now defaults to 512 kiB as well: 

You can't, simple as that. The maximum supported is 256k. As it is,
a default chunk size of 512k is probably harmful to most workloads -
large chunk sizes mean that just about every write will trigger a
RMW cycle in the RAID because it is pretty much impossible to issue
full stripe writes. Writeback doesn't do any alignment of IO (the
generic page cache writeback path is the problem here), so we will
almost always be doing unaligned IO to the RAID, and there will be
little opportunity for sequential IOs to merge and form full stripe
writes (24 disks @ 512k each on RAID6 is a 11MB full stripe write).

IOWs, every time you do a small isolated write, the MD RAID volume
will do a RMW cycle, reading 11MB and writing 12MB of data to disk.
Given that most workloads are not doing lots and lots of large
sequential writes this is, IMO, a pretty bad default given typical
RAID5/6 volume configurations we see....
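Dave's 11 MB figure checks out: RAID6 uses two parity disks, so 22 data disks times a 512 kiB chunk give the full stripe size:

```shell
disks=24; parity=2; chunk_kib=512
echo "$(( (disks - parity) * chunk_kib )) KiB"   # 11264 KiB = 11 MiB
```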

So, reducing the log stripe size is in fact a Good Thing[TM]. Anyone who would benefit from larger log stripe sizes is knowledgeable enough to play around with the mkfs.xfs parameters and tune them to the needs of the workload.

Eric Sandeen suggested, though, removing the warning from mkfs.xfs. Dave objected - maybe a good compromise would be to extend the warning with a URL to an FAQ entry that explains the issue in more depth than a short warning can?

Maybe someone else is facing the same issue, searching for information, and will find this blog entry helpful in the meantime...


DaviCal and Addressbook Sync

After exchanging my rusty Nokia N97 for an iPhone, I needed to set up calendar and addressbook syncing again. Addressbook syncing wasn't possible with the N97 anyway, or I never found out how to do it. Previously I synced my N97 using iSync, but iSync doesn't sync with the iPhone, although the iPhone now syncs with iTunes. Weird? Yes, but that's how it works. The iPhone now syncs via WLAN instead of Bluetooth, which is an improvement, but I don't really want to fire up iTunes every time I want to sync my calendar or addressbook. And using iCloud is not an option either, because of privacy concerns. I'm a big fan of self-hosting and already have a DaviCal instance running on my server. DaviCal is a great piece of software from Debian maintainer Andrew McMillan, who is doing a survey on DaviCal, so there's, of course, a Debian package for it.

Anyway, one problem with OS X and addressbook sync via CardDAV is that it doesn't work out of the box with Addressbook.app, although the documentation in the DaviCal wiki is quite useful. When you try to add a new account in Addressbook.app, the sync will not work. The solution can be found on the private blog of Harald Nikolisin, which is in German. He writes (original German, English translation follows):

Mac OS X Adressbuch anschliessen
Oh ja – wenn man mittels SSL drauzugreift, dann gibts Probleme.
Im der Applikation Adressbuch kann man zwar ein CardDAV Account anlegen bei dem man die Authorisierungsdaten und den kompletten Serverpfad (s.o.) eingeben kann, man läuft aber immer auf eine Fehlermeldung hinaus.
Die Lösung ist, zweimal “Create” anzuklicken um den fehlerhaften Account anzulegen.

Dann editiert man manuell folgende Datei:

~/Library/Application Support/AddressBook/Sources/UNIQUE-ID/Configuration.plist
Dort trägt man unter Server String die komplette URL ein.
https://SERVERNAME/davical/caldav.php/USERNAME/contacts
Am besten modifiziert man noch das Feld HaveWriteAccess auf den Wert auf “1″

English translation: 

Connecting Mac OS X addressbook
Oh, yes - there are problems when accessing via SSL.
In Addressbook.app you can add a CardDAV account where you can define authentication and 
server path, but you'll always get an error message.
The solution is to click twice on "Create" in order to create the faulty entry.

Then you can edit the following file:

~/Library/Application Support/AddressBook/Sources/UNIQUE-ID/Configuration.plist

There you enter your complete URL under Server String.
https://SERVERNAME/davical/caldav.php/USERNAME/contacts 
It's best to modify the field HaveWriteAccess to the value "1"

After following this advice, Addressbook.app successfully stored the contacts in DaviCal's CardDAV, from where I can sync with my iPhone. Maybe Andrew wants to include this in the DaviCal wiki, or maybe I'll do it myself by registering in the wiki for that purpose...
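The manual edit can also be scripted with PlistBuddy; a sketch only - the key names "ServerString" and "HaveWriteAccess" are taken from the blog quote above and may differ in your actual Configuration.plist, and UNIQUE-ID stands for the real source directory name:

```shell
PLIST="$HOME/Library/Application Support/AddressBook/Sources/UNIQUE-ID/Configuration.plist"
# Key names assumed from the quoted post - verify them in the file first:
/usr/libexec/PlistBuddy -c 'Print' "$PLIST"
/usr/libexec/PlistBuddy -c 'Set :ServerString https://SERVERNAME/davical/caldav.php/USERNAME/contacts' "$PLIST"
/usr/libexec/PlistBuddy -c 'Set :HaveWriteAccess 1' "$PLIST"
```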

Oh, and I almost forgot: the Roundcube plugin from graviox works nicely with DaviCal's CardDAV as well!


About Gallery3 in Debian and MySQL

Years ago, when I started using a cheap Kodak DX3600 digital camera to take digital photos, I used Gallery from Menalto to collect these pictures in a gallery. Gallery (version 1) used plain text files to keep its information about galleries and photos, and the more photos I put into the gallery, the slower it got. Then Gallery2 was released, which used a database, either MySQL or PostgreSQL, and was a huge improvement in speed. My main galleries each hold about 10-20,000 pictures. But Gallery2 is aged nowadays, and the next logical step would be to migrate to Gallery3. But what a mess!

Gallery3 has some drawbacks: 

  1. there is currently no gallery3 package in Debian, although it's been released upstream for some time now.
  2. there is an open bug (#511715) stating that there are license issues with Gallery3 and some SWF files.
  3. it's been said that Gallery3 no longer supports per-picture permissions, only per-album permissions. That gives me a headache, because in the past I changed the permissions of some personal pictures for privacy reasons; it would either leave whole albums unavailable to the public or require splitting albums into a public and a private section, which breaks the chronological order of the pictures.
  4. whereas G2 supported both MySQL and PostgreSQL as database backends, G3 only supports MySQL. That's a real pity, because I prefer PostgreSQL for its stability and ease of use over MySQL. It has already happened several times that MySQL databases were gone after a kernel crash or the like - even the mysql.user table was gone more than once - whereas PostgreSQL has never shown such behaviour to me. It just works.

I'm really upset about the last point! Why is there such a strong belief in MySQL? In my eyes, MySQL is utter crap. It's more like MS Access than the real thing when it comes to DBMS/SQL. And my impression is that, after Oracle bought MySQL, Oracle did a good job of scaring its customers off. PostgreSQL, on the other hand, has gained good momentum since the Oracle/MySQL deal. So it's a total mystery to me how a big software package with dozens of developers can decide not to support PostgreSQL, or even drop PostgreSQL support for a new release! It's driving me nuts, again and again.

Other big software packages like Drupal are doing the right thing: while PostgreSQL support in drupal6 was weak and buggy because of all the awful MySQLisms around, drupal7 now uses a database abstraction layer that allows even SQLite or Oracle as the underlying database. That's the way to go, and it's totally awkward that Gallery3 dropped PostgreSQL support.

So, is there a way out of my dilemma? Will gallery3 be packaged in Debian soon (no offense to Michael Schultheiss - I think he's doing a great job and needs assistance from upstream in this case)? Is there any good replacement for Gallery3 that can deal with tens of thousands of images and dozens of users, supports PostgreSQL, and has some kind of import tool for gallery2?

