Problems with DaviCal after Wheezy Upgrade

It's been a while since Wheezy was released, but my problems with DaviCal started with that upgrade. I don't know whether this is a DaviCal bug or not; maybe partly. This is just an informational note before I file a bug report (or not).

The first problem was that I couldn't add any contacts (CardDAV) anymore from OS X. A friend of mine who uses my server has the same issue. He mailed me that he gets the following error from the Contacts app under OS X:

[NSInvalidArgumentException] -[CoreDAVNullParser rootElement]:
unrecognized selector sent to instance 0x7f91bad6b1d0

When I looked at the web frontend I discovered that the DaviCal database hadn't been upgraded. I don't know whether this is a general problem or whether it just happened to me. Anyway, executing the proper database upgrade script shipped with DaviCal was no problem at all, and adding new contacts worked again.
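
For reference, the upgrade step itself is a one-liner. A minimal sketch, assuming the layout of the Debian davical package, where the script lives under /usr/share/davical/dba/:

# run DaviCal's database schema upgrade as the postgres superuser
# (path as shipped by the Debian package; adjust if it differs)
su postgres -c /usr/share/davical/dba/update-davical-database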

But some time later I discovered that the calendars on my iPhone didn't update anymore. That was a bigger problem to solve, because it appears to be an iOS 6.x problem and not a DaviCal issue. Finally I found this mail on the DaviCal mailing list via Google:

When I started using DAViCal, I created calendars with "named" paths, not these
long names like you stated in your message. With iOS 6 this was not a good
idea.

I created every calendar anew with the iOS 6 device and copied all
calendar items to the new calendars. It was 15 minutes of work for me. I
described it in a blog post (German):
http://tech.blog.rana.at/2012/10/24/davical-caldav-mit-ios-6/

Not a real solution for the problem, but a workaround.

So, as the linked page is in German, I'll rephrase the "solution" here:

The problem seems to be that the old principal path names like user/calendar or user/home don't work anymore under iOS 6. Instead you need to create a new calendar from your iPhone. So, configure your calendar account as usual; you'll end up with an empty calendar. Now create a new event in that calendar on the iPhone. The new calendar should then show up under OS X (or other clients). There you can export your existing appointments to an *.ics file, which you can import into the new principal collection via the DaviCal web frontend. Your appointments should now show up on your iPhone again, but you'll end up with duplicate entries in the iCal app under OS X. If you can't delete your old default calendar right away, define the new collection as the default calendar first, then delete the old one. If everything went well, you can share your appointments between OS X and your iPhone via DaviCal again. At least this worked for me. :-)

It's late over here, so I'll postpone writing the bug report for now...

Is GSOC a whitewashing project?

"The same procedure as last year, Ms. Sophie?" - "The same procedure as every year, James!" - at least when summer is coming, every year Google starts its "Google Summer of Code" (GSoC). This contest is a yearly event since 2005. Wikipedia states: 

The Google Summer of Code (GSoC) is an annual program, first held from May to August 2005,[1] in which Google awards stipends (of 5,000 USD, as of 2013)[2] to hundreds of students who successfully complete a requested free and open-source software coding project during the summer. The program is open to students aged 18 or over – the closely related Google Code-In is intended for students under the age of 18.

[...]

The program invites students who meet their eligibility criteria to post applications that detail the software-coding project they wish to perform. These applications are then evaluated by the corresponding mentoring organization. Every participating organization must provide mentors for each of the project ideas received, if the organization is of the opinion that the project would benefit from them. The mentors then rank the applications and decide among themselves which proposals to accept. Google then decides how many projects each organization gets, and asks the organizations to mark at most that many projects accordingly.

Sounds nice, eh? Submit a nice project, do some cool coding and get 5,000 USD for having some sort of fun!

Writing Open Source software (FLOSS/Libre Software) often earns you no money at all. It's an honorary task, done for the benefit of creating a better world - a little bit, at least. So doing some coding on FLOSS and getting paid for it is great, eh?

But think twice! Maybe Google is not the friendly company it always claims to be? First and foremost Google is a company and wants to earn money. Sure, it has a mantra: "Don't be evil!" But the company's main purpose is to earn money, and it will do anything to achieve this.

Think of GSoC as a cheap marketing project for Google: a contest for whitewashing Google's image. They can say: "Hey, look! We are supporting the FLOSS community! We are not evil!" And you can look at GSoC as a cheap recruitment program for Google as well. Overall it appears that Google benefits more from GSoC than the participants do individually, or than the FLOSS community does as a whole. There is a danger that the community gets pocketed by Google instead of upholding FLOSS standards and staying as independent as possible.

Sure, you need to pay your bills, get something to eat and so on, but do you really want to help Google whitewash its image as a monopolistic company? Or would it be worth trying some sort of crowdfunding when you have a great idea for a program you want to write?
 

Friendica on Debian

I guess many of you have an account on Facebook. Facebook, on the other hand, has many privacy issues, besides the fact that it is not a good idea to give away your own data to a maybe-evil monopolist. I'm a great fan of self-hosting: I host my own DaviCal instance for CalDAV/CardDAV to sync my mobile phone, run my own mail server and of course my own web servers. In addition to my own Jabber server, I now run my own social media service as well: an instance of Friendica.

Unfortunately there is no Friendica package in the standard Debian repositories, but when you do some web searches you might stumble upon a package on mentors.debian.net, as I did. Of course it would have been possible to run Friendica from the git repository, but that wouldn't help the Debian package at all.
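
For reference, installing a candidate package from mentors.debian.net boils down to plain dpkg work. A sketch, with the exact .deb file name assumed:

# install the downloaded package, then let apt resolve the
# dependencies that dpkg complains about
dpkg -i friendica_*.deb
apt-get -f install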

Here are some caveats and issues I discovered when trying to install Friendica on a new Wheezy VM: 

  • php-pear is missing as a dependency
  • the directory "object" is not included/copied from the source and will give you an error like this: "Failed opening required 'object/BaseObject.php'"
  • when running with the database on a different host instead of the same machine, it's a little bit awkward to convince dbconfig-common to make use of the remote host. But that's more a dbconfig-common issue, I think.
  • the symlinking is wrong: the symlink /usr/share/friendica/.htaccess points to the directory /etc/friendica/htaccess instead of /etc/friendica/htaccess/.htaccess and gives you this error: "(9)Bad file descriptor: /usr/share/friendica/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable"
  • the scriptaculous integration is broken. Friendica looks for it in /usr/share/friendica/library/cropper/lib/, but can't find it there, because the files are located in the /usr/share/javascript/scriptaculous/ directory. As a result you can't crop the needed frame from an uploaded picture, so you can't upload and/or change your profile picture and end up with a black profile picture (see the sketch after this list).
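
The last two items can be worked around locally with symlinks until the package is fixed. A sketch only, under the assumption that the paths above are accurate:

# point the .htaccess symlink at the actual file instead of
# the /etc/friendica/htaccess directory (assumed layout)
ln -sf /etc/friendica/htaccess/.htaccess /usr/share/friendica/.htaccess

# make the Debian-packaged scriptaculous files appear where
# Friendica's cropper library expects them (assumed paths)
ln -s /usr/share/javascript/scriptaculous/* /usr/share/friendica/library/cropper/lib/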

As I'm unsure whether one should report bugs against a package that is not included in Debian, there's no bug report from me within bugs.debian.org. I'm just saying this because of all those "where's your bug report, dude!?" junkies out there. I'll mail my findings directly to Tobias (693504) and Kamath.

Anyway, you can find me on Friendica as ij on nerdica.net (Web Profile), so feel free to connect to me. Have fun with your Friendica installation! :-)

PS: registration on nerdica.net is basically open, but needs my approval to prevent spam bots. So, feel free to join! :)

Xen problems with VMs on 2.6.32-5-xen-amd64?

On Saturday there were some updates for Squeeze/Stable; for example, a new/updated version of the Xen hypervisor and the kernels were downloaded and installed:

gate:~# dir /var/cache/apt/archives/
base-files_6.0squeeze7_amd64.deb                 libxenstore3.0_4.0.1-5.6_amd64.deb
bind9-host_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb   linux-base_2.6.32-48_all.deb
dbus_1.2.24-4+squeeze2_amd64.deb                 linux-image-2.6.32-5-amd64_2.6.32-48_amd64.deb
dbus-x11_1.2.24-4+squeeze2_amd64.deb             linux-image-2.6.32-5-xen-amd64_2.6.32-48_amd64.deb
firmware-linux-free_2.6.32-48_all.deb            lock
gzip_1.3.12-9+squeeze1_amd64.deb                 openssh-client_1%3a5.5p1-6+squeeze3_amd64.deb
host_1%3a9.7.3.dfsg-1~squeeze9_all.deb           openssh-server_1%3a5.5p1-6+squeeze3_amd64.deb
libbind9-60_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb  openssl_0.9.8o-4squeeze14_amd64.deb
libcups2_1.4.4-7+squeeze3_amd64.deb              partial
libdbus-1-3_1.2.24-4+squeeze2_amd64.deb          perl_5.10.1-17squeeze5_amd64.deb
libdbus-glib-1-2_0.88-2.1+squeeze1_amd64.deb     perl-base_5.10.1-17squeeze5_amd64.deb
libdns69_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb     perl-modules_5.10.1-17squeeze5_all.deb
libisc62_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb     ssh_1%3a5.5p1-6+squeeze3_all.deb
libisccc60_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb   tzdata_2012g-0squeeze1_all.deb
libisccfg62_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb  xen-hypervisor-4.0-amd64_4.0.1-5.6_amd64.deb
libldap-2.4-2_2.4.23-7.3_amd64.deb               xen-linux-system-2.6.32-5-xen-amd64_2.6.32-48_amd64.deb
liblwres60_1%3a9.7.3.dfsg-1~squeeze9_amd64.deb   xenstore-utils_4.0.1-5.6_amd64.deb
libperl5.10_5.10.1-17squeeze5_amd64.deb          xen-utils-4.0_4.0.1-5.6_amd64.deb
libssl0.9.8_0.9.8o-4squeeze14_amd64.deb

Unfortunately this update appears to be problematic on my Xen hosting server. Tonight it happened for the second time that some of the virtual network interfaces disappeared or turned out to be non-working. For example, I have two VMs: one running the webserver and one running the databases. Between these two VMs there's a bridge on the dom0, and both VMs have a VIF on that (internal) bridge. What happens is that this bridge becomes inaccessible from within the webserver VM.
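
When it happens, the state can at least be inspected from dom0 with the usual bridge tools before rebooting anything. A minimal sketch, using the bridge and vif names that show up in the logs below:

# show the internal bridge and which vifs are still attached to it
brctl show xenbr1

# check the state of a specific backend interface
# (vif3.1 is taken from the log excerpt below)
ip link show vif3.1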

Sadly there's not much to see in the log files. I just spotted this on dom0: 

Feb 26 01:01:29 gate kernel: [12697.907512] vif3.1: Frag is bigger than frame.
Feb 26 01:01:29 gate kernel: [12697.907550] vif3.1: fatal error; disabling device
Feb 26 01:01:29 gate kernel: [12697.919921] xenbr1: port 3(vif3.1) entering disabled state
Feb 26 01:22:00 gate kernel: [13928.644888] vif2.1: Frag is bigger than frame.
Feb 26 01:22:00 gate kernel: [13928.644920] vif2.1: fatal error; disabling device
Feb 26 01:22:00 gate kernel: [13928.663571] xenbr1: port 2(vif2.1) entering disabled state
Feb 26 01:40:44 gate kernel: [15052.629280] vif7.1: Frag is bigger than frame.
Feb 26 01:40:44 gate kernel: [15052.629314] vif7.1: fatal error; disabling device
Feb 26 01:40:44 gate kernel: [15052.641725] xenbr1: port 6(vif7.1) entering disabled state

This corresponds to the number of VMs that lost their internal connection to the bridge. On the webserver VM I see this output:

Feb 26 01:59:01 vserv1 kernel: [16113.539767] IPv6: sending pkt_too_big to self
Feb 26 01:59:01 vserv1 kernel: [16113.539794] IPv6: sending pkt_too_big to self
Feb 26 02:30:54 vserv1 kernel: [18026.407517] IPv6: sending pkt_too_big to self
Feb 26 02:30:54 vserv1 kernel: [18026.407546] IPv6: sending pkt_too_big to self
Feb 26 02:30:54 vserv1 kernel: [18026.434761] IPv6: sending pkt_too_big to self
Feb 26 02:30:54 vserv1 kernel: [18026.434787] IPv6: sending pkt_too_big to self
Feb 26 03:39:16 vserv1 kernel: [22128.768214] IPv6: sending pkt_too_big to self
Feb 26 03:39:16 vserv1 kernel: [22128.768240] IPv6: sending pkt_too_big to self
Feb 26 04:39:51 vserv1 kernel: [25764.250170] IPv6: sending pkt_too_big to self
Feb 26 04:39:51 vserv1 kernel: [25764.250196] IPv6: sending pkt_too_big to self

Rebooting the VMs results in a non-working VM: it gets paused on creation, the Xen scripts complain about non-working hotplug scripts, and the Xen log shows this:

[2013-02-25 13:06:34 5470] DEBUG (XendDomainInfo:101)
XendDomainInfo.create(['vm', ['name', 'vserv1'], ['memory', '2048'],
['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash',
'restart'], ['on_xend_start', 'ignore'], ['on_xend_stop', 'ignore'],
['vcpus', '2'], ['oos', 1], ['bootloader', '/usr/lib/xen-4.0/bin/pygrub'],
['bootloader_args', ''], ['image', ['linux', ['root', '/dev/xvdb '],
['videoram', 4], ['tsc_mode', 0], ['nomigrate', 0]]], ['s3_integrity', 1],
['device', ['vbd', ['uname', 'phy:/dev/lv/vserv1-boot'], ['dev', 'xvda'],
['mode', 'w']]], ['device', ['vbd', ['uname', 'phy:/dev/lv/vserv1-disk'],
['dev', 'xvdb'], ['mode', 'w']]], ['device', ['vbd', ['uname',
'phy:/dev/lv/vserv1-swap'], ['dev', 'xvdc'], ['mode', 'w']]], ['device',
['vbd', ['uname', 'phy:/dev/lv/vserv1mirror'], ['dev', 'xvdd'], ['mode',
'w']]]])
[2013-02-25 13:06:34 5470] DEBUG (XendDomainInfo:2508)
XendDomainInfo.constructDomain
[2013-02-25 13:06:34 5470] DEBUG (balloon:220) Balloon: 2100000 KiB free;
need 16384; done.
[2013-02-25 13:06:34 5470] DEBUG (XendDomain:464) Adding Domain: 39
[2013-02-25 13:06:34 5470] DEBUG (XendDomainInfo:2818)
XendDomainInfo.initDomain: 39 256
[2013-02-25 13:06:34 5781] DEBUG (XendBootloader:113) Launching bootloader
as ['/usr/lib/xen-4.0/bin/pygrub', '--args=root=/dev/xvdb  ',
'--output=/var/run/xend/boot/xenbl.6040', '/dev/lv/vserv1-boot'].
[2013-02-25 13:06:39 5470] DEBUG (XendDomainInfo:2845)
_initDomain:shadow_memory=0x0, memory_static_max=0x80000000,
memory_static_min=0x0.
[2013-02-25 13:06:39 5470] INFO (image:182) buildDomain os=linux dom=39
vcpus=2
[2013-02-25 13:06:39 5470] DEBUG (image:721) domid	    = 39
[2013-02-25 13:06:39 5470] DEBUG (image:722) memsize	    = 2048
[2013-02-25 13:06:39 5470] DEBUG (image:723) image	    =
/var/run/xend/boot/boot_kernel.xj7W_t
[2013-02-25 13:06:39 5470] DEBUG (image:724) store_evtchn   = 1
[2013-02-25 13:06:39 5470] DEBUG (image:725) console_evtchn = 2
[2013-02-25 13:06:39 5470] DEBUG (image:726) cmdline	    =
root=UUID=ed71a39f-fd2e-4035-8557-493686baa151 ro root=/dev/xvdb
[2013-02-25 13:06:39 5470] DEBUG (image:727) ramdisk	    =
/var/run/xend/boot/boot_ramdisk.QavuAo
[2013-02-25 13:06:39 5470] DEBUG (image:728) vcpus	    = 2
[2013-02-25 13:06:39 5470] DEBUG (image:729) features	    =
[2013-02-25 13:06:39 5470] DEBUG (image:730) flags	    = 0
[2013-02-25 13:06:39 5470] DEBUG (image:731) superpages     = 0
[2013-02-25 13:06:40 5470] INFO (XendDomainInfo:2367) createDevice: vbd :
{'uuid': '04d99772-cf27-aecf-2d1b-c73eaf657410', 'bootable': 1, 'driver':
'paravirtualised', 'dev': 'xvda', 'uname': 'phy:/dev/lv/vserv1-boot',
'mode': 'w'}
[2013-02-25 13:06:40 5470] DEBUG (DevController:95) DevController: writing
{'virtual-device': '51712', 'device-type': 'disk', 'protocol':
'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend':
'/local/domain/0/backend/vbd/39/51712'} to
/local/domain/39/device/vbd/51712.
[2013-02-25 13:06:40 5470] DEBUG (DevController:97) DevController: writing
{'domain': 'vserv1', 'frontend': '/local/domain/39/device/vbd/51712',
'uuid': '04d99772-cf27-aecf-2d1b-c73eaf657410', 'bootable': '1', 'dev':
'xvda', 'state': '1', 'params': '/dev/lv/vserv1-boot', 'mode': 'w',
'online': '1', 'frontend-id': '39', 'type': 'phy'} to
/local/domain/0/backend/vbd/39/51712.
[2013-02-25 13:06:40 5470] INFO (XendDomainInfo:2367) createDevice: vbd :
{'uuid': 'e46cb89f-3e54-41d2-53bd-759ed6c690d2', 'bootable': 0, 'driver':
'paravirtualised', 'dev': 'xvdb', 'uname': 'phy:/dev/lv/vserv1-disk',
'mode': 'w'}
[2013-02-25 13:06:40 5470] DEBUG (DevController:95) DevController: writing
{'virtual-device': '51728', 'device-type': 'disk', 'protocol':
'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend':
'/local/domain/0/backend/vbd/39/51728'} to
/local/domain/39/device/vbd/51728.
[2013-02-25 13:06:40 5470] DEBUG (DevController:97) DevController: writing
{'domain': 'vserv1', 'frontend': '/local/domain/39/device/vbd/51728',
'uuid': 'e46cb89f-3e54-41d2-53bd-759ed6c690d2', 'bootable': '0', 'dev':
'xvdb', 'state': '1', 'params': '/dev/lv/vserv1-disk', 'mode': 'w',
'online': '1', 'frontend-id': '39', 'type': 'phy'} to
/local/domain/0/backend/vbd/39/51728.
[2013-02-25 13:06:40 5470] INFO (XendDomainInfo:2367) createDevice: vbd :
{'uuid': 'e2d61860-7448-1843-3935-6b63c5d2878e', 'bootable': 0, 'driver':
'paravirtualised', 'dev': 'xvdc', 'uname': 'phy:/dev/lv/vserv1-swap',
'mode': 'w'}
[2013-02-25 13:06:40 5470] DEBUG (DevController:95) DevController: writing
{'virtual-device': '51744', 'device-type': 'disk', 'protocol':
'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend':
'/local/domain/0/backend/vbd/39/51744'} to
/local/domain/39/device/vbd/51744.
[2013-02-25 13:06:40 5470] DEBUG (DevController:97) DevController: writing
{'domain': 'vserv1', 'frontend': '/local/domain/39/device/vbd/51744',
'uuid': 'e2d61860-7448-1843-3935-6b63c5d2878e', 'bootable': '0', 'dev':
'xvdc', 'state': '1', 'params': '/dev/lv/vserv1-swap', 'mode': 'w',
'online': '1', 'frontend-id': '39', 'type': 'phy'} to
/local/domain/0/backend/vbd/39/51744.
[2013-02-25 13:06:40 5470] INFO (XendDomainInfo:2367) createDevice: vbd :
{'uuid': 'd314a46e-1ce9-0e8d-b009-3f08e29735f5', 'bootable': 0, 'driver':
'paravirtualised', 'dev': 'xvdd', 'uname': 'phy:/dev/lv/vserv1mirror',
'mode': 'w'}
[2013-02-25 13:06:40 5470] DEBUG (DevController:95) DevController: writing
{'virtual-device': '51760', 'device-type': 'disk', 'protocol':
'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend':
'/local/domain/0/backend/vbd/39/51760'} to
/local/domain/39/device/vbd/51760.
[2013-02-25 13:06:40 5470] DEBUG (DevController:97) DevController: writing
{'domain': 'vserv1', 'frontend': '/local/domain/39/device/vbd/51760',
'uuid': 'd314a46e-1ce9-0e8d-b009-3f08e29735f5', 'bootable': '0', 'dev':
'xvdd', 'state': '1', 'params': '/dev/lv/vserv1mirror', 'mode': 'w',
'online': '1', 'frontend-id': '39', 'type': 'phy'} to
/local/domain/0/backend/vbd/39/51760.
[2013-02-25 13:06:40 5470] DEBUG (XendDomainInfo:3400) Storing VM details:
{'on_xend_stop': 'ignore', 'shadow_memory': '0', 'uuid':
'04541225-6d3c-3cae-a4c4-0b6d4ccfac7a', 'on_reboot': 'restart',
'start_time': '1361794000.37', 'on_poweroff': 'destroy', 'bootloader_args':
'', 'on_xend_start': 'ignore', 'on_crash': 'restart', 'xend/restart_count':
'0', 'vcpus': '2', 'vcpu_avail': '3', 'bootloader':
'/usr/lib/xen-4.0/bin/pygrub', 'image': "(linux (kernel ) (args
'root=/dev/xvdb  ') (superpages 0) (tsc_mode 0) (videoram 4) (pci ())
(nomigrate 0) (notes (HV_START_LOW 18446603336221196288) (FEATURES
'!writable_page_tables|pae_pgdir_above_4gb') (VIRT_BASE
18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux)
(HYPERCALL_PAGE 18446744071578882048) (LOADER generic) (SUSPEND_CANCEL 1)
(PAE_MODE yes) (ENTRY 18446744071584289280) (XEN_VERSION xen-3.0)))",
'name': 'vserv1'}
[2013-02-25 13:06:40 5470] DEBUG (XendDomainInfo:1804) Storing domain
details: {'console/ring-ref': '2143834', 'image/entry':
'18446744071584289280', 'console/port': '2', 'store/ring-ref': '2143835',
'image/loader': 'generic', 'vm':
'/vm/04541225-6d3c-3cae-a4c4-0b6d4ccfac7a',
'control/platform-feature-multiprocessor-suspend': '1',
'image/hv-start-low': '18446603336221196288', 'image/guest-os': 'linux',
'cpu/1/availability': 'online', 'image/virt-base': '18446744071562067968',
'memory/target': '2097152', 'image/guest-version': '2.6', 'image/pae-mode':
'yes', 'description': '', 'console/limit': '1048576', 'image/paddr-offset':
'0', 'image/hypercall-page': '18446744071578882048',
'image/suspend-cancel': '1', 'cpu/0/availability': 'online',
'image/features/pae-pgdir-above-4gb': '1',
'image/features/writable-page-tables': '0', 'console/type': 'xenconsoled',
'name': 'vserv1', 'domid': '39', 'image/xen-version': 'xen-3.0',
'store/port': '1'}
[2013-02-25 13:06:40 5470] DEBUG (DevController:95) DevController: writing
{'protocol': 'x86_64-abi', 'state': '1', 'backend-id': '0', 'backend':
'/local/domain/0/backend/console/39/0'} to
/local/domain/39/device/console/0.
[2013-02-25 13:06:40 5470] DEBUG (DevController:97) DevController: writing
{'domain': 'vserv1', 'frontend': '/local/domain/39/device/console/0',
'uuid': 'c8819aed-c78f-02b8-0ef7-1600abd15add', 'frontend-id': '39',
'state': '1', 'location': '2', 'online': '1', 'protocol': 'vt100'} to
/local/domain/0/backend/console/39/0.
[2013-02-25 13:06:40 5470] DEBUG (XendDomainInfo:1891)
XendDomainInfo.handleShutdownWatch
[2013-02-25 13:06:40 5470] DEBUG (DevController:139) Waiting for devices
vif2.
[2013-02-25 13:06:40 5470] DEBUG (DevController:139) Waiting for devices
vif.
[2013-02-25 13:06:40 5470] DEBUG (DevController:139) Waiting for devices
vscsi.
[2013-02-25 13:06:40 5470] DEBUG (DevController:139) Waiting for devices
vbd.
[2013-02-25 13:06:40 5470] DEBUG (DevController:144) Waiting for 51712.
[2013-02-25 13:06:40 5470] DEBUG (DevController:628) hotplugStatusCallback
/local/domain/0/backend/vbd/39/51712/hotplug-status.
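
The domain then just sits there paused. What I use to check and clean up before the next start attempt is a sketch like this (domain name as in the log above):

# the stuck domain shows up with state 'p' (paused)
xm list

# get rid of it before trying to start it again
xm destroy vserv1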

From my point of view, either the Xen hypervisor or the kernel seems to be broken, but it's hard for me to tell which. Maybe it would be easier to upgrade the system from Squeeze to Wheezy and get rid of this problem that way? On the other hand, that would just work around the problem instead of explaining it.

Are there any other people experiencing problems with this latest update of Xen and the kernel?

UPDATE: Bug #701744 filed.

UPDATE 2: Downgrading the kernel and hypervisor on dom0 to the following packages from snapshot.debian.org seems to have solved the problem.

  • xen-hypervisor-4.0-amd64_4.0.1-5.4_amd64.deb
  • linux-image-2.6.32-5-xen-amd64_2.6.32-46_amd64.deb

Note that I haven't tested yet with the newest kernel update from DSA-2632-1.
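
The downgrade itself is plain dpkg work once the two .deb files have been fetched from snapshot.debian.org. A sketch:

# install the older hypervisor and kernel over the current versions
dpkg -i xen-hypervisor-4.0-amd64_4.0.1-5.4_amd64.deb \
        linux-image-2.6.32-5-xen-amd64_2.6.32-46_amd64.deb

# put both packages on hold so the next upgrade doesn't pull
# the broken versions back in
echo xen-hypervisor-4.0-amd64 hold | dpkg --set-selections
echo linux-image-2.6.32-5-xen-amd64 hold | dpkg --set-selections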

UPDATE 3: After running with the older versions of the hypervisor and kernel, I have now upgraded the hypervisor to xen-hypervisor-4.0-amd64_4.0.1-5.6_amd64.deb and rebooted. Let's see whether it runs stable or not. If not, it's the hypervisor; if yes, it's the kernel.

UPDATE 4: Apparently it's the kernel that is buggy, and the kernel from DSA-2632-1 is affected as well. So the current workaround is to downgrade to linux-image-2.6.32-5-xen-amd64_2.6.32-46_amd64.deb.

LVM on RAID5 broken - how to fix?

Some time ago one of the disks in my software RAID5 failed. No big problem, as I had two spare A08U-C2412 available to replace that single 1 TB SATA disk. I can't remember the details, but something went wrong and I ended up with a non-booting system. I think I tried to add the HW RAID as a physical volume to the LVM, with the thought of migrating the SW RAID to the HW RAID, or doing mirroring, or some such. Anyway: I booted into my rescue system, which sits on a RAID1 partition on those disks, but LVM didn't come up anymore because the SW RAID5 wasn't recognized during boot. So I re-created the md device and discovered that my PV was gone as well. =:-0

No big deal, I thought, because I have a backup of that machine on another host. I restored /etc/lvm and tried a vgcfgrestore after re-creating the PV with pvcreate. At first I didn't use the old UUID, so vgcfgrestore complained. After creating the PV with the proper UUID, LVM recognized the PV, VG and LVs. Unfortunately I can't mount any of the LVs. Something seems to be broken:
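
In detail, the restore sequence looks roughly like this. A sketch, assuming the VG is named vg, the PV sits on /dev/md3, and the UUID (a placeholder below) is taken from the metadata backup:

# re-create the PV with the UUID recorded in the backup file
# (the UUID below is a placeholder; look it up in /etc/lvm/backup/vg)
pvcreate --uuid "56ogEk-placeholder" --restorefile /etc/lvm/backup/vg /dev/md3

# restore the VG metadata and activate the logical volumes
vgcfgrestore vg
vgchange -ay vg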

hahn-rescue:~# mount /dev/vg/sys /mnt
mount: you must specify the filesystem type

Feb 19 07:50:02 hahn-rescue kernel: [748288.740949] XFS (dm-0): bad magic number
Feb 19 07:50:02 hahn-rescue kernel: [748288.741009] XFS (dm-0): SB validate failed

Running a gpart scan on my SW RAID5 gave me some results:

hahn-rescue:~# gpart /dev/md3

Begin scan...
Possible partition(SGI XFS filesystem), size(20470mb), offset(5120mb)
Possible partition(SGI XFS filesystem), size(51175mb), offset(28160mb)
Possible partition(SGI XFS filesystem), size(1048476mb), offset(117760mb)
Possible partition(SGI XFS filesystem), size(204787mb), offset(1168640mb)
Possible partition(SGI XFS filesystem), size(204787mb), offset(1418240mb)
Possible partition(SGI XFS filesystem), size(1048476mb), offset(1626112mb)

*** Fatal error: dev(/dev/md3): seek failure.

This is not the complete list of LVs, as a comparison with the output of lvs shows:

hahn-rescue:~# lvs
  LV                 VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  storage1           lv   -wi-ao--   1.00t                                          
  AmigaSeagateElite3 vg   -wi-a---   3.00g                                          
  audio              vg   -wi-a---  70.00g                                          
  backup             vg   -wi-a---   1.00t                                          
  data               vg   -wi-a---  50.00g                                          
  hochzeit           vg   -wi-a---  40.00g                                          
  home               vg   -wi-a---   5.00g                                          
  pics               vg   -wi-a--- 200.00g                                          
  sys                vg   -wi-a---  20.00g                                          
  video              vg   -wi-a--- 100.00g                                          
  windata            vg   -wi-a--- 100.00g   

Please note that /dev/lv/storage1 is my HW RAID, where I stored images of the /dev/vg/* LVs to run xfs_repair and such on them. Anyway, the sizes of the XFS partitions recognized by gpart are mostly correct, but some are missing, and xfs_repair can't do anything good with the backup images on storage1. Everything ends up in /lost+found, because the blocks seem to be mixed up somehow.
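
For completeness: running xfs_repair against such an image copy goes through a loop device. A sketch, with the image file name assumed:

# attach a copy of the LV image to a loop device and do a
# dry run first (-n reports problems without writing anything)
losetup /dev/loop0 /mnt/storage1/sys.img
xfs_repair -n /dev/loop0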

What I figured out is that my old RAID5 device used metadata format 1.2, whereas the new one uses format 0.9. My best guess is now to re-create the RAID5 device with format 1.2, do a vgcfgrestore on that, and hopefully have a working LVM with working LVs back that I can mount again. If there's anything else I might be able to try, dear Lazyweb, please tell me. Please see the attached config files/tarballs for a complete overview.
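
Re-creating the array with the right metadata version, without letting mdadm touch the data, would look roughly like this. A sketch only: the disks, their order and the chunk size below are placeholders and must match the original array (see the attached mdadm.conf), otherwise things get worse:

# stop the array that was re-created with the wrong metadata
mdadm --stop /dev/md3

# re-create it with 1.2 metadata; --assume-clean skips the initial
# sync so the existing data is not overwritten
# (disks, order and chunk size are placeholders!)
mdadm --create /dev/md3 --metadata=1.2 --level=5 --raid-devices=3 \
      --chunk=512 --assume-clean /dev/sda3 /dev/sdb3 /dev/sdc3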

Side note: except for AmigaSeagateElite3, which is a dd image of an old Amiga SCSI disk, I should have a fairly complete backup at my second backup location, so there's not much lost. But it would be a real timesaver if I were able to recover the lost LVs. Both systems are behind DSL/cable lines with a limit of 10 Mbps upstream; it would take weeks to transfer the data, and sending a USB disk would be faster.

Attachments: mdadm.conf_.txt (954 bytes), hahn-lvm-restore.tar.gz (14.7 KB)

Status of m68k port

Just another short status update on the m68k port and the autobuilders. According to Buildd.Net we currently have 6425 packages installed:

wanna-build statistics - Sat Feb 16 22:52:37 CET 2013
-----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  6425 (buildd_m68k-ara5: 1011, buildd_m68k-arrakis: 86,
                         buildd_m68k-elgar: 217, buildd_m68k-kullervo: 52,
                         buildd_m68k-vivaldi: 123, tg: 3234, unknown: 1701,
                         wouter: 1)
Needs-Build     :  1375
Building        :    35 (buildd_m68k-ara5: 1, buildd_m68k-arrakis: 1,
                         buildd_m68k-elgar: 1, buildd_m68k-kullervo: 1,
                         buildd_m68k-vivaldi: 1, tg: 30)
Built           :     3 (buildd_m68k-arrakis: 3)
Uploaded        :    14 (tg: 14)
Failed          :    80 (buildd_m68k-ara5: 31, tg: 49)
Dep-Wait        :     3 (tg: 3)
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
BD-Uninstallable:  1750
Auto-Not-For-Us :   187
Not-For-Us      :    21
total           :  9978

64.39% (6425) up-to-date, 64.53% (6439) including uploaded
13.78% (1375) need building
 0.35% (  35) currently building
 0.03% (   3) already built, but not uploaded
 0.83% (  83) failed/dep-wait
 0.00% (   0) old failed/dep-wait
17.54% (1750) not installable, because of missing build-dep
 1.87% ( 187) need porting or cause the buildd serious grief (Auto)
 0.21% (  21) need porting or cause the buildd serious grief

Considering where we came from, we're doing well - especially given the fact that some buildds are still not working properly because of the missing SCSI driver for NCR53C9XF (espfast) chips. A working driver would give us 4 additional buildds, plus at least one machine that is currently stuck with its slower IDE interface.

Oh, and Kullervo is working again! Now it just needs to be relocated to the datacenter again... :-)

right2water.eu - Water is a Human Right

I don't know how the water supply is organised in your country, whether it is public or private, but if you are living in the European Union it may soon change to a private water supply. The EU Commission wants to liberalise water supply and sanitation in the EU, but this would mean higher prices and lower quality for the citizens.

There is a citizens' initiative against this plan, because water and sanitation are a human right and not a commodity that can be (ab)used by private companies to make money. Please sign the petition on right2water.eu.

The petition has already reached its goal of 1 million signatures, but unfortunately the rules are somewhat more complex: a certain quorum must be reached in the EU member countries. At the moment most signatures come from Germany, so the quorum for Germany has been reached. But according to a statistic posted on Twitter, the quorum needs to be reached in at least 7 countries, and only Germany, Belgium and Austria have done so. So please sign the petition on right2water.eu and spread the word in your country!

But why is privatization a bad idea, especially when it is done as a Public-Private Partnership (PPP)? As said, water and sanitation are a human right and must not be an object of profit. What happens when the water supply is run by a private corporation can be seen in the documentary "Water Makes Money", which was aired on the French-German TV station Arte last week. You can watch it online in German and French.

The petition is running until September, so there's enough time to sign it and - even more important - to contact your Members of the European Parliament and request a "No!" to the privatization of water and sanitation!

Progress of m68k port

A few weeks ago at Christmas, Wouter and I blogged about the successful reinstallation of m68k buildds after a very long period of inactivity. This even got us mentioned on Slashdot. It's now been roughly 3 weeks since then, and we have made some progress:

Debian-ports.org now shows that we went from 20% keeping up to about 60% keeping up. The number of installed packages went from ~1900 to about 3800, and we even triggered 200 packages from BD-Uninstallable to Needs-Build:

  wanna-build statistics - Fri Jan 18 06:52:36 CET 2013
  -----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  3868 (buildd_m68k-ara5: 488, buildd_m68k-arrakis: 20,
                         buildd_m68k-elgar: 106, buildd_m68k-vivaldi: 80,
                         tg: 1412, unknown: 1761, wouter: 1)
Needs-Build     :  3500
Building        :    26 (buildd_m68k-ara5: 1, buildd_m68k-elgar: 1,
                         buildd_m68k-vivaldi: 1, tg: 23)
Built           :     0
Uploaded        :     1 (tg: 1)
Failed          :    34 (buildd_m68k-ara5: 17, tg: 17)
Dep-Wait        :     4 (tg: 4)
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
BD-Uninstallable:  2320
Auto-Not-For-Us :   188
Not-For-Us      :     9
total           :  9975

 38.78% (3868) up-to-date,  38.79% (3869) including uploaded
 35.09% (3500) need building
  0.26% ( 26) currently building
  0.00% (  0) already built, but not uploaded
  0.38% ( 38) failed/dep-wait
  0.00% (  0) old failed/dep-wait
 23.26% (2320) not installable, because of missing build-dep
  1.88% (188) need porting or cause the buildd serious grief (Auto)
  0.09% (  9) need porting or cause the buildd serious grief

So, overall we're performing fine. The mention on Slashdot even brought up new hardware donors. Someone offered SCSI/SCA disks up to 73 GB in size, and another person even offered several Amigas, of which we'll be using an Amiga 2000 with a Blizzard 2060 accelerator card as a new buildd.

This leads me to a medium-sized drawback: we now have several Amigas with a Blizzard 2060 that could act as buildds. Unfortunately there's no SCSI driver in current kernels for that kind of hardware, which means we can't use as many machines as we otherwise could. Currently we are running 3 active buildds plus some Aranym VMs on Thorsten Glaser's hosts. We could add 4 more buildds if there were a working SCSI driver.

So, if anyone would like to contribute to the m68k port and loves kernel hacking, this would be a great way to help us. :-)

Resurrecting m68k - We're on track again!

In mid-November I already wrote about "Resurrecting m68k" - and went on holiday right after writing it. So nothing really happened until December. But then things happened rather quickly, one after another. First, I got Elgar up and running. Then I upgraded Arrakis and Vivaldi again. And then it was a lucky coincidence that my parents made a short trip to Nuremberg. Back then another buildd was located in that city: Akire, operated by Matthias "smurf" Urlichs. So I mailed him and asked whether Akire still exists, and he answered surprisingly quickly that it does - but that he wanted to take it to the garbage soon.

I asked Smurf whether my parents could pick it up, and we managed to exchange contact addresses and phone numbers. To everyone's surprise, the hotel where my parents were staying was just 180 m away from Smurf's home! So it was really easy for my parents to pick up the machine before they continued their trip to visit me in Rostock. That way I had yet another machine to upgrade! Whoohoo!

I spent most of December upgrading the machines, migrating to larger disks and setting everything up, when someone on the debian-68k list popped up and offered a hosting facility in Berlin. That was really perfect timing! I had taken Elgar from NMMN in Hamburg, where it was hosted until August, and now had a second machine, Akire, with no place to host it. So the offer made the decision easy: Elgar & Akire will go to Berlin, whereas Kullervo & Crest will move back to NMMN once those two boxes are upgraded. That way we have some kind of redundancy. Perfect!

Except that we still needed a running buildd on those machines. During the last few years - 4-5 years, I think - the sbuild/buildd suite changed in a big way. Nothing worked the way it used to anymore. So I concentrated on getting sbuild ready to pick a source package and build it, but was faced with segfaults from various things. In the end it turned out to be a somewhat broken kernel that caused all the problems. After upgrading the kernel, schroot suddenly worked and I could continue setting up sbuild. After some days things got clearer, and finally it worked: 6tunnel was the first package newly built by sbuild on m68k, on 20 December 2012!
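
For the curious, the manual test that finally succeeded boils down to a single sbuild call once the chroot is in place. A sketch; the exact options depend on the local sbuild configuration:

# build the 6tunnel source package from unstable in the chroot
sbuild -d unstable 6tunnel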

Over the next days I tried to get a larger disk (18G) working for Spice, another machine, so I could use the big disk (36G) for Akire instead of the old 2G & 4G disks, and tried to deploy the sbuild config to Arrakis and Vivaldi. That was about two days ago. The missing part was an updated buildd config. This was addressed by Wouter today (well, actually yesterday by now), and now we have a working buildd again after all these years! Hooray! :-))

Now we are back on track with the m68k port and will add more buildds, native as well as emulated ones, to bring down that "Needs-Build: 5261" number.

So, very big thanks to everyone who made this possible:

  • Wouter for configuring the buildd setup on Arrakis
  • Aurelien for adding the m68k buildd back to debian-ports.org
  • John Paul Adrian Glaubitz for offering the hosting
  • Matthias "smurf" Urlichs for taking care of Akire all these years
  • NMMN in Hamburg for being willing to continue hosting Kullervo & Crest
  • adb@#debian-68k for donating 4x 32 MB PS/2 RAM

And finally, last but not least, a very, very BIG THANKS to Thorsten Glaser, who acted as a human buildd all these years, solved the TLS problem on m68k, and kept the port alive in some kind of one-man show!

Resurrecting m68k

In August I picked up Elgar, an m68k machine, from NMMN in Hamburg, where it was supposed to run as a buildd (NMMN donated space and network). Unfortunately it was in a somewhat bad state: the operating system was out of date, the expansion cards were getting loose, and NMMN wasn't happy about the CRT monitor in its datacenter either.

Elgar is an Amiga 4000 desktop built into a custom tower case. It took some weeks and months until I found a little time to take care of Elgar, but now it's up and running again:

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.2.0-3-amiga (Debian 3.2.23-1) (debian-kernel@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-8+m68k.1) ) #1 Wed Jul 25 13:02:31 UTC 2012
[    0.000000] Enabling workaround for errata I14
[    0.000000] console [debug0] enabled
[    0.000000] Amiga hardware found: [A4000] VIDEO BLITTER AUDIO FLOPPY A4000_IDE KEYBOARD MOUSE SERIAL PARALLEL A3000_CLK CHIP_RAM PAULA LISA ALICE_PAL ZORRO3

That's the stock Debian m68k kernel, and it already runs without any problem on Arrakis and Vivaldi, two other buildds. The only problem at the moment is the missing SCSI driver for the CyberStorm Mk1 accelerator card. There were some changes in the kernel that need to be dealt with by someone who knows the code.

The other problem was upgrading from etch-m68k to unstable. I already blogged about this last year. It's not as easy anymore, and nowadays you need to deal with lots of dependency problems and such. But anyway:

elgar:~# cat /etc/debian_version
wheezy/sid
elgar:~# dpkg -l libc6
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                 Version         Architecture    Description
+++-====================-===============-===============-==============================================
ii  libc6:m68k           2.13-35         m68k            Embedded GNU C Library: Shared libraries

It's amazing that the m68k port is in such good condition after more than 4 years. That's because of the really great work of Thorsten Glaser, who is doing much of the porter work for the toolchain. But m68k currently has 4833 packages in state Needs-Build:

wanna-build statistics - Mon Nov 12 06:51:13 CET 2012
-----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  1321
Needs-Build     :  4833
Building        :     0
Uploaded        :     0
Failed          :     0
Dep-Wait        :     0
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
Not-For-Us      :     0
total           :  9906

13.34% (1321) up-to-date, 13.34% (1321) including uploaded
48.79% (4833) need building
 0.00% (   0) currently building
 0.00% (   0) failed/dep-wait
 0.00% (   0) old failed/dep-wait
 0.00% (   0) need porting or cause the buildd serious grief

So, there's a lot of work to do, and it's apparent that m68k won't keep up with the 10,000 packages in unstable. When I started running an autobuilder back in 2000, there were 2400 packages for m68k; on 15 August 2005 we had a total of 5949 packages in the archive. For m68k this means that we will have to start adding lots of packages to Not-For-Us. I think m68k will/should end up with approx. 4000 packages at most.

In the end, m68k is in fairly good shape now. It only needs to get some packages built... Let's see when the buildds are operational again... ;-)
