Bind9 vs. PowerDNS

Currently I'm playing around with DNSSEC. The handling of DNSSEC seems a little bit complex to me when looking at my current Bind9 setup. I was following the Debian Wiki page on DNSSEC and related links. The linked howto on HowToForge is a little bit outdated as it targets Squeeze. I've learned in the meantime that Bind9 can do key renewal on its own, but anyway, I looked around for other nameservers that can handle DNSSEC and came across PowerDNS, which seems to power a large number of European DNSSEC zones.
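For completeness: the automatic key handling in newer Bind9 versions (9.9 and later) works via inline signing. A minimal sketch of such a zone stanza, assuming a zone example.org with keys generated into Bind's configured key-directory (not my actual setup):

zone "example.org" {
    type master;
    file "/etc/bind/db.example.org";
    // let named re-sign the zone and roll signatures on its own
    auto-dnssec maintain;
    inline-signing yes;
};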

Bind9 is well-known, well documented, and has been serving my zones well for years. But I got the impression that DNSSEC is more or less a mess in Bind9, as it was added on top without being well integrated. In PowerDNS, on the contrary, DNSSEC support looks as if it was designed in from scratch. On the other hand, there don't seem to be many resources about PowerDNS available on the net. There's the official documentation, of course, but it is not as good as the Bind9 documentation. On the plus side you can operate PowerDNS in Bind mode, i.e. using the Bind9 configuration and zone files, and even in a hybrid mode that lets you additionally run a database-based setup.
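A minimal sketch of what such a hybrid pdns.conf might look like; the paths and database credentials here are assumptions, not a tested setup:

# reuse the existing Bind9 configuration and zone files...
launch=bind,gmysql
bind-config=/etc/bind/named.conf
# ...and additionally serve zones from a MySQL database
gmysql-host=127.0.0.1
gmysql-dbname=pdns
gmysql-user=pdns
gmysql-password=secret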

So, I'm somewhat undecided about how to proceed: stay with Bind9 and DNSSEC, completely migrate to PowerDNS with a database setup, or use PowerDNS with the bind backend? Feel free to comment or respond with your own blog post about your experience. :-)


Buildd.Net: update-buildd.net v0.99 released

Buildd.Net offers a buildd-centric view of autobuilder networks, such as previously Debian's autobuilder network or nowadays the autobuilder network of debian-ports.org. The policy of debian-ports.org requires the GPG key that a buildd uses to sign packages for upload to be valid for 1 year. Buildd admins are usually lazy people. At least they are running a buildd instead of building all those packages manually. Being a lazy buildd admin, it might happen that you forget to renew your GPG key, which will render your buildd unable to upload newly built packages.

When participating in Buildd.Net you need to run update-buildd.net, a small script that transmits some statistical data about your package building. I have now added a GPG key expiry check to that script that will warn the buildd admin by mail and by a text on the Buildd.Net arch status page, such as the one for m68k. So, either your client updates automatically to the new version or you can download the script yourself.
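The check itself is straightforward. A minimal sketch of how such a check can be done in shell; the key ID, the 30-day threshold and the mail command are assumptions of mine, not necessarily what update-buildd.net does:

KEYID=DEADBEEF                  # hypothetical buildd key ID
# field 7 of the pub line in gpg's colon output is the expiry timestamp
expiry=$(gpg --with-colons --list-keys "$KEYID" | awk -F: '/^pub/ {print $7; exit}')
now=$(date +%s)
# warn if the key expires within 30 days
if [ -n "$expiry" ] && [ $(( (expiry - now) / 86400 )) -lt 30 ]; then
    echo "GPG key $KEYID expires in less than 30 days" | mail -s "buildd key expiry" root
fi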


Debian donation to m68k arrived

The Debian m68k port has been entitled by the DPL to receive a donation of five memory expansion cards for the m68k autobuilders. The cards arrived two weeks ago and are now being shipped to the appropriate buildd admins. Adding those 256 MB memory expansions will have a huge effect on the m68k buildds, because most of them are Amigas that are currently running with "just" 128 MB.

The problem with those expansion cards is actually making use of them. Sounds strange, but here's the story behind it...

Those memory expansion cards, namely the BigRamPlus from Individual Computers, are Zorro III bus cards, and that bus has some speed limitations. The Amiga memory model is best described in the Amiga Hardware Reference Manual. For Zorro III based Amigas this is covered in the section "A3000 memory map", where you can see that the memory model is divided into different address spaces. The most important address space is the "Coprocessor Slot Expansion" space, starting at $0800 0000. This is where the memory on CPU accelerator cards is found, and it runs at full CPU speed.

The BigRamPlus, however, is located within the "Zorro III Expansion" address space at $1000 0000 and reaches transfer rates of about 13 MB/s. Then again, there's still motherboard expansion memory and others like Zorro II expansion memory. Unfortunately the current kernel does not support SPARSEMEM on m68k, but uses DISCONTIGMEM, as Geert Uytterhoeven explained. In short: we need SPARSEMEM support to easily make use of all the available memory chunks that can be found. To make it a little more difficult, Amigas use a kind of memory priority. Memory on accelerator cards usually has a priority of 40, motherboard expansion memory has a priority of, let's say, 20, and chip memory a priority of 0. This priority usually corresponds to the speed of the memory. So, of course, we want the kernel loaded into accelerator memory.

Basically we could do that by using a memfile and defining the different memory chunks in the appropriate priority order, like this one:

2097152
0x08000000 67108864
0x07400000 12582912
0x05000000 268435424

That would be an easy solution, right? Except that it doesn't work out. Currently the kernel is loaded into the first memory chunk that is defined, and all memory chunks at lower addresses are ignored. As you can see, 0x07400000 and 0x05000000 would be ignored because of this. Getting confused? No problem! It will get worse! ;)

There's another method of accessing memory on Amigas: it's called z2ram and uses Zorro II memory as, let's say, a swapping area. But maybe you guessed it: z2ram does not work with Zorro III memory (yet). So, this won't work either.

Geert suggested using that Zorro III memory as an MTD device, and this finally worked out! You'll need these options in your kernel:

CONFIG_MTD=m
CONFIG_MTD_CMDLINE_PARTS=m
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_SWAP=m
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
CONFIG_MTD_SLRAM=m
CONFIG_MTD_PHRAM=m

Then you just need to create the MTD device and configure it as swap space:

# create an MTD device backed by the BigRamPlus memory
# (start 0x50000000, size 0xfffffe0 = 256 MB minus the reserved 0x20 bytes)
/sbin/modprobe phram phram=bigram0,0x50000000,0xfffffe0
# expose the MTD device as a block device
/sbin/modprobe mtdblock
# initialize it as swap space and enable it with a high priority
/sbin/mkswap /dev/mtdblock0
/sbin/swapon -p 5 /dev/mtdblock0

And then you're done: 

# swapon -s
Filename        Type       Size    Used   Priority
/dev/sda3       partition  205932  8      1
/dev/sdb3       partition  875536  16     1
/dev/mtdblock0  partition  262136  53952  5

To make it even worse (yes, there's still room for that! ;)) you can put two memory expansion cards into one box: 

# lszorro -v
00: MacroSystems USA Warp Engine 40xx [Accelerator, SCSI Host Adapter and RAM Expansion]
40000000 (512K)

01: Unknown device 0e3b:20:00
50000000 (256M)

02: Village Tronic Picasso II/II+ RAM [Graphics Card]
00200000 (2M)

03: Village Tronic Picasso II/II+ [Graphics Card]
00e90000 (64K)

04: Hydra Systems Amiganet [Ethernet Card]
00ea0000 (64K)

05: Unknown device 0e3b:20:00
60000000 (256M)

The two "Unknown device" entries are the two BigRamPlus cards. As you can see card #1 starts at 0x50000000 and card #2 starts at 0x60000000. Unfortunately the phram kernel module can be loaded twice with different start addresses, but the idea to start at 0x50000000 with a size of 512M won't work either as there seems to be a reserved 0x20 bytes a range at the beginning of each card. Anyway...

So, to make a very long and weird story short: the donated memory cards from Debian can now be used as additional and fast swap space for the buildds, at least until SPARSEMEM support is working.

Thanks again for donating the money for those memory expansion cards for the good old m68k port. Once SPARSEMEM support for m68k is done, it will benefit not only these cards in Amigas but Ataris as well.


Sharing GnuPG between Linux and OSX

I've been using GnuPG for years. Well, "using" is too strong. I have a GPG key that I created at some point and use once in a while when sending login credentials to other Linux people. But since Edward Snowden's NSA leaks I now get encrypted mails from non-Linux people as well. It is great that people are making use of strong encryption to protect their communication, but it is frightening that they have to do so because the NSA is conducting mass surveillance of the complete world, violating our civil and human rights.

Anyway, one problem with GnuPG and other PKI tools is that you have to keep your private key secret. When you use more than one device to write your mails, you will run into usability problems like I did. My main computer is my Debian box, but I use a MacBook Pro laptop with OSX very often as well. There is GPGSuite (formerly GPGMail) for OSX to pimp your Mail.app with GPG. It uses, of course, a local .gnupg/ directory and thus would create a separate GnuPG key pair. But obviously I want to use my existing key pair - without the need to copy it over from my Linux box to my laptop.

The solution would be a simple netatalk setup to mount the home directory from the Linux box under OSX, plus matching symlinks to the Linux .gnupg/ directory (or even better: symlink the contents where necessary and not the whole directory). A sketch of those symlinks on the OSX side, assuming the Linux home is mounted at /Volumes/home (the mount point is an assumption of mine):
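# hypothetical symlinks; /Volumes/home is an assumed netatalk mount point
ln -s /Volumes/home/.gnupg/secring.gpg ~/.gnupg/secring.gpg
ln -s /Volumes/home/.gnupg/pubring.gpg ~/.gnupg/pubring.gpg
ln -s /Volumes/home/.gnupg/trustdb.gpg ~/.gnupg/trustdb.gpg

But that would've been too easy, I guess, because I got this error message on OSX: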

So, basically this didn't work right out of the box. Fortunately the GPGSuite support guys replied quickly and solved the problem. The version they released yesterday fixed it, but I needed to add the following line to my ~/.gnupg/gpg-agent.conf, which didn't exist before either:

no-use-standard-socket

With that line everything works like a charm: Mail.app under OSX uses my GPG keys from my Debian box.


Exim4 and TLS with GMX/Web.de

Due to the unveiling of the NSA surveillance by Edward Snowden, some German mail providers, for example GMX and Web.de, decided last week to use TLS when sending mails out. Usually there shouldn't be a problem with that, but it seems as if the Debian package of Exim4 (exim4-daemon-heavy) doesn't support any TLS ciphers that those providers will accept. The Debian package uses GnuTLS for TLS, and there is Bug #446036 asking for compilation against OpenSSL instead.

Anyway, maybe it's something in my config, as I don't use the Debian config but my own /etc/exim4/exim4.conf. Here are the TLS related parts:

tls_advertise_hosts = *
tls_certificate = /etc/exim4/ssl.crt/webmail-ssl.crt
tls_privatekey = /etc/exim4/ssl.key/webmail-server.key

That's my basic setup. After discovering that GMX and Web.de couldn't send mails to me anymore, I added some more settings, following the Exim docs (commented out here, because I don't use GnuTLS anymore):

#tls_dhparam = /etc/exim4/gnutls-params-2236
#tls_require_ciphers = ${if =={$received_port}{25}\
# {NORMAL:%COMPAT}\
# {SECURE128}}

But I still got this kind of error:

2013-08-14 22:49:27 TLS error on connection from mout.gmx.net [212.227.17.21] (gnutls_handshake): Could not negotiate a supported cipher suite.
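As an aside: you can watch the STARTTLS handshake from the client side with OpenSSL's test client. This is an illustration of my own, not from the Exim docs, and mail.example.org stands in for the real host:

# probe the server's STARTTLS handshake from outside
# (mail.example.org is a placeholder, not my real hostname)
openssl s_client -connect mail.example.org:25 -starttls smtp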

As this didn't help either, I recompiled exim4-daemon-heavy against OpenSSL and, et voilà, it worked again. So, the question is whether there's any way to get this working with GnuTLS. Does the default Debian config work, and if so, why? And if not, can a decision be made to use OpenSSL instead of GnuTLS? Reading the bug report, it seems as if there are exemptions for linking against OpenSSL, so the GPL wouldn't be violated.

UPDATE 16.08.2013:
I reinstalled the GnuTLS version of exim4-daemon-heavy to test the recommendation from the comments with explicit tls_require_ciphers settings, but with no luck:

#tls_require_ciphers = SECURE256
#tls_require_ciphers = SECURE128
#tls_require_ciphers = NORMAL

Trying these cipher settings one by one, each resulted in the usual "(gnutls_handshake): Could not negotiate a supported cipher suite." error.

UPDATE 2 16.08.2013:
There was a different report about a recent GnuTLS problem on the debian-user-german mailing list. It's not the same cause, but it might be related.


Debian-ports mirror on Buildd.Net

As a more or less unrelated side effect of the Debian m68k port resurrection, I decided to give something back to debian-ports.org, as we are relying on that service for our port. So I set up a debian-ports.org mirror:

The mirror is running on a fast Gigabit connection and is reachable via IPv4 and (native) IPv6. It carries all current archs. 
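To use a debian-ports mirror, an apt line of roughly the following form should do, since such mirrors follow the usual debian-ports layout; MIRRORHOST is a placeholder, not the real mirror address:

# hypothetical /etc/apt/sources.list entry; MIRRORHOST is a placeholder
deb http://MIRRORHOST/debian-ports/ unstable main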

Enjoy!


First m68k buildd relocated to FU Berlin

It already happened some days ago, but our first m68k buildd, elgar.buildd.net, was relocated to its new hosting facility in Berlin on June 2nd. Its new home is the Physics Department of Freie Universität (FU) Berlin. So a big thank you to FU Berlin and John Paul Adrian Glaubitz for making this happen!

With Elgar now being hosted in Berlin, the resurrection of the m68k port is steadily ongoing. More machines will follow Elgar: while Elgar will be accompanied in Berlin by Akire (akire.buildd.net), Kullervo and Crest will hopefully be hosted at their old hosting donor NMMN in Hamburg somewhat later. The m68k port itself is doing fine and coped well with all the new packages after the release of Wheezy:

  wanna-build statistics - Sat Jun 15 16:51:13 CEST 2013
  -----------------------------------------------------

Distribution unstable:
---------------------
Installed       :  7041 (buildd_m68k-ara5: 1016, buildd_m68k-arrakis: 157,
                         buildd_m68k-elgar: 201, buildd_m68k-kullervo: 238,
                         buildd_m68k-vivaldi: 153, tg: 53, unknown: 5223)
Needs-Build     :   780
Building        :     9 (buildd_m68k-ara5: 1, buildd_m68k-arrakis: 3,
                         buildd_m68k-elgar: 2, buildd_m68k-kullervo: 2,
                         buildd_m68k-vivaldi: 1)
Built           :     1 (buildd_m68k-elgar: 1)
Uploaded        :     0
Failed          :    73 (buildd_m68k-ara5: 51, buildd_m68k-kullervo: 10,
                         tg: 12)
Dep-Wait        :     3 (tg: 3)
Reupload-Wait   :     0
Install-Wait    :     0
Failed-Removed  :     0
Dep-Wait-Removed:     0
BD-Uninstallable:  1977
Auto-Not-For-Us :   192
Not-For-Us      :    50
total           : 10227

 68.85% (7041) up-to-date,  68.85% (7041) including uploaded
  7.63% (780) need building
  0.09% (  9) currently building
  0.01% (  1) already built, but not uploaded
  0.74% ( 76) failed/dep-wait
  0.00% (  0) old failed/dep-wait
 19.33% (1977) not installable, because of missing build-dep
  1.88% (192) need porting or cause the buildd serious grief (Auto)
  0.49% ( 50) need porting or cause the buildd serious grief

We are now constantly above 7000 packages installed, which is great considering the fact that we were at about 10% up-to-date by December 2012. Now we are at approx. 70% with just 5 buildds.

Of course we would like to get more buildds up & running, but currently the SCSI driver for the NCR53C9XF (espfast) chip is missing on m68k. Sadly this chip is used on several accelerator cards for Amiga. With a working SCSI driver we could easily double our number of buildds. But I hope that this is just a matter of time... :-)


Edward Snowden blew the whistle on PRISM

Sometimes there are true heroes, even today. Like Edward Snowden, who made PRISM publicly known.

There's an interview by The Guardian with Edward Snowden:

In a note accompanying the first set of documents he provided, he wrote: "I understand that I will be made to suffer for my actions," but "I will be satisfied if the federation of secret law, unequal pardon and irresistible executive powers that rule the world that I love are revealed even for an instant." [...]

He has had "a very comfortable life" that included a salary of roughly $200,000, a girlfriend with whom he shared a home in Hawaii, a stable career, and a family he loves. "I'm willing to sacrifice all of that because I can't in good conscience allow the US government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they're secretly building."

Neither Bradley Manning nor Edward Snowden should be sentenced; the government that is responsible for surveillance programs like PRISM should be.


Problems with DaviCal after Wheezy Upgrade

It's been a while since Wheezy was released, but my problems with DaviCal started with that upgrade. I don't know whether this is a DaviCal bug or not; maybe partly. This is just an informational note before I file a bug report (or not).

The first problem was that I couldn't add any contacts (CardDAV) anymore from OS X. A friend of mine who is using my server has the same issue. He mailed me that he's getting the following error from the Contacts app under OS X:

[NSInvalidArgumentException] -[CoreDAVNullParser rootElement]:
unrecognized selector sent to instance 0x7f91bad6b1d0

When I looked at the web frontend I discovered that the DaviCal database hadn't been updated. I don't know whether this is a general problem or it just happened to me. Anyway, executing the proper database upgrade script delivered by DaviCal was no problem at all. Adding new contacts worked again.
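If I remember correctly, the script in question is update-davical-database, shipped with the Debian package; the invocation below is a sketch, with the path and user being assumptions:

# assumed Debian path; run as a user allowed to alter the davical database
su postgres -c /usr/share/davical/dba/update-davical-database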

But then, some time later, I discovered that my calendars on the iPhone didn't update anymore. That was a bigger problem to solve, because it appears to be an iOS 6.x problem and not a DaviCal issue. Finally I found, via Google, this mail on the DaviCal mailing list:

When I started using DAViCal, I created calendars with "named" paths, not these long names, like you stated in your message. With iOS 6 this was not a good idea.

I created every calendar anew with the iOS 6 device and copied all calendar items to the new calendars. It was about 15 minutes of work for me. I described it in a blog post (German):
http://tech.blog.rana.at/2012/10/24/davical-caldav-mit-ios-6/

Not a real solution for the problem, but a workaround.

So, as the linked page is in German, I'll rephrase the "solution" here:

The problem seems to be that the old principal path names like user/calendar or user/home do not work anymore under iOS 6. Instead you'll need to create a new calendar from your iPhone. So, just configure your calendar account as usual. You'll end up with an empty calendar. Now create a new event in that calendar on the iPhone. The new calendar should show up under OS X (or in other clients). There you can export your existing appointments to an *.ics file, which you can import in the DaviCal web frontend into the new principal collection. Your dates should then show up on your iPhone again, but you'll end up with duplicate entries in the iCal app under OS X. If deleting your old default calendar isn't possible, define the new calendar as the default first, then delete the old one. If everything went well, you can share your dates between OS X and your iPhone via DaviCal again. At least this worked for me. :-)

It's late over here, so I'll postpone writing the bug report for now...


Is GSoC a whitewashing project?

"The same procedure as last year, Ms. Sophie?" - "The same procedure as every year, James!" - at least when summer is coming, every year Google starts its "Google Summer of Code" (GSoC). This contest is a yearly event since 2005. Wikipedia states: 

The Google Summer of Code (GSoC) is an annual program, first held from May to August 2005,[1] in which Google awards stipends (of 5,000 USD, as of 2013)[2] to hundreds of students who successfully complete a requested free and open-source software coding project during the summer. The program is open to students aged 18 or over – the closely related Google Code-In is intended for students under the age of 18.

[...]

The program invites students who meet their eligibility criteria to post applications that detail the software-coding project they wish to perform. These applications are then evaluated by the corresponding mentoring organization. Every participating organization must provide mentors for each of the project ideas received, if the organization is of the opinion that the project would benefit from them. The mentors then rank the applications and decide among themselves which proposals to accept. Google then decides how many projects each organization gets, and asks the organizations to mark at most that many projects accordingly.

Sounds nice, eh? Submit a nice project, do some cool coding, and get US$5,000 for having some sort of fun!

When writing Open Source software (FLOSS/Libre Software), often there's no money in it. It's an honorary task, just for the benefit of creating a better world - a little bit, at least. Doing some coding on FLOSS and getting paid for it is great, eh?

But think twice! Maybe Google is not the friendly company it always claims to be? In the first place Google is a company and wants to earn money. Yes, it has a mantra: "Don't be evil!" But the company's main purpose is to earn money, and it will do anything to achieve this.

Think of GSoC as a cheap marketing project for Google - a contest for whitewashing Google's image. They can say: "Hey, look! We are supporting the FLOSS community! We are not evil!" And you can look at GSoC as a cheap recruitment program for Google. Overall it appears that Google benefits more from GSoC than the participants as individuals or the FLOSS community as a whole. There is a danger that the community gets pocketed by Google instead of enforcing FLOSS standards and staying as independent as possible.

Sure, you need to pay bills, get something to eat and so on, but do you really want to help Google whitewash its image as a monopolistic company? Or would it be worth trying out some sort of crowdfunding when you have a great idea for a program you want to write?
 
