Letsencrypt - when your blog entries don't show up on Planet Debian

Recently there has been much talk on Planet Debian about Let's Encrypt certs. This is great, because using HTTPS everywhere improves security and gives the NSA some more work to decrypt the traffic.

However, when you enable your blog with a Let's Encrypt cert, you might run into the same problem as I did: your new articles won't show up on Planet Debian after changing your feed URI to HTTPS. The reason seems to be quite simple: planet-venus, the software behind Planet Debian, seems to have problems with SNI-enabled websites.

When following the steps outlined in the Debian Wiki, you can check this yourself:

INFO:planet.runner:Fetching https://blog.windfluechter.net/taxonomy/term/2/feed via 5
ERROR:planet.runner:HttpLib2Error: Server presented certificate that does not match host blog.windfluechter.net: {'subjectAltName': (('DNS', 'abi94oesede.de'), ('DNS', 'www.abi94oesede.de')), 'notBefore': u'Jan 26 18:05:00 2016 GMT', 'caIssuers': (u'http://cert.int-x1.letsencrypt.org/',), 'OCSP': (u'http://ocsp.int-x1.letsencrypt.org/',), 'serialNumber': u'01839A051BF9D2873C0A3BAA9FD0227C54D1', 'notAfter': 'Apr 25 18:05:00 2016 GMT', 'version': 3L, 'subject': ((('commonName', u'abi94oesede.de'),),), 'issuer': ((('countryName', u'US'),), (('organizationName', u"Let's Encrypt"),), (('commonName', u"Let's Encrypt Authority X1"),))} via 5

I've filed bug #813313 for this. So, this might explain why your blog post doesn't appear on Planet Debian. Currently 18 sites seem to be affected by this cert mismatch.
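
You can also quickly check whether a site serves the wrong certificate without SNI by using openssl s_client; a small sketch, using my blog's hostname as an example:

# Without SNI the server falls back to its default certificate:
openssl s_client -connect blog.windfluechter.net:443 < /dev/null 2>/dev/null | openssl x509 -noout -subject
# With SNI (-servername) the matching certificate is presented:
openssl s_client -connect blog.windfluechter.net:443 -servername blog.windfluechter.net < /dev/null 2>/dev/null | openssl x509 -noout -subject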


rpcbind listening on all interfaces

Currently I'm testing GlusterFS as a replicating network filesystem. GlusterFS depends on the rpcbind package. No problem with that, but I usually want the services running on my machines to listen only on those addresses/interfaces that are needed to fulfill their task. This is especially important because rpcbind can be abused by remote attackers for RPC amplification attacks (DDoS). The rpcbind man page states:

-h : Specify specific IP addresses to bind to for UDP requests. This option may be specified multiple times and is typically necessary when running on a multi-homed host. If no -h option is specified, rpcbind will bind to INADDR_ANY, which could lead to problems on a multi-homed host due to rpcbind returning a UDP packet from a different IP address than it was sent to. Note that when specifying IP addresses with -h, rpcbind will automatically add 127.0.0.1 and if IPv6 is enabled, ::1 to the list.

Ok, although there is neither a /etc/default/rpcbind.conf nor a /etc/rpcbind.conf nor a sample-rpcbind.conf under /usr/share/doc/rpcbind, a quick web search revealed a sample config file. I'm using this one:

# /etc/init.d/rpcbind
OPTIONS=""

# Cause rpcbind to do a "warm start" utilizing a state file (default)
# OPTIONS="-w "

# Uncomment the following line to restrict rpcbind to localhost only for UDP requests
OPTIONS="${OPTIONS} -h 192.168.1.254"
# OPTIONS="${OPTIONS} -h 127.0.0.1 -h ::1"

# Uncomment the following line to enable libwrap TCP-Wrapper connection logging
OPTIONS="${OPTIONS} -l "

As you can see, I want to bind to 192.168.1.254. After an /etc/init.d/rpcbind restart, a netstat check to verify that everything works as desired shows...

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 2084266 30777/rpcbind
tcp6 0 0 :::111 :::* LISTEN 0 2084272 30777/rpcbind
udp 0 0 0.0.0.0:848 0.0.0.0:* 0 2084265 30777/rpcbind
udp 0 0 192.168.1.254:111 0.0.0.0:* 0 2084264 30777/rpcbind
udp 0 0 127.0.0.1:111 0.0.0.0:* 0 2084260 30777/rpcbind
udp6 0 0 :::848 :::* 0 2084271 30777/rpcbind
udp6 0 0 ::1:111 :::* 0 2084267 30777/rpcbind

Whoooops! Although I've specified that rpcbind should only listen on 192.168.1.254 (and localhost, as described by the man page), rpcbind is still listening on all addresses. A quick check whether the process is using the correct options:

root     30777  0.0  0.0  37228  2360 ?        Ss   16:11   0:00 /sbin/rpcbind -h 192.168.1.254 -l

Hmmm, yes, -h 192.168.1.254 is specified. Ok, something is going wrong here...

According to an entry in Ubuntu's Launchpad, I'm not the only one who has experienced this problem. That Launchpad entry mentions that upstream seems to have a fix in version 0.2.3, but I experienced the same behaviour in stable as well as in unstable, where the package version is 0.2.3-0.2. Apparently the problem still exists in Debian unstable.

I'm somewhat undecided whether to file a normal bug against rpcbind or whether I should label it as a security bug, because it opens a service to the public that can be abused for amplification attacks, even though you might have configured rpcbind to listen on internal addresses only.
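
Until this is fixed, a packet filter can serve as a stopgap against abuse from the outside. A minimal sketch, assuming 192.168.1.0/24 is the only network that should reach the portmapper:

# Allow portmapper traffic from the internal net only, drop everything else
iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j DROP
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 111 -j DROP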


Letsencrypt: challenging challenges

On December 3rd 2015 the Let's Encrypt project went into public beta, and this is a good thing! Having more and more websites running their HTTPS on good and valid SSL certificates is great, especially because Let's Encrypt takes care of renewing the certs every now and then. But there are still some issues with Let's Encrypt. Some people criticize the Python client for needing root privileges and such. Others complain that Let's Encrypt currently only supports webservers.

Well, I think for a public beta this is what we could have expected from the start: the Let's Encrypt project focused on a reference implementation, and other implementations are already available. But one thing seems to be a problem in the design of how Let's Encrypt works: it uses a challenge-response method to verify that the requesting user controls the domain for which the certificate shall be issued. This might work well in simple deployments, but what about slightly more complex setups like multiple virtual machines and different protocols involved?

For example: you're using domain A for your communication, like user@example.net for your mail, XMPP and SIP. Your mailserver runs on one virtual machine, whereas the webserver is running on a different virtual machine. The same for XMPP and SIP: a separate VM each.

Usually the Let's Encrypt approach would be that you configure your webserver (by setting up a /.well-known/acme-challenge/* location or using a standalone server on port 443) to handle the challenge-response requests. This would give you an SSL cert for your webserver example.net. Of course you could copy this cert to your mail, XMPP and SIP servers, but then you have to do this every time the SSL cert gets renewed.

Another challenge is, of course, that you don't have just one or two domains, but a whole bunch of them. In my case I host more than 60 domains. Basically the mail for all domains is handled by my mailserver running on its own virtual machine. The webserver is located on a different VM. For some domains I offer XMPP accounts on a third VM.

What is the best way to solve this problem? Moving everything to just one virtual machine? Naaah! Writing some scripts to copy the certs as needed? Not very smart either. Using a network share for the certs between all VMs? Hmmm... would that work?

And what about the TLSA entries of your DNSSEC setup? When an SSL cert is renewed, the fingerprint might need an update in your DNS zone - for several protocols like mail, XMPP, SIP and HTTPS. At least the Bash implementation of Let's Encrypt offers a "hook" which is called after the SSL cert has been issued.
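
That hook could be used to distribute the renewed certs and to print the new TLSA data. Here is a rough sketch of what I have in mind; hostnames, paths and service names are just assumptions, not a finished solution:

#!/bin/bash
# Hypothetical renewal hook: $1 is the domain the cert was issued for.
DOMAIN="$1"
CERTDIR="/etc/letsencrypt/live/$DOMAIN"    # path depends on your client

# Copy the renewed cert/key to the other VMs and reload whatever runs there
# (simplified: in reality each host would reload only its own service).
for HOST in mail.example.net xmpp.example.net; do
    rsync -a "$CERTDIR/" "root@$HOST:/etc/ssl/$DOMAIN/"
    ssh "root@$HOST" "systemctl reload postfix prosody || true"
done

# Print the new TLSA "3 0 1" hash (SHA-256 over the DER cert) for the DNS zone
openssl x509 -in "$CERTDIR/cert.pem" -outform DER | sha256sum | awk '{print $1}'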

What are your ways of dealing with the ACME protocol challenges in this kind of multi-domain, multi-VM setup?


It has been 30 years now...

Sometimes you realize how old you are when listening to the radio and they announce hits from your days of youth as "oldies". Or when today's youth asks you what a music cassette or a 5.25" floppy disk is. You're even older than that when you still know 8" disks. We have one of those on our pin board.

Or you may realize your age when the favorite computer of your youth celebrates its 30th anniversary this year!

In 1985 the Amiga was introduced to the public - and 30 years later Amigas around the world are still running! Some are still operated under AmigaOS, some are maybe running NetBSD, but some are running the Debian m68k port - even though the port was expelled from Debian years ago! Some people still take care of this port, and it keeps up with building packages, although mostly thanks to emulators doing most of the work.

But anyway, the Amiga is turning 30 this year! And what a great machine it was! It brought multitasking, colorful graphics and multi-channel audio to the masses. It was years ahead of its competitors, but it was doomed to fail because of management failures at the producing company Commodore, which went bankrupt in 1994.

The "30th Anniversary Event" will take place on July 25th at the Computer History Museum in Mountain View, California, USA - if the kickstarting the event will be successful during the next two weeks. So, when you earned your first merits in computing on an Amiga as well, you might want to give your Amiga history another kickstart! Not the Kickstart floppy as the A1000 needed to boot up, but fundraising the event on kickstart.org!

I think this event is important not only to old Amiga users from the good old days, but for the remembrance of computing history in general. Without doubt the Amiga set new standards in computer history and is an important part of industrial heritage.


Bind9 vs. PowerDNS - part 2

Two weeks ago I wrote about implementing DNSSEC with Bind9 or PowerDNS and asked for opinions, because Bind9 appeared to me to be too complex to set up with regular key signing and such, and PowerDNS seemed nice and easy, but some kind of black box where I don't know what's happening.

I think I've now found the best and most suitable way for me to deal with DNSSEC. Or in short words: Bind9 won!

It won because of the inline-signing config option that you can use in Bind 9.9, which happens to be in backports. Another tip I can give based on my findings on the web: if you plan to implement DNSSEC with Bind9, do NOT search for "bind dnssec" on the web. This will only bring up old HowTos and manuals which leave you with the burden of manually updating your keys. Just add the magic word "inline-signing" to your search phrase and you'll find proper results like the one from Michael McNally on a subpage of ISC.org: In-line Signing With NSEC3 in BIND 9.9+ -- A Walk-through. It's a fairly good starting point, but it still left me with several manual steps to get a DNSSEC-signed zone.
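
For reference, the zone config for inline-signing is quite small. A minimal sketch (zone name and paths are placeholders, of course):

zone "example.net" {
    type master;
    file "/etc/bind/zones/example.net.zone";
    key-directory "/etc/bind/keys";
    auto-dnssec maintain;
    inline-signing yes;
};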

I'm quite a lazy guy when it comes down to manual steps that need to be executed repeatedly, as are many others in IT, I think. So I wrote a small wrapper script to do the necessary steps: creating the keys, adding the necessary config options to your named.conf.local, enabling nsec3params, adding the DS records to your zone file and displaying the DNSKEY, so that you just need to upload it to your registrar.

One problem was still open: when doing auto-signing/inline-signing with Bind9, you are left with your plain text zone file, whereas your signed zone file keeps increasing its serial with each key rollover. When you then change your plain text zone file by adding, changing or removing RRs of that domain, you're left with the manual task of finding out what the currently used serial actually is, because it's not your plain text zone file's serial +1 anymore. This is of course an awkward part I wanted to get rid of. Therefore my script includes an option to edit zone files with your favorite editor, automatically determine the currently highest serial, either on disk or in DNS, and raise that serial by 1. Finally the zone is automatically reloaded by rndc.
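
The serial handling boils down to something like the following sketch; zone name, file path and the "; serial" comment layout are assumptions, my actual script differs:

ZONE="example.net"
ZONEFILE="/etc/bind/zones/example.net.zone"

# Serial as published in DNS (third field of the SOA record)...
DNSSERIAL=$(dig +short SOA "$ZONE" @127.0.0.1 | awk '{print $3}')
# ...and the serial in the plain text zone file (assumes "<serial> ; serial")
FILESERIAL=$(awk '/; serial/ {print $1; exit}' "$ZONEFILE")

# Take the higher of the two, add 1, write it back and reload the zone
NEWSERIAL=$(( (DNSSERIAL > FILESERIAL ? DNSSERIAL : FILESERIAL) + 1 ))
sed -i "s/$FILESERIAL ; serial/$NEWSERIAL ; serial/" "$ZONEFILE"
rndc reload "$ZONE"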

That way I now have the same comfort with Bind9 as with PowerDNS, but I also know what's going on, because it's not a black box anymore. Me happy. :-)

P.S.: I don't know whether this script is of interest to other users, because it relies heavily on my own setup, e.g. paths and such. But if there's interest, just ask...

P.P.S.: Well, I think it's better if you can decide yourself whether my script is of interest to you... please find it attached...

Attachment: dnssec.sh.txt (3.55 KB)

Bind9 vs. PowerDNS

Currently I'm playing around with DNSSEC. The handling of DNSSEC seems a little bit complex to me when looking at my current Bind9 setup. I was following the Debian Wiki page on DNSSEC and related links. The linked howto on HowToForge is a little bit outdated as it targets Squeeze. I've learned in the meantime that Bind9 can do key renewal on its own, but anyway, I looked around for other nameservers that can handle DNSSEC and came across PowerDNS, which seems to power a large number of European DNSSEC zones.

Bind9 is well-known, well documented and has been serving my zones well for years. But I got the impression that DNSSEC is more or less a mess with Bind9, as it was added on top of it without being well integrated. By contrast, DNSSEC support in PowerDNS looks as if it was well integrated from scratch at the design level. On the other hand, there don't seem to be many resources available on the net about PowerDNS. There's the official documentation, of course, but it is not as good as the Bind9 documentation. On the plus side, you can operate PowerDNS in Bind mode, i.e. using the Bind9 configuration and zone files, even in a hybrid mode that enables you to additionally run a database-based setup.
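
For the curious: the Bind mode mentioned above is just a few lines in pdns.conf. A minimal sketch, with the config path being an assumption:

# /etc/powerdns/pdns.conf
launch=bind
bind-config=/etc/bind/named.conf.local
# hybrid mode would additionally launch a database backend, e.g.:
# launch=bind,gmysql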

So, I'm somewhat undecided about how to proceed. Either stay with Bind9 and DNSSEC, completely migrate to PowerDNS and a database setup, or use PowerDNS with the Bind backend? Feel free to comment or respond with your own blog post about your experience. :-)

UPDATE: Problem solved, please read DNSSEC - Part 2


Buildd.Net: update-buildd.net v0.99 released

Buildd.Net offers a buildd-centric view of autobuilder networks, such as previously Debian's autobuilder network or nowadays the autobuilder network of debian-ports.org. The policy of debian-ports.org requires that the GPG key a buildd uses to sign packages for upload is valid for 1 year. Buildd admins are usually lazy people. At least they are running a buildd instead of building all those packages manually. Being a lazy buildd admin, it might happen that you miss renewing your GPG key, which will render your buildd unable to upload newly built packages.

When participating in Buildd.Net you need to run update-buildd.net, a small script that transmits some statistical data about your package building. I have now added a GPG key expiry check to that script that will warn the buildd admin by mail and by a notice on the Buildd.Net arch status page, such as the one for m68k. So, either your client updates automatically to the new version or you can download the script yourself.
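
The core of such a check is just a few lines around GnuPG's colon output. A simplified sketch (key ID, threshold and mail address are placeholders; newer GnuPG versions print the expiry date in field 7 as seconds since the epoch):

KEYID="DEADBEEF"
EXPIRY=$(gpg --with-colons --list-keys "$KEYID" | awk -F: '/^pub/ {print $7; exit}')
NOW=$(date +%s)
# Warn when the key expires within the next 30 days
if [ -n "$EXPIRY" ] && [ $(( EXPIRY - NOW )) -lt $(( 30 * 86400 )) ]; then
    echo "GPG key $KEYID expires soon!" | mail -s "buildd GPG key expiry" root
fi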


Debian donation to m68k arrived

The Debian m68k port has been entitled by the DPL to receive a donation of five memory expansion cards for the m68k autobuilders. The cards arrived two weeks ago and are now being shipped to the appropriate buildd admins. Adding those 256 MB memory expansions will have a huge effect on the m68k buildds, because most of them are Amigas that are currently running with "just" 128 MB.

The problem with those expansion cards is making use of them. Sounds strange, but this is the story behind it...

Those memory expansion cards, namely the BigRamPlus from Individual Computers, are Zorro III bus cards, which come with some speed limitations. The Amiga memory model is best described in the Amiga Hardware Reference Manual. For Zorro III based Amigas it is described in the section "A3000 memory map", where you can see that the memory model is divided into different address spaces. The most important address space is the "Coprocessor Slot Expansion" space, starting at $0800 0000. This is where the memory on CPU accelerator cards is found, and it runs at full CPU speed.

The BigRamPlus, however, is located within the "Zorro III Expansion" address space at $1000 0000 and has transfer rates of about 13 MB/s. Then again, there's still the motherboard expansion memory and others like Zorro II expansion memory. Unfortunately the current kernel does not support SPARSEMEM on m68k, but uses DISCONTIGMEM, as Geert Uytterhoeven explained. In short: we need SPARSEMEM support to easily make use of all the available memory chunks that can be found. To make it a little more difficult, Amigas use some kind of memory priority. Memory on accelerator cards usually has a priority of 40, motherboard expansion memory a priority of, let's say, 20, and chip memory a priority of 0. This priority usually corresponds to the speed of the memory. So, of course we want the kernel loaded into accelerator memory.

Basically we could do that by using a memfile and defining the different memory chunks in the appropriate priority order, like this one:

2097152
0x08000000 67108864
0x07400000 12582912
0x05000000 268435424

That would be an easy solution, right? Except that it doesn't work out. Currently the kernel is loaded into the first memory chunk that is defined, and all memory chunks before that address space are ignored. As you can see, 0x07400000 and 0x05000000 would be ignored because of this. Getting confused? No problem! It will get worse! ;)

There's another method of accessing memory on Amigas: it's called z2ram and uses Zorro II memory as, let's say, a swapping area. But maybe you guessed it: z2ram does not work for Zorro III memory (yet). So, this won't work either.

Geert suggested using that Zorro III memory as an mtd device, and finally this worked out! You'll need these options in your kernel:

CONFIG_MTD=m
CONFIG_MTD_CMDLINE_PARTS=m
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_SWAP=m
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
CONFIG_MTD_SLRAM=m
CONFIG_MTD_PHRAM=m

Then you just need to create the mtd device and configure it as swap space: 

# Register the BigRamPlus memory as a phram MTD device (name,start,size)
/sbin/modprobe phram phram=bigram0,0x50000000,0xfffffe0
/sbin/modprobe mtdblock
# Create swap on the resulting block device and enable it at priority 5
/sbin/mkswap /dev/mtdblock0
/sbin/swapon -p 5 /dev/mtdblock0

And then you're done: 

# swapon -s
Filename Type Size Used Priority
/dev/sda3 partition 205932 8 1
/dev/sdb3 partition 875536 16 1
/dev/mtdblock0 partition 262136 53952 5

To make it even worse (yes, there's still room for that! ;)) you can put two memory expansion cards into one box: 

# lszorro -v
00: MacroSystems USA Warp Engine 40xx [Accelerator, SCSI Host Adapter and RAM Expansion]
40000000 (512K)

01: Unknown device 0e3b:20:00
50000000 (256M)

02: Village Tronic Picasso II/II+ RAM [Graphics Card]
00200000 (2M)

03: Village Tronic Picasso II/II+ [Graphics Card]
00e90000 (64K)

04: Hydra Systems Amiganet [Ethernet Card]
00ea0000 (64K)

05: Unknown device 0e3b:20:00
60000000 (256M)

The two "Unknown device" entries are the two BigRamPlus cards. As you can see card #1 starts at 0x50000000 and card #2 starts at 0x60000000. Unfortunately the phram kernel module can be loaded twice with different start addresses, but the idea to start at 0x50000000 with a size of 512M won't work either as there seems to be a reserved 0x20 bytes a range at the beginning of each card. Anyway...

So, to make a very long and weird story short: the donated memory cards from Debian can now be used as additional and fast swap space for the buildds, at least until SPARSEMEM support is working.

Thanks again for donating the money for those memory expansion cards for the good old m68k port. Once done, SPARSEMEM support on m68k will benefit not only these cards in Amigas, but Ataris as well.


Sharing GnuPG between Linux and OSX

I've been using GnuPG for years. Well, "using" is too strong. I have a GPG key that I created at some point and use once in a while when sending login credentials to other Linux people. But since Edward Snowden's NSA leaks I now get encrypted mails from non-Linux people as well. It is great that people are making use of strong encryption to protect their communication, but it is frightening that they have to do so because the NSA's mass surveillance of the entire world is violating our civil and human rights.

Anyway, one problem with GnuPG and other PKI tools is that you should keep your private key secret. When you use more than one device to write your mails, you will run into usability problems like I did. My main computer is my Debian box, but I use a MacBook Pro laptop with OSX very often as well. There is GPGSuite (formerly GPGMail) for OSX to pimp your Mail.app with GPG. It uses, of course, a local .gnupg/ directory and thus would create a separate GnuPG key pair. But obviously I want to use my existing key pair - without the need to copy it over from my Linux box to my laptop.

The solution would be a simple netatalk setup to mount the home directory from the Linux box under OSX and a matching symlink to your Linux .gnupg/ directory (or even better: symlink the contents where necessary and not the whole directory). But that would've been too easy, I guess, because I got an error message on OSX.
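
For illustration, the symlinking part could look like this, assuming the Linux home directory is mounted via AFP under /Volumes/home and GnuPG 1.x/2.0 keyring files are used:

# On the OSX box, link the keyrings from the netatalk-mounted Linux home:
ln -s /Volumes/home/.gnupg/pubring.gpg ~/.gnupg/pubring.gpg
ln -s /Volumes/home/.gnupg/secring.gpg ~/.gnupg/secring.gpg
ln -s /Volumes/home/.gnupg/trustdb.gpg ~/.gnupg/trustdb.gpg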

So, basically this didn't work right out of the box. Fortunately the GPGSuite support guys replied quickly and solved the problem. The version they released yesterday fixed it, but I needed to add the following line to my ~/.gnupg/gpg-agent.conf, which didn't exist before either:

no-use-standard-socket

With that line everything works like a charm under OSX, with Mail.app using my GPG keys from my Debian box.


Exim4 and TLS with GMX/Web.de

Due to the unveiling of the NSA surveillance by Edward Snowden, some German mail providers, for example GMX and Web.de, decided last week to use TLS when sending mails out. Usually there shouldn't be a problem with that, but it seems as if the Debian package of Exim4 (exim4-daemon-heavy) doesn't support any TLS ciphers that those providers will accept. The Debian package uses GnuTLS for TLS, and there is bug #446036 asking for compilation against OpenSSL instead.

Anyway, maybe it's something in my config, as I don't use the Debian config but my own /etc/exim4/exim4.conf. Here are the TLS-related parts:

tls_advertise_hosts = *
tls_certificate = /etc/exim4/ssl.crt/webmail-ssl.crt
tls_privatekey = /etc/exim4/ssl.key/webmail-server.key

That's my basic setup. After discovering that GMX and Web.de couldn't send me mails anymore, I added some more options, following the Exim docs (they are commented out because I don't use GnuTLS anymore):

#tls_dhparam = /etc/exim4/gnutls-params-2236
#tls_require_ciphers = ${if =={$received_port}{25}\
# {NORMAL:%COMPAT}\
# {SECURE128}}

But I still got this kind of error:

2013-08-14 22:49:27 TLS error on connection from mout.gmx.net [212.227.17.21] (gnutls_handshake): Could not negotiate a supported cipher suite.

As this didn't help either, I recompiled exim4-daemon-heavy against OpenSSL and voilà, it worked again. So the question is whether there's any way to get this working with GnuTLS. Does the default Debian config work, and if so, why? And if not, can a decision be made to use OpenSSL instead of GnuTLS? Reading the bug report, it seems there are exemptions for linking against OpenSSL, so the GPL wouldn't be violated.
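
By the way, for debugging such handshake failures it helps to probe your own MX and look at the protocol and cipher that get negotiated. A quick sketch, with the hostname being a placeholder:

# Probe the STARTTLS handshake of your own mailserver:
openssl s_client -starttls smtp -connect mx.example.net:25 < /dev/null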

UPDATE 16.08.2013:
I reinstalled the GnuTLS version of exim4-daemon-heavy to test the recommendation from the comments with explicit tls_require_ciphers settings, but with no luck:

#tls_require_ciphers = SECURE256
#tls_require_ciphers = SECURE128
#tls_require_ciphers = NORMAL

All of these resulted in the usual "(gnutls_handshake): Could not negotiate a supported cipher suite." error when trying the cipher settings one by one.

UPDATE 2, 16.08.2013:
There was a different report about a recent GnuTLS problem on the debian-user-german mailing list. It's not the same cause, but it might be related.

