Xen randomly crashing server

It's a long story... an odyssey of almost two years...

But to start from the beginning: back then I rented a server at Hetzner, until they decided to bill for every IP address you got from them. I had been given a /26 in the past, so I would have had to pay for every IP address of that subnet in addition to the server rent of 79 EUR/month. That would have meant nearly doubling the monthly costs. So I moved my server from Hetzner to rrbone Net, which offered me a /26 on a rented Cisco C200 M2 server for a competitive price.

After migrating the VMs from Hetzner to rrbone with the same setup that had been running just fine at Hetzner, I experienced spontaneous reboots of the server, sometimes several times per day and within a short time frame. The hosting provider was very, very helpful in debugging this, e.g. by exchanging the memory, setting up a remote logging service for the CIMC and such. But in the end we found no root cause. The CIMC logs only showed that the OS had rebooted the machine.

Anyway, I then bought my own server and replaced the Cisco C200 with my own hardware, but the reboots still happen as before. Sometimes the server runs for weeks, sometimes it crashes 4-6 times a day, but usually there is a pattern: when it crashes and reboots, it will do so again within a few hours, and after the second reboot the chances are high that the server will run for several days - or even weeks - without a reboot.

The strange thing is that there are absolutely no hints in the logs, neither in syslog nor in the Xen logs, so I assume it's something quite deep in the kernel that causes the reboot. Another hint is that the reboots fairly often happened when I used my Squid proxy on one of the VMs to access the net. I connect, for example, by SSH with port forwarding to one VM, whereas the proxy runs on another VM, which leads to network traffic between the VMs. Sometimes the server crashed on the very first proxy requests. So I replaced Squid with tinyproxy and other proxies, and moved the proxy from one VM to the VM I connect to via SSH, because I thought the inter-VM traffic might cause the machine to reboot. Moving the proxy to another virtual server I rent at a different hosting provider to host my secondary nameserver did seem to help a little, but without any hard proof or statistics - just my impression.

I moved from the xm toolstack to the xl toolstack as well, but that didn't help either. The reboots are still happening, and in the last few days very frequently. Even with the new server I exchanged the memory and enabled memory mirroring, because I thought it might be a faulty memory module or something, but the machine still reboots out of the blue.

During the last weekend I configured grub to include the "noreboot" command line option and then got my first proof that somehow the Xen network stack is involved in the reboots:

This is a screenshot of the IPMI console, so it's not showing the full information of that kernel oops, but as you can see, parts like bridge, netif, xenvif and the physical igb NIC are most likely involved.
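For reference, on Debian such an option is typically passed to the Xen hypervisor via /etc/default/grub, roughly like this (a sketch; the exact variable depends on the grub/xen packaging in use):

# /etc/default/grub (sketch)
GRUB_CMDLINE_XEN_DEFAULT="noreboot"
# afterwards regenerate the grub configuration:
update-grub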

Here's another screenshot of a crash from this night: 

Slightly different information, but the network is still somehow involved, as you can see in the first line (net_rx_action).

So the big question is: is this a bug in Xen or in my setup? I'm using the xl toolstack and the xl.conf is basically the default, I think:

## Global XL config file ##

# automatically balloon down dom0 when xen doesn't have enough free
# memory to create a domain
autoballoon=0

# full path of the lockfile used by xl during domain creation
#lockfile="/var/lock/xl"

# default vif script
#vif.default.script="vif-bridge"

With this the default network scripts of the distribution (i.e. Debian stable) should be used. The network setup consists of two bridges:

auto xenbr0
iface xenbr0 inet static
        address 31.172.31.193
        netmask 255.255.255.192
        gateway 31.172.31.254
        bridge_ports eth0
        pre-up brctl addbr xenbr0

auto xenbr1
iface xenbr1 inet static
        address 192.168.254.254
        netmask 255.255.255.0
        pre-up brctl addbr xenbr1

There are some more lines in that config, such as setting up some iptables rules with "up" commands and such (see the illustration below). But as you can see, my eth0 NIC is part of the "main" Xen bridge with all the IP addresses that are reachable from the outside. The second bridge is used for internal networking like database connections and such.
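Purely as an illustration of what such "up" lines in /etc/network/interfaces look like (hypothetical examples, not my actual rules):

        up iptables -A FORWARD -i xenbr1 -o xenbr1 -j ACCEPT
        up iptables -A FORWARD -i xenbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT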

I would really like to use netconsole to capture the full debug output in case of a new crash, but unfortunately this only works until the bridge is brought up and takes over the interface:

[    0.000000] Command line: placeholder root=UUID=c3....22 ro debug ignore_loglevel loglevel=7 netconsole=port@31.172.31.193/eth0,514@5.45.x.y/e0:ac:f1:4c:y:x
[   32.565624] netpoll: netconsole: local port $port
[   32.565683] netpoll: netconsole: local IPv4 address 31.172.31.193
[   32.565742] netpoll: netconsole: interface 'eth0'
[   32.565799] netpoll: netconsole: remote port 514
[   32.565855] netpoll: netconsole: remote IPv4 address 5.45.x.y
[   32.565914] netpoll: netconsole: remote ethernet address e0:ac:f1:4c:y:x
[   32.565982] netpoll: netconsole: device eth0 not up yet, forcing it
[   36.126294] netconsole: network logging started
[   49.802600] netconsole: network logging stopped on interface eth0 as it is joining a master device

So, the first question is: how to use netconsole with an interface that is used on a bridge?
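One approach that might work (untested here, so just a sketch): attach netconsole to the bridge device itself once it is up, by loading the netconsole module at runtime instead of configuring it on the kernel command line:

# syntax: [src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-mac]
modprobe netconsole netconsole=@31.172.31.193/xenbr0,514@5.45.x.y/e0:ac:f1:4c:y:x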

The second question is: is a setup with two bridges OK with Xen? I've been using this setup for years now and it worked fairly well on the Hetzner server too, although there I used the xm toolstack with a mix of bridged and routed setup, because Hetzner didn't like to see the MAC addresses of the other VMs on the switch and shut down the port if that happened.


Letsencrypt: challenging challenges solved

A few weeks ago, in Letsencrypt: challenging challenges, I was wondering how to set up Letsencrypt when a domain is spread across several virtual machines (VMs). One possible solution would be to consolidate everything on one single VM, which is nothing I would like to do. The second option would be to generate the Letsencrypt certs on the webserver and copy them over to the appropriate VM on a regular basis or event-driven. The third option is to use a network share - and this is what I'm using right now.

So, my setup is as follows, after I solved the GlusterFS issue of rpcbind binding to all interfaces although it had been configured to only listen on certain interfaces (the solution was: simply remove all NFS-related stuff):

On Dom0 (the host machine) I run GlusterFS as a server on a small 1 GB logical volume, as part of a replicated volume together with the VM that does the actual Letsencrypt work:

Volume Name: le
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.x.254:/srv/gfs/le
Brick2: 192.168.x.1:/srv/gfs/le
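For reference, a replicated volume like this is created roughly as follows (a sketch using the brick paths from above; peer probing has to be done beforehand):

gluster peer probe 192.168.x.1
gluster volume create le replica 2 192.168.x.254:/srv/gfs/le 192.168.x.1:/srv/gfs/le
gluster volume start le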

This ensures that after a reboot of the machine every other VM using Letsencrypt certs can mount the GlusterFS share, because the host machine will be there for sure, whereas the other VM generating the certs with the letsencrypt.sh script might still be booting. When the GlusterFS share is missing, services on the other VMs will of course not start because of the missing certs. So the replica on the virtualization host (Dom0) only acts as some kind of always-available network share, because, well, the other VM will not always be there... for example during a kernel update when a reboot is required.

The same setup exists on my mailserver, which acts as the second GlusterFS brick of that replicated volume. The mailserver also hosts the bind9 nameserver, and I might arrange for new domains with Letsencrypt certs to be added to my DNSSEC setup as well. Of course, when the letsencrypt.sh script creates or updates the certs, it needs them mounted at the configured location, so I had to add a line to /etc/fstab:

192.168.x.254:/le /etc/letsencrypt.sh/certs glusterfs noexec,nodev,_netdev 0 0

Basically the same needs to be done on the other VMs where you want to use the certs, but you may want to mount the share read-only there.
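On those consuming VMs the fstab line might simply add the ro option (a sketch):

192.168.x.254:/le /etc/letsencrypt.sh/certs glusterfs ro,noexec,nodev,_netdev 0 0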

The next step was a little trickier. When letsencrypt.sh requests new certs, Letsencrypt will contact the webserver of that domain to answer the ACME challenge. This would require running a webserver on every VM where you want to use Letsencrypt. Well, actually it just requires that there is some webserver somewhere that can answer these requests for that specific domain...

Now, the setup of the webserver (Apache in my case) is like this: 

I'm using the Apache macro module to make things easier, so I created two small configs in /etc/apache2/conf-available and enabled them with a2enconf: letsencrypt-proxy.conf proxies the ACME challenges to a common website called acme.example.org, and letsencrypt-sslredir.conf sets up the SSL redirection once everything is in place and the domain can be switched over to HTTPS-only.

letsencrypt-proxy.conf: 

<Macro le_proxy>
     ProxyRequests Off
     <Proxy *>
            Order deny,allow
            Allow from all
     </Proxy>
     ProxyPass /.well-known/acme-challenge/ http://acme.windfluechter.net/
     ProxyPassReverse / http://%{HTTP_HOST}/.well-known/acme-challenge/
</Macro>

letsencrypt-sslredir.conf:

<Macro le_sslredir>
    RewriteEngine on
    RewriteCond %{HTTPS} !=on
    RewriteRule . https://%{HTTP_HOST}%{REQUEST_URI}  [L]
</Macro>

So, after all this, the setup of an Apache virtual host looks like this:

<Macro example.org>
(lots of setup stuff)
</Macro>
<VirtualHost 31.172.31.x:443 [2a01:a700:4629:x::1]:443>
        Header always set Strict-Transport-Security "max-age=31556926; includeSubDomains"
        SSLEngine on
        # letsencrypt certs:
        SSLCertificateFile /etc/letsencrypt.sh/certs/example.org/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt.sh/certs/example.org/privkey.pem
        SSLHonorCipherOrder On
    Use example.org
    Use le_proxy
</VirtualHost>
<VirtualHost 31.172.31.x:80 [2a01:a700:4629:x::1]:80>
    Use example.org
    Use le_proxy
    Use le_sslredir
</VirtualHost>

le_sslredir is only needed when you are sure that you want all traffic redirected to HTTPS. For example, when your blog is listed on planet.debian.org or other Planets, you might want to omit it from your HTTP config, because bug #813313 is not yet solved.

In the end, to create a new Letsencrypt cert, you need to add the le_proxy macro to your website and add the domain to the letsencrypt.sh config in /etc/letsencrypt.sh/domains.txt. The script then requests a new cert from Letsencrypt, handles the ACME challenge via the URL redirection in le_proxy to your acme.example.org site, and finally writes the new cert to the GlusterFS share. From that share you can then use the new cert on all the VMs that need it, be it your mailserver, webserver or XMPP/SIP server VMs.
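For reference, a line in domains.txt simply lists the main domain followed by its alternative names (a sketch, assuming the usual letsencrypt.sh format):

example.org www.example.org blog.example.org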

At least this works for me.

UPDATE:
Of course you should be careful with the file permissions on that GlusterFS share, so that the automatic key renewal still works, but without granting so many permissions that everyone can obtain your private keys.
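A minimal sketch (assuming a dedicated group, here hypothetically called "lecert", whose members are the services that need to read the keys):

# hypothetical group "lecert"; adjust to your own setup
chgrp -R lecert /etc/letsencrypt.sh/certs
chmod -R o-rwx /etc/letsencrypt.sh/certs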


Letsencrypt - when your blog entries don't show up on Planet Debian

Recently there has been much talk on Planet Debian about Letsencrypt certs. This is great, because using HTTPS everywhere improves security and gives the NSA some more work to decrypt the traffic.

However, when you enable your blog with a Letsencrypt cert, you might run into the same problem as I did: your new articles won't show up on Planet Debian after changing your feed URI to HTTPS. The reason seems to be quite simple: planet-venus, the software behind Planet Debian, seems to have problems with SNI-enabled websites.

When following the steps outlined in the Debian Wiki, you can check this yourself:

INFO:planet.runner:Fetching https://blog.windfluechter.net/taxonomy/term/2/feed via 5
ERROR:planet.runner:HttpLib2Error: Server presented certificate that does not match host blog.windfluechter.net: {'subjectAltName': (('DNS', 'abi94oesede.de'), ('DNS', 'www.abi94oesede.de')), 'notBefore': u'Jan 26 18:05:00 2016 GMT', 'caIssuers': (u'http://cert.int-x1.letsencrypt.org/',), 'OCSP': (u'http://ocsp.int-x1.letsencrypt.org/',), 'serialNumber': u'01839A051BF9D2873C0A3BAA9FD0227C54D1', 'notAfter': 'Apr 25 18:05:00 2016 GMT', 'version': 3L, 'subject': ((('commonName', u'abi94oesede.de'),),), 'issuer': ((('countryName', u'US'),), (('organizationName', u"Let's Encrypt"),), (('commonName', u"Let's Encrypt Authority X1"),))} via 5

I've filed bug #813313 for this. So this might explain why your blog post doesn't appear on Planet Debian. Currently 18 sites seem to be affected by this cert mismatch.


rpcbind listening on all interfaces

Currently I'm testing GlusterFS as a replicating network filesystem. GlusterFS depends on the rpcbind package. No problem with that, but I usually want the services running on my machines to listen only on those addresses/interfaces that are needed to fulfil their task. This is especially important because rpcbind can be abused by remote attackers for RPC amplification attacks (DDoS). The rpcbind man page states:

-h : Specify specific IP addresses to bind to for UDP requests. This option may be specified multiple times and is typically necessary when running on a multi-homed host. If no -h option is specified, rpcbind will bind to INADDR_ANY, which could lead to problems on a multi-homed host due to rpcbind returning a UDP packet from a different IP address than it was sent to. Note that when specifying IP addresses with -h, rpcbind will automatically add 127.0.0.1 and if IPv6 is enabled, ::1 to the list.

OK, although there is neither a /etc/default/rpcbind.conf nor an /etc/rpcbind.conf nor a sample-rpcbind.conf under /usr/share/doc/rpcbind, a quick web search revealed a sample config file. I'm using this one:

# /etc/init.d/rpcbind
OPTIONS=""

# Cause rpcbind to do a "warm start" utilizing a state file (default)
# OPTIONS="-w "

# Uncomment the following line to restrict rpcbind to localhost only for UDP requests
OPTIONS="${OPTIONS} -h 192.168.1.254"
#127.0.0.1 -h ::1"

# Uncomment the following line to enable libwrap TCP-Wrapper connection logging
OPTIONS="${OPTIONS} -l "

As you can see, I want to bind to 192.168.1.254. After an /etc/init.d/rpcbind restart, verifying with netstat that everything works as desired shows...

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 2084266 30777/rpcbind
tcp6 0 0 :::111 :::* LISTEN 0 2084272 30777/rpcbind
udp 0 0 0.0.0.0:848 0.0.0.0:* 0 2084265 30777/rpcbind
udp 0 0 192.168.1.254:111 0.0.0.0:* 0 2084264 30777/rpcbind
udp 0 0 127.0.0.1:111 0.0.0.0:* 0 2084260 30777/rpcbind
udp6 0 0 :::848 :::* 0 2084271 30777/rpcbind
udp6 0 0 ::1:111 :::* 0 2084267 30777/rpcbind

Whoooops! Although I've specified that rpcbind should only listen on 192.168.1.254 (plus localhost, as described by the man page), rpcbind is still listening on all addresses. A quick check whether the process is using the correct options:

root     30777  0.0  0.0  37228  2360 ?        Ss   16:11   0:00 /sbin/rpcbind -h 192.168.1.254 -l

Hmmm, yes, -h 192.168.1.254 is specified. Ok, something is going wrong here...

According to an entry in Ubuntu's Launchpad I'm not the only one who has experienced this problem. That Launchpad entry mentions that upstream seems to have a fix in version 0.2.3, but I experienced the same behaviour in stable as well as in unstable, where the package version is 0.2.3-0.2. Apparently the problem still exists in Debian unstable.

I'm somewhat undecided whether to file a normal bug against rpcbind or to label it as a security bug, because it exposes a service to the public that can be abused for amplification attacks, even though you might have configured rpcbind to listen only on internal addresses.


Letsencrypt: challenging challenges

On December 3rd 2015 the Letsencrypt project went into public beta, and this is a good thing! Having more and more websites running with good and valid SSL certificates for their HTTPS is a good thing, especially because Letsencrypt takes care of renewing the certs every now and then. But there are still some issues with Letsencrypt. Some people criticize the Python client for needing root privileges and such. Others complain that Letsencrypt currently only supports webservers.

Well, I think for a public beta this is what we could have expected from the start: the Letsencrypt project focused on a reference implementation, and other implementations are already available. But one thing seems to be a problem in the design of how Letsencrypt works: it uses a challenge-response method to verify that the requesting user controls the domain for which the certificate shall be issued. This might work well in simple deployments, but what about slightly more complex setups with multiple virtual machines and different protocols involved?

For example: you're using one domain for your communication, say user@example.net for your mail, XMPP and SIP. Your mailserver runs on one virtual machine, whereas the webserver runs on a different virtual machine. The same for XMPP and SIP: a separate VM each.

Usually the Letsencrypt approach would be to configure your webserver (by configuring a /.well-known/acme-challenge/ location or by using a standalone server on port 443) to handle the challenge-response requests. This would give you an SSL cert for your webserver example.net. Of course you could copy this cert to your mail, XMPP and SIP servers, but then you have to do that every time the SSL cert gets renewed.
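The webserver part of that approach usually boils down to a location serving the challenge files, roughly like this for Apache (a sketch; the path /var/www/letsencrypt is an assumption):

Alias /.well-known/acme-challenge/ /var/www/letsencrypt/.well-known/acme-challenge/
<Directory /var/www/letsencrypt/.well-known/acme-challenge/>
    Order deny,allow
    Allow from all
</Directory>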

Another challenge is of course that you don't have just one or two domains, but a whole bunch of them. In my case I host more than 60 domains. The mail for all domains is handled by my mailserver running on its own virtual machine. The webserver is located on a different VM. For some domains I offer XMPP accounts on a third VM.

What is the best way to solve this problem? Moving everything to just one virtual machine? Naaah! Writing some scripts to copy the certs as needed? Not very smart either. Using a network share for the certs between all VMs? Hmmm... would that work?

And what about the TLSA records of your DNSSEC setup? When an SSL cert is renewed, the fingerprint might need an update in your DNS zone - for several protocols like mail, XMPP, SIP and HTTPS. At least the Bash implementation of Letsencrypt offers a "hook" which is called after the SSL cert has been issued.
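Such a hook could at least recompute the TLSA 3 1 1 digest whenever a new cert is issued, roughly like this (a sketch; the assumption here is that the hook receives the domain and the path to the new cert as arguments):

#!/bin/sh
# hypothetical hook sketch: print the TLSA 3 1 1 record for the new certificate
DOMAIN="$1"
CERT="$2"
HASH=$(openssl x509 -in "$CERT" -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "_443._tcp.${DOMAIN}. IN TLSA 3 1 1 ${HASH}"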

What are your ways of handling the ACME protocol challenges in such a multi-domain, multi-VM setup?


It has been 30 years now...

Sometimes you realize how old you are when listening to the radio and they announce hits from your days of youth as "oldies". Or when today's youth asks you what a music cassette or a 5.25" floppy disk is. You're even older than that when you still know 8" disks. We have one of those on our pin board.

Or you may realize your age when the favorite computer of your youth celebrates its 30th anniversary this year!

In 1985 the Amiga was introduced to the public - and 30 years later Amigas around the world are still running! Some are still operated under AmigaOS, some may be running NetBSD, and some are running the Debian m68k port - even though that port was expelled from Debian years ago! But some people still take care of it, and it keeps up with building packages, although mostly thanks to emulators doing most of the work.

But anyway, the Amiga is turning 30 this year! And what a great machine it was! It brought multitasking, colorful graphics and multi-channel audio to the masses. It was years ahead of its competitors, but it was doomed to fail because of management failures at the producing company, Commodore, which went bankrupt in 1994.

The "30th Anniversary Event" will take place on July 25th at the Computer History Museum in Mountain View, California, USA - if the kickstarting the event will be successful during the next two weeks. So, when you earned your first merits in computing on an Amiga as well, you might want to give your Amiga history another kickstart! Not the Kickstart floppy as the A1000 needed to boot up, but fundraising the event on kickstart.org!

I think this event is not only important to old Amiga users from the good old days, but for the remembrance of computing history in general. Without doubt the Amiga set new standards in computer history and is an important part of our industrial heritage.


Bind9 vs. PowerDNS - part 2

Two weeks ago I wrote about implementing DNSSEC with Bind9 or PowerDNS and asked for opinions, because Bind9 appeared to me too complex to set up with regular key signing and such, while PowerDNS seemed nice and easy, but some kind of black box where I don't know what's happening.

I think I've now found the best and most suitable way for me to deal with DNSSEC. Or in short words: Bind9 won!

It won because of its inline-signing config option, available in bind9.9, which happens to be in backports. Another tip based on my findings on the web: if you plan to implement DNSSEC with Bind9, do NOT search for "bind dnssec" on the web. This will only bring up old HowTos and manuals which leave you with the burden of manually updating your keys. Just add the magic word "inline-signing" to your search phrase and you'll find proper results like the one from Michael McNally on a subpage of ISC.org: In-line Signing With NSEC3 in BIND 9.9+ -- A Walk-through. It's a fairly good starting point, but it still left me with several manual steps to get a DNSSEC-signed zone.
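The core of that approach boils down to a zone stanza with inline-signing plus a pair of keys, roughly like this (a sketch; paths, algorithm and NSEC3 parameters are my own assumptions, not taken from the walk-through):

# named.conf.local (sketch)
zone "example.org" {
    type master;
    file "/etc/bind/zones/example.org.zone";
    key-directory "/etc/bind/keys";
    auto-dnssec maintain;
    inline-signing yes;
};

# create ZSK and KSK, then enable NSEC3 (sketch)
dnssec-keygen -K /etc/bind/keys -a RSASHA256 -b 2048 example.org
dnssec-keygen -K /etc/bind/keys -a RSASHA256 -b 2048 -f KSK example.org
rndc reload example.org
rndc signing -nsec3param 1 0 10 8d2a example.org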

I'm quite a lazy guy when it comes to manual steps that need to be executed repeatedly, like many others in IT, I think. So I wrote a small wrapper script that does the necessary steps: creating the keys, adding the necessary config options to named.conf.local, enabling NSEC3 parameters, adding the DS records to the zone file and displaying the DNSKEY, so that you just need to upload it to your registrar.

One problem was still open: when doing auto-signing/inline-signing with bind9, you are left with your plain-text zone file, whereas the signed zone file keeps increasing its serial with each key rollover. When you change the plain-text zone file by adding, changing or removing RRs of that domain, you are left with the manual task of finding out what the currently used serial actually is, because it's no longer just your plain-text serial +1. This is of course an awkward step I wanted to get rid of. Therefore my script includes an option to edit zone files with your favourite editor, determine the currently highest serial - either on disk or in DNS - increase it by 1, and finally reload the zone via rndc.
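The currently served serial can be taken from DNS itself, which is roughly what the script does (a sketch):

# query the serial of the signed zone as currently served (sketch)
dig +short SOA example.org @localhost | awk '{print $3}'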

That way I now have the same comfort with Bind9 as with PowerDNS, but I also know what's going on, because it's not a black box anymore. Me happy. :-)

P.S.: I don't know whether this script is of interest to other users, because it relies heavily on my own setup, e.g. paths and such. But if there's interest, just ask...

P.P.S.: Well, I think it's better when you can decide yourself if my script is of interest to you... please find it attached...

Attachment: dnssec.sh.txt (3.55 KB)

Bind9 vs. PowerDNS

Currently I'm playing around with DNSSEC. The handling of DNSSEC seems a little complex to me when looking at my current Bind9 setup. I followed the Debian Wiki page on DNSSEC and related links. The linked howto on HowToForge is a little outdated as it targets Squeeze. I've learned in the meantime that Bind9 can do key renewal on its own, but anyway, I looked around for other nameservers that can handle DNSSEC and came across PowerDNS, which seems to power a large number of European DNSSEC zones.

Bind9 is well known, well documented and has been serving my zones well for years. But I got the impression that DNSSEC is more or less a mess in Bind9, as it was added on top without being well integrated. By contrast, DNSSEC support in PowerDNS feels as if it had been integrated from scratch at the design level. On the other hand, there don't seem to be many resources available on the net about PowerDNS. There's the official documentation, of course, but it is not as good as the Bind9 documentation. On the plus side, you can operate PowerDNS in Bind mode, i.e. using the Bind9 configuration and zone files, or even in a hybrid mode that lets you additionally run a database-based setup.
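For the bind backend, a minimal pdns.conf essentially just points at the existing Bind9 configuration (a sketch; paths are assumptions):

# /etc/powerdns/pdns.conf (sketch)
launch=bind
bind-config=/etc/bind/named.conf.local
# DNSSEC metadata for the bind backend is kept in a small SQLite database
bind-dnssec-db=/var/lib/powerdns/dnssec.sqlite3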

So, I'm somewhat undecided about how to proceed. Should I stay with Bind9 and DNSSEC, completely migrate to PowerDNS and a database setup, or use PowerDNS with the bind backend? Feel free to comment or respond with your own blog post about your experiences. :-)

UPDATE: Problem solved, please read DNSSEC - Part 2


Buildd.Net: update-buildd.net v0.99 released

Buildd.Net offers a buildd-centric view of autobuilder networks, such as formerly Debian's autobuilder network and nowadays the autobuilder network of debian-ports.org. The policy of debian-ports.org requires the buildd to sign packages for upload with a GPG key that is valid for one year. Buildd admins are usually lazy people - at least they run a buildd instead of building all those packages manually. Being a lazy buildd admin, you might miss renewing your GPG key, which renders your buildd unable to upload newly built packages.

When participating in Buildd.Net you need to run update-buildd.net, a small script that transmits some statistical data about your package building. I have now added a GPG key expiry check to that script that warns the buildd admin by mail and with a notice on the Buildd.Net arch status page, such as the one for m68k. So either your client updates automatically to the new version, or you can download the script yourself.
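The check itself is nothing fancy; the idea is roughly this (a sketch, not the exact code from the script; KEYID stands for the buildd's signing key):

# expiry timestamp (epoch) of the key; warn if it expires within 30 days
EXPIRY=$(gpg --fixed-list-mode --with-colons --list-keys "$KEYID" | awk -F: '/^pub/ {print $7}')
if [ -n "$EXPIRY" ] && [ "$EXPIRY" -lt $(( $(date +%s) + 30*24*3600 )) ]; then
    echo "GPG key $KEYID expires within 30 days or is already expired!"
fi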


Debian donation to m68k arrived

The Debian m68k port was approved by the DPL to receive a donation of five memory expansion cards for the m68k autobuilders. The cards arrived two weeks ago and are now being shipped to the appropriate buildd admins. Adding those 256 MB memory expansions will have a huge effect on the m68k buildds, because most of them are Amigas that are currently running with "just" 128 MB.

The problem with those expansion cards is actually making use of them. Sounds strange, but here is the story behind it...

Those memory expansion cards, namely the BigRamPlus from Individual Computers, are Zorro III bus cards, which have some speed limitations. The Amiga memory model is best described in the Amiga Hardware Reference Manual. For Zorro III based Amigas this is covered in the section "A3000 memory map", where you can see that the memory model is divided into different address spaces. The most important one is the "Coprocessor Slot Expansion" space, starting at $0800 0000. This is where the memory on CPU accelerator cards is found, and it runs at full CPU speed.

The BigRamPlus, however, is located in the "Zorro III Expansion" address space at $1000 0000 and has transfer rates of about 13 MB/s. Then again there's still the motherboard expansion memory and others, like Zorro II expansion memory. Unfortunately the current kernel does not support SPARSEMEM on m68k, but uses DISCONTIGMEM, as Geert Uytterhoeven explained. In short: we need SPARSEMEM support to easily make use of all available memory chunks that can be found. To make it a little more difficult, Amigas use some kind of memory priority. Memory on accelerator cards usually has a priority of 40, motherboard expansion memory has a priority of, let's say, 20, and chip memory a priority of 0. This priority usually corresponds to the speed of the memory. So of course we want the kernel loaded into accelerator memory.

Basically we could do that by using a memfile and defining the different memory chunks in the appropriate priority order, like this one:

2097152
0x08000000 67108864
0x07400000 12582912
0x05000000 268435424

That would be an easy solution, right? Except that it doesn't work out. Currently the kernel is loaded into the first memory chunk that is defined and ignores all memory chunks below that address space. As you can see, 0x07400000 and 0x05000000 would be ignored because of this. Getting confused? No problem! It will get worse! ;)

There's another method of accessing memory on Amigas: it's called z2ram and uses Zorro II memory as, let's say, a swapping area. But maybe you guessed it: z2ram does not work with Zorro III memory (yet). So this won't work either.

Geert suggested using that Zorro III memory as an MTD device, and this finally worked out! You'll need these options in your kernel:

CONFIG_MTD=m
CONFIG_MTD_CMDLINE_PARTS=m
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_SWAP=m
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
CONFIG_MTD_SLRAM=m
CONFIG_MTD_PHRAM=m

Then you just need to create the mtd device and configure it as swap space: 

/sbin/modprobe phram phram=bigram0,0x50000000,0xfffffe0
/sbin/modprobe mtdblock
/sbin/mkswap /dev/mtdblock0
/sbin/swapon -p 5 /dev/mtdblock0

And then you're done: 

# swapon -s
Filename Type Size Used Priority
/dev/sda3 partition 205932 8 1
/dev/sdb3 partition 875536 16 1
/dev/mtdblock0 partition 262136 53952 5

To make it even worse (yes, there's still room for that! ;)) you can put two memory expansion cards into one box: 

# lszorro -v
00: MacroSystems USA Warp Engine 40xx [Accelerator, SCSI Host Adapter and RAM Expansion]
40000000 (512K)

01: Unknown device 0e3b:20:00
50000000 (256M)

02: Village Tronic Picasso II/II+ RAM [Graphics Card]
00200000 (2M)

03: Village Tronic Picasso II/II+ [Graphics Card]
00e90000 (64K)

04: Hydra Systems Amiganet [Ethernet Card]
00ea0000 (64K)

05: Unknown device 0e3b:20:00
60000000 (256M)

The two "Unknown device" entries are the two BigRamPlus cards. As you can see card #1 starts at 0x50000000 and card #2 starts at 0x60000000. Unfortunately the phram kernel module can be loaded twice with different start addresses, but the idea to start at 0x50000000 with a size of 512M won't work either as there seems to be a reserved 0x20 bytes a range at the beginning of each card. Anyway...

So, to make a very long and weird story short: the donated memory cards from Debian can now be used as additional and fast swap space for the buildds, at least until SPARSEMEM support is working.

Thanks again for donating the money for those memory expansion cards for the good old m68k port. Once done, SPARSEMEM support on m68k will benefit not only these cards in Amigas but Ataris as well.

