After my previous article I had asked the building senator again, and his explanation, which is quite understandable, was:
Hello Mr Jürgensmann,
Illness, vacation, etc.
It is embarrassing for me as well. But…
Next week.
And indeed: now that the school holidays in Mecklenburg-Vorpommern are over, the new rule was promptly put in place.
Now the drivers just have to get used to the new limit and actually drive more slowly. Whether that will result in a noticeable reduction in noise is something I doubt a little, since the asphalt is simply very loud, and that can probably only change in a few years when Parkstraße is resurfaced.
We will see.
The topic was discussed in the local district council (Ortsbeirat) on 14 June 2022, and the media also held out the prospect of Tempo 30 being implemented on Parkstraße and other streets in Warnemünde:
The Hanseatic city of Rostock is reacting to the growing traffic volume in the seaside resort of Warnemünde: by the end of June the speed limit on the entire Parkstraße is to be lowered from 50 to 30 kilometres per hour.
On 20 July 2022 I wrote an email to the responsible building senator, Holger Matthäus, and asked about the state of the implementation, since the second quarter had ended with the end of June and still nothing had happened.
A few days later, on 26 July 2022, the building senator replied, (presumably) personally, and wrote:
the traffic order (Verkehrsrechtliche Anordnung) for Tempo 30 was finally presented to me today.
Building senator Holger Matthäus, by email, 26 July 2022
I have signed the order.
You can expect it to take effect on site within the next few days.
As of today, 15 August 2022, the supplementary signs that restrict Tempo 30 to trucks and buses only have still not been removed, and so the building senator's order has still not been implemented.
For residents who are forced to open their windows for cooling in tropical temperatures, the problem of loud traffic is doubly hard to bear: we are affected by traffic noise both during the day and at night and suffer from it, especially because the asphalt used here is particularly loud.
I will certainly follow up with the building senator once more and ask for a firm target date for the implementation.
What is interesting here is that for some time xmpp.social seemed to be the domain of choice for many users, maybe because of “xmpp” and “social” in the domain name, or because it is easier to spell out than “hookipa” with its “double-oh” and “kay”… who knows…
The user count on xmpp.social was rising along a steeper curve than the one for hookipa.net, so much so that I even considered moving over to xmpp.social as the main domain of that website.
But then something happened: the curated list of XMPP providers appeared, which is now available at https://providers.xmpp.net. Since then some client apps, e.g. uwpx, have included that list, and the user count on hookipa.net has been rising while growth on xmpp.social has slowed down.
Here you can see that the red curve of xmpp.social was above the blue curve of hookipa.net for a long time. Around August 2021 something changed and hookipa.net started to steadily increase its user count. After a year, roughly in August 2022, hookipa.net surpassed the user count of xmpp.social.
The reason for this might be that hookipa.net is listed as a Class A provider on providers.xmpp.net. Class A just means that certain criteria are met, like open registration and such. It doesn’t say anything about whether or not it is a well-operated service. Well, at least not directly.
You can also look at the graphs on https://the-federation.info showing the increase on hookipa.net and the stagnation on xmpp.social. There you can see the difference between a service that is listed on such a providers list and one that isn’t. Both domains are operated equally on my server, and when you visit the Hookipa website you can register accounts for both domains. But currently a downside (for me) of providers.xmpp.net is that you need to provide the data for your classification on a website. Hookipa has a website, xmpp.social does not, because it redirects to hookipa.net. Therefore xmpp.social is not included on providers.xmpp.net and thus is not gaining as many new users as hookipa.net.
I find it quite interesting how registrations shift from one domain to the other over time and what leads to that shift.
If you want to know which XMPP apps make use of the providers list, you can have a look at https://providers.xmpp.net/apps/.
For years I have been self-hosting lots of stuff, also for family, friends and others. User management was either standalone per service, like Nextcloud, Mastodon, Friendica or XMPP, or based on the mail auth backend in PostgreSQL, for example by authenticating against the Dovecot IMAP server. This became complex over time, and I was looking for a centralized auth backend. Basically this means: an LDAP backend.
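To illustrate what a centralized auth backend buys you: any service that speaks LDAP can validate a login simply by performing a bind against the directory. A minimal sketch with ldapsearch, where the host name, base DN and user are placeholders for whatever your directory uses:

```bash
# Minimal sketch: validate a user's password with a simple LDAP bind.
# Host, base DN and user are placeholders; adjust them to your directory.
ldapsearch -x \
  -H ldap://ucs-master.example.com \
  -D "uid=alice,cn=users,dc=example,dc=com" \
  -w 'the-users-password' \
  -b "cn=users,dc=example,dc=com" \
  "(uid=alice)" uid mail
# Exit code 0 means the bind (and therefore the password) was accepted;
# exit code 49 means invalid credentials.
```

Services like Dovecot, Nextcloud or ejabberd do essentially the same bind internally once they are configured with an LDAP backend.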
For that I took a look at 389ds, FusionDirectory and Univention Corporate Server (UCS). With 389ds I had installation or setup issues. FusionDirectory was much better and easier to use, but also very complex. In the end I went with UCS because of the UI experience, the ease of use and the working self-service portal for the users. UCS also comes with a kind of App Center containing some nice apps that are preconfigured for the LDAP directory. So it is really easy to use and to get started with.
So, last year (even before I got employed by Univention) I migrated my users from the PostgreSQL backend to UCS and its LDAP backend. The migration was smooth and worked like a charm. The only issue: things like my public XMPP server, Friendica and Mastodon still use open registration and therefore their internal auth backends. It will be much more difficult to migrate these with an existing user base of 3000-4000 users in total. So, having LDAP as the user/auth backend is nice, but you should consider it at an early stage and not when you already have plenty of services and users. 😉
However, it’s even nicer when you can use Single Sign-On (SSO) with your apps. I got a taste of SSO when I was doing Cisco UC stuff and liked that I didn’t need to enter my credentials over and over again. SSO is also possible with UCS, so I wanted to give it a try on my own server.
The official HowTo on setting up SAML SSO basically covers the process, but my impression was that it could be made better, less error-prone and more reproducible by automating the setup.
So, in my spare time, I wrote a small shell script that follows the instructions from the official HowTo, and after many tests and enhancements I released it on Codeberg: setupSSO.sh.
So, what are the benefits of setupSSO.sh over the official HowTo?
So, if you are using UCS, you might want to have a look at setupSSO.sh if you are already using SSO or plan to do so.
Disclaimer 2: as said, I wrote this script in my spare time, so there is no support for it from Univention. Feedback via Codeberg is appreciated, of course.
The new server is a used/refurbished Supermicro server with 2x 14-core Xeon E5-2683 CPUs, 256 GB RAM and 4x 3.5″ hot-swappable drive bays. It also came with an 8-port SAS/SATA hardware RAID controller with BBU. I also ordered two slim drive kits (MCP-220-81504-0N & MCP-220-81506-0N) to be able to keep the 2x 3.5″ slots for rotational HDDs as cheap storage. For now I added 2x 128 GB Supermicro SATA DOMs, 4x WD Red 4 TB SSDs, a Sonnet Fusion 4×4 Silent and 4x 1 TB Seagate Firecuda 520 NVMe disks.
And this is where the issue starts:
The NVMe disks should be capable of 4-5 GB/s, but they are connected to a PCIe 3.0 x16 port via the Sonnet Fusion 4×4, which has its own PCIe bridge, so bifurcation is not necessary.
When doing some tests with bonnie++ I get transfer rates of around 1 GB/s out of a RAID10 setup with all four NVMes. In fact, regardless of the RAID level I only see transfer rates of about 1-1.2 GB/s with bonnie++. (All software RAIDs with mdadm.)
Also, when building a RAID, each NVMe delivers around 300-600 MB/s in sync speed, with one exception: RAID1.
Regardless of how many NVMe disks are in a RAID1 setup, the sync speed goes up to 2.5 GB/s for each of the NVMe disks. So the lower transfer rates with bonnie++ or with other RAID levels shouldn’t be limited by bus speed or by CPU speed. Still, atop shows up to 100% CPU usage during all tests. I even tested
In my understanding RAID10 should perform similarly to RAID1 in terms of syncing, and better in the bonnie++ tests (up to 2x write and 4x read speed compared to a single disk).
The bonnie++ results are available here. You can find the test parameters in the hostname column: Baldur is the hostname, followed by the RAID10 layout (near-2, far-2, offset-2), the chunk size and the bonnie++ concurrency. In the end the RAID chunk size had no big impact.
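For reference, one of the test combinations (far-2 layout, 512K chunks, concurrency 4) would be set up roughly like this; this is a sketch rather than my exact command lines, and the device names and target directory are placeholders:

```bash
# Create a 4-disk RAID10 with far-2 layout and 512K chunks
# (/dev/md0 and the nvme device names are placeholders).
mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 \
      --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/test

# bonnie++ run; -m encodes layout, chunk size and concurrency (-c) in the
# "hostname" so the parameters show up in the results table.
bonnie++ -d /mnt/test -s 512g -c 4 -u root -m baldur-f2-512-c4
```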
So now I’m wondering what the reason for the “slow” performance of those 4x NVMe disks is. The bus speed of the PCIe 3.0 x16 slot shouldn’t be the cause, because the software RAID has to transfer the blocks over the bus for RAID1 just as for RAID10. The same goes for the CPU: the amount of CPU work should be roughly the same for RAID1 and RAID10. RAID10 should even have an advantage, because each block only needs to be synced to two disks.
Bonnie++ tests are a different topic, for sure. But when testing reads with dd from the md devices I “only” get around 1-1.5 GB/s as well, even when using LVM RAID instead of LVM on top of an md RAID.
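One thing worth ruling out here is dd itself: it reads single-threaded with a very low queue depth, which tends to understate what NVMe devices can do. A hedged fio sketch for a parallel, direct sequential read against the md device (the device name is a placeholder):

```bash
# Sequential read with several jobs and a deeper queue, bypassing the page
# cache; /dev/md0 stands in for the actual md device.
fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
    --runtime=30 --time_based --group_reporting
```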
All NVMe disks are already formatted with 4k sectors, and the I/O scheduler is set to mq-deadline.
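For completeness, this is roughly how the sector size and the scheduler can be checked and changed with nvme-cli and sysfs; nvme0n1 is a placeholder, and reformatting the LBA size wipes the namespace:

```bash
# Show the supported LBA formats and which one is in use.
nvme id-ns /dev/nvme0n1 --human-readable | grep "LBA Format"

# Switch to the 4k LBA format if needed; the index (here: 1) differs
# between drive models, and this destroys all data on the namespace!
nvme format /dev/nvme0n1 --lbaf=1

# Check and set the I/O scheduler for the disk.
cat /sys/block/nvme0n1/queue/scheduler
echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
```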
Is there anything I could do to improve the performance of the NVMe disks? On the other hand, pure transfer rates are not that important for a server that runs a dozen VMs; here the improved IOPS compared to rotational disks are the clear performance gain. But I’m still curious whether I could get maybe 2 GB/s out of a RAID10 setup with the NVMe disks. Then again, having two independent RAID1 setups, one for the MariaDB and one for the PostgreSQL databases, might be a better choice than a single RAID10 setup?
Back then there was a small website running on Kullervo to display some information about the Debian autobuilder. After some time we (the m68k porters) moved that webpage from Kullervo to my root server. Step by step this site evolved into Buildd.Net and was extended to other archs and to “suites” besides unstable, such as backports or non-volatile. The project became more and more complex, and a complete rewrite, which had become necessary, was beyond my ability.
So, in 2016 I put the project up for adoption, and in 2018 I shut it down because (apparently) nobody was taking over. From November 2005 until January 2018 I have entries for Buildd.Net in my PostgreSQL database.
I think the data in the database might be interesting for those who want to examine it. You can use it to see how build times increased over time, which e.g. led to m68k being dropped as a release arch because it couldn’t keep up anymore. I can imagine other interesting analyses with that data as well, for example how new versions of the toolchain increased build times, or whether a specific version of e.g. binutils or gcc had a positive effect on some archs but a negative effect on others.
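As a purely hypothetical sketch of the kind of query I mean, assuming a table and columns that do not necessarily match the real schema (builds, arch, build_time and finished are invented names here):

```bash
# Hypothetical: average build time per arch and year; the table and column
# names are invented for illustration and will differ in the real database.
psql builddnet -c "
  SELECT arch, date_trunc('year', finished) AS year, avg(build_time)
  FROM builds
  GROUP BY arch, year
  ORDER BY arch, year;"
```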
If there is interest in this data, I could open the database to the public or even upload a dump of the database so that you can download it and set it up yourself.
To address this I would like to see XMPP support in mail clients (MUAs). When you reply to a mail or write a new one, the client would check your address book for an XMPP field associated with the address and (if there is none) do a DNS lookup for _xmpp-server._tcp.example.com (with the domain part taken from the recipient’s address). If an XMPP address is listed in the mail headers, that JID would be used. When the lookup is successful and an xmpp: protocol handler is configured in the system, the MUA would offer an option to start a chat with the recipient and/or display the recipient’s presence status (depending on available web presence or a presence subscription).
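The DNS part is cheap for a MUA to implement; it is the same SRV lookup XMPP servers already do for federation. A quick sketch with dig, with example.com standing in for the domain part of the recipient’s address:

```bash
# SRV lookup for the XMPP server of the recipient's domain; a non-empty
# answer suggests that the domain offers XMPP.
dig +short SRV _xmpp-server._tcp.example.com
```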
A good candidate could basically be Thunderbird, because it already has XMPP support built in, albeit not a great implementation and one lacking many modern features like OMEMO. But for basic functions (like presence status and such) it should be sufficient for a start.
Other candidates could be Evolution, Kmail (as the KDE MUA, with Kaidan as a native KDE XMPP client) or even Apple’s Mail.app, because Apple’s address book supports XMPP fields for each contact.
Basically the same could be done for SIP contacts: if a SIP SRV record exists for that domain, the MUA could offer an option to call the recipient.
I would be willing to contribute some money via Bountysource or similar platforms. Is anyone aware of such a project, or willing to write such add-ons? Maybe within GSoC?
PS: there is RFC 7259 about the Jabber/XMPP JID in mail headers, and there is also a page in the XMPP.org wiki.
So, please update your git settings from https://github.com/ingoj to https://codeberg.org/Windfluechter (or the specific repo).
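For an existing checkout that roughly means the following; the repository name is a placeholder:

```bash
# Point an existing clone at the new Codeberg location; <repo> is a placeholder.
git remote set-url origin https://codeberg.org/Windfluechter/<repo>.git
git remote -v   # verify the new URL
```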
Honestly, I think the solution needs to be provided by LetsEncrypt…
I was having some strange issues on my ejabberd XMPP server the other day: some users complained that they couldn’t connect to the MUC rooms on my server anymore, and in the logfiles I discovered some weird warnings about LetsEncrypt certificates being expired, although they were brand new and valid until the end of December.
It looks like this:
[warning] <0.368.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem: at line 37: certificate is no longer valid as its expiration date has passed
and…
[warning] <0.18328.2>@ejabberd_s2s_out:process_closed/2:157 Failed to establish outbound s2s connection nerdica.net -> forum.friendi.ca: Stream closed by peer: Your server's certificate is invalid, expired, or not trusted by forum.friendi.ca (not-authorized); bouncing for 237 seconds
When checking with some online tools like SSLlabs or XMPP.net the result was strange: SSLlabs reported that everything was OK, while XMPP.net showed the chain with the X3 and D3 certs as having a short-term validity of only a few days.
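To see which chain ejabberd actually serves, independent of the online checkers, you can ask it directly with openssl. A sketch against the client port, assuming a reasonably recent OpenSSL that supports XMPP STARTTLS and the -xmpphost option; the host and domain names are placeholders:

```bash
# Show the leaf certificate that ejabberd presents on the c2s port.
openssl s_client -connect jabber.example.com:5222 \
        -starttls xmpp -xmpphost example.com -showcerts </dev/null \
  | openssl x509 -noout -issuer -subject -dates
```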
After some days of fiddling around with the issue and trying to find a solution, it appears that there is a problem in ejabberd when it finds old SSL certificates that still use the old CA chain. Ejabberd has a really nice feature where you can simply configure an SSL cert directory (or a path containing wildcards). Ejabberd then reads all of the SSL certs and compares them against the list of configured domains to see which ones it needs and which not.
What helped (for me at least) was to delete all expired SSL certs from my directory, download the current CA pem files from LetsEncrypt (see their blog post from September 2020), run update-ca-certificates and then ejabberdctl restart (instead of just ejabberdctl reload-config). UPDATE: be sure to use dpkg-reconfigure ca-certificates to uncheck the DST Root X3 cert (and others if necessary) before renewing the certs or running update-ca-certificates. Otherwise the update will bring the expired cert back in.
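In shell terms the whole cleanup boiled down to something like the following. The cert path comes from my letsencrypt.sh setup and will differ on other systems, so treat this as a sketch rather than a copy-and-paste recipe:

```bash
# List certificates that have already expired (openssl x509 -checkend 0
# returns non-zero once the notAfter date has passed). Review the list
# before deleting anything.
for pem in /etc/letsencrypt.sh/certs/*/fullchain.pem; do
    openssl x509 -checkend 0 -noout -in "$pem" >/dev/null || echo "expired: $pem"
done

# Untick "DST Root CA X3" (and other expired roots) in the dialog, then
# rebuild the CA bundle and restart ejabberd.
dpkg-reconfigure ca-certificates
update-ca-certificates
ejabberdctl restart
```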
Currently I see at least two other XMPP domains in my server logs having certificate issues, and in some MUCs there are reports about further affected domains as well.
Disclaimer: again, this is what helped in my case. I don’t know whether this is a bug in Ejabberd, whether this procedure will help in your case, or whether it is the proper solution. But maybe my story will help you solve your issue if you have been seeing SSL cert problems in the last few days, especially now that the R3 cert has already expired and the X3 cert will follow in a few hours.
The #btw21 is a climate election.
We have no time left to waste another legislative term on the CDU.
For green-red-red. #NieMehrCDU
I can really only agree with that. Since the publication of “The Limits to Growth” by the Club of Rome in 1972 it has been known that eternal (economic) growth is a dead end. Climate change, too, has been known about for a long time, as an Exxon study on climate change shows (Spiegel Online), which predicted the warming very accurately.
In the 1980s, acid rain and environmental pollution were big topics in West Germany and, alongside the anti-nuclear protests, ultimately led to the founding of the Green Party.
Anyone who thinks that the conservative parties (CDU, CSU, SPD, FDP) are even remotely capable of mastering the challenges of the ongoing climate change is gravely mistaken. We need radical cuts and a fundamental change in policy. I do not see how that is supposed to work with the parties that have governed so far.
All voters over 40 should ask themselves whether they shouldn’t give their vote to the young, who will have to live with the consequences of the failed policies of the last 40 years for far longer. Ask your children and grandchildren who are not yet entitled to vote how you should vote in the federal election. Or simply vote for a change in climate policy of your own accord. In this case that means voting for the Greens. They don’t always get everything right either and have some baggage of their own, but at the moment there is no other alternative to CDU/CSU and SPD.