
Too much disk IO on sda in RAID10 setup - part 2

Some days ago I blogged about my issue with one of the disks in my server showing high utilization and latency. There were several ideas and guesses about what the reason might be, but I think I found the root cause today:

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     50113         60067424
# 2  Short offline       Completed without error       00%      3346   

After the RAID-Sync-Sunday last weekend, I removed sda from my RAIDs today and started a "smartctl -t long /dev/sda". This test was quickly aborted because it ran into a read error after just a few minutes. Currently I'm still running a "badblocks -w" test, and this is the result so far:

# badblocks -s -c 65536 -w /dev/sda
Testing with pattern 0xaa: done
Reading and comparing: 42731372 done, 4:57:27 elapsed. (0/0/0 errors)
42731373 done, 4:57:30 elapsed. (1/0/0 errors)
42731374 done, 4:57:33 elapsed. (2/0/0 errors)
42731375 done, 4:57:36 elapsed. (3/0/0 errors)
 46.82% done, 6:44:41 elapsed. (4/0/0 errors)

Long story short: I already ordered a replacement disk!
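For the record, taking the disk out of the arrays before testing looked roughly like this (a sketch; the md device and partition names are assumptions, and badblocks -w is destructive):

# fail and remove the disk's partition from each affected md array
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
# then run the SMART self-test and the destructive write test
smartctl -t long /dev/sda
badblocks -s -c 65536 -w /dev/sda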

But what's also interesting is this:

I removed the disk today at approx. 12:00 and you can see the immediate effect on the other disks/LVs (the high blue graph from sda shows the badblocks test), even though the RAID10 is now running in degraded mode. It's interesting what effect (currently) 4 defective blocks can have on RAID10 performance without smartctl taking notice of it. Smartctl only reported an issue after I issued the self-test. It's also strange that the latency and high utilization increased slowly over time, over roughly the last 6 months.


Too much disk IO on sda in RAID10 setup

I have a RAID10 setup with 4x 2 TB WD Red disks in my server. Although the setup works fairly well and has enough throughput, there is one strange issue: /dev/sda has more utilization/load than the other 3 disks. See the blue line in the following graph, which represents the weekly utilization of sda:

As you can see from the graphs and from the numbers below, sda has a 2-3 times higher utilization than sdb, sdc or sdd, especially when looking at the disk latency graph from Munin:

Although the graphs are a little confusing, you can easily spot the big difference in the values below. And it's not only Munin showing this strange behaviour of sda, but also atop:

Here you can see that sda is 94% busy although the writes to it are a little lower than on the other disks. The atop screenshot was taken before I moved MySQL/MariaDB to my NVMe disk 4 weeks ago. But you can also see that sda is slowing down the RAID10.
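If you want to reproduce such numbers without Munin or atop, extended per-device statistics can also be pulled with iostat from the sysstat package, for example:

# utilization (%util), latency (await) and queue size, refreshed every 5 seconds
iostat -x 5 sda sdb sdc sdd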

So the big question is: why are the utilization and latency of sda that high? It's the same disk model as the other disks, and all disks are connected to a Supermicro X9SRi-F mainboard. The first two SATA ports are 6 Gbit/s, the other 4 ports are 3 Gbit/s. Below is the smartctl output for sda, sdb, sdc and sdd, in that order:

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD20EFRX-68AX9N0
Serial Number:    WD-WMC301414725
LU WWN Device Id: 5 0014ee 65887fe2c
Firmware Version: 80.00A80
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan  5 17:24:58 2019 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD20EFRX-68AX9N0
Serial Number:    WD-WMC301414372
LU WWN Device Id: 5 0014ee 65887f374
Firmware Version: 80.00A80
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan  5 17:27:01 2019 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD20EFRX-68AX9N0
Serial Number:    WD-WMC301411045
LU WWN Device Id: 5 0014ee 603329112
Firmware Version: 80.00A80
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Jan  5 17:30:15 2019 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD20EFRX-68AX9N0
Serial Number:    WD-WMC301391344
LU WWN Device Id: 5 0014ee 60332a8aa
Firmware Version: 80.00A80
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Jan  5 17:30:26 2019 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

The disks even run the same firmware version. I would have expected slower disk IO from the 3 Gbit/s disks (sdc & sdd), but not from sda. All disks are configured in the BIOS to use AHCI.
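To verify what link speed each disk actually negotiated, the kernel can be asked directly (a sketch; the libata link numbering depends on the controller):

# negotiated SATA link speed per libata link
grep . /sys/class/ata_link/link*/sata_spd
# or check the kernel log
dmesg | grep -i 'SATA link up'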

I cannot explain why sda has higher latency and more utilization than the other disks. Any ideas are welcome and appreciated. You can also reach me in the Fediverse (Friendica: https://nerdica.net/profile/ij & Mastodon: https://nerdculture.de/) or via XMPP at ij@nerdica.net.


Adding NVMe to Server

My server runs on a RAID10 of 4x 2 TB WD Red disks. Basically those disks are fast enough to cope with the disk load of the virtual machines (VMs). But since many users moved away from Facebook and Google, my Friendica installation on Nerdica.Net has a growing user count, putting a large disk I/O load with many small reads & writes on the disks and slowing down general disk I/O for all the VMs and the server itself. On mdraid-sync-Sunday this month the server needed two full days to sync its RAID10.

So the idea was to move the high disk I/O load away from the rotational disks to something different. For that reason I bought a Samsung 970 Pro 512 GB NVMe disk and a matching PCIe 3.0 adapter card for my server in the colocation. On Thursday the Samsung was installed by the rrbone staff, and I moved the PostgreSQL and MySQL databases from the RAID10 to the NVMe disk and restarted the services.
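The move itself was nothing fancy; roughly the following steps, where the paths are assumptions rather than my exact layout:

# stop the databases and copy the data to the NVMe disk
systemctl stop mysql postgresql
rsync -a /var/lib/mysql/ /mnt/nvme/mysql/
rsync -a /var/lib/postgresql/ /mnt/nvme/postgresql/
# either adjust datadir in my.cnf and data_directory in postgresql.conf,
# or simply bind-mount the new locations over the old paths:
mount --bind /mnt/nvme/mysql /var/lib/mysql
mount --bind /mnt/nvme/postgresql /var/lib/postgresql
systemctl start mysql postgresql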

Here are some results from Munin monitoring: 

Disk Utilization

Here you can see how the disk utilization dropped after the NVMe installation. The red bar symbolizes the average utilization of the RAID10 disks before the change, and the green bar the same RAID10 after the databases were moved to the NVMe disk. There's roughly 20% less utilization now, which is good.

Disk Latency

Here you can see the same coloured bars for the disk latency. As you can see, latency dropped by about a third.

CPU I/O wait

Maybe the most significant graph is the CPU graph, where you could previously see a large portion of CPU time spent in iowait. This is no longer true: there is apparently no significant iowait anymore, thanks to the low latency and high IOPS of SSD/NVMe disks.

Overall I cannot confirm that adding the NVMe disk results in significantly faster page loads for Friendica or Mastodon, maybe because other measures like Redis/Memcached or pgbouncer already helped a lot before. But it helps a lot with general disk I/O load and improves disk speeds inside the VMs, for example for my regular backups.

Ah, one more thing to report: in a quick test, pgbench now reports >2200 tps on the NVMe disk. That at least is a real speed improvement, maybe by an order of magnitude.
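For the record, the numbers came from a quick test along these lines (a sketch; scale and client counts are arbitrary choices, not my exact invocation):

createdb bench
pgbench -i -s 50 bench          # initialize the test tables
pgbench -c 10 -j 2 -T 60 bench  # 10 clients, 2 threads, 60 seconds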


Xen & Databases

I'm running PostgreSQL and MySQL on my server; the two serve different databases for Wordpress, Drupal, Piwigo, Friendica, Mastodon, whatever...

In the past the databases were colocated in my mailserver VM, whereas the webserver was running in a different VM. At some point I moved the databases from domU to dom0, maybe because I thought they would be faster with direct disk I/O in the dom0 environment, but I can't remember the exact reasons anymore.

However, in the meantime the size of the databases grew, and so did the number of VMs. MySQL and PostgreSQL are both configured/optimized to run with 16 GB of memory in dom0, but in the last months I experienced high disk I/O, especially from MySQL, and because of that slow I/O performance in all the domU VMs.

Currently iotop shows something like this:

Total DISK READ :     131.92 K/s | Total DISK WRITE :    1546.42 K/s
Actual DISK READ:     131.92 K/s | Actual DISK WRITE:       2.40 M/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 6424 be/4 mysql       0.00 B/s    0.00 B/s  0.00 % 60.90 % mysqld
18536 be/4 mysql      43.97 K/s   80.62 K/s  0.00 % 35.59 % mysqld
 6499 be/4 mysql       0.00 B/s   29.32 K/s  0.00 % 13.18 % mysqld
20117 be/4 mysql       0.00 B/s    3.66 K/s  0.00 % 12.30 % mysqld
 6482 be/4 mysql       0.00 B/s    0.00 B/s  0.00 % 10.04 % mysqld
 6495 be/4 mysql       0.00 B/s    3.66 K/s  0.00 % 10.02 % mysqld
20144 be/4 postgres    0.00 B/s   73.29 K/s  0.00 %  4.87 % postgres: hubzilla hubzi~
 2920 be/4 postgres    0.00 B/s 1209.28 K/s  0.00 %  3.52 % postgres: wal writer process
11759 be/4 mysql       0.00 B/s   25.65 K/s  0.00 %  0.83 % mysqld
18736 be/4 mysql       0.00 B/s   14.66 K/s  0.00 %  0.17 % mysqld
21768 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.02 % [kworker/1:0]
 2922 be/4 postgres    0.00 B/s   69.63 K/s  0.00 %  0.00 % postgres: stats collector process

The MySQL data size is below the configured maximum memory size for MySQL, so everything should more or less fit into memory. Yet there is still a large amount of disk I/O from MySQL, much more than from PostgreSQL. Of course, a good part of that I/O comes from writes to the databases.
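To double-check that claim, the on-disk data size can be compared with the configured buffer pool; a minimal sketch (it sums data and indexes across all databases, which is close enough here):

# total on-disk size of data + indexes across all databases, in GB
mysql -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS data_gb
          FROM information_schema.tables;"
# compare with the configured InnoDB buffer pool
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"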

However, I'm thinking of changing my setup back to a domU-based database setup: maybe one dedicated VM for both DBMS', or even two dedicated VMs, one for each of them. I'm also not quite sure how Xen reacts to the current workload.

Back in the days when I did 3D computer graphics I did a lot of testing with different priority settings and such. Basically one would think that giving the renderer more CPU time would speed up the rendering, but this turned out to be wrong: the higher the render task's priority was, the slower the rendering got, because disk I/O (and other tasks that were necessary for the render task to work) got slowed down. When running the render task at the lowest priority, all the other necessary tasks could run at higher speed and return the CPU more quickly, which resulted in shorter render times.

So maybe I'm experiencing something similar with the databases on dom0 here: dom0 is busy doing database work, and this slows down all the other tasks (== the domU VMs). Moving the databases back to a domU might enable dom0 to better do its basic job of taking care of the domUs.
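One cheap experiment along the lines of that rendering lesson would be to deprioritize the databases' disk I/O in dom0 before moving anything (a sketch; it assumes a single mysqld process and the usual util-linux tools):

# put mysqld into the idle I/O scheduling class so domU I/O wins
ionice -c3 -p "$(pidof mysqld)"
# heavy one-off jobs can be started with low CPU and I/O priority, e.g.:
nice -n 19 ionice -c2 -n7 mysqldump --all-databases > /var/backups/all.sql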

Of course this is also a somewhat philosophical question, but what is the recommended setup? Is it better to separate the databases into two different VMs, or to use just one? Or is running the databases on dom0 the best option after all?

I'm interested in your feedback, so please comment! :-)

UPDATE: you can also contact me @ij@nerdculture.de on Mastodon or on Friendica at https://nerdica.net/profile/ij


#Friendica vs #Hubzilla vs #Mastodon

I've been running a #Friendica node for several years now. Some months ago I also started to run a #Hubzilla hub, and some days ago I installed #Mastodon on a virtual machine as well, because there was so much hype about Mastodon recently due to some changes Twitter made in regards to 3rd-party clients.

All of those social networks do have their own focus:

Friendica: can basically connect to all other social networks, which is quite nice because historically there are two different worlds: the Federation (Diaspora, Socialhome) and the Fediverse (GNU Social, Mastodon, postActiv, Pleroma). Only Friendica and Hubzilla can federate with both the Federation and the Fediverse.
Friendica's look & feel sometimes appears a little outdated, but it works very well and reliably.

Hubzilla: is the second player in the field of connecting both federations, but has a different focus. It is more of a one-size-fits-all approach. If you need a microblogging site, a wiki, a cloud service, a website, etc., then Hubzilla is the way to go. The look & feel is a little more modern, but there are some quirks that appear a little odd to me. A unique feature of Hubzilla seems to be the concept of "nomadic accounts": you can move to a different hub and take all your data with you. Read more about that in the Hubzilla documentation.

Mastodon: aims to be a replacement for Twitter as a microblogging service. It looks nice and shiny, has a bunch of nice smartphone clients and has by far the largest userbase (which is not that important, thanks to federation).
But the web GUI is rather limited and weird, as far as I can tell after just a few days.

Technically speaking, these are the main differences:
- Friendica: MySQL/MariaDB, PHP on the server, Clients: some Android clients, no iOS client
- Hubzilla: MySQL/MariaDB or PostgreSQL, PHP on the server, Clients: don't know, didn't care so far.
- Mastodon: PostgreSQL, Ruby on the server, Clients: many iOS and Android clients available

I'm not that big a Ruby fan, and if I remember correctly the Ruby stack is what turned me away from Diaspora years ago and made me switch to Friendica, because back then it was a pain to maintain Diaspora. Mastodon addresses this by offering Docker containers for ease of installation and maintenance. But as I'm no Docker fan either, I followed the guide to install Mastodon without Docker, which works so far as well (for the last 3 days).

So, after all, my Friendica node is still my favorite, because it just works and is reliable. Hubzilla has a different approach and offers a full set of web features plus nomadic accounts. The best I can say about Mastodon at this moment is: it runs on PostgreSQL and has nice clients on mobile devices.

Here are my instances:
- Friendica: https://nerdica.net/
- Hubzilla: https://silverhaze.eu/
- Mastodon: https://nerdculture.de/

PS: "A quick guide to The Free Network" by Sean Tilley on https://medium.com/we-distribute/a-quick-guide-to-the-free-network-c0693...

PPS: this is a cross post from my Friendica node.


#DeleteFacebook and alternative Social Networks

Some weeks ago a more or less large scandal popped up in the media: Cambridge Analytica had misused data from Facebook. Many Facebook users were unhappy about this abuse of their personal data and deleted their Facebook accounts - or at least tried some alternatives like Friendica, Hubzilla, Diaspora, Mastodon and others.

There has been a significant increase in user count since then, which gave a general boost to these networks.

Apropos networks... basically there are two large networks: the Federation (Diaspora, Socialhome) and the Fediverse (GNU Social, Mastodon, postActiv, Pleroma). Within each network all solutions can exchange information like posts, comments, user information and such - in other words, they federate with each other. Across the two networks they don't: when you use Mastodon, your posts won't be available to Diaspora users and vice versa, as the networks use different protocols for federation.

And here Friendica and Hubzilla do have their advantage: both are able to federate with both networks. Sean Tilley has some more information in his article "A quick guide to The Free Network" on medium.com.

Another resource you can use to find out more about alternatives to Facebook is the great new Fediverse Wiki.

From my point of view I would recommend either Friendica or Hubzilla, depending on what you want:

  • Friendica is in my opinion the best solution for having the best of both worlds, i.e. Fediverse and Federation. It has active developers and a good and helpful community. It concentrates on the social network topic.
  • Hubzilla is a more complete approach: you can add addons to get cloud space, a wiki, or the ability to create web pages.

Both offer multiple profiles within one account (Friendica: profiles, Hubzilla: channels), and of course you get fine-grained control over your content. There is also a fresh Youtube channel describing some Friendica features. Although it is in German, others might get some helpful hints from those videos as well.

Which alternative is best for you is up to you to decide. All alternatives have their pros and cons. If you don't already have a website, cloud space or such, Hubzilla might be the best choice for you. If you don't need such additional functions, you might be best served by Friendica. If you like to install Docker images, Mastodon will make you happy.

In the end, it's all about choice! In all cases you will have better control over your own data. You can run your own instance, make it private-only, or you can join one of the available servers and try out what suits you best. For Friendica you can find some public servers on https://dir.friendica.social.

I'm running a Friendica node on https://nerdica.net/ as well as a Hubzilla hub on https://silverhaze.eu/ - feel invited to try both and register on either one to have a look at some alternatives to Facebook.

After all: decentralize and spread the word! Use the alternatives you have and don't sell your privacy to the big players like Facebook and Google if you don't need to. :-)

PS: If you are already on one of those alternative networks, please feel free to connect with me on my Friendica node or Hubzilla hub!


Upgrade to Debian Stretch - GlusterFS fails to mount

Before I upgraded from Jessie to Stretch, everything worked like a charm with glusterfs in Debian. But after I upgraded the first VM to Debian Stretch I discovered that glusterfs-client was unable to mount the storage on the Jessie servers. I got this in the glusterfs log:

[2017-06-24 12:51:53.240389] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 12:51:54.534826] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 12:51:54.534896] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
[2017-06-24 12:51:56.668254] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-24 12:51:56.671649] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-06-24 12:51:56.671669] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/le)
[2017-06-24 12:51:57.014502] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbea36c4a20] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x55fbbaed06f4] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55fbbaeca444] ) 0-: received signum (0), shutting down
[2017-06-24 12:51:57.014564] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/etc/letsencrypt.sh/certs'.
[2017-06-24 16:44:45.501056] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 16:44:45.504038] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 16:44:45.504084] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount

After some searching on the Internet I found Debian bug #858495, but no solution for my problem. Some search results recommended setting "option rpc-auth-allow-insecure on", but this didn't help. In the end I joined #gluster on Freenode and got some hints there:

JoeJulian | ij__: debian breaks apart ipv4 and ipv6. You'll need to remove the ipv6 ::1 address from localhost in /etc/hosts or recombine your ip stack (it's a sysctl thing)
JoeJulian | It has to do with the decisions made by the debian distro designers. All debian versions should have that problem. (yes, server side).

Removing ::1 from /etc/hosts and from the lo interface did the trick, and I could mount glusterfs storage from the Jessie servers in my Stretch VMs again. However, when I upgraded the glusterfs storages themselves to Stretch as well, this "workaround" didn't work anymore. Some more searching on the Internet led me to this posting on the glusterfs mailing list:

We had seen a similar issue and Rajesh has provided a detailed explanation on why at [1]. I'd suggest you to not to change glusterd.vol but execute "gluster volume set <volname> transport.address-family inet" to allow Gluster to listen on IPv4 by default.

Setting this option instantly fixed my issues with mounting glusterfs storages.
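Concretely, for the volume named "le" that shows up in the logs above, that is:

gluster volume set le transport.address-family inet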

So, whatever is wrong with glusterfs in Debian, it seems to have something to do with IPv4 vs. IPv6. When IPv6 is disabled in glusterfs, it works. I added this information to #858495.


Back to the roots: FidoNet - I'm back!

Last month I blogged about Fidonet. This month I can report that I'm back in FidoNet. While I was 2:2449/413 back then, my new node number is 2:2452/413@fidonet. The old network 2:2449 is still listed in the Fidonet nodelist but is no longer active; maybe I can revive that network at a later time. Who knows.

The other problem I complained about last month was missing software in Debian. There are binkd and ifcico as mailer software and crashmail and ifmail as tossers, but no reader software. So how did I get started again? First, I got into the mood by watching all parts of the BBS documentary.

It's a nice watch: even if you don't plan to start a BBS or join Fidonet like I did, you can see Tom Jennings and others talking about BBSes in general and about Fidonet. It's somewhat of a nice way-back machine, and it actually got me to start my comeback to Fidonet. I tried to compile some projects from Sourceforge like fidoip or GoldEdPlus, but all of them were in a state where they didn't compile under Debian without additional work - at least not with the debian/rules files they include.

So I decided to reactivate my old Fidonet software on my Amiga. Instead of GMS_Mailer I found AmiBinkd on Aminet, which runs quite well. With that setup I was able to call other Fidonet nodes and do some file requests. That way I found out that 2:2452/250 is one of the still reachable Fidonet boxes in Germany, and soon I became 2:2452/413, still running on my Amiga with MailManager as tosser and reader and AmiBinkd as mailer.

Using Fidonet is quite different nowadays: you don't need to call out via a phone line anymore, but use Internet connections instead. Although this is nice, much faster, comes at no additional cost and lets you use "crash mail", it's not the same fun as dialing into a mailbox by modem and hearing the typical squeaking sound of a modem connecting. So I bought a Zyxel U-1496E modem on Ebay for € 5.50 and connected it to my FritzBox 7490. This works quite well and I could place calls via the modem using TrapDoor as mailer on my Amiga.

Anyway, using my Amiga was only a temporary solution to get me up & running again. The goal is to run a full-featured Fidonet node on Debian on my colocated server in the datacenter, and in the meantime I was able to switch the DNS record from my Amiga to that server, running binkd from Debian and the Husky suite as tosser.
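The binkd part of that setup boils down to a small config file; a minimal sketch from memory (all paths, names and the uplink/password are placeholders, not my actual configuration):

# /etc/binkd/binkd.conf - minimal sketch
log /var/log/binkd.log
loglevel 4
domain fidonet /var/spool/fido/outbound 2
address 2:2452/413@fidonet
sysname "My System"
sysop "My Name"
location "Somewhere, Germany"
nodeinfo 115200,TCP,BINKP
inbound /var/spool/fido/inbound
inbound-nonsecure /var/spool/fido/inbound-nonsecure
# uplink: FTN address, hostname and session password
node 2:2452/250@fidonet uplink.example.org SECRET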

Husky is a complete Fidonet suite, including tosser, areafix, filefix, tic-file processor, etc. However, there are no Debian packages available - at least none that are easy to find. Philipp Giebel pointed me in a Fidonet echoarea to his personal repository for Debian and Raspbian:

https://www.kuehlbox.wtf/index.php#repo

He was very helpful in getting me started on Linux with Husky and shared many of his config files with me. Big thanks for that! He also used our discussions to write a blog article about this. Although it's in German only, you can find the necessary config files there:

https://www.stimpyrama.org/blog/17-computer/138-ftnsetup

It covers nearly all necessary aspects:

  • how to set up his repo in your apt sources
  • installing the necessary packages
  • configuration of husky, binkd and goldedplus, with example configs
  • some tips & tricks, like keyboard shortcuts for goldedplus, etc.

So, this is really helpful for everyone who wants to join Fidonet as well.

You can use goldedplus as a reader for Fidonet, or if you just want to be a point and not a full node, you might want to try OpenXP on Linux. OpenXP includes everything you'll need as a point: a mailer, reader and tosser. You can even use it as a mail reader via POP3/IMAP or to read Internet News (aka newsgroups).

It's still possible to run a Fidonet node on Amiga, on Linux and of course on other operating systems like Windows and even OS/2. And with HotdogEd there is even Fidonet software available for your Android smartphone!

But why Fidonet if you already have the Internet at your fingertips? Well, that is something you need to decide for yourself, but for me there are several reasons why I rejoined Fidonet after 17 years of inactivity:

  • It's not the Internet! :-) This means basically no spam mails. At least I haven't experienced any spam so far.
  • It's a small and welcoming community.
  • There is not only Fidonet itself (with zones 1:* to 5:*), but other zones as well, for example AmigaNet with zone 39:* or fsxNet with zone 21:*. FTN technology makes it easy to set up your own network based on a certain topic.
  • It's a technology that enabled people to communicate worldwide with each other long before the Internet was available to everyone! This is a kind of technical heritage I find worthwhile to preserve.
  • Although most of us can enjoy a free and open Internet, this is not true for everyone in the world. Nowadays some regimes decide to block and censor the Internet for their citizens. Fidonet or FTN technology can enable those citizens to still communicate freely and without censorship, even when Tor no longer works because the Internet as a whole has been taken down in a country. Often enough the phone lines still work, and therefore modems can be used to connect to mailboxes and exchange mails and files. FTN is optimized for this kind of dialup connection, and this is one of the main reasons why I want to offer connections to my Fidonet node not only via the Internet but also by modem.

So, be invited to join Fidonet as well!


Back to the roots: FidoNet

Back in the good old days there was no Facebook, Google+ or Skype, and there were no XMPP servers for people to communicate with each other. The first "social communities" were Bulletin Board Systems (BBS), if you want to see them as social communities. Often those BBS not only offered communication possibilities to online users but also ways to communicate with others while being offline. From today's point of view being offline is a strange concept, but it was a common scenario 20-30 years ago, because being online meant dialing via a modem and a phone line into a BBS or - later - an Internet provider. Those BBS interconnected with each other, and networks grew that allowed exchanging messages between different BBS - or mailboxes. One of those networks was FidoNet.

When I went "online" back then, I called into a BBS, a mailbox. I don't know why, but when I was reading messages from others the mailbox crashed quite frequently. So the "sysop" of that mailbox offered to make me a FidoNet point - just to prevent me from crashing his mailbox all the time. So there I was: a FidoNet point, reachable under the FidoNet address 2:2449/413.19. At some point I took over the mailbox from the old sysop, because he moved out of town. Then the Internet arose in the late 1990s, making all those BBS, mailboxes and networks such as FidoNet obsolete.

However, it was a whole lot of fun back then - so much fun that I plan to join FidoNet again. Yes, it's still there! Instead of using dial-up connections via modems, most nodes in FidoNet now offer connections via the Internet as well.

A FidoNet system (node) usually consists of a mailer that does the exchange with other systems, a tosser that "routes" the mail to the recipients, and a reader with which you can finally read and write messages. Back in the old days I ran my mailbox on my Amiga 3000 with a Zyxel U-1496E+ modem, later with an ISDN card called ISDN-Master. The software was first TrapDoor as mailer and TrapToss as tosser, later replaced by GMS Mailer as mailer and MailManager as tosser and reader.

Unfortunately GMS Mailer is not able to handle connections via the Internet. For this you'll need something like binkd, which is available as a Debian package. A quick search for FidoNet packages on Debian reveals this:

# apt-cache search fidonet
crashmail - JAM and *.MSG capable Fidonet tosser
fortunes-es - Spanish fortune database
htag - A tagline/.signature adder for email, news and FidoNet messages
ifcico - Fidonet Technology transport package
ifgate - Internet to Fidonet gateway
ifmail - Internet to Fidonet gateway
jamnntpd - NNTP Server allowing newsreaders to access a JAM messagebase
jamnntpd-dbg - debugging symbols for jamnntpd
lbdb - Little Brother's DataBase for the mutt mail reader

So, there are at least two different mailers (ifcico and binkd) and crashmail as a tosser. What is missing is a FidoNet reader. In older Debian releases there was GoldEd+, but this package was removed from Debian some years ago. There's still some upstream development of GoldEd+, but when I tried it, the compile failed. So there is no easy way to get a full FidoNet node running on Debian, which is sad.

Yes, FidoNet is maybe outdated technology, but it's still alive and I would like to get a FidoNet node running again. Are there any other FidoNet nodes running on Debian whose operators could assist in setting things up? There are maybe some fully integrated solutions like MysticBBS, but I'm unsure about those.

So, any tips and hints are welcome! :-)


Migrating from Owncloud 7 on Debian to Nextcloud 11

These days I got a mail from my hosting provider stating that my Owncloud instance is insecure, because the online scan from scan.nextcloud.com had mailed them. However, the scan seemed quite bogus: it reported some issues that were listed as already solved in Debian's changelog file. But unfortunately the last entry in that changelog was from January 5th, 2016. So there has been more than a whole year without security updates for Owncloud in Debian stable.

In a discussion with the Nextcloud team I complained a little that the scan/check is not appropriate. The Nextcloud team replied very helpfully with additional information, such as two bug reports in Debian clarifying that the Owncloud package will most likely be removed in the next release: #816376 and #822681.

So, as there is no Nextcloud package in Debian unstable as of now, there was no way around manually upgrading & migrating to Nextcloud. This went fairly well:

ownCloud 7 -> ownCloud 8.0 -> ownCloud 8.1 -> ownCloud 8.2 -> ownCloud 9.0 -> ownCloud 9.1 -> Nextcloud 10 -> Nextcloud 11

There were some smaller caveats:

  1. When migrating from OC 9.0 to OC 9.1 you need to migrate your addressbooks and calendars as described in the OC 9.0 Release Notes.
  2. When migrating from OC 9.1 to Nextcloud 10, the OC 9.1 version number is higher than the Nextcloud upgrade script expects, so it warns that you can't downgrade your installation. The fix was simply to change the OC version in config.php (see the sketch after this list).
  3. The Documents app of OC 7 is no longer available in Nextcloud 11 and is replaced by the Collabora app, which is way more complex to set up.
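For caveat 2, the combination of version override and upgrade step looked roughly like this (a sketch; the exact version string is an assumption and depends on your installation):

# in config/config.php, lower the recorded version so the updater accepts it:
#   'version' => '9.1.0.0',
# then, from the Nextcloud installation directory, run the upgrade as usual:
sudo -u www-data php occ upgrade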

The installation and setup of the Docker image for collabora/code was the main issue, because I wanted to be able to edit documents in my cloud. For some reason Nextcloud couldn't connect to my Docker installation. After some web searches I found "Can't connect to Collabora Online", which led me to the next entry in the Nextcloud support forum. But in the end it was this posting that finally made it work for me. So, in short, I needed to add...

DOCKER_OPTS="--storage-driver=devicemapper"

to /etc/default/docker.
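For reference, the collabora/code container itself is started along these lines (a sketch based on the Collabora/Nextcloud documentation of that time; the domain regex is a placeholder for your cloud's hostname):

docker pull collabora/code
docker run -t -d -p 127.0.0.1:9980:9980 -e 'domain=cloud\\.example\\.com' \
    --restart always --cap-add MKNOD collabora/code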

So, in the end everything worked out well and my cloud instance is secure again. :-)

UPDATE 2016-02-18 10:52:
Sadly, with that working Collabora Online container from Docker, I now face this issue of zombie loolforkit processes inside of that container.

