[WLANware] [christof.schulze at gmx.net: babeld usage for wifi mesh for festival in Germany]

Moritz Warning moritzwarning at web.de
Sat Jul 6 16:53:11 CEST 2019


Thanks for the info. It is interesting to hear about Babel for a change.

The 1K clients were presumably counted over the course of the event?

On 7/6/19 12:48 PM, Christof Schulze wrote:
> Hello everyone,
>
> By popular request, I don't want to withhold these messages from you any longer.
>
> The three points raised have already been addressed in the respective software (babeld now scales linearly rather than quadratically, and mmfd no longer listens on the babel socket but does its own neighbour discovery), which will significantly reduce the CPU load. I am looking forward to the next tests at this scale.
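>
> A rough back-of-envelope, assuming the quadratic cost scales with the number of routes (the message does not say in which quantity babeld scaled quadratically): at the roughly 6,000 routes seen here, quadratic behaviour is on the order of 6,000 * 6,000 = 36,000,000 operations per pass, while linear behaviour stays around 6,000 - a reduction by a factor of several thousand, which is why a clearly lower CPU load is expected.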
>
> Best regards
>
> Christof
>
>
>
> ----- Forwarded message from Christof Schulze <christof.schulze at gmx.net> -----
>
> Date: Thu, 4 Jul 2019 23:45:02 +0200
> From: Christof Schulze <christof.schulze at gmx.net>
> To: Babel users mailing list <babel-users at lists.alioth.debian.org>
> Cc: User Ml Ffffm <user at wifi-frankfurt.de>, geno at fireorbit.de, ffmd-dev at netz39.de
> Subject: babeld usage for wifi mesh for festival in Germany
> Message-ID: <20190704214502.eowtr74zu5o5gwmh at mail.christofschulze.com>
> User-Agent: NeoMutt/20170113 (1.7.2)
>
> Hello everyone,
>
> Babeld 1.8.4 was just used to build a mesh network for the German festival "Breminale". Client roaming was handled by l3roamd, and multicast routing was enabled mesh-wide using mmfd. Before the network was taken down and replaced with a switched network, it carried 5-6K routes and 1K client devices, spread across 58 nodes, each with 2-4 interfaces configured into every daemon running on it.
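>
> For illustration only - this is not the actual Breminale/gluon configuration - a minimal babeld setup for such a node might look like the sketch below. The interface names, the client prefix and the status-socket port are assumptions.
>
>   # /etc/babeld.conf -- minimal sketch; names, prefix and port are assumed
>   # read-only status socket, as consumed by tools such as mmfd
>   local-port 33123
>   # the node's mesh interfaces
>   interface mesh0 type wireless
>   interface mesh1 type wireless
>   # announce this node's client prefix, keep everything else local
>   redistribute local ip 2001:db8:100::/64 allow
>   redistribute local deny
>
> With such a file in place the daemon would be started along the lines of "babeld -c /etc/babeld.conf"; alternatively the interfaces can simply be listed on the command line.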
>
> This is proof that this type of setup is able to scale to the dimensions needed by Freifunk communities in Germany. This is the largest real-life babeld installation that I know of, and the fact that it was possible to pull it off is a great success. I could stop this email here; however, the engineer in me cannot deny that:
> * Route distribution was slow at that network size, to the point where the network was unusable during peak times.
> * Babeld was spending a lot of CPU time (1.9 should help).
> * mmfd was listening on the babeld status socket, burning 30% CPU. Monitoring just neighbour changes would have helped significantly. To address this, an architecture change is being implemented in mmfd that removes the dependency on the babel socket; this will reduce the load as well. A rough sketch of such a neighbour-only monitor follows below.
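>
> The kind of lightweight monitoring described above could look roughly like the following sketch: a small client that connects to babeld's local status socket, asks it to stream changes, and reacts only to neighbour lines while ignoring the far more numerous route updates. The host, port, the "monitor" command and the exact line format are assumptions about babeld's local interface; this is not mmfd's actual code.
>
>   #!/usr/bin/env python3
>   # Sketch of a neighbour-only monitor for babeld's local status socket.
>   # Host, port and message format are assumptions, not mmfd's implementation.
>   import socket
>
>   BABEL_HOST = "::1"     # babeld's local interface (assumed)
>   BABEL_PORT = 33123     # babeld's local-port (assumed)
>
>   def monitor_neighbours():
>       with socket.create_connection((BABEL_HOST, BABEL_PORT)) as sock:
>           sock.sendall(b"monitor\n")   # stream changes instead of polling full dumps
>           buf = b""
>           while True:
>               chunk = sock.recv(4096)
>               if not chunk:
>                   break
>               buf += chunk
>               while b"\n" in buf:
>                   line, buf = buf.split(b"\n", 1)
>                   words = line.decode(errors="replace").split()
>                   # react only to neighbour events ("add/change/flush neighbour ...")
>                   if len(words) >= 2 and words[1] == "neighbour":
>                       print("neighbour event:", " ".join(words))
>
>   if __name__ == "__main__":
>       monitor_neighbours()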
>
> I am extremely curious how the stack will perform next time this is attempted using babeld 1.9.
>
> I thank the Freifunk Bremen team around genofire for allowing this kind of experiment at that scale, for making it all happen, and for providing extremely valuable input for debugging and further developing the required software stack [1]. Thank you very much.
>
> [1] https://github.com/FreifunkBremen/gluon-site-ffhb/tree/breminale2019-babel
>
> Cheers
> Christof
>
>


