
rns.recipes

Community Forum


LXMF Propagation Node - Very low "distribution factor"

LXMF

Started by Anonymous

Anonymous

Distribution factor is 0.15

I am running it with the more or less default config. I increased max peers to 100, but maybe that makes this stat even worse ^^

If I check with `lxmd --peers`, I can see a lot of peers not accepting any messages, with acceptance rates of 0%, while having multiple (hundreds of) unhandled messages.

Is this to be expected? There is very little to configure at the moment, so I am not sure what my options are. Is this due to the increase in nodes and poorly configured prop nodes?

Anonymous

With that many peers, having a couple of hundred unhandled messages is entirely normal. An acceptance rate of 0% means that another node delivered the messages you were offering first, so the peer didn't want any of them. This is also not uncommon when you have a lot of peers: you're slightly increasing the time between syncs to each peer, so there's a higher chance someone else delivers the messages you had waiting first, lowering your acceptance rate. Having a lot of peers with a 0% acceptance rate is a good sign that you can lower your number of peers.

Any distribution factor over 0 is helping the network, so 0.15 is not bad as such. A factor of 1 means that for each message your node receives, it delivers it to one other node that needed that message. So right now, about 15% of the messages that reached your node were passed on to other nodes that needed them.

You can probably improve this by actually lowering your peer count, keeping your node well connected and online as much as possible, and just waiting a bit for it to discover which other nodes are best to try delivering to :)
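The arithmetic above can be sketched in a few lines, assuming the distribution factor is simply onward deliveries divided by messages received (an illustrative interpretation, not lxmd's actual internals):

```python
def distribution_factor(messages_received: int, messages_passed_on: int) -> float:
    """Sketch: onward deliveries per received message."""
    if messages_received == 0:
        return 0.0
    return messages_passed_on / messages_received

# A node that received 1000 messages and delivered 150 of them
# onward to peers that still needed them has DF = 0.15:
print(distribution_factor(1000, 150))  # → 0.15
```

So a DF of 3.42 would mean each received message was, on average, passed on to more than three peers that needed it.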

Anonymous 19b8963d0633a562...

I have a well connected node set up for 80 peers (usually fewer than that are available) and I've been seeing DF hovering around 8. After I updated LXMF my stats got partially clobbered on disk (not sure why) and now my DF is closer to 60 😎

Anonymous

Anonymous wrote:

With that many peers, having a couple of hundred unhandled messages is entirely normal. [...] You can probably improve this by actually lowering your peer count, keeping your node well connected and online as much as possible. And just waiting a bit for it to discover what other nodes are best to try delivering to :)

But wouldn't it be a good thing for the network to have a lot of peers in general? In terms of resiliency at least, maybe not in terms of efficiency. If other nodes closer to the client deliver faster, that's fine; if I have resources for a bigger node, I can store the messages as a sort of backup. That's how this works, I hope :)

I noticed that all my peer slots fill up almost immediately, so I kept increasing the peer number. It feels like this uses so few resources that I could just keep going. Is there a reason the defaults are so limited?

xf302 1bda10f601e48c7e...

Is there a reason the defaults are so limited?

I think there are two reasons for this. One, this lets lxmd work on a wide range of hardware straight out of the box.

Two, Reticulum isn't exactly widespread yet. Yes, there's activity, but only so many messages are actually going around. You're effectively "fighting" other nodes for the right to process a limited number of messages, and the more you try to do, the more time you give other PNs to potentially "win" through superior processing power, better network conditions, or some combination of factors.

In other words: supply is high, demand is low.

If we were a cryptocurrency that would maybe be an issue, and various idiots would be freaking out right about now. Luckily Reticulum is only adjacent to economics, not drowning in it.

I can suggest a few things though:

  • Try tweaking the peering/stamp costs. Not sure it'll help here, but it's worth a try.
  • The autopeering depth setting seems very relevant for building a deeply connected network. It might have diminishing returns at very high values though, not sure.
  • If you can, configure your node for Yggdrasil and/or I2P for even greater bridging.

I have a PN running on a VPS, here's some selected settings/stats:

```
autopeer_maxdepth = 8
propagation_stamp_cost_target = 20
propagation_stamp_cost_flexibility = 3
peering_cost = 20
remote_peering_cost_max = 26
max_peers = 28
```

[...]

```
Peers   : 28 total (peer limit is 28)
          25 discovered, 3 static
          12 available, 16 unreachable

Distribution factor is 3.42
```

That's after 4d 7h of uptime. Maybe try these specific settings?

Anonymous

But wouldn it be a good thing for the network to have a lot of peers in general?

Yes, having many different propagation nodes on the network is completely fine, and a good thing in terms of resilience, especially if most of them actually stay available and online as much as possible. Keeping your node up and available will make other nodes prioritize keeping it as a peer; you will receive new messages faster and therefore also get a higher distribution factor, since more nodes will want the messages you have.

I experienced that all my peer slots almost immediately fill up

Maybe a bit counterintuitive, but this is actually a good thing. Once your peer slots fill up, lxmd will start prioritizing which peers are the best to keep, and which ones you should probably not spend time trying to deliver to. If you have fast enough hardware, you can set it to 50 peers or so, but let it sit there for a while. The full prioritization and distribution quality mapping doesn't kick in until your peer limit has been reached.
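The idea of prioritizing once the limit is hit can be sketched as ranking peers and keeping the best; the field names and scoring here are purely illustrative, not lxmd's actual logic:

```python
# Hypothetical sketch: rank peers by acceptance rate, then availability,
# and keep only the top `limit`. lxmd's real criteria may differ.
def keep_best_peers(peers, limit):
    ranked = sorted(
        peers,
        key=lambda p: (p["acceptance_rate"], p["uptime_ratio"]),
        reverse=True,
    )
    return ranked[:limit]

peers = [
    {"name": "a", "acceptance_rate": 0.0, "uptime_ratio": 0.90},
    {"name": "b", "acceptance_rate": 0.4, "uptime_ratio": 0.70},
    {"name": "c", "acceptance_rate": 0.1, "uptime_ratio": 0.99},
]
best = keep_best_peers(peers, 2)
print([p["name"] for p in best])  # → ['b', 'c']
```

The point is that the ranking only becomes meaningful once there are more candidates than slots, which is why a full peer list is a prerequisite rather than a problem.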

Is there a reason the defaults are so limited?

Mostly to keep the defaults sane on small systems like a Raspberry Pi. But even a node with only 5 or 10 peers helps out: if each of those other nodes also has just 10 peers, the overall connectivity coverage scales and saturates quickly.
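The "saturates quickly" claim is just exponential fan-out. A rough upper bound, assuming every node peers with the same number of distinct nodes and ignoring overlap:

```python
def reachable_nodes(peers_per_node: int, hops: int) -> int:
    """Upper bound on nodes reachable within `hops` peering steps,
    assuming each node has `peers_per_node` distinct peers and no overlap."""
    return sum(peers_per_node ** h for h in range(1, hops + 1))

# With only 10 peers each, two peering hops already cover up to
# 10 + 100 = 110 nodes, and three hops up to 1110:
print(reachable_nodes(10, 2))  # → 110
print(reachable_nodes(10, 3))  # → 1110
```

Real networks have heavy overlap between peer sets, so actual coverage is lower, but the growth is fast enough that modest defaults still knit the network together.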

Some good observations and recommendations from xf302 there.
