
gnrc/ipv6/nib: automatically create 6ctx for downstream networks #21086

Open · wants to merge 1 commit into master
Conversation

benpicco (Contributor)

Contribution description

When gnrc_ipv6_auto_subnets announces the creation of a downstream subnet to the upstream router, and that router is a 6LBR, it might as well create a compression context for that network.

To also inform the downstream router about this compression context, the upstream router sends another RA back, containing only the 6ctx information (but no prefix information option, to avoid creating a packet ping-pong loop).
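For reference, the 6ctx information travels in the RA as the 6LoWPAN Context Option (6CO) defined in RFC 6775, section 4.2. A sketch of its wire layout (an illustration of the option format, not the gnrc implementation):

```c
/* 6LoWPAN Context Option (6CO) per RFC 6775, section 4.2.
 * Illustration only; not the RIOT gnrc implementation. */
#include <assert.h>
#include <stdint.h>

#define NDP_OPT_6CTX (34U)  /* 6CO option type number */

typedef struct __attribute__((packed)) {
    uint8_t type;       /* option type: 34 */
    uint8_t len;        /* total length in units of 8 octets: 2 or 3 */
    uint8_t ctx_len;    /* length of the context prefix in bits */
    uint8_t resc_cid;   /* 3 bits reserved, C (compression) flag, 4-bit CID */
    uint16_t reserved;
    uint16_t ltime;     /* valid lifetime in minutes (network byte order) */
    /* followed by 8 or 16 bytes of context prefix */
} sixlo_ctx_opt_t;

/* The option's length field is 2 (16 bytes total) for prefixes up to /64,
 * and 3 (24 bytes total) for longer prefixes. */
static uint8_t sixlo_ctx_opt_len(uint8_t prefix_len_bits)
{
    return (prefix_len_bits > 64) ? 3 : 2;
}
```

Both contexts in this PR (/16 and /64) therefore fit in the 16-byte option variant.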

Testing procedure

Run an instance of examples/gnrc_border_router and one of examples/gnrc_networking with both ZEP and TAP enabled (I used two instances of the border router with #21081 to enable the ABR functionality of only one at run-time).

Start the 6LBR on demand:
static gnrc_netif_t *_get_6lo_interface(void)
{
    gnrc_netif_t *netif = NULL;
    while ((netif = gnrc_netif_iter(netif))) {
        if (gnrc_netif_is_6ln(netif)) {
            return netif;
        }
    }

    return NULL;
}

static int cmd_start_network(int argc, char **argv)
{
    (void)argc;
    (void)argv;

    gnrc_netif_t *downstream = _get_6lo_interface();
    if (!downstream) {
        puts("can't find 6lo interface");
        return 1;
    }

    ipv6_addr_t prefix;
    int len = ipv6_prefix_from_str(&prefix, "fd12::/16");
    if (len <= 0) {
        puts("can't parse prefix string");
        return 1;
    }

    /* configure subnet on downstream interface */
    int idx = gnrc_netif_ipv6_add_prefix(downstream, &prefix, len,
                                         UINT32_MAX, UINT32_MAX);
    if (idx < 0) {
        DEBUG("adding prefix to %u failed\n", downstream->pid);
        return 1;
    }

    /* enable ABR role and router advertisements on the interface */
    netopt_enable_t enable = NETOPT_ENABLE;
    gnrc_netapi_set(downstream->pid, NETOPT_6LO_ABR, 0, &enable, sizeof(enable));
    gnrc_netapi_set(downstream->pid, NETOPT_IPV6_SND_RTR_ADV, 0, &enable, sizeof(enable));

    return 0;
}
SHELL_COMMAND(leader, "Make the node a leader", cmd_start_network);
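A hypothetical sketch of how a downstream /64 could be derived from the announced base prefix: keep the first `base_len` bits of the base prefix and fill the remaining bits of the upper 64 from a per-node identifier such as an EUI-64. The actual derivation used by gnrc_ipv6_auto_subnets may differ; the function and names below are illustrative.

```c
/* Hypothetical illustration: derive the upper 64 bits of a subnet
 * prefix from a base prefix and a per-node EUI-64. Not the actual
 * gnrc_ipv6_auto_subnets algorithm. */
#include <stdint.h>
#include <string.h>

static void derive_subnet(uint8_t out[8], const uint8_t base[8],
                          uint8_t base_len, const uint8_t eui64[8])
{
    memcpy(out, eui64, 8);                /* start from the node ID */
    for (unsigned i = 0; i < base_len; i++) {
        uint8_t mask = 0x80 >> (i % 8);   /* overwrite the first    */
        out[i / 8] &= ~mask;              /* base_len bits with the */
        out[i / 8] |= base[i / 8] & mask; /* base prefix            */
    }
}
```

With a fd12::/16 base prefix, any such scheme yields a node-specific fd12:xxxx:xxxx:xxxx::/64, matching the fd12:1284:c87:1fb7::/64 seen below.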

The downstream node receives the fd12::/16 prefix, creates a fd12:1284:c87:1fb7::/64 prefix from it and sends this to the upstream router. The upstream router creates a compression context and responds with yet another RA that contains the updated list of compression contexts.

6ctx on both nodes shows the same information:

cid|prefix                                     |C|ltime
-----------------------------------------------------------
  0|                                 fd12::/16 |1| 6046min
  1|                   fd12:1284:c87:1fb7::/64 |1| 6046min
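When compressing an address, IPHC picks the context with the longest matching prefix, so addresses in fd12:1284:c87:1fb7::/64 compress against CID 1 rather than CID 0. A minimal illustration of that selection (not the gnrc code):

```c
/* Illustration of longest-prefix context selection for IPHC.
 * Not the gnrc 6lowpan implementation. */
#include <stdint.h>

typedef struct {
    uint8_t cid;           /* context ID */
    uint8_t prefix[16];    /* context prefix */
    uint8_t prefix_len;    /* prefix length in bits */
} ctx_t;

/* true iff the first prefix_len bits of addr match the context prefix */
static int prefix_matches(const uint8_t *addr, const ctx_t *ctx)
{
    for (unsigned i = 0; i < ctx->prefix_len; i++) {
        uint8_t mask = 0x80 >> (i % 8);
        if ((addr[i / 8] & mask) != (ctx->prefix[i / 8] & mask)) {
            return 0;
        }
    }
    return 1;
}

/* returns the CID of the longest matching context, or -1 if none match */
static int best_ctx(const uint8_t *addr, const ctx_t *ctxs, unsigned n)
{
    int best = -1;
    uint8_t best_len = 0;
    for (unsigned i = 0; i < n; i++) {
        if (prefix_matches(addr, &ctxs[i]) && ctxs[i].prefix_len >= best_len) {
            best = ctxs[i].cid;
            best_len = ctxs[i].prefix_len;
        }
    }
    return best;
}
```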

Issues/PRs references

@github-actions github-actions bot added Area: network Area: Networking Area: sys Area: System labels Dec 13, 2024
@@ -1787,6 +1787,28 @@ static const char *_prio_string(uint8_t prio)
return "invalid";
}

static gnrc_pktsnip_t *_build_ctxs(_nib_abr_entry_t *abr)
Member:
Shouldn't there already be a function like this somewhere in the code base?

benpicco (Contributor Author):
Only a function that adds all options.

Member:
To avoid code duplication, does it make sense to call the function at hand there as well?


fabian18 commented Jan 11, 2025

There are two things implied here that are atypical for 6lowpan networks.

  1. Usually prefixes and compression contexts are originated at a 6LBR and disseminated across 6LRs to the hosts over the entire lowpan.
  2. There is no subnetting in a lowpan as far as I know. To my understanding, this would work against a flexible mesh topology.

A lowpan can be spanned by N 6LBRs, which then have to be synchronized in prefixes and compression contexts.
The participation of a 6LN in multiple lowpans is outside the scope of RFC 6775. I think you would need to take care that lowpan information does not mix.

The auto_subnet_eui feature on a 6LBR could be used to span multiple lowpans (::/64), I thought, when all the 6LBRs are getting the fd12::/16. You know that the ::/64 lowpans are unique, as they are constructed from a unique EUI. The context 0 disseminated in a network would correspond to the EUI of its 6LBR.

Or, when you want to introduce multiple border routers per lowpan, e.g. one on the left and one on the right edge, there are two compression contexts, one for each ::/64 net constructed from an EUI.

Further isolation of lowpans could perhaps happen over PAN IDs.

What is the problem you are trying to solve?

benpicco (Contributor Author)

What is the problem you are trying to solve?

The 6LoWPAN is the 'backbone' network between units. Each unit consists of multiple boards, one being the radio box, which participates in the 6LoWPAN network. The other nodes on that unit are connected to the radio box via Ethernet.

So each unit gets their own subnet and since those subnets are reached over 6lo, I thought it fitting to also add compression contexts for them.

[6LBR] - - - - [6LR]----[Node A]
(fd12::/16)    (fd12:1284:c87:1fb7::/64) 
   ¦             |
   ¦             \------[Node B]
   ¦
    \ - - - - - [6LR]---[Node C]
                (fd12:c552:5e89:6e19::/64)
                  |
                  \-----[Node D]
