Learning by doing: Adding routes to Neutron

This post outlines how to add static routes to a Neutron router. The outcome will allow the jumphost to reach VMs and networks advertised behind the SRX. My lab environment has some server infrastructure and jump hosts on one network. Because Neutron routing is very plain, I could not dynamically peer the SRX with the Neutron gateway.

First, list the routers in my project:

[email protected]:~$ neutron router-list
| id                                   | name            | external_gateway_info                                                       |
| 27d89917-bb77-46c3-95d5-250a259ba304 | public_router   | {"network_id": "083ad060-d6dd-4e49-84e1-c8a2259982ff", "enable_snat": true} |
| 60aefbeb-d2f2-4daf-91b2-6f59391bfee5 | external_router | {"network_id": "083ad060-d6dd-4e49-84e1-c8a2259982ff", "enable_snat": true} |
| a41a761d-9ee1-449d-80be-3ea0f599c4f9 | isolated_router | {"network_id": "083ad060-d6dd-4e49-84e1-c8a2259982ff", "enable_snat": true} |

The router I want to use is the isolated_router. The ID is a41a761d-9ee1-449d-80be-3ea0f599c4f9.
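
When scripting against the CLI, the ID can be pulled straight out of that table. A minimal sketch, assuming the table has been saved to a file (the rows below are a trimmed copy of the output above):

```shell
# Save a trimmed copy of the router-list table, then extract the ID
# of the router named "isolated_router" with awk.
cat > /tmp/router-list.txt <<'EOF'
| 27d89917-bb77-46c3-95d5-250a259ba304 | public_router   |
| 60aefbeb-d2f2-4daf-91b2-6f59391bfee5 | external_router |
| a41a761d-9ee1-449d-80be-3ea0f599c4f9 | isolated_router |
EOF

ROUTER_ID=$(awk -F'|' '$3 ~ /isolated_router/ {gsub(/ /, "", $2); print $2}' /tmp/router-list.txt)
echo "$ROUTER_ID"
```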

The image below shows the rough network environment.



The three networks attached to the Distributed Logical Router (DLR) are unknown beyond the edge of the SRX. WIN-MGT, on the management side, has no knowledge of them; it can only see the interface of the SRX on its own network. We need to teach the Neutron router that routes between these two networks about the networks behind the SRX.

This can be done by updating the Neutron router.

[email protected]:~$ neutron router-update a41a761d-9ee1-449d-80be-3ea0f599c4f9 --routes type=dict list=true destination=,nexthop= destination=,nexthop= destination=,nexthop= destination=,nexthop= destination=,nexthop=
Updated router: a41a761d-9ee1-449d-80be-3ea0f599c4f9
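
The destination and next-hop values in the command above are blank in this post, so here is a dry-run sketch of the --routes syntax using placeholder addresses (RFC 5737 documentation ranges, not the lab's real values). The command is only echoed, not executed:

```shell
# Build the router-update command with placeholder routes.
# NEXTHOP would be the SRX interface on the Neutron network;
# the destinations would be the networks advertised behind it.
ROUTER_ID="a41a761d-9ee1-449d-80be-3ea0f599c4f9"
NEXTHOP="192.0.2.1"
DESTINATIONS="198.51.100.0/24 203.0.113.0/24"

CMD="neutron router-update $ROUTER_ID --routes type=dict list=true"
for dest in $DESTINATIONS; do
  CMD="$CMD destination=$dest,nexthop=$NEXTHOP"
done
echo "$CMD"
```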

The result when we look at the Neutron router again is much better.

[email protected]:~$ neutron router-show a41a761d-9ee1-449d-80be-3ea0f599c4f9
| Field                 | Value                                                                       |
| admin_state_up        | True                                                                        |
| distributed           | False                                                                       |
| external_gateway_info | {"network_id": "083ad060-d6dd-4e49-84e1-c8a2259982ff", "enable_snat": true} |
| id                    | a41a761d-9ee1-449d-80be-3ea0f599c4f9                                        |
| name                  | isolated_router                                                             |
| routes                | {"destination": "", "nexthop": ""}            |
|                       | {"destination": "", "nexthop": ""}            |
|                       | {"destination": "", "nexthop": ""}            |
|                       | {"destination": "", "nexthop": ""}            |
|                       | {"destination": "", "nexthop": ""}             |
| status                | ACTIVE                                                                      |
| tenant_id             | c3485cfe92be4f47852db87ca06b4383                                            |

As you can see, there is a new field containing the routes I have programmed into my Neutron router. I now have connectivity from the jumphost's network into the networks advertised off the DLR. Between the SRX and the DLR sits an ECMP fabric.

mgt-lnxjump (                   Tue Jul 28 00:32:10 2015
Keys:  Help   Display mode   Restart statistics   Order of fields
   quit                 Packets               Pings
 Host                 Loss%   Snt   Last   Avg  Best  Wrst StDev
 1.      0.0%   173    0.5   0.3   0.2   4.7   0.3
 2.    0.0%   173    6.6   8.0   1.2  11.6   2.3
 3.        0.0%   173    3.9   4.0   1.1  23.4   2.3
 4. ???
 5.      0.0%   172    7.9   8.7   5.9  22.9   2.1

End-to-end connectivity. We can see that hop three is the edge currently passing traffic (E3). If it drops or is turned off, this hop will be updated with one of the other edges. ECMP is great!

Gotcha: Neutron doesn't append static routes each time you execute the command; it replaces the whole list. Ensure every route you still need is included in each update, or you may have some connectivity issues!
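
Because the list is replaced wholesale, a safe habit is to carry every existing route into the update and only append the new one. A sketch of that, again with placeholder values and echoed rather than run:

```shell
# Existing routes (placeholders) plus one new route -- all of them
# must appear in the update, or the omitted ones will be deleted.
EXISTING="destination=198.51.100.0/24,nexthop=192.0.2.1 destination=203.0.113.0/24,nexthop=192.0.2.1"
NEW_ROUTE="destination=192.0.2.128/25,nexthop=192.0.2.1"

CMD="neutron router-update a41a761d-9ee1-449d-80be-3ea0f599c4f9 --routes type=dict list=true $EXISTING $NEW_ROUTE"
echo "$CMD"
echo "routes in update: $(echo "$CMD" | grep -o 'destination=' | wc -l | tr -d ' ')"
```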

The alternative is to assign host routes under a DHCP scope. This is pretty easy: a host route is a DHCP option passed to an instance on boot that allows an allocation of pre-defined static routes. In my case, though, my instance had already spawned, and the other machine accessing this environment was not a Nova instance and therefore never received an IP from Neutron's DHCP service.
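
For completeness, the subnet-level version looks like this (placeholder subnet name and addresses, echoed as a dry run). Neutron pushes host_routes to DHCP clients as the classless static route option when they boot or renew:

```shell
# Dry-run sketch: attach host routes to a subnet so DHCP clients
# learn them on boot. The subnet name and addresses are placeholders.
SUBNET="mgt_subnet"
CMD="neutron subnet-update $SUBNET --host-routes type=dict list=true destination=198.51.100.0/24,nexthop=192.0.2.1"
echo "$CMD"
```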

There you are: connectivity to my remote network. OpenStack is pretty powerful!

Creating a Neutron port and attaching to an Instance

This post will demonstrate the flexibility of creating a new port on a Neutron network and attaching it to an existing Instance. The Instance this port is attaching to is a nested vSphere host so I will run the additional commands listed in this [article].

Creating the port

The first task is to create a port in Neutron. The command neutron port-create _lab_transport_MGT creates a new Neutron port on the existing network named _lab_transport_MGT.

[email protected]:~$ neutron port-create _lab_transport_MGT
Created a new port:
| Field                 | Value                                                                               |
| admin_state_up        | True                                                                                |
| allowed_address_pairs |                                                                                     |
| binding:vnic_type     | normal                                                                              |
| device_id             |                                                                                     |
| device_owner          |                                                                                     |
| fixed_ips             | {"subnet_id": "011821e4-bab3-4987-9eeb-7350f363daf2", "ip_address": ""} |
| id                    | a2815656-76f7-46d4-9d11-057063db1a14                                                |
| mac_address           | fa:16:3e:3f:fe:a9                                                                   |
| name                  |                                                                                     |
| network_id            | d84ce932-b101-4301-98fe-70888792debb                                                |
| port_security_enabled | True                                                                                |
| security_groups       | d96a7dc2-6cf0-429b-ba89-ec74c151baa1                                                |
| status                | DOWN                                                                                |
| tenant_id             | c3485cfe92be4f47852db87ca06b4383                                                    |
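
The two values needed later are the port ID (for nova interface-attach) and the MAC address (for validation). A small sketch pulling both out of a saved, trimmed copy of the table above:

```shell
# Trimmed copy of the port-create output saved to a file.
cat > /tmp/port.txt <<'EOF'
| id          | a2815656-76f7-46d4-9d11-057063db1a14 |
| mac_address | fa:16:3e:3f:fe:a9                    |
EOF

PORT_ID=$(awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}' /tmp/port.txt)
PORT_MAC=$(awk -F'|' '$2 ~ /mac_address/ {gsub(/ /, "", $3); print $3}' /tmp/port.txt)
echo "port=$PORT_ID mac=$PORT_MAC"
```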

Attaching the port to an instance

Now that the port is created it needs to be attached to an instance. This can be done while the instance is running.

nova interface-attach --port-id a2815656-76f7-46d4-9d11-057063db1a14 mgt_esx4

This uses nova to attach the newly created Neutron port to the instance named mgt_esx4.

Whether the new network interface appears without a restart depends on the instance's image. There is no hot-add NIC support within ESX, so I require a reboot for my changes to appear.

To validate that the port has attached to the instance I could do many things. I have chosen to check the MAC address against Neutron to see which port has registered. The MAC address should match the one listed in the Neutron port we created at the start.

[email protected]:~$ neutron port-list | grep fe:a9
| a2815656-76f7-46d4-9d11-057063db1a14 | | fa:16:3e:3f:fe:a9 | {"subnet_id": "011821e4-bab3-4987-9eeb-7350f363daf2", "ip_address": ""} |
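
The same check can be run from inside the guest: the Neutron-allocated MAC should show up on one of its NICs. A sketch against sample ip link output (the interface names and the first MAC below are made up):

```shell
# Sample guest `ip link` output -- eth1 carries the Neutron port MAC.
cat > /tmp/guest-ip-link.txt <<'EOF'
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether fa:16:3e:aa:bb:cc brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether fa:16:3e:3f:fe:a9 brd ff:ff:ff:ff:ff:ff
EOF

PORT_MAC="fa:16:3e:3f:fe:a9"
if grep -q "$PORT_MAC" /tmp/guest-ip-link.txt; then
  echo "MAC present on a guest NIC"
fi
```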

There! Nice and easy. Restart this particular instance and you have added a network interface to it. In this case it was a dedicated NIC for VXLAN traffic in a nested NSX lab.