linux and networking articles

PPPoE termination on a Juniper MX

This post is about terminating PPPoE sessions dynamically on a Juniper MX. I recently set up dynamic PPPoE termination successfully on an MX series router; the goal was to migrate the PPPoE termination functionality from a Cisco 7206 VXR to one of our Juniper MX routers.

Juniper provides documentation that describes how to set this up.

I stumbled upon several odd things while testing PPPoE that I thought might be worth sharing.

There are several ways to set up PPPoE termination on an MX:

  • Static subscriber management
  • Dynamic subscriber management

When configuring static subscriber management you need to provision a logical interface for each subscriber, which does not scale as the number of subscribers grows.

Dynamic subscriber management is the way I wanted to go because of the number of subscribers we have.

JunOS version 13.3R9 experiences

I started testing on JunOS 13.3R9. Some observations:

  • The documentation was not clear on whether the access-profile needed to be set in the vlan-profile or globally.
  • The available command set was not very useful for serious troubleshooting.
  • The name of the dynamic PPP profile needs to end in -profile; if you use a different name it simply will not work. This is probably documented somewhere internally at Juniper.
  • IPCP DNS configuration for clients was neither configurable nor supported in this release.
  • test aaa commands cannot be used on an MX: you can enter the command, but it won't do a thing. This command is apparently only valid on a Juniper EX.

After this experience a new recommended JTAC version was released, so I continued testing on JunOS 15.1R6.7.

JunOS version 15.1R6.7 experiences

Right after the upgrade from JunOS 13.3R9 to 15.1R6.7, the PPPoE configuration that was previously working stopped working completely.

After taking some packet captures I could see the PADI packets coming in from clients, but the MX stayed dead silent. The counters in "show pppoe statistics" were all at 0, confirming this.

After some debugging I found an error message from auto-configuration.

The reason for this message is that, from JunOS 15.x onwards, dynamic subscriber management apparently only works when you run the chassis in enhanced IP mode. I couldn't find a reference to this in the release notes, but ok… :-)

After setting the chassis to enhanced IP mode, PPPoE started working again! So what does this PPPoE configuration look like?
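For reference, enhanced IP mode is enabled with a single chassis statement (note that changing the network-services mode requires a reboot):

```
set chassis network-services enhanced-ip
```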

The Cisco 7206VXR configuration for PPPoE that I was trying to convert was a fairly standard PPPoE termination setup: a PPPoE bba-group bound to subinterfaces, with a virtual-template terminating the PPP sessions.
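A typical IOS PPPoE termination setup on a 7200 looks roughly like this; interface names, VLAN IDs and the pool name are illustrative, not the exact production config:

```
bba-group pppoe global
 virtual-template 1
!
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 pppoe enable group global
!
interface Virtual-Template1
 ip unnumbered Loopback0
 peer default ip address pool ppp-pool
 ppp authentication chap pap
```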

Configuration of dynamic subscriber management

Here is the configuration I used to set up dynamic subscriber management using dynamic VLANs and dynamic PPP interfaces. The result of this configuration is that end-users can plug and play; the only thing you need to do is create a RADIUS account for the happy end-user.

AAA configuration
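A minimal sketch of the access/RADIUS part, assuming a RADIUS server at 192.0.2.10 and a profile named aaa-profile (both illustrative):

```
access {
    radius-server {
        192.0.2.10 secret "<radius-secret>";
    }
    profile aaa-profile {
        authentication-order radius;
        radius {
            authentication-server 192.0.2.10;
        }
    }
}
```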

System configuration

Before applying anything dynamic-profile related, I enabled versioning on the MX; this allows a dynamic profile to be adjusted while subscribers are online.
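Versioning is a single system statement:

```
set system dynamic-profile-options versioning
```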

Dynamic vlan profile configuration
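A sketch of a dynamic VLAN profile for auto-configured PPPoE VLANs; the profile name is illustrative, and the $junos- predefined variables are used as described in Juniper's subscriber-management documentation:

```
dynamic-profiles {
    vlan-profile {
        interfaces {
            "$junos-interface-ifd-name" {
                unit "$junos-interface-unit" {
                    encapsulation ppp-over-ether;
                    vlan-id "$junos-vlan-id";
                }
            }
        }
    }
}
```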

Dynamic ppp profile configuration
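A sketch of the dynamic PPP profile (remember: the name has to end in -profile). The authentication options and the unnumbered address are illustrative assumptions:

```
dynamic-profiles {
    ppp-profile {
        interfaces {
            pp0 {
                unit "$junos-interface-unit" {
                    ppp-options {
                        chap;
                        pap;
                    }
                    pppoe-options {
                        underlying-interface "$junos-underlying-interface";
                        server;
                    }
                    family inet {
                        unnumbered-address lo0.0;
                    }
                }
            }
        }
    }
}
```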

Interface configuration
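On the physical interface, auto-configure ties incoming VLANs to the dynamic VLAN profile; the interface name and VLAN range here are illustrative:

```
interfaces {
    ge-0/0/0 {
        flexible-vlan-tagging;
        auto-configure {
            vlan-ranges {
                dynamic-profile vlan-profile {
                    accept pppoe;
                    ranges {
                        100-200;
                    }
                }
            }
        }
    }
}
```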

Although it looks easy, it took a while to get things working and to understand how the demux interface works (or doesn't ;-)).

I hope this helps other people setting up PPPoE termination on the MX platform, feel free to comment.

Configuring EVE-NG on VMware

After a successful install of EVE-NG (as a guest) on VMware ESXi 5.1, a couple of notes:

  • EVE-NG as a virtual machine requires Intel VT-x enabled on the host you install it on; this can be configured in the BIOS of the host. You can verify it is enabled on the ESXi host by searching (CTRL+F) for nestedHVSupported.
  • You need to enable CPU VT extensions passthrough in the EVE-NG guest machine configuration. How you configure this depends on the VMware ESXi version you are running; as I am running ESXi 5.1, I adjusted the .vmx file directly.
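The two checks above, sketched concretely; the exact URL and value are from memory and may differ per ESXi version, so verify against your own setup:

```
# Verify nested HV support via the ESXi Managed Object Browser:
#   https://<esxi-host>/mob/?moid=ha-host&doPath=capability
#   (CTRL+F for nestedHVSupported)

# .vmx addition on the EVE-NG guest to pass VT extensions through:
vhv.enable = "TRUE"
```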

Unifi controller on Synology NAS

This post is about how to configure and run the Ubiquiti UniFi controller on a Synology NAS using Docker.

Quite recently I purchased a UAP-AC-PRO access point. Part of the Ubiquiti UniFi solution is the UniFi controller. I first tried installing the controller on an Ubuntu 16.04 (LTS) system shared with other applications, but that didn't work out: the installation failed because of IPv6 running on that box. Normally you would dedicate a VPS to the UniFi controller function. :)

Then I thought about running the UniFi controller on my Synology DS412+ NAS. After some investigation I saw that Docker was available as a package, and that someone had actually published the UniFi controller on Docker Hub.

Here are the steps to get the UniFi controller running on a Synology NAS with DSM 6.0.2-8451 Update 6, using the GUI:

1.) Install Docker from the Package Center.

2.) Open Docker and go to Registry, search for "jacobalberty/unifi" (https://hub.docker.com/r/jacobalberty/unifi/) and click Download.

3.) Within a few minutes (depending on your download speed) the image should be available in the Image tab.

4.) Within the Image tab click Launch and choose a wise name to identify your UniFi controller (mine is called wifi-controller).

5.) Map the network ports 1:1 (TCP/UDP); make sure no other applications are running on these ports (like SABnzbd).

6.) Start the wifi-controller container and access it at: https://<nas-ip>:8443
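For the port mapping in step 5, these are the standard UniFi controller ports as I know them; verify against the image documentation before relying on this list:

```
8080/tcp   device inform
8443/tcp   controller GUI/API (HTTPS)
8843/tcp   HTTPS guest portal
8880/tcp   HTTP guest portal
3478/udp   STUN
```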

Have fun!

MPLS LDP label filtering

This post outlines MPLS LDP label filtering on IOS and IOS-XE. It contains the LDP label filtering configuration and the corresponding output.

Recently we migrated two POPs from VRF Lite to an MPLS-based network. With other non-MPLS POPs still left to migrate, we still have quite a number of prefixes in our IGP.

As LDP assigns labels for IGP routes, we ended up with a lot of labels that were generated and advertised without serving any purpose. This can impact convergence of the network, so we set up LDP label filtering to only generate labels for the PEs that carry L3VPN or AToM xconnects. Label filtering can be used to minimize the number of prefixes in the LIB and to control which labeled prefixes are advertised via LDP.

There are two ways to control LDP label filtering:

  • LDP inbound label filtering (per LDP neighbor configuration)
  • LDP advertised label filtering

The configurations that follow are based on LDP advertised label filtering. The reason for this is that inbound label filtering is error-prone (lots of configuration), and if you solve the problem at the source (advertising labels), it won't affect others. :)

This post assumes the following basic MPLS LDP configuration:
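Something along these lines; addressing and interface names are illustrative:

```
ip cef
mpls label protocol ldp
mpls ldp router-id Loopback0 force
!
interface GigabitEthernet0/0
 ip address 10.0.0.1 255.255.255.252
 mpls ip
```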

The configuration of advertised label filtering starts with a standard access-list matching the prefixes you want to generate labels for. For MPLS L3VPN you basically only want labels for the PE loopbacks.
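For example, assuming the PE loopbacks live in 10.255.255.0/24 (an illustrative range):

```
access-list 10 permit 10.255.255.0 0.0.0.255
```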

Next you need to configure LDP to use this standard access-list:
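Assuming a standard ACL 10 that matches the PE loopbacks, the advertise statement looks like:

```
mpls ldp advertise-labels for 10
```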

The result of this config can be obtained with the following commands:
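Useful verification commands here are:

```
show mpls ldp bindings
show mpls forwarding-table
```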

When checking the results on another PE (PE2), it appeared that the labels in the LIB were still being advertised, even though the prefixes did not match the standard ACL on PE1. So the implicit deny of a standard ACL does not do what you might expect here.

There is one missing command on PE1 to fix this:
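In my reading, the missing piece is disabling the default advertisement of all labels, so that only the for-ACL statement remains active; verify this against your IOS release:

```
no mpls ldp advertise-labels
```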

Total configuration of one PE for LDP label filtering:
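Putting it together, the PE-side filtering configuration then looks like this (the ACL range is illustrative):

```
access-list 10 permit 10.255.255.0 0.0.0.255
!
no mpls ldp advertise-labels
mpls ldp advertise-labels for 10
```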

I hope this helps someone out there. If you have any questions, please comment!


© 2017 ipnetworking.net
