This module talks primarily about distributed denial-of-service attacks and how the routing infrastructure can be used to provide some effective mitigation. Over the last 15 years, coordinated distributed attacks have really grown in size, in scale, and in overall sophistication. Whereas 20 years ago it was common to see 100 megabits per second, or maybe a gigabit or two, of distributed denial-of-service attack traffic, today you can see 100 gigabits per second, or even two or three hundred, and we have seen up to 400 gigabits per second of illegitimate traffic. That leaves your infrastructure at risk. It is an overwhelming amount of data, so what can we do about it?

This picture shows, in very simple form, an automated distributed denial-of-service attack. By compromising hundreds of thousands or millions of the hosts available today, an attacker can use them all to launch a massive DDoS attack against a specific network infrastructure or even against specific targeted hosts.

One of the problems for ISPs is that this can really overwhelm router CPU cycles. Even on very high-end architectures, small-packet processing is taxing on any type of CPU. And while filtering can be useful, it also carries a CPU hit, so you really want to mitigate these kinds of distributed denial-of-service attacks in a sophisticated fashion that has the least impact on the resources of your network infrastructure.

One thing you can do to defend against distributed denial-of-service attacks is to apply packet filters at the customer site. The problem is that by then the packets have already traversed the link, the link is already swamped, and you can run out of bandwidth for legitimate traffic. So that is not the most effective solution.
You could filter at the ISP side, but this requires human intervention, requires massive amounts of CPU cycles to do the filtering, and quite frankly does not scale. So one of the mechanisms that ISPs have come up with, working with vendors, is to manually null route all the traffic to the IP address under attack, basically discarding all of that traffic. There is a technique built on this that I am going to talk about called remotely triggered black hole (RTBH) filtering.

So let's now look at how DDoS mitigation can be done using remotely triggered black hole filtering. The primary concept is that BGP is used to trigger a network-wide response: it exploits the router's forwarding logic to drop packets. The packets are forwarded to a null interface, configured as Null0 on most devices and also called the discard interface. If a route points to a null interface as the next hop, the packet gets dropped at the router. This technique is very effective against both spoofed and valid source IP addresses. Unfortunately, we still see a lot of denial-of-service attacks that use spoofed (i.e. forged) source IP addresses, so we do have to deal with both spoofed and valid ones.

What is really important to understand is that RTBH filtering gives you extremely fast response times when you are filtering across a network-wide infrastructure. Just a static route and pre-configured BGP allow an ISP to trigger network-wide filtering as fast as iBGP can update the network, and speed is of the essence because you want to drop the attack traffic as quickly as possible.

Now, this particular technique has existed for nearly 18 years. It has been operationally used since the early 2000s; the first time I ever gave a workshop that included RTBH was back in 2005, when I did some training in Kyoto at an APNIC meeting. So it really has existed for a very long time.
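To make the trigger mechanism concrete, here is a minimal sketch of the classic destination-based RTBH setup in Cisco IOS-style configuration. All addresses, the AS number (64500), and the route tag (66) are hypothetical values chosen for illustration, not values from the talk.

```
! --- On every edge router, pre-configured ahead of time ---
! 192.0.2.1 is an otherwise unused "black hole" next-hop address;
! any route that resolves to it is discarded by the forwarding logic.
ip route 192.0.2.1 255.255.255.255 Null0

! --- On the trigger router: redistribute tagged statics into iBGP ---
router bgp 64500
 redistribute static route-map RTBH-TRIGGER

route-map RTBH-TRIGGER permit 10
 match tag 66
 set ip next-hop 192.0.2.1
 set local-preference 200
 set community no-export
 set origin igp

! --- During an attack: one command triggers network-wide dropping ---
! Traffic to the host under attack (203.0.113.55 here) is discarded
! at every edge as soon as iBGP propagates the route.
ip route 203.0.113.55 255.255.255.255 Null0 tag 66
```

Removing that last static route withdraws the BGP announcement and restores normal forwarding, which is what makes the response both fast to deploy and fast to undo.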
The standardization took a little longer. The first informational RFC, "Configuring BGP to Block Denial-of-Service Attacks" (RFC 3882), came out in 2004. Five years later, enhancements were added in "Remote Triggered Black Hole Filtering with uRPF" (RFC 5635), which also added source-based filtering; I will talk about that in detail in a bit. Then in 2016 a standard was added specifically for a black hole BGP community (RFC 7999). So while the standardization took a while, operationally the technique has been used effectively for over 15 years.

This particular slide shows how black hole filtering gives you a CPU advantage over packet filters. A router has something called the forwarding information base (FIB), which tells you the next hop toward a particular destination, and it can also have packet filters that drop certain traffic. With black hole filtering, when a packet arrives you just look up the next hop in the forwarding information base, and if the next hop is the null interface the packet gets dropped. This saves on CPU cycles and on the processing of access control lists (ACLs), if you are using Cisco routers: you drop the packet after a quick look into the forwarding information base, so it is a really nice technique that saves CPU cycles.

One thing I really want to emphasize is that a combination of packet filters and RTBH is the best solution for mitigating distributed denial-of-service attacks. Packet filtering's strengths include detailed filtering: you can match on ports, protocols, IP ranges, fragments, what have you. And you want to enlist the support of upstream ISPs as you apply these packet filters. But packet filters do have weaknesses. They are operationally challenging when changes are frequent, and they are very difficult to deploy simultaneously on multitudes of interfaces.
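The 2016 standard defines the well-known BLACKHOLE community 65535:666. As an illustration of the community-based signaling, here is a hypothetical sketch of a customer announcing just the attacked /32 to its upstream tagged with that community, and the upstream matching it and setting the black-hole next hop. The prefixes, AS numbers, and neighbor address are invented for the example.

```
! --- Customer side: announce the attacked /32 with BLACKHOLE ---
ip route 203.0.113.55 255.255.255.255 Null0

router bgp 64501
 network 203.0.113.55 mask 255.255.255.255
 neighbor 198.51.100.1 remote-as 64500
 neighbor 198.51.100.1 send-community
 neighbor 198.51.100.1 route-map TO-UPSTREAM out

ip prefix-list ATTACKED permit 203.0.113.55/32

route-map TO-UPSTREAM permit 10
 match ip address prefix-list ATTACKED
 set community 65535:666 additive
route-map TO-UPSTREAM permit 20

! --- Provider side: match the community, set the black-hole next hop ---
! (192.0.2.1 must already be routed to Null0 on the provider's routers.)
ip community-list standard BLACKHOLE permit 65535:666

route-map FROM-CUSTOMER permit 10
 match community BLACKHOLE
 set ip next-hop 192.0.2.1
route-map FROM-CUSTOMER permit 20
```

In practice the provider would also verify that any black-holed /32 falls inside address space the customer is actually authorized to announce, so a customer cannot black-hole someone else's prefix.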
So typically you want packet filters in environments where things do not change very often and are fairly static, while RTBH handles dynamic events like denial-of-service attacks, and you can also use uRPF to handle source-based drops.

Some last points regarding remotely triggered black hole filtering: it is a very effective method to automate large-scale filtering; it enlists the support of upstream ISPs; it is very lightweight on resources, especially CPU cycles; and it uses BGP communities for signaling between customers and transit providers. I think that to be a really good internet citizen we all have to look at how we can help each other, and distributed denial-of-service attacks really are growing at a rate where I hope all ISPs will work together to help solve the problem. Good luck.
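The source-based drops mentioned above can be sketched as follows, again in hypothetical Cisco IOS-style configuration. With loose-mode uRPF enabled on edge interfaces, null-routing the attacker's source prefix makes the reverse-path check fail, so packets *from* that source are discarded at the edge. The interface name and the example attacker prefix (198.18.0.0/24) are invented for illustration.

```
! --- On edge routers: loose-mode uRPF on external interfaces ---
! "reachable-via any" passes a packet only if some route to its
! source address exists and does not point at Null0.
interface GigabitEthernet0/0
 ip verify unicast source reachable-via any

! Pre-configured black-hole next hop, as in the destination-based case
ip route 192.0.2.1 255.255.255.255 Null0

! --- On the trigger router: null-route the attack *source* prefix ---
! The tagged static is redistributed into iBGP exactly as before;
! once 198.18.0.0/24 resolves to Null0 network-wide, loose uRPF
! drops every packet sourced from that prefix at the edges.
ip route 198.18.0.0 255.255.255.0 Null0 tag 66
```

This is the enhancement that RFC 5635 describes: the same BGP trigger machinery, reused to drop by source instead of destination, which is useful when the attack comes from a manageable set of real (non-spoofed) sources.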

© Produced by Philip Smith and the Network Startup Resource Center, through the University of Oregon.

Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
This is a human-readable summary of (and not a substitute for) the license. Disclaimer.

You are free to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material
The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
NonCommercial — You may not use the material for commercial purposes.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.