Rate Limiting at the Edge with HAProxy

September 22nd, 2021 | product and technology, developer

Here at OneLogin, we use HAProxy as our primary ingress load balancer. HAProxy has a robust feature set and plenty of extensibility through Lua. In this article, I’ll cover one of those features, stick tables, and show you how to use them to implement a simple rate limiter. We’ll walk through a real-world example of one of the ways we’ve implemented them. Before we jump in, it should be noted that these are *examples* and should not simply be copied and pasted into a production configuration. Now that that’s out of the way, let’s cover some of the basics.

Stick Table Overview

So, what are stick tables? They are HAProxy’s way of providing in-memory key-value storage. The underlying data structure is an elastic binary tree, making it extremely performant to add, remove, and fetch data. Since the tables are in-memory, you can’t aggregate them across nodes in a cluster (unless you are using HAProxy Enterprise and taking advantage of its Stick Table Aggregator, wah-wah). You can still share tables across your cluster using peers, but the table’s data is overwritten on each update from the local node. Don’t let this limitation deter you from implementing them! In any case, I’ll only be covering the open-source version in this article.
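As a sketch, wiring up peers looks roughly like this (the peer names, addresses, and table definition here are illustrative assumptions; note that each peer’s name must match that node’s local hostname, or be set explicitly with the ‘-L’ startup flag):

```haproxy
peers sync_group
  peer haproxy1 10.0.0.11:10000
  peer haproxy2 10.0.0.12:10000

backend table_example
  # The 'peers' keyword tells HAProxy to push this table's updates to the peer group.
  stick-table type ip size 1m expire 60s store http_req_rate(60s) peers sync_group
```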

So what are the use cases for stick tables? The one we’ve implemented the most has been high-level burst rate limiting. Since all our client traffic traverses our HAProxy clusters, we wanted to build rules in a centralized location without adding another service and more latency to the mix. But if you have any need to track or make decisions based on a counter, a stick table is a great option, and the use cases are plentiful.

The Anatomy of Stick Tables

Let’s dive into some of the fundamentals. Stick tables are where you will store the data from the sticky counter (sc) - more on sticky counters next. Stick tables can be added to ‘listen’, ‘frontend’, or ‘backend’ sections, and each section can only have a single table defined. We’ve found the easiest way to implement each table is as its own backend. Defining each table in a separate backend gives us the ability to have n number of tables. There are two concepts that are important to understand when defining a table. The first is the table type. There are five table types: ‘ip’, ‘ipv6’, ‘integer’, ‘string’, and ‘binary’, and you can only specify a single type per table. The second is the data type; you can store multiple data types on a single table. Think of data types as logical groups of counter information, like conn_rate, http_req_rate, or bytes_out_rate. For example, you can have a table type of ‘ip’ and data types that track the request rate, bytes in and out, and even connections made by an IP on a single table.
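As an illustrative sketch, such a table could look like this (the backend name and time windows are assumptions):

```haproxy
backend table_multi
  # One 'ip'-keyed row per client; each row stores four data types.
  stick-table type ip size 1m expire 10m store http_req_rate(60s),bytes_in_rate(60s),bytes_out_rate(60s),conn_rate(60s)
```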

Sticky counters define what to add to the stick tables. They look like this:

http-request track-sc0 src table name_of_table if { url_beg /login }

There are four parts to the sticky counter:

  1. The sticky counter number, ‘track-sc{ 0 | 1 | 2 }’, which by default in the open-source version gives you three counters
  2. What you want to track; above, we’re tracking the source IP (‘src’)
  3. The table to which you’d like to send the data, ‘table name_of_table’
  4. An optional ACL statement


Example

Now we have some of the basics out of the way, let’s put it into practice. In this example, we will be adding a stick table to track and block IP addresses that are hitting our login page with a high frequency of failed requests, such as 401s.

As I mentioned earlier, since you can only define one table per section, we will create a new backend to hold our table. For the nomenclature, use obvious names to keep the configuration easy to read and review. We use the pattern ‘table_{purpose}’; prepending ‘table_’ makes it obvious that this backend is a table.

backend table_login_limiter
 stick-table type ip size 1m expire 60s store http_req_rate(60s)

Now let’s break the table above apart using what we’ve learned. The table ‘type’ is ‘ip’, the ‘size’ is 1M unique entries, we ‘expire’ each entry after 60 seconds, and we are storing a single data type of ‘http_req_rate’, which gives us the request rate over a 60-second window.

We can now set up our sticky counter and send data from the counter to the stick table. We’ll set up our counter in the frontend section.

frontend http-in
 bind :80
 http-request set-var(txn.path) path
 http-response track-sc0 src table table_login_limiter if { status 401 } { var(txn.path) -m beg /login }

Here we track the client IP any time a user fails to log in and receives a 401 response code. Note that the tracking lives in an ‘http-response’ rule, since the status code isn’t known until the backend replies; we stash the request path in a transaction variable with ‘set-var’ so it’s still available at response time. Next, we’ll check the request rate and send a 429 response when it’s too high. HAProxy offers a fetch method, ‘sc_http_req_rate’, that returns the rate for a counter tracked in the current transaction, but since a new request hasn’t been tracked yet, we instead look the source IP up directly in the table with the ‘table_http_req_rate’ converter, which takes the table name as its parameter.

http-request deny deny_status 429 if { url_beg /login } { src,table_http_req_rate(table_login_limiter) gt 10 }

We now have this simple config, which enables us to easily add a stick table and rate limit potentially bad-acting clients. It will rate limit a client IP address that fails to log in at the ‘/login’ page more than 10 times within 60 seconds.

frontend http-in
  bind :80
  http-request set-var(txn.path) path
  http-request tarpit deny_status 429 if { url_beg /login } { src,table_http_req_rate(table_login_limiter) gt 10 }
  http-response track-sc0 src table table_login_limiter if { status 401 } { var(txn.path) -m beg /login }
  default_backend be_default_server


backend table_login_limiter
  stick-table type ip size 1m expire 60s store http_req_rate(60s)

backend be_default_server
  balance leastconn
  server server_1 127.0.0.1:80

Let’s run a quick for-loop curling the login page. You’ll see we only get 11 requests returning a 401 before the rate limiter kicks in.
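A sketch of such a loop, assuming HAProxy is listening locally on port 80 and, for the table inspection at the end, that the runtime API socket is enabled at /var/run/haproxy.sock (both paths are assumptions for this example):

```shell
# Hit the login page 15 times and print the status code of each response.
for i in $(seq 1 15); do
  curl -s -o /dev/null -w "request $i: %{http_code}\n" "http://127.0.0.1/login"
done

# Optionally, peek at the stick table via the runtime API to see the tracked entry.
echo "show table table_login_limiter" | socat stdio /var/run/haproxy.sock
```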

Wrapping It Up

This article should give you an idea of what stick tables are and how to use them. In the next post, I’ll cover how we’ve implemented rate limits in more detail and what we’re doing with Consul to streamline configuration updates. While I shared a few of our use cases, I would love to hear how you already use stick tables or how you plan to! Hit us up on Twitter! @OneLoginDev

About the Author

Matt Barrio is a Director of Engineering @ OneLogin. He leads the Site Reliability team, covering OneLogin’s cloud platform while overseeing the entire Engineering organization’s reliability objectives. He is passionate about helping people grow and building things at scale, from processes to systems.

View all posts by Matt Barrio
