Operating in the Margins - Experimenting with Suricata Features

Hi everyone,

I have a bunch of news and announcements regarding various projects and research I’ve been up to over the past few weeks. This post covers new information about some older side projects, a few brand-new side projects, and interesting new features in Suricata.

Project Avenger

I’m going to kick things off by talking about a new project I’m starting, as it’s kind of the catalyst that kicked off all of the other activity I want to talk about today. At work, we have goals that we set for ourselves every year. Kinda like a new year’s resolution, if you will.

One of the things I wanted to do was look at features and functionality in Suricata that fall outside of what we do on the Emerging Threats team.

If you haven’t been on this forum for a while, the wiki subforum details a lot of our standards around rule writing, QA testing, metadata tagging, etc. – we aren’t afraid to tell you how the sausage is made.

To that end, there are quite a few things Suricata can do that we don’t write configurations or detection for. I know I’ve mentioned this in the past, but just to reiterate: the Emerging Threats ruleset is standardized on a couple of versions of Suricata and Snort. We provide three rulesets:

  • A ruleset that can utilize keywords and protocol headers available in Suricata 5.0
  • A ruleset that can utilize keywords and protocol headers available in Suricata 7.0
  • A ruleset that, in general, is compatible with supported versions of Snort 2.9.x

If you play tabletop RPGs like Dungeons and Dragons, think of it like “we standardize on the D&D 3.5 ruleset”, or the D&D 5 ruleset, etc. That means that whatever keywords and protocol support come out in new releases, we can’t use immediately unless we commit to providing a “rule fork” that supports the features of that new version of Suricata, like how we did for Suricata 7.0.3, dropping support for Suricata 4 in the process.

In addition to all of that, our ruleset is designed so that it can be used with a default snort.conf or suricata.yaml. If users want or need to make any customizations, that is entirely up to them. The only thing a user should need to do to get started with our ruleset is point the config file (or a command-line argument) to the Emerging Threats ruleset. As you might imagine, that precludes us from getting involved with a lot of the other features available – at least for right now.

That brings me to the Avenger project. This project of mine is aiming to change that, or at least start experimenting with those other features, and introduce our users to them in order to help address issues that are slightly outside of what the ET rulesets can do currently.

Right now, this github repo is very sparse, but I’m hoping to change that. But before we get into that, let’s talk about Suricata 8’s support for websockets, and how that enabled coverage for a pair of BeyondTrust product CVEs.

Websocket Support in Suricata 8

I’ll give you a recent example. Suricata 8 introduced the websocket protocol header (e.g., alert websocket $HOME_NET any -> $EXTERNAL_NET any), as well as several websocket keywords. This is kind of a big deal: even if you have TLS inspection in place (a high wall to climb in and of itself), websockets use a randomly derived masking key to XOR the payload.

Because the Suricata rule syntax doesn’t give us the option to say “use byte_extract to grab the bytes where the masking key is defined, save the value, and use that as the key for the xor transform”, the current versions of Suricata we support can’t meaningfully inspect masked websocket payloads.
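To see why plain content matches fail here, it helps to look at how the standard websocket masking scheme works: client-to-server frames carry a random 4-byte masking key, and every payload byte is XORed against it (per RFC 6455). Here's a minimal sketch – the function name and example values are my own, not from any of the tools mentioned above:

```python
def mask_payload(payload: bytes, masking_key: bytes) -> bytes:
    """XOR each payload byte with the 4-byte masking key (RFC 6455 section 5.3).
    The same function unmasks, since XOR is its own inverse."""
    return bytes(b ^ masking_key[i % 4] for i, b in enumerate(payload))

key = b"\x12\x34\x56\x78"  # in real traffic, this is random per frame
masked = mask_payload(b"GET /secret", key)

# the on-the-wire bytes no longer contain the plaintext a rule would match on
assert masked != b"GET /secret"
# XORing a second time with the same key restores the original payload
assert mask_payload(masked, key) == b"GET /secret"
```

Since the key changes on every frame, there's no static byte pattern for a content match to latch onto – you'd need exactly the byte_extract-plus-xor-transform dance described above, which the rule syntax doesn't offer.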

BeyondTrust CVEs CVE-2026-1731 and CVE-2024-12356

Recently, while doing research for new rule coverage, I came across details on a vulnerability in BeyondTrust Remote Support/Privileged Remote Access products. I immediately noticed that it was a vulnerability in their websocket implementation. Outside of spotting the URL used to upgrade HTTP connections to the websocket protocol, there wouldn’t be much I could do with the current rulesets we support – but Suricata 8 could potentially cover this vulnerability.

So I set about creating a (kinda jank) reproduction of the exploit that involved using python’s websocket-server, and websocat, kind of like the tutorial I wrote for exploit reproductions not that long ago, over here.

I managed to create an unencrypted pcap, and then a pair of rules – one to detect the websocket upgrade request, and another to detect the payload over the websocket protocol. Assuming you are somehow able to decrypt the traffic (if it’s encrypted), the rules should provide effective coverage.
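As a rough sketch of the shape of that rule pair – these are illustrative only, with placeholder URIs, content matches, and sids, not the actual rules in the repo:

```
# Hypothetical example: detect the HTTP upgrade to websocket at a given endpoint
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"EXAMPLE WebSocket upgrade request to target endpoint"; flow:established,to_server; http.uri; content:"/example/ws-endpoint"; http.header; content:"Upgrade|3a 20|websocket"; nocase; sid:1000001;)

# Hypothetical example: match a marker in the payload once the websocket is established
alert websocket $EXTERNAL_NET any -> $HOME_NET any (msg:"EXAMPLE exploit payload over websocket"; websocket.payload; content:"example-exploit-marker"; sid:1000002;)
```

The second rule is the part that only Suricata 8 can do – the websocket protocol header and the websocket.payload sticky buffer don't exist in the Suricata 5/7 rulesets described earlier.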

Not too long after that, I decided to start the Avenger project and post up the rules I had. I then got my first contribution from @stu4rt for another BeyondTrust PRA/RS vulnerability, this time from 2024 – a very similar websocket-based vulnerability.

These are examples of rules we can’t effectively deploy with the rule syntaxes we currently support, so I’m doing my level best to fill in those gaps. My goal is to eventually merge these rules into a future Emerging Threats rule fork for Suricata that supports the websocket rule header and websocket keywords.

Aside from unsupported rule features/keywords, I also mentioned that I wanted to experiment with new and/or existing Suricata features, but I wasn’t sure where to start. I know that in the past we did some experimentation with Suricata’s Lua scripting, and I may revisit that. There’s also the possibility of fleshing out how to do filestore, HTTP request body logging, IP reputation, and so on. Then something caught my eye: Suricata 8 introduced support for nDPI.

Introduction to nDPI

So, what is nDPI? In a nutshell, it’s an ntop-derived project meant to introduce deep packet inspection capabilities into the ntop family. I think of it as a project for network application and protocol fingerprinting. It’s relatively new and under rapid development right now. If you’d like to learn more, ntop’s product page is here, and here is the github repo.

Adding support for the nDPI plugin to Suricata 8.0.3

Enabling nDPI plugin support for Suricata is really easy. Download and compile nDPI version 4.14 from the releases page. Be sure to make note of the required software packages for your Linux distribution before trying to compile it from source. Then, while compiling the Suricata 8.0.3 tarball, add --enable-ndpi --with-ndpi=/usr/src/nDPI-4.14 to your configure command. So, for example, if you configure Suricata like so:

./configure --enable-lua --enable-geoip --enable-hiredis

to add nDPI support, change the configure command to this:

./configure --enable-lua --enable-geoip --enable-hiredis --enable-ndpi --with-ndpi=/usr/src/nDPI-4.14

Note that if you compiled nDPI in a directory other than /usr/src/nDPI-4.14, you’ll need to point the configure command to the correct directory.

That’s really all there is to it. Suricata will automatically build the ndpi.so library, install it to /usr/local/lib/suricata/ndpi.so, and add an entry to the plugins directive in suricata.yaml:
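Assuming the default install prefix, the resulting entry should look something like this (the exact path may differ on your system):

```yaml
plugins:
  - /usr/local/lib/suricata/ndpi.so
```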


You can also confirm nDPI plugin support via suricata --build-info. You’ll want to look for this output:

[Screenshot: suricata --build-info output showing nDPI support]

If you see this in the output of suricata --build-info, you should be good to go.

Note: Some of you might have noticed that the latest version of nDPI is currently 5.0 – so why aren’t we using that? Well, 5.0 introduces a lot of awesome new features, but it also changes a lot of nDPI’s internal API, and those changes have not been accounted for in Suricata just yet.

If you have some measure of skill in the subject, you might be able to help the nDPI and OISF teams update the code to be compatible with Suricata 8.0.3 and beyond, but until that happens, version 4.14 is the latest version that Suricata 8.0.3 supports.

Autosuricata nDPI update

So, what if you don’t want to do all that manual work? Autosuricata is a side project I’ve been tinkering with for years and have mentioned numerous times. I recently updated the installer script to let users automatically install nDPI support. There’s a new configuration option in the file full_autosuricata.conf that lets the user decide whether to compile Suricata with or without nDPI support. For full details, take a look at the patch notes at the bottom of the readme.md over here.

To use it, navigate to full_autosuricata.conf and check that the nDPI_support variable is set to yes. This is the default setting, but it can be turned off if you aren’t interested.

Note: Autosuricata also downloads and compiles vectorscan for hyperscan support in the latest Suricata release.

nDPI support for Dalton

Dalton is another commonly mentioned project that I love dearly for how easy it makes detection engineering. Currently, I have an issue submitted to see about adding support for nDPI (and hyperscan – more on that in a minute), but it might be a little while before that gets merged in. In the meantime, I whipped up a little tutorial on how to modify bits of Dalton in order to create a custom Suricata container with nDPI support. It involves creating a new Dockerfile, a new suricata.yaml file, and modifying docker-compose.yml to build an image with the new Dockerfile. Here’s how to do it:

If you pull down Dalton via github, one of the directories is dalton-agent, and under that directory is the Dockerfiles directory – so the full path is dalton/dalton-agent/Dockerfiles. In this directory, we’re gonna create a new Dockerfile. If you’re not familiar with them, think of a Dockerfile as a bash setup script with more steps. I would recommend naming it Dockerfile_suricata_nDPI. Using your favorite text editor, input the information below:

# Builds Suricata Dalton agent using Suricata source tarball
FROM ubuntu:24.04

ARG SURI_VERSION
ARG ENABLE_RUST

# tcpdump is for pcap analysis; not *required* for
#  the agent but nice to have....
# changing the python3.8 package to python3 python3-dev and python3-pip
# hadolint ignore=DL3008
RUN apt-get update -y && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    python3 python3-dev python3-pip \
    tcpdump \
    libpcre3 libpcre3-dbg libpcre3-dev libnss3-dev \
    build-essential autoconf automake bison cmake flex gettext git libtool libboost-all-dev libpcap-dev libnet1-dev \
    libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libpcap-dev libcap-ng-dev libcap-ng0 \
    make libmagic-dev libmaxminddb-dev libnuma-dev libjansson-dev libjansson4 libjson-c-dev libsqlite3-dev pkg-config ragel rustc cargo \
    liblua5.1-dev libevent-dev libpcre2-dev librrd-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# for debugging agent
#RUN apt-get install -y less nano

# download, build, and install Suricata from source
RUN mkdir -p /src/suricata-${SURI_VERSION}
WORKDIR /src
RUN git clone https://github.com/VectorCamp/vectorscan
WORKDIR /src/vectorscan
RUN mkdir vectorscan-build
WORKDIR /src/vectorscan/vectorscan-build
RUN cmake -DBUILD_SHARED_LIBS=On ../ && make -j $(nproc) && make install && ldconfig
WORKDIR /src
ADD https://github.com/ntop/nDPI/archive/refs/tags/4.14.tar.gz 4.14.tar.gz
RUN tar -xzvf 4.14.tar.gz
WORKDIR /src/nDPI-4.14
RUN ./autogen.sh && ./configure && make -j $(nproc) && ldconfig
WORKDIR /src
ADD https://www.openinfosecfoundation.org/download/suricata-${SURI_VERSION}.tar.gz suricata-${SURI_VERSION}.tar.gz
RUN tar -zxf suricata-${SURI_VERSION}.tar.gz -C suricata-${SURI_VERSION} --strip-components=1
WORKDIR /src/suricata-${SURI_VERSION}
# Note: Some Suricata versions between 2.0 and 5.0 won't compile on newer Linux kernels
# without some patching/tweaking. This Dockerfile uses Ubuntu 24.04, so the commented-out
# line below (which patches the Suricata code sufficiently to compile on newer kernels)
# is kept here for if/when it is needed.
# Ref: https://github.com/OISF/suricata/pull/4057/files
#RUN if [ -n "$(echo $SURI_VERSION | grep -P '^[0-4]\x2E')" ] && [ -z "$(grep '#include <linux/sockios.h>' 'src/source-af-packet.c')" ]; then \
#        sed -i 's|#ifdef HAVE_AF_PACKET|#ifdef HAVE_AF_PACKET\n\n#if HAVE_LINUX_SOCKIOS_H\n#include <linux/sockios.h>\n#endif\n|' src/source-af-packet.c; \
#    fi;
# configure, make, and install
# hadolint ignore=SC2046
RUN ./configure --enable-profiling ${ENABLE_RUST} --enable-lua --enable-ndpi --with-ndpi=/src/nDPI-4.14 && make -j $(nproc) && make install && make install-conf && ldconfig

RUN mkdir -p /opt/dalton-agent/
WORKDIR /opt/dalton-agent
COPY dalton-agent.py /opt/dalton-agent/dalton-agent.py
COPY dalton-agent.conf /opt/dalton-agent/dalton-agent.conf

COPY http.lua /opt/dalton-agent/http.lua
COPY dns.lua /opt/dalton-agent/dns.lua
COPY tls.lua /opt/dalton-agent/tls.lua

RUN sed -i 's/REPLACE_AT_DOCKER_BUILD-VERSION/'"${SURI_VERSION}"'/' /opt/dalton-agent/dalton-agent.conf

CMD ["python3", "/opt/dalton-agent/dalton-agent.py", "-c", "/opt/dalton-agent/dalton-agent.conf"]

The file above is a copy of the existing Dockerfile_suricata, but modified to acquire the prerequisites for both vectorscan, and nDPI.

Vectorscan is a drop-in replacement for Intel’s hyperscan library. It enables higher performance for applications that utilize regular expressions. For some reason or another, the ubuntu 24.04 dockerhub image doesn’t have the vectorscan or libvectorscan5 package in its repositories, so we compile it from source.

Note: The performance increase from utilizing hyperscan doesn’t come without a cost: what you gain in regular expression performance while Suricata is running, you lose in startup time. If Suricata is compiled with hyperscan support, it builds a pattern database on startup to boost performance. In an application like Dalton, where Suricata containers are started and stopped constantly, this results in delays. I’ve seen hyperscan cause 15-20 second delays on startup before results are delivered for a single Dalton job. So, you might want to bear this in mind.

If you’re not interested in hyperscan/vectorscan support, remove the vectorscan build steps (lines 29 through 34 of the Dockerfile above):

WORKDIR /src
RUN git clone https://github.com/VectorCamp/vectorscan
WORKDIR /src/vectorscan
RUN mkdir vectorscan-build
WORKDIR /src/vectorscan/vectorscan-build
RUN cmake -DBUILD_SHARED_LIBS=On ../ && make -j $(nproc) && make install && ldconfig

When finished, save your file. Next up, we have to modify docker-compose.yml, located at dalton/docker-compose.yml. I recommend making a backup first: run cp docker-compose.yml docker-compose.yml.bak. Then, using your favorite text editor, open docker-compose.yml and look for the line that reads agent-suricata-current. Below this line is a line that reads dockerfile: Dockerfiles/Dockerfile_suricata. Change that line to dockerfile: Dockerfiles/Dockerfile_suricata_nDPI and save the file. The entry should look like this:

# Suricata current (latest) from source
  agent-suricata-current:
    build:
      context: ./dalton-agent
      dockerfile: Dockerfiles/Dockerfile_suricata_nDPI
      args:
        - SURI_VERSION=current
        - http_proxy=${http_proxy}
        - https_proxy=${https_proxy}
        - no_proxy=${no_proxy}
    image: suricata-current:latest
    container_name: suricata-current
    environment:
      - AGENT_DEBUG=${AGENT_DEBUG}
    restart: always

By default, Dalton will supply you with a container running whatever the current latest release of Suricata is, and we just modified the compose file to use the Dockerfile we built as the “recipe” for that container. But what if you want this Dockerfile to build a Suricata 8.0.3 container specifically? You can add an entry to your docker-compose.yml underneath the agent-suricata-current entry that looks something like this:

  agent-suricata-8.0.3:
    build:
      context: ./dalton-agent
      dockerfile: Dockerfiles/Dockerfile_suricata_nDPI
      args:
        - SURI_VERSION=8.0.3
        - http_proxy=${http_proxy}
        - https_proxy=${https_proxy}
        - no_proxy=${no_proxy}
    image: suricata-8.0.3:latest
    container_name: suricata-8.0.3
    environment:
      - AGENT_DEBUG=${AGENT_DEBUG}
    restart: always

There’s one more thing you’ll need to do before you can use the nDPI functionality: set up a functional suricata.yaml in which the ndpi.so plugin is activated. Now, you could just modify the existing suricata-7.0.0.yaml file in dalton/app/static/engine-configs/suricata, or you can use this suricata-8.0.0.yaml file, which I took from a fresh Suricata 8.0.3 compile and modified to be compatible with Dalton. It’s a HUGE, 2,200+ line document that I can’t host here, but I can host on github. I made a gist over here:

Here’s what you need to do:

cd to dalton/app/static/engine-configs/suricata, then run wget https://gist.githubusercontent.com/da667/7cbd6cd6973dd31c8eea2af08adce2e2/raw/2558925614c98a7ee4eca4b5323b78af9eceb4a8/suricata-8.0.0.yaml

You are entirely welcome to audit this yaml file for yourself, but the only changes I made were to enable pcap read and pcap file support, and to disable checksum validation for pcap files. Additionally, the default rule files and default rule directories have been commented out. All of these changes are necessary for Dalton to function correctly – Dalton reads pcaps, and more often than not, enabling checksum validation for pcaps results in Suricata failing to analyze the packets. As far as rules are concerned, Dalton stores Suricata rulesets in dalton/rulesets. In that directory are subdirectories for Snort, Suricata, and Zeek, so Dalton would access the .rules file located in dalton/rulesets/suricata/filename.rules.
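For reference, the kinds of changes described above look roughly like this in the yaml – a sketch of the relevant sections, not the exact contents of the gist:

```yaml
# disable checksum validation for pcap files, or Suricata may skip packets:
pcap-file:
  checksum-checks: no

# default rule paths/files commented out so Dalton can supply the per-job ruleset:
#default-rule-path: /var/lib/suricata/rules
#rule-files:
#  - suricata.rules
```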

With the above configuration completed, if you have existing docker containers running with Dalton, here is a quick and dirty shell script that can be used to stop the running containers and purge all of the images:

#!/bin/bash
# stop all containers managed by the compose file
docker-compose stop
# remove all stopped containers, unused networks, unused images, and build cache
docker system prune -a
# remove any remaining dangling images
docker image prune
# force-remove any images still left over
docker rmi $(docker images -a -q)
exit 0

Note that if you have other docker images on your host, this will wipe out everything. Use at your own risk.

If you haven’t run Dalton yet, or once the script above has finished, run ./start-dalton.sh and wait for the magic to happen.

With all of that work done, I happened to have a pcap nearby to test out nDPI support. The pcap contained decrypted HTTP traffic to github.com, featuring a Windows PowerShell user-agent. So here is the rule I wrote:

alert http any any -> any any (msg:"Github connection with Powershell UA"; requires: keyword ndpi-protocol; ndpi-protocol:Github; http.user_agent; content:"WindowsPowerShell|2f|"; sid:1;)

[Screenshot: Dalton job results for the rule above]

Shot…

…and Chaser.

The requires: keyword <keyword name> section is highly recommended for use with the new nDPI plugin, to confirm that the plugin keywords are available and the plugin is enabled. If the plugin is not available or otherwise not enabled, instead of erroring out on a rule containing keywords it can’t utilize, Suricata will automatically skip rules whose requires conditions are not met. A single requires keyword can define multiple required keywords at once, for example:

requires: keyword ndpi-protocol, keyword ndpi-risk;

How do I determine what ndpi protocols and risks are available?

It’s alluded to in the Suricata read the docs page for nDPI, but the nDPI source code also includes a sample program called ndpiReader. It’s located in the nDPI-4.14/example directory, so if you placed the nDPI source code in /usr/src, the full path will be /usr/src/nDPI-4.14/example.

Running the command ./ndpiReader -h dumps out all of the supported options for this program, all of the protocols that can be named in the ndpi-protocol keyword, as well as all of the risks that can be named in the ndpi-risk keyword.

This is a small capture of the supported protocols that can be named via the ndpi-protocol keyword. There are 448 in total. Some are core protocols such as TCP, UDP, DNS, and IMAP, while others are application-layer services like Github, Steam, or Facebook.

Here is a screen capture of some of the supported risks that can be named via the ndpi-risk keyword. There are 56 in total, ranging anywhere from NDPI_RISKY_DOMAIN, to NDPI_CLEAR_TEXT_CREDENTIALS.
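Putting that together, here is a hedged sketch of what an ndpi-risk rule could look like – the risk name comes from the ndpiReader output described above, while the message and sid are placeholders of my own:

```
alert ip any any -> any any (msg:"EXAMPLE nDPI flagged cleartext credentials"; requires: keyword ndpi-risk; ndpi-risk:NDPI_CLEAR_TEXT_CREDENTIALS; sid:1000003;)
```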

I have high hopes for this plugin and its ability to enable interesting rules, especially informational and/or hunting rules. The example I used just looks for a PowerShell user-agent visiting Github, but what about wget, curl, python-requests, etc.? What about those user-agents specifically visiting pastebin-like sites? And that’s just off the top of my head. My hope is that this partnership continues, as the features, protocol support, and risk categories have exploded in nDPI 5.0.

Appeal to user community: I want your input

The Avenger project is 100% open for anyone to submit issues or pull requests for things they want to see coverage and detection for that sort of fall outside of what the Emerging Threats ruleset can do in its current form. Do you have suggestions? Do you have rules that use features that aren’t yet implemented, or maybe disabled by default? Bring them to me, and let’s get them out there.

That’s all I have for now.

Happy Hunting,

-Tony

A side note about Snort 2.9.x

I wasn’t exactly sure where to fit this into this explosion of news, but I felt it was important to mention anyway. In case you’re not aware, Snort.org has announced that essentially every branch of Snort 2.9.x, aside from the most recently developed branch, 2.9.20, is considered End of Life as of December 18th, 2025. While we haven’t made any official announcements about following suit, if you are a Snort 2.9.x user, I would highly advise updating your sensors to 2.9.20 to ensure support from Cisco and the Snort team as necessary. In the long term, I would recommend planning to move to either Snort 3 or Suricata. This is just my opinion, but the writing is on the wall, and I don’t believe there will be support for Snort 2.9.x much longer.

While I can’t say anything definitively, the last release announcement on the Snort blog was for Snort 2.9.11.1 in 2018; I couldn’t find a release blog post for any version of Snort after that. However, I did download the Snort 2.9.20 tarball, and its timestamp states it was created on June 6th, 2022.

If 2.9.11.1 was released in 2018 and EOL’d in 2025, that’s a seven-year difference. Where would we be seven years out from 2022? You might start seeing EOL announcements in 2029. Do bear in mind, this is my best guess, and it doesn’t account for the fact that Snort 2.9.x is VERY widely deployed around the world.
