When a PCAP is uploaded (either through Malcolm's upload web interface or just copied manually into the `./pcap/upload` directory), the `pcap-monitor` container has a script that picks up those PCAP files and publishes them to a ZeroMQ topic that can be subscribed to by any other process that wants to analyze that PCAP. In Malcolm (at the time of the v24.10.1 release) there are three such subscribers: the `zeek`, `suricata`, and `arkime` containers, which actually share the same script to run the PCAP through Zeek, Suricata, and Arkime, respectively. As an example to follow, the `zeek` container is the least complicated of the three. To integrate a new PCAP processing tool into Malcolm (named `cooltool` for this example), the process would entail:
* Define the new `cooltool` service with its own `cooltool.Dockerfile`; note how the existing `zeek` and `arkime` services use bind mounts to access the local `./pcap` directory (see the compose sketch after this list)
* Write a script (modeled after the one the `zeek`, `suricata`, and `arkime` services use) that subscribes to the PCAP topic port (`30441`, as defined in `pcap_utils.py`) and handles the PCAP files published there; each PCAP file is represented by a JSON dictionary with `name`, `tags`, `size`, `type`, and `mime` keys (search for `FILE_INFO_` in `pcap_utils.py`). This script should be added to and run by the `cooltool.Dockerfile`-generated container (see the subscriber sketch after this list).
* Write whatever other logic is needed to get `cooltool`'s data into Malcolm, whether by writing it directly into OpenSearch or by sending log files for parsing and enrichment by Logstash (especially see the section on Parsing a new log data source)

While that might be a bit of hand-waving, these general steps take care of the PCAP processing piece: users shouldn't have to edit any existing code to add a new PCAP processor, only create a new container that subscribes to the ZeroMQ topic and handles the PCAPs it receives.
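As a rough illustration of the first step, a compose entry for the hypothetical `cooltool` service could bind mount the local `./pcap` directory the same way the existing services do. This is a minimal sketch rather than Malcolm's actual configuration; the service name, Dockerfile path, and container-side mount point are assumptions:

```yaml
  cooltool:
    build:
      context: .
      # assumes a cooltool.Dockerfile added alongside Malcolm's other Dockerfiles
      dockerfile: Dockerfiles/cooltool.Dockerfile
    volumes:
      # bind mount the local ./pcap directory so the container can read uploaded PCAPs
      - ./pcap:/pcap
```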
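For the subscriber script itself, the sketch below shows the general shape of a ZeroMQ client listening on the PCAP topic port and reading the per-file JSON dictionaries. It assumes the `pcap-monitor` container publishes each dictionary as a JSON string on a PUB socket reachable at the `pcap-monitor` hostname; the `cooltool` hand-off and the PCAP path handling are placeholders, so consult the script shared by the `zeek`, `suricata`, and `arkime` containers for the authoritative behavior:

```python
import json
import zmq

# ZeroMQ PCAP topic port (30441, as defined in pcap_utils.py)
PCAP_TOPIC_PORT = 30441

def main():
    # Connect a SUB socket to the pcap-monitor container's publisher.
    # The "pcap-monitor" hostname assumes the default compose service name.
    context = zmq.Context()
    topic_socket = context.socket(zmq.SUB)
    topic_socket.connect(f"tcp://pcap-monitor:{PCAP_TOPIC_PORT}")
    topic_socket.setsockopt(zmq.SUBSCRIBE, b"")  # receive every message on this port

    while True:
        # Each message is assumed to be one JSON dictionary describing a PCAP file,
        # with "name", "tags", "size", "type", and "mime" keys
        # (see FILE_INFO_ in pcap_utils.py).
        file_info = json.loads(topic_socket.recv_string())
        pcap_name = file_info.get("name")
        pcap_tags = file_info.get("tags", [])

        # Hypothetical hand-off: run cooltool against the PCAP found under the
        # bind-mounted ./pcap directory (exact path layout not shown here).
        print(f"cooltool processing {pcap_name} (tags: {pcap_tags})")

if __name__ == "__main__":
    main()
```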
The `PCAP_PIPELINE_VERBOSITY` environment variable in `./config/upload-common.env` can be set to `-v`, `-vv`, etc., to increase the verbosity of debug logging output by the containers involved in the PCAP processing pipeline.
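For example, setting the following in `./config/upload-common.env` (the `-vv` value shown is just one of the accepted options) turns up that debug output:

```
# in ./config/upload-common.env
PCAP_PIPELINE_VERBOSITY=-vv
```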