Monitoring your Onion Services

There are two complementary approaches to monitoring your Onion Services.

From the inside

Monitoring the tor daemon

The first approach is to monitor availability and other metrics directly from the tor daemon process. This is available through the MetricsPort and MetricsPortPolicy C Tor configuration options, which are exposed in Onionspray via the tor_metrics_port and tor_metrics_port_policy settings.

Beware the MetricsPort

Before enabling MetricsPort, it is important to understand that exposing tor metrics publicly is dangerous to Tor network users. Please take extra precautions when opening this port. Set a very strict access policy with MetricsPortPolicy and consider using your operating system's firewall features for defense in depth.

Encrypt the MetricsPort connections

For the Prometheus format, we recommend that the only address able to access this port be the Prometheus server itself. Remember that the connection is unencrypted (HTTP), so consider using a tool like stunnel to secure the link from this port to the server.
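
For example, a pair of stunnel configurations could wrap the link in TLS. This is only a minimal sketch: the host addresses, ports and certificate path below are illustrative, not Onionspray defaults:

; stunnel.conf on the Onion Service host: TLS in front of the MetricsPort
[tor-metrics]
accept  = 192.0.2.10:19035
connect = 127.0.0.1:9035
cert    = /etc/stunnel/metrics.pem

; stunnel.conf on the Prometheus host: plaintext in, TLS out
[tor-metrics]
client  = yes
accept  = 127.0.0.1:9035
connect = 192.0.2.10:19035

Prometheus would then scrape 127.0.0.1:9035 on its own host, with stunnel carrying the traffic between the two machines.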

Offer the MetricsPort through Onion Services

Optionally, you can expose the MetricsPort to an external Prometheus instance through an authenticated Onion Service. This configuration is not supported out of the box by Onionspray, and you would need a custom setup to make Prometheus proxy its requests through the Tor network.
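
Very roughly, and outside of what Onionspray manages for you, the tor side of such a setup could publish the port as a hidden service. The paths below are illustrative, and client authorization still has to be configured under the service's authorized_clients directory:

# Illustrative torrc fragment, not managed by Onionspray
HiddenServiceDir /var/lib/tor/metrics-onion/
HiddenServicePort 9035 127.0.0.1:9035

On the Prometheus side, requests would then have to be proxied through a local tor client, which is the custom part you need to wire up yourself.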

You can set tor(1)'s MetricsPort and MetricsPortPolicy in an Onionspray project by adding the following to its configuration file:

set tor_metrics_port 127.0.0.1:9035
set tor_metrics_port_policy accept 127.0.0.1
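
With these settings in place, you can check the endpoint locally (the tor MetricsPort serves the metrics under the /metrics path):

curl -s http://127.0.0.1:9035/metrics | head

A matching Prometheus scrape job could then look like the sketch below; the job name is an assumption, and the target must agree with the address you set above:

# prometheus.yml fragment (illustrative)
scrape_configs:
  - job_name: "onionspray-tor"
    static_configs:
      - targets: ["127.0.0.1:9035"]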

Per-project MetricsPort

As each Onionspray project spawns its own tor daemon process, you'll have to enable these settings for each project you want to monitor, and choose a different IP/port pair for each.

Monitoring OpenResty

This consists of monitoring the OpenResty instance of each project.
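
Onionspray does not ship a metrics endpoint for OpenResty, but since OpenResty is NGINX-based, one possible approach is the standard stub_status module. The listen address and path in this sketch are assumptions:

# Hypothetical per-project snippet: expose basic connection counters locally
server {
    listen 127.0.0.1:8080;
    location /status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}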

Monitoring logs

Besides monitoring the tor daemon and the web proxy, you can also set up log monitoring for both services in each project; the logs are available in the projects/<project-name>/log folder.

Onionbalance logs are available in the onionbalance/ folder.
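
For a quick look, you can follow the logs directly; this invocation is illustrative (run from the Onionspray base directory), and the exact filenames vary per project:

tail -f projects/<project-name>/log/* onionbalance/*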

Circuit IDs

The C Tor daemon can export Onion Service circuit identifiers on the connections it forwards to the web proxy (the HiddenServiceExportCircuitID C Tor configuration setting).

This can be used either to rate-limit requests or to get a rough estimate of unique users connecting to the service. We say rough because it is hard to tell, at this level, whether many circuits mean different clients or a single client opening many connections (such as a script, or many tabs in Tor Browser).

So while providing precise data on unique clients is not possible, it is possible to get the Onion Service "circuit identifier" of each client connecting to the service. This data is not personally identifiable information, so collecting it won't expose any users.

The way to enable this for a project is to use the following configuration:

set tor_export_circuit_id haproxy

Once you enable this feature, you'll see entries like this in the NGINX log:

fc00:dead:beef:4dad::0:30 - - [11/Jan/2024:17:08:43 +0000] "GET /static/fonts/fontawesome/webfonts/fa-brands-400.woff2 HTTP/1.1" 200 73936 "-" "Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0"

The "deaf:beef" IPv6-like field is the encoded circuit ID identifier. You can use it as-is or parse the way explained at the HiddenServiceExportCircuitID in the tor(1) manpage.

You can further expose this identifier to the backend/upstream HTTP server by using the following setting in conjunction with tor_export_circuit_id:

set nginx_x_onion_circuit_id 1

This makes NGINX add an X-Onion-CircuitID HTTP header to every request passed to the upstream site, which can then be used for metrics gathering or rate limiting.
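
For instance, a hypothetical rate-limiting setup on the upstream NGINX (not part of Onionspray) could key its request buckets on that header:

# Inside the http {} block of the upstream server's configuration
limit_req_zone $http_x_onion_circuitid zone=percircuit:10m rate=10r/s;

server {
    listen 8080;
    location / {
        limit_req zone=percircuit burst=20 nodelay;
        # ... usual site configuration ...
    }
}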

From the outside

With Onionprobe you can monitor your Onion Services from anywhere the Tor network is reachable.
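
A minimal invocation might look like this; check onionprobe --help for the full option set, and replace the address with your own service:

onionprobe -e http://<your-onion-address>.onion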