4.0x more findings

Measured outcomes, not a feature list. This page compares findings coverage between Feroxbuster OSS and Feroxbuster Pro using an anonymized benchmark dataset.

Dataset: 10 targets. Wordlist length: 4,749.

Total findings

2,885 vs 724

Sum of per-target rows in the benchmark output, Pro vs OSS (4.0x)

Pro-only unique endpoints

2,164

Unique endpoints found only by Pro in this dataset. OSS-only unique endpoints: 37.

Beyond the wordlist

69.8%

of Pro findings came from discovery-driven scanning, not the wordlist.

Unique findings overlap
Shared findings are overlap. Pro-only and OSS-only are the unique tails.
This diagram uses de-duplicated unique endpoints across included targets. Totals in other charts can count repeats across targets.

Unique endpoints (deduped): 2,883

  • Shared: 682 (23.7%)
  • Pro-only: 2,164 (75.1%)
  • OSS-only: 37 (1.3%)

Pro-only accounts for 75.1% of categorized unique findings in this dataset.
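The bucket shares above follow directly from the three counts. A quick sketch of the arithmetic, using the figures reported in this section:

```python
# Overlap buckets from the benchmark above (deduped unique endpoints).
shared, pro_only, oss_only = 682, 2_164, 37
total = shared + pro_only + oss_only  # 2,883 deduped unique endpoints

for label, n in [("Shared", shared), ("Pro-only", pro_only), ("OSS-only", oss_only)]:
    print(f"{label}: {n:,} ({n / total:.1%})")
```

Percentages sum to slightly over 100% due to rounding.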

Where does Pro pull ahead? Here is the target-by-target breakdown.

Coverage by target
Total unique endpoints found per target. Sorted by largest Pro advantage.
Targets are anonymized. Each row shows total OSS (muted) and Pro (colored) unique endpoints.

Pro also surfaces more findings per request sent, so each request counts for more.

Findings per 1k requests
Higher is better. Pro surfaces more findings with fewer total requests.
Aggregate across all targets: OSS ~1.29, Pro ~6.23 findings per 1,000 requests.
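The per-1,000-request rates reduce to a simple ratio of the totals reported elsewhere on this page:

```python
# Reproduce the findings-per-1,000-requests figures from the aggregate totals.
oss_findings, oss_requests = 724, 561_947
pro_findings, pro_requests = 2_885, 463_319

oss_rate = oss_findings / oss_requests * 1000  # ~1.29
pro_rate = pro_findings / pro_requests * 1000  # ~6.23
print(f"OSS: {oss_rate:.2f}, Pro: {pro_rate:.2f} findings per 1k requests")
```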

Where do the extra findings come from? Most of Pro's advantage is beyond the wordlist.

Wordlist hits vs discovery-driven findings
Wordlist-surfaced findings vs the remainder. The remainder is total findings minus wordlist hits.
Only 30.2% of Pro findings came from direct wordlist probing in this dataset.
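Taking the reported 30.2% wordlist share at face value, the absolute split works out roughly as follows (approximate, since the published percentage is rounded):

```python
# Split Pro's total findings into wordlist hits and the discovery-driven remainder.
total_pro = 2_885
wordlist_share = 0.302                       # 30.2% from direct wordlist probing
wordlist_hits = round(total_pro * wordlist_share)
discovered = total_pro - wordlist_hits       # remainder = total minus wordlist hits
print(wordlist_hits, discovered, f"{discovered / total_pro:.1%}")
```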
Pro-only findings by target
Pro-only findings sorted by target, with a cumulative share line.
Each row is an anonymized target. The cumulative line shows what share of Pro-only findings are accounted for by the targets listed so far.
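The cumulative line is a running sum over per-target counts. A sketch with hypothetical per-target values (chosen only to sum to the 2,164 Pro-only total; the real per-target split is shown in the chart, not reproduced here):

```python
# Hypothetical Pro-only counts per anonymized target, largest first.
pro_only_by_target = [900, 520, 300, 200, 120, 60, 30, 20, 10, 4]
total = sum(pro_only_by_target)  # 2,164 in this sketch

cum = 0
for i, n in enumerate(pro_only_by_target, 1):
    cum += n
    print(f"target {i:2d}: {n:4d}  cumulative share {cum / total:.1%}")
```

With a skewed split like this, a handful of targets account for most of the Pro-only findings, which is what the cumulative line makes visible.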

Resource efficiency

Requests sent

463,319

vs 561,947 OSS

18% fewer requests
CPU time

574s

vs 884s OSS

35% less CPU
Analysis time

6,373s

vs 3,352s OSS

1.9x wall time

Pro spends more time analyzing responses, not just sending requests.

Memory (median)

~69 MB

identical for both editions

Peak ~1197 MB Pro vs ~590 MB OSS, driven by a few large targets.

Environment-sensitive. Not universal performance claims.
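The headline deltas fall out of the raw measurements above:

```python
# Derive the resource-efficiency deltas from the raw benchmark measurements.
requests = {"pro": 463_319, "oss": 561_947}
cpu_s    = {"pro": 574,     "oss": 884}
wall_s   = {"pro": 6_373,   "oss": 3_352}

fewer_requests = 1 - requests["pro"] / requests["oss"]  # ~18% fewer
less_cpu       = 1 - cpu_s["pro"] / cpu_s["oss"]        # ~35% less
wall_ratio     = wall_s["pro"] / wall_s["oss"]          # ~1.9x wall time
print(f"{fewer_requests:.0%} fewer requests, {less_cpu:.0%} less CPU, "
      f"{wall_ratio:.1f}x wall time")
```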

How discovery-driven scanning works

Pro doesn't just probe a wordlist. It analyzes every response to extract referenced paths, building a map of what the application actually exposes.

1. Wordlist hit

GET /assets/app.bundle.js

Standard wordlist probe returns 200

2. Response analysis

extract("/api/internal/...")

Pro parses the JS and finds referenced API routes

3. Discovered endpoint

GET /api/internal/audit/export

A valid endpoint no wordlist would contain

Illustrative example. Real discovery chains vary by target and application behavior.
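The core idea in step 2 can be sketched in a few lines: scan a fetched response body for path-like strings and queue them for probing. This is a minimal illustration, not Feroxbuster Pro's actual parser; the regex and sample body are assumptions.

```python
import re

# Hypothetical response body from the wordlist hit in step 1.
js_body = """
const api = "/api/internal/audit/export";
fetch("/api/internal/users");
"""

# Match absolute paths quoted inside the response body (illustrative pattern).
PATH_RE = re.compile(r'["\'](/[A-Za-z0-9_./-]+)["\']')

# De-duplicate and sort the extracted paths; these feed the next scan round.
discovered = sorted(set(PATH_RE.findall(js_body)))
print(discovered)
```

Each discovered path is then requested like a wordlist entry, and its response is analyzed in turn, so discovery chains can run several hops deep.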

About this benchmark

10 anonymized targets scanned with the same 4,749-entry wordlist and identical scan configuration (concurrency, scan limits, rate limits) across both editions. Findings coverage is the primary KPI: total findings are the sum of per-target rows, and unique endpoints are split into shared, Pro-only, and OSS-only buckets. Coverage varies by application behavior and hosting environment.

Validate in your environment

Run Pro against a representative target you are allowed to test and compare coverage against the OSS run. If you want context on configuration and interpreting results, start with the docs.