4.0x more findings
Measured outcomes, not a feature list. This page compares findings coverage between Feroxbuster OSS and Feroxbuster Pro using an anonymized benchmark dataset.
Dataset: 10 targets. Wordlist length: 4,749.
2,885 vs 724
Total findings: sum of per-target rows in the benchmark output (Pro vs OSS), a 4.0x ratio.
2,164
Unique endpoints found only by Pro in this dataset. OSS-only unique endpoints: 37.
69.8%
of Pro findings came from discovery-driven scanning, not the wordlist.
Unique endpoints (deduped): 2,883
- Shared: 682 (23.7%)
- Pro-only: 2,164 (75.1%)
- OSS-only: 37 (1.3%)
Pro-only accounts for 75.1% of categorized unique findings in this dataset.
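The bucket shares above are easy to recompute from the deduped totals. A quick arithmetic check, using only the numbers on this page:

```python
# Unique-endpoint buckets from the benchmark above.
buckets = {"shared": 682, "pro_only": 2_164, "oss_only": 37}

total = sum(buckets.values())  # 2,883 deduped unique endpoints
shares = {name: round(100 * count / total, 1) for name, count in buckets.items()}

print(total)   # 2883
print(shares)  # {'shared': 23.7, 'pro_only': 75.1, 'oss_only': 1.3}
```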
Where does Pro pull ahead? Here is the target-by-target breakdown.
Pro finds more per request sent, making each request count.
Where do the extra findings come from? Most of Pro's advantage is beyond the wordlist.
Resource efficiency
- Requests: 463,319 vs 561,947 OSS (18% fewer requests)
- CPU time: 574s vs 884s OSS (35% less CPU)
- Wall time: 6,373s vs 3,352s OSS (1.9x wall time). Pro spends more time analyzing responses, not just sending requests.
- Memory: ~69 MB, identical for both editions. Peak ~1,197 MB Pro vs ~590 MB OSS, driven by a few large targets.
Environment-sensitive. Not universal performance claims.
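The headline efficiency ratios follow directly from the raw totals. A quick check, using the figures reported above:

```python
# Raw totals from the benchmark (Pro, OSS).
pro_requests, oss_requests = 463_319, 561_947
pro_cpu_s, oss_cpu_s = 574, 884
pro_wall_s, oss_wall_s = 6_373, 3_352

fewer_requests = 1 - pro_requests / oss_requests  # ~0.176 -> "18% fewer"
less_cpu = 1 - pro_cpu_s / oss_cpu_s              # ~0.351 -> "35% less"
wall_ratio = pro_wall_s / oss_wall_s              # ~1.90  -> "1.9x wall time"

print(f"{fewer_requests:.0%} fewer requests, {less_cpu:.0%} less CPU, {wall_ratio:.1f}x wall time")
# 18% fewer requests, 35% less CPU, 1.9x wall time
```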
Pro doesn't just probe a wordlist. It analyzes every response to extract referenced paths, building a map of what the application actually exposes.
1. Wordlist hit
GET /assets/app.bundle.js
Standard wordlist probe returns 200
2. Response analysis
extract("/api/internal/...")
Pro parses the JS and finds referenced API routes
3. Discovered endpoint
GET /api/internal/audit/export
A valid endpoint no wordlist would contain
Illustrative example. Real discovery chains vary by target and application behavior.
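The three-step chain above can be sketched as a response-analysis pass. This is an illustrative sketch only, not Feroxbuster Pro's actual extraction logic; the regex and the sample JS body are invented for the example:

```python
import re

# Step 1: a wordlist probe for /assets/app.bundle.js returns 200.
# Hypothetical body of that JS bundle:
js_body = 'fetch("/api/internal/audit/export").then(r => r.json());'

# Step 2: extract absolute paths referenced inside the response.
# A simple quoted-path regex; a real extractor handles many more cases.
PATH_RE = re.compile(r'["\'](/[A-Za-z0-9_./-]+)["\']')

discovered = PATH_RE.findall(js_body)
print(discovered)  # ['/api/internal/audit/export']

# Step 3: each discovered path becomes a new probe target,
# even though no wordlist entry would ever contain it.
```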
10 anonymized targets scanned with a constant 4,749-entry wordlist and identical scan configuration (concurrency, scan limits, rate limits) across both editions. Findings coverage is the primary KPI: total findings are the sum of per-target rows, and unique endpoints are split into shared, Pro-only, and OSS-only buckets. Coverage varies by application behavior and hosting environment.
Validate in your environment
Run Pro against a representative target you are allowed to test and compare coverage against the OSS run. If you want context on configuration and interpreting results, start with the docs.