Go-based multi-tracker torrent aggregator; a port of the C# project jacred (mainly from https://github.com/jacred-fdb/jacred).
Collects torrent metadata from 21 Russian/Ukrainian trackers into a unified flat-file database with search, sync, and stats APIs.
- Quick Start
- Configuration
- Web UI
- Parser Endpoints
- Search API
- Stats API
- Sync API
- Database Admin Endpoints
- Dev/Maintenance Endpoints
- Config Hot-Reload
- Search Result Caching
- FDB Audit Log
- Parser Logging
- Cron Examples
- Database Structure
# Build for current platform
go build -o ./Dist/jacred ./cmd
./Dist/jacred
# Listens on :9117 by default
# Health check
curl http://127.0.0.1:9117/health
# Parse first page of rutor
curl http://127.0.0.1:9117/cron/rutor/parse
# Search
curl "http://127.0.0.1:9117/api/v1.0/torrents?search=Interstellar"

chmod +x build_all.sh
./build_all.sh

Builds binaries for all supported platforms into Dist/ and creates a release archive:
Dist/
jacred-linux-amd64
jacred-linux-arm64
jacred-linux-arm
jacred-linux-386
jacred-darwin-amd64
jacred-darwin-arm64
jacred-windows-amd64.exe
jacred-windows-arm64.exe
jacred-windows-386.exe
jacred-freebsd-amd64
jacred-freebsd-arm64
jacred-{version}-{gitSHA}.tar.gz ← binaries + init.yaml + init.yaml.example
(the web UI is embedded into each binary via //go:embed)
Requires Go 1.21+. All binaries are statically linked (CGO_ENABLED=0), no external dependencies.
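To cross-compile a single target without the script, the standard Go flags are enough. A minimal sketch (build_all.sh presumably also stamps the version info shown by /version via -ldflags; that part is omitted here):

```bash
# Build one foreign target by hand; CGO_ENABLED=0 keeps it statically linked
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o ./Dist/jacred-linux-arm64 ./cmd
```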
Configuration file: init.yaml in working directory.
listenip: "any" # IP to bind ("any" = 0.0.0.0)
listenport: 9117 # Listening port
apikey: "" # API key required for /api/v1.0/* (empty = no auth)
devkey: "" # Key for dev endpoints (empty = local IP only)
log: true # Enable logging
logParsers: false # Global gate for per-tracker parser logs (both must be true)
logFdb: false # FDB audit log: JSON Lines per bucket change
logFdbRetentionDays: 7 # Delete FDB log files older than N days
logFdbMaxSizeMb: 0 # Max total FDB log size in MB (0 = unlimited)
logFdbMaxFiles: 0 # Max number of FDB log files (0 = unlimited)
fdbPathLevels: 2 # Directory nesting depth for bucket files
mergeduplicates: true # Merge duplicate torrents from different trackers
mergenumduplicates: true # Merge numeric ID variations
openstats: true # Enable /stats/* endpoints (no auth)
opensync: true # Enable /sync/fdb/torrents (V2 protocol, no auth)
opensync_v1: false # Enable /sync/torrents (V1 protocol, no auth)
web: true # Serve web UI (index.html, stats.html, settings.html)
timeStatsUpdate: 90 # Rebuild stats.json every N minutes
memlimit: 0 # Hard cap on Go heap in MB (0 = no limit)
gcpercent: 50 # GC frequency: lower = more GC, less peak RAM (default 50)

Control Go runtime memory usage. Essential on a VPS with limited RAM.
memlimit: 1500 # Hard cap on Go heap in MB; GC becomes very aggressive near the limit
gcpercent: 50 # Go's GOGC knob: 50 = GC at +50% heap growth (default 50, Go default is 100)

Recommended settings by RAM:
| VPS RAM | memlimit | gcpercent | evercache.maxOpenWriteTask |
|---|---|---|---|
| 1 GB | 700 | 20 | 200 |
| 2 GB | 1500 | 30 | 300 |
| 4 GB+ | 0 | 50 | 500 |
memlimit: 0 disables the hard cap (Go default behaviour).
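These settings correspond to the standard Go runtime knobs (memory limit and GOGC). As a rough sketch, similar behaviour can be approximated with environment variables alone; an assumption here is that non-zero init.yaml values take precedence, since they are applied programmatically at startup:

```bash
# Approximate the same limits via standard Go env vars (assumption: values set
# in init.yaml override these when non-zero)
GOMEMLIMIT=1500MiB GOGC=50 ./Dist/jacred
```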
For trackers protected by CloudFlare (megapeer, bitru, anistar, anifilm, torrentby, mazepa), the system uses an embedded flaresolverr-go solver — no external Docker service required.
How it works:
- The first request to a CF-gated domain spawns a real browser (Camoufox via geckodriver by default, or Chrome for Testing via chromedriver) on a virtual Xvfb display, lets it pass the CF managed challenge, and extracts the resulting `cf_clearance` cookies and User-Agent.
- Solved cookies are cached in memory for 60 minutes per domain and persisted to `Data/temp/flare/` so quick restarts reuse them.
- Subsequent requests for the same domain use plain `net/http` with those cookies + the browser's UA (fast path). Only on a fresh CF challenge (403 or interstitial markers in the body) is the browser invoked again.
- Idle browser sessions are reaped after 5 minutes of inactivity to free RAM (~800 MB per Camoufox process).
flaresolverr_go:
headless: true # true = headless mode, false = visible (still needs Xvfb on servers)
browser_backend: geckodriver # geckodriver (Camoufox, recommended) | chromedriver (Chrome for Testing)
browser_path: "" # Browser binary. Empty + geckodriver = auto-download Camoufox (~680 MB, one-time) into ~/.cache. Empty + chromedriver = system Chrome.
driver_path: "" # Custom geckodriver/chromedriver path. Empty = library auto-downloads a matching driver.
chrome_version: "" # Pin Chrome for Testing to a major (e.g. "146") — only used with browser_backend: chromedriver

Requires Xvfb on headless servers (apt install xvfb). jacred starts its own Xvfb instance on :99-:119 if DISPLAY is unset.
CloudFlare detection is automatic — fetchmode in init.yaml is now an optional hint, not a requirement. When any parser's standard request returns a CF interstitial page (markers like <title>Just a moment, cf-browser-verification, window._cf_chl_opt), the Fetcher:
- Logs `cf-auto: detected CloudFlare on <domain> — future requests will use flaresolverr`.
- Flags the domain in an in-memory + on-disk registry (`Data/temp/cf_auto.json`).
- Transparently retries the same request through flaresolverr-go in the same call — the caller gets the real response.
- Routes every subsequent request for that domain through flaresolverr automatically (skipping the wasted standard roundtrip).
Setting fetchmode: "flaresolverr" explicitly still works; it just saves the very first wasted request. POST requests are auto-promoted too: when CF rejects a POST, the original body never reached the upstream API, so the retry through flaresolverr (which obtains cf_clearance and re-issues the POST via net/http + browser UA) is safe.
Registry entries persist across restarts and age out after 30 days (any domain that's gone quiet that long is rechecked from scratch). To inspect or clear the registry manually, use /admin/cf-domains (see Database Admin Endpoints).
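The registry can also be read straight from disk; a sketch, assuming Data/temp/cf_auto.json is plain JSON:

```bash
# Same data the /admin/cf-domains endpoint serves, viewed directly
jq . Data/temp/cf_auto.json
```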
Megapeer:
fetchmode: "flaresolverr" # optional: skip the first detection roundtrip for known CF sites
host: "https://megapeer.vip"

If a previously flagged site stops using CF, either wait 30 days or `curl -X DELETE 'http://127.0.0.1:9117/admin/cf-domains?domain=megapeer.vip'`. A circuit breaker also puts a domain into a 3-minute cooldown after a flaresolverr solve failure to prevent retry storms.
Keeps recently opened buckets in RAM to reduce disk reads on repeated searches.
The cache is hard-capped at maxOpenWriteTask entries; when full the oldest
dropCacheTake entries are evicted immediately. Stale entries (older than
validHour) are swept every 10 minutes by a background goroutine.
evercache:
enable: true # Enable in-memory caching of buckets
validHour: 1 # Cache TTL in hours; entries older than this are evicted
maxOpenWriteTask: 500 # Max buckets held in memory (hard cap)
dropCacheTake: 100 # How many to evict when the cap is hit

syncapi: "http://other-instance:9117" # URL of remote jacred to sync from
timeSync: 60 # Sync interval in seconds
synctrackers: # Sync only these trackers (empty = all)
- "Rutor"
- "Kinozal"
disable_trackers: # Never sync these trackers
- "Mazepa"
syncsport: true # Sync sport torrents
syncspidr: true # Enable spider (metadata-only) sync
timeSyncSpidr: 60 # Spider sync interval in seconds

Each tracker section is optional — defaults are used if omitted.
Kinozal:
host: "https://kinozal.tv" # Override default host
cookie: "uid=abc123; pass=..." # Session cookie (required for login-only trackers)
login:
u: "username"
p: "password"
reqMinute: 8 # Max requests per minute (rate limiting)
parseDelay: 7000 # Delay between category/page requests in ms
log: false # Log this tracker's requests
useproxy: false # Route requests through globalproxy

Default hosts:
| Tracker | Default Host |
|---|---|
| Rutor | https://rutor.is |
| Megapeer | https://megapeer.vip |
| TorrentBy | https://torrent.by |
| Kinozal | https://kinozal.tv |
| NNMClub | https://nnmclub.to |
| Bitru | https://bitru.org |
| Toloka | https://toloka.to |
| Mazepa | https://mazepa.to |
| Rutracker | https://rutracker.org |
| Selezen | https://use.selezen.club |
| Lostfilm | https://www.lostfilm.tv |
| Animelayer | https://animelayer.ru |
| Anidub | https://tr.anidub.com |
| Aniliberty | https://aniliberty.top |
| Knaben | https://api.knaben.org |
| Anistar | https://anistar.org |
| Anifilm | https://anifilm.pro |
| Leproduction | https://www.le-production.tv |
| Baibako | http://baibako.tv |
| Korsars | https://korsars.pro |
| Ultradox | https://ultradox.top |
globalproxy:
- pattern: '\.onion' # Regex: apply proxy when URL matches (single quotes keep the backslash literal in YAML)
list:
- "socks5://127.0.0.1:9050"
- "http://proxy.example.com:8080"
useAuth: false
username: ""
password: ""
BypassOnLocal: true # Skip proxy for 127.x / 192.168.x / 10.x

All parser endpoints return:
{
"status": "ok",
"fetched": 150,
"added": 12,
"updated": 5,
"skipped": 133,
"failed": 0
}

Multi-category parsers also include "by_category": [...].
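Because every parser reports the same counters, a generic wrapper can alert on failures. A sketch using jq (field names taken from the response shown above):

```bash
#!/bin/sh
# Run one parse and exit non-zero if it reported failures
resp=$(curl -s "http://127.0.0.1:9117/cron/rutor/parse")
failed=$(printf '%s' "$resp" | jq -r '.failed // 0')
if [ "$failed" -gt 0 ]; then
  echo "rutor parse reported $failed failures" >&2
  exit 1
fi
```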
There are five distinct parsing strategies across the 21 trackers:
Parse exactly one page. Default is page 0 (most recent) for most trackers; Bitru defaults to page 1.
Trackers: Rutor, Selezen, Bitru, Kinozal, NNMClub, RuTracker, TorrentBy, Toloka, Korsars, Ultradox
Note: Rutor, Bitru, Kinozal, NNMClub, RuTracker, TorrentBy, Toloka also support task-based parsing (see §4 below). The `parse?page=N` endpoint is available as a single-page fallback.
# Parse the latest page (default)
curl "http://127.0.0.1:9117/cron/rutor/parse"
# Parse a specific page
curl "http://127.0.0.1:9117/cron/kinozal/parse?page=3"

Parse N pages starting from the first. 0 means unlimited (all pages).
Trackers: Megapeer (maxpage, default 1), Animelayer (maxpage, default 1), Baibako (maxpage, default 10), Anistar (limit_page, default 0 = all), Leproduction (limit_page, default 0 = all)
# Megapeer/Animelayer/Baibako: default = 1 page
curl "http://127.0.0.1:9117/cron/megapeer/parse"
# Parse up to 5 pages
curl "http://127.0.0.1:9117/cron/megapeer/parse?maxpage=5"
# Anistar/Leproduction: default = all pages (limit_page=0)
curl "http://127.0.0.1:9117/cron/anistar/parse"
# Limit to 3 pages
curl "http://127.0.0.1:9117/cron/anistar/parse?limit_page=3"
curl "http://127.0.0.1:9117/cron/leproduction/parse?limit_page=3"

Parse pages from N to M inclusive.
Trackers: Anidub, Aniliberty, Selezen
# Parse pages 1 to 5
curl "http://127.0.0.1:9117/cron/anidub/parse?parseFrom=1&parseTo=5"
# Parse only page 1 (single page)
curl "http://127.0.0.1:9117/cron/selezen/parse?parseFrom=1&parseTo=1"
# Aniliberty also returns lastPage in response
curl "http://127.0.0.1:9117/cron/aniliberty/parse?parseFrom=1&parseTo=3"

For large trackers with hundreds of category pages. Works in three steps:
1. Discover all pages by category and year → stores tasks in `Data/{tracker}_tasks.json`
2. Parse all discovered tasks (can be interrupted and resumed)
3. Parse latest — shortcut to parse only the most recent N pages
Trackers: Rutor, Selezen, Bitru, Kinozal, NNMClub, RuTracker, TorrentBy, Toloka, Korsars, Ultradox
# Step 1: Discover all pages and build task list (run once or periodically)
curl "http://127.0.0.1:9117/cron/kinozal/updatetasksparse"
# Step 2: Parse all discovered tasks (can take a long time)
curl "http://127.0.0.1:9117/cron/kinozal/parsealltask"
curl "http://127.0.0.1:9117/cron/kinozal/parsealltask?force=true" # ignore "updated today" flag
# Or: Parse only the latest N pages (quick daily update)
curl "http://127.0.0.1:9117/cron/kinozal/parselatest" # default pages=100 (kinozal); others default pages=5
curl "http://127.0.0.1:9117/cron/kinozal/parselatest?pages=10"
# Fallback: parse a single known page
curl "http://127.0.0.1:9117/cron/kinozal/parse?page=0"

Task state is persisted — an interrupted parsealltask resumes from where it stopped.
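For a full rebuild, the discover and drain steps can be chained across every task-capable tracker. A sketch (sequential on purpose; running all trackers at once multiplies RAM and rate-limit pressure):

```bash
#!/bin/sh
# Rebuild task lists, then drain them, one tracker at a time
for t in rutor selezen bitru kinozal nnmclub rutracker torrentby toloka korsars ultradox; do
  curl -s "http://127.0.0.1:9117/cron/$t/updatetasksparse" >/dev/null
  curl -s "http://127.0.0.1:9117/cron/$t/parsealltask" >/dev/null
done
```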
Tracker: Anifilm only
# Incremental: only new/updated since last run (default)
curl "http://127.0.0.1:9117/cron/anifilm/parse"
curl "http://127.0.0.1:9117/cron/anifilm/parse?fullparse=false"
# Full re-parse: all pages
curl "http://127.0.0.1:9117/cron/anifilm/parse?fullparse=true"

GET /cron/rutor/parse
page=N (default 0) — parse page N across all 11 categories
GET /cron/rutor/updatetasksparse — discover all pages per category
GET /cron/rutor/parsealltask — parse all discovered tasks
GET /cron/rutor/parselatest
pages=N (default 5) — parse latest N pages per category
Parses all 11 categories, including movies, music, serials, documentaries, cartoons, anime, sport, and Ukrainian content.
Without parameters, parse fetches one page (page 0) from all categories simultaneously.
GET /cron/megapeer/parse
maxpage=N (default 1) — parse up to N pages
GET /cron/anidub/parse
parseFrom=N (default 0) — page range start
parseTo=M (default 0) — page range end
Without parameters: parses only page 0.
GET /cron/aniliberty/parse
parseFrom=N (default 0) — page range start
parseTo=M (default 0) — page range end
Response includes: { ..., "lastPage": 42 }
Without parameters: parses only page 0.
GET /cron/animelayer/parse
maxpage=N (default 1)
GET /cron/anistar/parse
limit_page=N (default 0 = parse all pages)
limitPage=N (alias)
GET /cron/anifilm/parse
fullparse=false (default) — only new/updated since last run
fullparse=true — re-parse all pages
GET /cron/baibako/parse
maxpage=N (default 10)
GET /cron/bitru/parse
page=N (default 1) — parse single page
GET /cron/bitru/updatetasksparse — discover all category pages
GET /cron/bitru/parsealltask — parse all discovered tasks
GET /cron/bitru/parselatest
pages=N (default 5) — parse latest N pages only
GET /cron/bitruapi/parse
limit=N (default 100) — number of recent items to fetch via API
GET /cron/bitruapi/parsefromdate
lastnewtor=YYYY-MM-DD — fetch items newer than this date
limit=N (default 100)
GET /cron/kinozal/parse
page=N (default 0) — parse single page
GET /cron/kinozal/updatetasksparse
GET /cron/kinozal/parsealltask
force=true — re-parse all tasks even if marked updated today
GET /cron/kinozal/parselatest
pages=N (default 100, max 100) — scan unfiltered pages 0..N-1 per category, no year filter
GET /cron/knaben/parse
from=N (default 0) — offset in results
size=N (default 300) — results per page
pages=N (default 1) — number of pages to fetch
query=string — search query
hours=N (0 = ignore time filter) — only items from last N hours
orderBy=string (default "date") — sort order
categories=a,b,c — comma-separated category filters
GET /cron/leproduction/parse
limit_page=N (default 0 = parse all pages)
GET /cron/lostfilm/parse — parse main catalog (latest releases)
GET /cron/lostfilm/parsepages
pageFrom=N (default 1)
pageTo=N (default 1)
GET /cron/lostfilm/parseseasonpacks
series=SeriesName — parse all season packs for a specific series
GET /cron/lostfilm/verifypage
series=SeriesName — verify parsed data for a series
GET /cron/lostfilm/stats — Lostfilm-specific stats
GET /cron/mazepa/parse — no parameters, parses current page
GET /cron/nnmclub/parse
page=N (default 0)
GET /cron/nnmclub/updatetasksparse
GET /cron/nnmclub/parsealltask
GET /cron/nnmclub/parselatest
pages=N (default 5)
GET /cron/rutracker/parse
page=N (default 0)
GET /cron/rutracker/updatetasksparse
GET /cron/rutracker/parsealltask
GET /cron/rutracker/parselatest
pages=N (default 5)
GET /cron/korsars/parse
page=N (default 0)
GET /cron/korsars/updatetasksparse
GET /cron/korsars/parsealltask
GET /cron/korsars/parselatest
pages=N (default 5)
phpBB-mod tracker (films / series / cartoons), 24 forum IDs hardcoded. Login required (Korsars.login.u/p); magnets are inline in the listing so no extra dl.php round-trip per torrent.
GET /cron/ultradox/parse
page=N (default 0)
GET /cron/ultradox/updatetasksparse
GET /cron/ultradox/parsealltask
GET /cron/ultradox/parselatest
pages=N (default 5)
Listing-then-detail tracker, no login. Six sections: serial-hd, hd, rufilm, camrip, webrips, anime. Listing rows expose placeholder magnets with empty btih, so the parser follows each title link to the detail page where every quality variant has a full info-hash. One torrent record is stored per quality variant. Sid/pir are placeholder values (1) — the site doesn't expose peer counts. Upstream's TLS certificate is expired, so the default config carries insecureSkipVerify: true.
GET /cron/selezen/parse
parseFrom=N (default 0) — page range start
parseTo=M (default 0) — page range end
GET /cron/selezen/updatetasksparse — discover all pages
GET /cron/selezen/parsealltask — parse all discovered tasks
GET /cron/selezen/parselatest
pages=N (default 5) — parse latest N pages only
Without parameters, parse fetches only page 1.
GET /cron/toloka/parse
page=N (default 0)
GET /cron/toloka/updatetasksparse
GET /cron/toloka/parsealltask
GET /cron/toloka/parselatest
pages=N (default 5)
GET /cron/torrentby/parse
page=N (default 0)
GET /cron/torrentby/updatetasksparse
GET /cron/torrentby/parsealltask
GET /cron/torrentby/parselatest
pages=N (default 5)
Full-text torrent search.
Query Parameters:
| Parameter | Aliases | Type | Description |
|---|---|---|---|
| `search` | `q` | string | Full-text search in title and name fields |
| `altname` | `altName` | string | Search in original/alternative name |
| `exact` | — | bool | Exact match instead of fuzzy |
| `type` | — | string | Content type: фильм, сериал, аниме, музыка, etc. |
| `tracker` | `trackerName` | string | Filter by tracker name (e.g. Kinozal) |
| `voice` | `voices` | string | Filter by dubbing studio or voice |
| `videotype` | `videoType` | string | Video format filter (e.g. hdr, sdr) |
| `relased` | `released` | int | Release year (e.g. 2023) |
| `quality` | — | int | Quality code: 480, 720, 1080, 2160 |
| `season` | — | int | Season number |
| `sort` | — | string | Sort order: date, size, sid |
# Basic search
curl "http://127.0.0.1:9117/api/v1.0/torrents?search=Interstellar"
# Filter by tracker and quality
curl "http://127.0.0.1:9117/api/v1.0/torrents?search=Dune&tracker=Kinozal&quality=1080"
# Anime by season
curl "http://127.0.0.1:9117/api/v1.0/torrents?search=Naruto&type=аниме&season=5"
# Exact title match
curl "http://127.0.0.1:9117/api/v1.0/torrents?search=Inception&exact=true"

Response:
[
{
"tracker": "Kinozal",
"url": "https://kinozal.tv/details.php?id=123456",
"title": "Дюна / Dune (2021) BDRip 1080p",
"size": 15032385536,
"sizeName": "14.0 GB",
"createTime": "2021-11-15 12:30:00",
"updateTime": "2021-11-15 12:30:00",
"sid": 350,
"pir": 12,
"magnet": "magnet:?xt=urn:btih:...",
"name": "дюна",
"originalname": "dune",
"relased": 2021,
"videotype": "sdr",
"quality": 1080,
"voices": "Дублированный",
"seasons": "",
"types": ["фильм", "зарубежный"]
}
]

Paginated listing of quality metadata.
| Parameter | Default | Description |
|---|---|---|
| `name` | — | Filter by name |
| `originalname` / `originalName` | — | Filter by original name |
| `type` | — | Filter by type |
| `page` | 1 | Page number |
| `take` | 1000 | Items per page |
Jackett-compatible API for use with Sonarr, Radarr, etc.
| Parameter | Aliases | Description |
|---|---|---|
| `query` | `q` | Search query |
| `title` | — | Title to match |
| `title_original` | — | Original title |
| `year` | — | Release year |
| `is_serial` | — | 1 for series, 0 for movies |
| `category` | — | Category prefix: mov_, tv_, anime_, etc. |
| `apikey` | `apiKey` | API key (if configured) |
curl "http://127.0.0.1:9117/api/v2.0/indexers/jacred/results?q=Dune&year=2021"

Response: { "Results": [...], "jacred": true }
Validate API key.
curl "http://127.0.0.1:9117/api/v1.0/conf?apikey=your-key"
# Response: {"apikey": true}

Stats endpoints are open (no API key required) when openstats: true.
Refresh Data/temp/stats.json
Returns pre-computed stats from Data/temp/stats.json (rebuilt every timeStatsUpdate minutes).
| Parameter | Aliases | Default | Description |
|---|---|---|---|
| `trackerName` | — | — | If set: compute on-demand for this tracker |
| `newtoday` | `newToday` | 0 | 1 = only torrents added today |
| `updatedtoday` | `updatedToday` | 0 | 1 = only torrents updated today |
| `limit` | `take` | 200 | Max items to return |
# Full stats (from cache)
curl "http://127.0.0.1:9117/stats/torrents"
# Today's new torrents from Kinozal
curl "http://127.0.0.1:9117/stats/torrents?trackerName=Kinozal&newtoday=1"

Per-tracker statistics summary.
| Parameter | Default | Description |
|---|---|---|
| `newtoday` / `newToday` | 0 | Filter to today's new torrents |
| `updatedtoday` / `updatedToday` | 0 | Filter to today's updated torrents |
| `limit` / `take` | 200 | Max items |
curl "http://127.0.0.1:9117/stats/trackers"
curl "http://127.0.0.1:9117/stats/trackers?newtoday=1&limit=50"

Stats for a specific tracker.
curl "http://127.0.0.1:9117/stats/trackers/Rutor"
curl "http://127.0.0.1:9117/stats/trackers/Rutor?newtoday=1"

Torrents added today from the specified tracker.
curl "http://127.0.0.1:9117/stats/trackers/Kinozal/new?limit=100"

Torrents updated today from the specified tracker.
curl "http://127.0.0.1:9117/stats/trackers/Kinozal/updated"

Multi-instance synchronization. Enabled by opensync: true in config.
Discovery endpoint — check protocol version.
{ "fbd": true, "spidr": true, "version": 2 }

List database bucket keys (low-level, for replication).
| Parameter | Default | Description |
|---|---|---|
| `key` | — | Substring filter on bucket key (e.g. matrix) |
| `limit` / `take` | 20 | Max entries to return |
curl "http://127.0.0.1:9117/sync/fdb?key=matrix&limit=5"

Incremental sync — returns torrents modified after a timestamp.
| Parameter | Aliases | Required | Description |
|---|---|---|---|
| `time` | `fileTime` | Yes | Return only buckets with fileTime > this value (Windows FILETIME format) |
| `start` | `startTime` | No | Return only torrents with updateTime > this value |
| `spidr` | — | No | true = return only url/sid/pir metadata (lighter payload) |
| `take` | `limit` | No | Batch size (default 2000) |
# Initial sync (time=0 = get everything)
curl "http://127.0.0.1:9117/sync/fdb/torrents?time=0&take=2000"
# Incremental sync using fileTime from previous response
curl "http://127.0.0.1:9117/sync/fdb/torrents?time=133476543210000000"
# Spider mode (metadata only, faster)
curl "http://127.0.0.1:9117/sync/fdb/torrents?time=0&spidr=true"

Response:
{
"nextread": true,
"countread": 2000,
"take": 2000,
"collections": [
{
"Key": "матрица:the matrix",
"Value": {
"time": "2024-01-15 10:30:45",
"fileTime": 133476543210000000,
"torrents": {
"https://kinozal.tv/details.php?id=123": { ... }
}
}
}
]
}

When nextread: true, call again with the last received fileTime to get more data.
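A minimal V2 sync client is a loop over this endpoint. A sketch in shell; that the cursor for the next call is the largest fileTime seen in the returned collections follows the note above:

```bash
#!/bin/sh
# Page through /sync/fdb/torrents until nextread turns false
time=0
while :; do
  resp=$(curl -s "http://127.0.0.1:9117/sync/fdb/torrents?time=$time&take=2000")
  printf '%s' "$resp" | jq -c '.collections[]' >> collections.jsonl
  [ "$(printf '%s' "$resp" | jq -r '.nextread')" = "true" ] || break
  time=$(printf '%s' "$resp" | jq -r '[.collections[].Value.fileTime] | max')
done
```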
Enabled with opensync_v1: true.
| Parameter | Aliases | Required | Description |
|---|---|---|---|
| `time` | `fileTime` | Yes | Timestamp filter |
| `trackerName` | `tracker` | No | Filter by tracker |
| `take` | `limit` | No | Batch size (default 2000) |
curl "http://127.0.0.1:9117/sync/torrents?time=0&trackerName=Rutor"

Response: flat array of { "key": "name:originalname", "value": {...} }
Manually flush in-memory database to disk (Data/masterDb.bz).
curl "http://127.0.0.1:9117/jsondb/save"
# "work" — already saving
# "ok" — saved successfully

A daily backup Data/masterDb_DD-MM-YYYY.bz is created on each save. Backups older than 3 days are auto-deleted.
Available from local IP only (127.0.0.1, ::1, fe80::/10 link-local, fc00::/7 ULA, IPv4-mapped IPv6).
Scans all buckets for corrupted entries.
| Parameter | Aliases | Default | Description |
|---|---|---|---|
| `sampleSize` | `sample`, `limit` | 20 | Max examples per issue type |
curl "http://127.0.0.1:9117/dev/findcorrupt"
curl "http://127.0.0.1:9117/dev/findcorrupt?sampleSize=50"

Response:
{
"ok": true,
"totalFdbKeys": 12500,
"totalTorrents": 480000,
"corrupt": {
"nullValue": { "count": 0, "sample": [] },
"missingName": { "count": 12, "sample": [...] },
"missingOriginalname": { "count": 3, "sample": [...] },
"missingTrackerName": { "count": 0, "sample": [] }
}
}

Deletes entries where the torrent value is null.
curl "http://127.0.0.1:9117/dev/removeNullValues"
# Response: { "ok": true, "removed": 5, "files": 3 }

Finds bucket keys where name == originalname (potential duplicates).
| Parameter | Aliases | Default | Description |
|---|---|---|---|
| `tracker` | `trackerName` | — | Filter: only keys containing this tracker's torrents |
| `excludeNumeric` | — | true | Exclude purely numeric keys |
curl "http://127.0.0.1:9117/dev/findDuplicateKeys"
curl "http://127.0.0.1:9117/dev/findDuplicateKeys?tracker=Kinozal&excludeNumeric=false"

Finds torrents with empty _sn (search name) or _so (search original) fields.
| Parameter | Aliases | Default | Description |
|---|---|---|---|
| `sampleSize` | `sample`, `limit` | 20 | Max examples per category |
curl "http://127.0.0.1:9117/dev/findEmptySearchFields"

Recomputes quality, videotype, voices, languages, seasons for all stored torrents.

curl "http://127.0.0.1:9117/dev/updateDetails"

Update search fields for a specific bucket.
| Parameter | Required | Description |
|---|---|---|
| `bucket` | Yes | Bucket key in format name:originalname |
| `fieldName` | No | Field to update |
| `value` | No | New value |
curl "http://127.0.0.1:9117/dev/updateSearchName?bucket=матрица:the+matrix&fieldName=_sn&value=матрица"

Update torrent size in a bucket.
| Parameter | Required | Description |
|---|---|---|
| `bucket` | Yes | Bucket key |
| `value` | Yes | Size with suffix: 500MB, 14.7GB, 1.2TB, or bytes |
curl "http://127.0.0.1:9117/dev/updateSize?bucket=матрица:the+matrix&value=14.7GB"

Resets checkTime to yesterday for all torrents (forces re-check on next parse).

curl "http://127.0.0.1:9117/dev/resetCheckTime"

Normalizes Knaben torrent names/years/titles and migrates them to correct bucket keys.

curl "http://127.0.0.1:9117/dev/fixKnabenNames"

Cleans Bitru torrent titles (strips quality tags, codec info, release groups).

curl "http://127.0.0.1:9117/dev/fixBitruNames"

Auto-populates empty _sn/_so search fields and migrates torrents to correct buckets.

curl "http://127.0.0.1:9117/dev/fixEmptySearchFields"

Appends ?hash=<btih> to Aniliberty URLs using magnet link hashes.

curl "http://127.0.0.1:9117/dev/migrateAnilibertyUrls"

Deduplicates Aniliberty torrents by magnet hash, keeping the most recent.

curl "http://127.0.0.1:9117/dev/removeDuplicateAniliberty"

Normalizes http→https for Animelayer URLs and removes duplicates by hex ID.

curl "http://127.0.0.1:9117/dev/fixAnimelayerDuplicates"

Normalizes http→https for Kinozal URLs and removes duplicates by id=NNN, keeping the most recent.

curl "http://127.0.0.1:9117/dev/fixKinozalUrls"

Normalizes Selezen host (old hosts like open.selezen.org → configured Selezen.Host) and removes duplicates by numeric item ID, keeping the most recent.

curl "http://127.0.0.1:9117/dev/fixSelezenUrls"

Delete an entire bucket (all torrents under a key).
| Parameter | Required | Description |
|---|---|---|
| `key` | Yes | Bucket key in format name:originalname |
| `migrateName` | No | If set: move torrents to this new name instead of deleting |
| `migrateOriginalname` | No | New originalname for migration target |
# Delete bucket
curl "http://127.0.0.1:9117/dev/removeBucket?key=матрица:the+matrix"
# Rename/migrate bucket to new key
curl "http://127.0.0.1:9117/dev/removeBucket?key=матрица:the+matrix&migrateName=the+matrix&migrateOriginalname=the+matrix"

When web: true, three pages are served. The UI assets (HTML, icons, manifest) are embedded into the binary at build time via //go:embed all:wwwroot in server/embed.go — no external wwwroot/ directory is required at runtime. If a wwwroot/ directory exists alongside the binary, individual files in it override the embedded copy (handy for live-editing during development).
| Path | Purpose |
|---|---|
| `/` | index.html — torrent search (Kinopoisk / IMDB / title), filters, magnet links, TorrServer launcher |
| `/stats` | stats.html — per-tracker statistics dashboard (new/updated/checked counts, last run time) |
| `/settings` | settings.html — editor for the full init.yaml config (server, logging, sync, trackers, proxies) |
The Settings page talks to /admin/config (GET to load, POST to save) and is local-only (127.0.0.1 / RFC1918 / link-local / ULA). On save the server writes init.yaml atomically, re-parses it with the same loader used on startup, and calls UpdateConfig to apply new values to the running server, DB, and all 21 parsers — no restart needed.
Returns the current in-memory config as JSON. Local IPs only.
Accepts a full config JSON body, overlays it on top of the current config, writes init.yaml (atomic temp-file + rename), then reloads. Local IPs only.
# Example: dump current config
curl -s "http://127.0.0.1:9117/admin/config" > current.json
# Example: tweak and push back
jq '.log = true' current.json | curl -X POST -H 'Content-Type: application/json' \
--data-binary @- "http://127.0.0.1:9117/admin/config"

Responses: {"ok": true} on success, {"ok": false, "error": "..."} on failure. The JSON writer normalizes to the structure the init.yaml parser expects — existing comments in init.yaml are lost on save (the file is regenerated from the config struct).
Returns the auto-detected CloudFlare domain registry — domains that returned a CF challenge during a standard fetch and were promoted to flaresolverr routing automatically. Local IPs only.
curl -s "http://127.0.0.1:9117/admin/cf-domains"

{
"ok": true,
"count": 2,
"domains": [
{"domain": "megapeer.vip", "detected": "2026-04-30T12:34:56Z", "ageHours": 21},
{"domain": "bitru.org", "detected": "2026-05-01T08:10:00Z", "ageHours": 2}
]
}

Clears one entry (when ?domain= is supplied) or the whole registry (when omitted). Useful when a site stops using CloudFlare. Returns {"ok": true, "removed": N}. Local IPs only.
# Clear a single domain
curl -X DELETE "http://127.0.0.1:9117/admin/cf-domains?domain=megapeer.vip"
# Clear all
curl -X DELETE "http://127.0.0.1:9117/admin/cf-domains"

The init.yaml file is checked for changes every 10 seconds. When the file modification time changes, the config is reloaded automatically — no restart needed. Saves from the Settings page use the same reload path.
Hot-reloadable settings include: API keys, logging flags, sync settings, stats update interval, tracker hosts/cookies/credentials, rate limits.
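Since the watcher keys off the file's modification time, scripted edits are picked up without any API call. A sketch (GNU sed; the flag name matches the config reference above):

```bash
# Flip a hot-reloadable flag in place; the watcher applies it within ~10 s
sed -i 's/^logParsers: false/logParsers: true/' init.yaml
```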
Search endpoints (/api/v1.0/torrents and /api/v2.0/indexers/*/results) cache results in memory with a 5-minute TTL. Cache hits return an X-Cache: HIT header. The cache is keyed by the full query string.
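To observe the cache, compare headers across two identical requests; a sketch using GET with dumped headers (the endpoint may not answer HEAD):

```bash
# The second identical request should print "X-Cache: HIT"
curl -s -D - -o /dev/null "http://127.0.0.1:9117/api/v1.0/torrents?search=Dune" | grep -i '^x-cache'
curl -s -D - -o /dev/null "http://127.0.0.1:9117/api/v1.0/torrents?search=Dune" | grep -i '^x-cache'
```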
When logFdb: true, every bucket change is logged to Data/log/fdb.YYYY-MM-dd.log in JSON Lines format. Each line records the incoming and existing torrent data for the changed entry.
Retention is controlled by logFdbRetentionDays, logFdbMaxSizeMb, and logFdbMaxFiles. Cleanup runs automatically after each masterDb save.
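Because the format is JSON Lines, standard tooling applies directly; for example, watching today's file:

```bash
# Pretty-print FDB changes as they are written (file name pattern from above)
tail -f "Data/log/fdb.$(date +%Y-%m-%d).log" | jq .
```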
Two-level logging control:
- Global gate: `logParsers: true` must be set to enable any parser logging
- Per-tracker: each tracker's `log: true` enables logging for that specific tracker
Both must be true for logs to be written. Log files are stored in Data/log/{tracker}.log.
Typical external crontab (/etc/cron.d/jacred or Data/crontab):
# Lightweight daily update — latest pages only
0 6 * * * root curl -s "http://127.0.0.1:9117/cron/rutor/parselatest?pages=3" >/dev/null
5 6 * * * root curl -s "http://127.0.0.1:9117/cron/kinozal/parselatest?pages=3" >/dev/null
10 6 * * * root curl -s "http://127.0.0.1:9117/cron/rutracker/parselatest?pages=3" >/dev/null
15 6 * * * root curl -s "http://127.0.0.1:9117/cron/nnmclub/parselatest?pages=3" >/dev/null
20 6 * * * root curl -s "http://127.0.0.1:9117/cron/torrentby/parselatest?pages=3" >/dev/null
# Anime trackers — multiple pages
30 6 * * * root curl -s "http://127.0.0.1:9117/cron/animelayer/parse?maxpage=3" >/dev/null
35 6 * * * root curl -s "http://127.0.0.1:9117/cron/anidub/parse?parseFrom=1&parseTo=3" >/dev/null
40 6 * * * root curl -s "http://127.0.0.1:9117/cron/anistar/parse?limit_page=3" >/dev/null
45 6 * * * root curl -s "http://127.0.0.1:9117/cron/anifilm/parse" >/dev/null
# Weekly full re-parse (task-based trackers)
0 2 * * 0 root curl -s "http://127.0.0.1:9117/cron/kinozal/updatetasksparse" >/dev/null
30 2 * * 0 root curl -s "http://127.0.0.1:9117/cron/kinozal/parsealltask" >/dev/null
# Force DB save after heavy parse
0 8 * * * root curl -s "http://127.0.0.1:9117/jsondb/save" >/dev/null

Data/
masterDb.bz # Gzipped JSON index: key → {fileTime, updateTime, path}
masterDb_DD-MM-YYYY.bz # Daily backups (auto-created, kept 3 days)
fdb/
ab/cdef012... # Bucket files (gzipped JSON): url → torrent object
temp/
stats.json # Pre-computed stats cache
log/
YYYY-MM-DD.log # Application log (when log: true)
kinozal.log # Per-tracker add/update/skip/fail logs
fdb.YYYY-MM-dd.log # FDB audit log: JSON Lines per bucket change
{tracker}_tasks.json # Task state for incremental parsers
Each bucket file name is derived from the MD5 of the bucket key (name:originalname), split into 2-character prefix directories (nesting depth set by fdbPathLevels).
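A sketch for locating a bucket file by hand. The exact split is an assumption based on the tree above (one 2-character prefix directory, file named by the remaining hash); adjust if your fdbPathLevels differs:

```bash
#!/bin/bash
# Guess the on-disk path for a bucket key (assumed layout, see note above)
key='матрица:the matrix'
h=$(printf '%s' "$key" | md5sum | awk '{print $1}')
echo "Data/fdb/${h:0:2}/${h:2}"
```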
| Field | Type | Description |
|---|---|---|
| `url` | string | Tracker page URL (primary key within bucket) |
| `title` | string | Full torrent title as shown on tracker |
| `name` | string | Normalized Russian/English name |
| `originalname` | string | Original (usually English) name |
| `trackerName` | string | Source tracker name |
| `relased` | int | Release year |
| `size` | int64 | Size in bytes |
| `sizeName` | string | Human-readable size (e.g. 14.0 GB) |
| `sid` | int | Seeders |
| `pir` | int | Peers/leechers |
| `magnet` | string | Magnet link |
| `btih` | string | Info hash |
| `quality` | int | 480, 720, 1080, 2160 |
| `videotype` | string | sdr, hdr |
| `voices` | string | Dubbing studios |
| `languages` | string | rus, ukr |
| `seasons` | string | Season list, e.g. 1-5 or 1,3,7 |
| `types` | []string | Content type tags |
| `createTime` | string | First seen timestamp |
| `updateTime` | string | Last updated timestamp |
| `checkTime` | string | Last checked/parsed timestamp |
| `_sn` | string | Search-normalized name |
| `_so` | string | Search-normalized original name |
GET /health → {"status": "OK"}
GET /version → {"version": "...", "gitSha": "...", "gitBranch": "...", "buildDate": "..."}
GET /lastupdatedb → last database write timestamp
GET / → index.html (if web: true)
GET /stats → stats.html (if web: true)
GET /settings → settings.html (if web: true)
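A liveness probe is a one-liner for cron or a watchdog; a sketch (the jacred systemd unit name is an assumption):

```bash
# Restart the service when /health stops answering
curl -fsS --max-time 5 http://127.0.0.1:9117/health >/dev/null || systemctl restart jacred
```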