NeuroMiner OBS Plugin is an OS-level input capture plugin for OBS Studio, designed for world-model game data collection. It records global keyboard/mouse input and aligns it with OBS video frames, producing synchronized multimodal training data.
For each OBS recording session, the plugin creates a UUID folder and writes:
- `recording.<ext>`: the original recorded video, moved into the session folder
- `actions.jsonl`: per-frame input snapshots synchronized to OBS frame timing
- `events.jsonl`: raw sub-frame input events with microsecond session offsets
- `metadata.json`: session metadata (game name, platform, fps, duration, bias)
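As a sketch of how a downstream consumer might load one session folder (the helper name `load_session` is hypothetical; the file names match the layout above):

```python
import json
from pathlib import Path

def load_session(session_dir: str) -> dict:
    """Load a NeuroMiner session folder into memory.

    Reads metadata.json plus the two JSONL streams (one JSON object
    per line) written alongside the recording.
    """
    root = Path(session_dir)
    metadata = json.loads((root / "metadata.json").read_text())
    actions = [json.loads(line)
               for line in (root / "actions.jsonl").read_text().splitlines()
               if line.strip()]
    events = [json.loads(line)
              for line in (root / "events.jsonl").read_text().splitlines()
              if line.strip()]
    return {"metadata": metadata, "actions": actions, "events": events}
```

The video file itself is typically handed to a separate decoder (ffmpeg, PyAV, etc.) keyed on the frame indices in `actions.jsonl`.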
The plugin starts/stops data collection automatically with OBS recording start/stop events.
This project targets game interaction dataset generation for world-model and behavior model training. The data format is built to support:
- frame-level action conditioning (`actions.jsonl`)
- sub-frame event reconstruction (`events.jsonl`)
- latency compensation and resampling in post-processing scripts
| Platform | Status |
|---|---|
| macOS | Supported |
| Windows x64 | Supported |
| Linux | Future |
Download the `macos-universal` artifact, extract it, then run the `.pkg` installer.
Download the `windows-x64` artifact, extract it, then run the generated
`*-windows-x64-setup.exe` installer. The installer auto-detects OBS installation
paths and pre-fills the OBS plugin directory (`...\obs-studio\obs-plugins`).
- Open OBS and find the `NeuroMiner Input Recorder` dock.
- Set a game name (used in `metadata.json`).
- Keep `Enable Input Capture` enabled.
- Start OBS recording.
- Interact with the game normally.
- Stop OBS recording.
- Locate the generated UUID session folder next to your recording output.
```
cmake --preset macos
cmake --build --preset macos
```

```
cmake --preset windows-x64
cmake --build --preset windows-x64 --config RelWithDebInfo --parallel
```

Quick processing commands:

```
cd scripts
uv run python estimate_latency.py /path/to/<UUID>/
uv run python resample_events.py /path/to/<UUID>/
```

For full options and detailed script behavior, see `scripts/README.md`.
Each recording session is written to one UUID directory.
- OBS output video file moved into the session directory.
- Extension depends on your OBS recording format (for example, `.mp4`).
- `stream_name` (string): session UUID.
- `game_name` (string): value set in the plugin dock.
- `recorder_version` (string): plugin version.
- `platform` (`"mac" | "windows" | "linux"`): runtime platform tag.
- `video_meta` (object):
  - `width` (int)
  - `height` (int)
  - `fps` (number)
  - `total_frames` (int)
  - `duration_ms` (int)
- `session_start_timestamp_ms` (int): Unix timestamp in milliseconds.
- `input_latency_bias_ms` (number): bias used for post-processing alignment.
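The fields above are internally consistent, which makes a quick integrity check possible before processing. A minimal sketch (the metadata values and the helper `sanity_check` are hypothetical examples, not taken from a real session):

```python
# Hypothetical metadata.json contents following the schema above.
metadata = {
    "stream_name": "123e4567-e89b-12d3-a456-426614174000",
    "game_name": "demo-game",
    "recorder_version": "0.1.0",
    "platform": "mac",
    "video_meta": {"width": 1920, "height": 1080, "fps": 60.0,
                   "total_frames": 3600, "duration_ms": 60000},
    "session_start_timestamp_ms": 1700000000000,
    "input_latency_bias_ms": -12.5,
}

def sanity_check(meta: dict) -> bool:
    """Cross-check duration_ms against total_frames / fps.

    Allows up to one frame period of drift.
    """
    vm = meta["video_meta"]
    expected_ms = vm["total_frames"] * 1000.0 / vm["fps"]
    return abs(expected_ms - vm["duration_ms"]) < 1000.0 / vm["fps"]
```

Sessions that fail such a check (dropped frames, aborted recordings) are usually worth excluding from a training set.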
One JSON object per video frame. Each line contains:
- `frame` (int): frame index.
- `timestamp_ms` (int): wall-clock timestamp when the frame snapshot was recorded.
- `frame_pts_ms` (number): deterministic frame PTS (`frame * 1000 / fps`).
- `capture_ns` (int): OBS compositor timestamp in nanoseconds.
- `key` (string[]): currently pressed keys at this frame.
- `mouse` (object):
  - `dx`, `dy` (int): accumulated mouse movement since the previous frame.
  - `x`, `y` (int): latest absolute mouse position.
  - `scroll_dy` (int): accumulated scroll delta since the previous frame.
  - `button` (string[]): currently pressed mouse buttons.
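A hypothetical `actions.jsonl` line illustrating the schema, with the deterministic PTS relation checked in code (the concrete values here are invented for illustration):

```python
import json

fps = 60.0  # taken from metadata.json -> video_meta.fps

# Hypothetical actions.jsonl line following the schema above.
line = ('{"frame": 120, "timestamp_ms": 1700000002005, "frame_pts_ms": 2000.0, '
        '"capture_ns": 2000000000, "key": ["w", "shift"], '
        '"mouse": {"dx": 14, "dy": -3, "x": 960, "y": 540, '
        '"scroll_dy": 0, "button": ["left"]}}')
snap = json.loads(line)

# frame_pts_ms is deterministic: frame * 1000 / fps
assert snap["frame_pts_ms"] == snap["frame"] * 1000 / fps
```

Note the distinction between `timestamp_ms` (wall clock, subject to jitter) and `frame_pts_ms` (purely a function of the frame index and fps).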
Raw sub-frame input event stream. Each line includes:
- `type` (string): one of:
  - `key_down`, `key_up`
  - `mouse_move`
  - `mouse_button_down`, `mouse_button_up`
  - `scroll`
  - `flags_changed` (macOS only)
- `timestamp_ms` (int): wall-clock event timestamp.
- `session_offset_us` (int): microseconds from session start.
- Optional fields by event type:
  - `key` (string) for keyboard events
  - `button` (string) for mouse button events
  - `dx`, `dy`, `x`, `y` for mouse movement
  - `scroll_dy` for scroll events
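Because `session_offset_us` is relative to session start, a sub-frame timeline can be rebuilt without touching wall-clock timestamps. A small sketch with invented event values:

```python
import json

# Hypothetical events.jsonl lines following the schema above.
lines = [
    '{"type": "key_down", "timestamp_ms": 1700000001500, '
    '"session_offset_us": 1500000, "key": "w"}',
    '{"type": "mouse_move", "timestamp_ms": 1700000001503, '
    '"session_offset_us": 1503210, "dx": 4, "dy": -1, "x": 960, "y": 540}',
    '{"type": "key_up", "timestamp_ms": 1700000001750, '
    '"session_offset_us": 1750000, "key": "w"}',
]
events = [json.loads(line) for line in lines]

# Recover a millisecond-resolution timeline from the microsecond offsets.
timeline_ms = [e["session_offset_us"] / 1000.0 for e in events]
```

The microsecond offsets are what make sub-frame reconstruction possible; `timestamp_ms` alone would collapse events that land within the same millisecond.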
- Produced by `scripts/resample_events.py`.
- Reconstructed high-precision per-frame actions derived from `events.jsonl` and `input_latency_bias_ms`.
- Recommended output for training ingestion after latency compensation.
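The core idea behind this step can be sketched as: shift each event by the latency bias, then bucket it into the video frame it falls inside. This is a simplified illustration with a hypothetical `bucket_events` helper, not the actual logic of `scripts/resample_events.py` (see `scripts/README.md` for that):

```python
def bucket_events(events: list, fps: float, bias_ms: float) -> dict:
    """Assign each raw event to a video frame after latency compensation.

    events: dicts with a session_offset_us field (as in events.jsonl)
    bias_ms: latency bias, e.g. input_latency_bias_ms from metadata.json
    Returns {frame_index: [events...]}.
    """
    frame_ms = 1000.0 / fps
    frames: dict = {}
    for e in events:
        # Shift the event by the estimated input latency bias.
        t_ms = e["session_offset_us"] / 1000.0 + bias_ms
        if t_ms < 0:
            continue  # event precedes the first frame after compensation
        idx = int(t_ms // frame_ms)
        frames.setdefault(idx, []).append(e)
    return frames
```

A negative bias pulls events earlier, modeling input that reached the game before the compensated frame was composited.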
- Input capture is global while a recording session is active.
- Do not record sensitive input (passwords, private chats, credentials).
- Use test accounts and controlled environments for dataset collection.
Licensed under GPL-2.0-or-later. See LICENSE.