This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
FlashForge Python API is a comprehensive Python library for controlling FlashForge 3D printers. The library provides dual-protocol support:
- HTTP API: Modern REST-like API for Adventurer 5M/5X series printers
- TCP/G-code API: Legacy protocol supporting all networked FlashForge printers
The architecture is fully async/await throughout and uses Pydantic for type-safe data models.
The library has a layered client architecture:

- `flashforge.client.FlashForgeClient` (main unified client at `flashforge/client.py`) - The primary user-facing API that orchestrates both HTTP and TCP communication
  - Manages the HTTP session (aiohttp) for modern API endpoints
  - Contains a `tcp_client` instance for legacy operations
  - Provides 5 control modules:
    - `control`: Movement, LED, filtration, camera operations
    - `job_control`: Print job management
    - `info`: Status and machine information
    - `files`: File operations (upload/download/list)
    - `temp_control`: Temperature settings
  - Automatically detects printer capabilities (`is_ad5x`, `is_pro`) based on model
- `flashforge.tcp.ff_client.FlashForgeClient` (TCP high-level client) - Extends `FlashForgeTcpClient`
  - Implements G-code/M-code command workflows
  - Used internally by the main client's TCP operations
  - Contains a `GCodeController` instance for command execution
- `flashforge.tcp.tcp_client.FlashForgeTcpClient` (TCP low-level client) - Base TCP communication layer managing socket connections
  - Handles raw command sending/receiving
  - Maintains keep-alive connections
  - Default port: 8899, timeout: 5.0s
```
flashforge/
├── client.py                  # Main FlashForgeClient (HTTP + TCP orchestrator)
├── discovery/                 # UDP-based printer discovery
│   └── discovery.py           # FlashForgePrinterDiscovery
├── tcp/                       # TCP/G-code protocol implementation
│   ├── tcp_client.py          # Low-level TCP socket management
│   ├── ff_client.py           # High-level G-code client
│   ├── gcode/                 # G-code command definitions and controller
│   │   ├── gcodes.py          # GCodes enum with all commands
│   │   └── gcode_controller.py # GCodeController for executing commands
│   └── parsers/               # Response parsers for TCP commands
│       ├── temp_info.py       # M105 temperature parsing
│       ├── printer_info.py    # M115 printer info parsing
│       ├── thumbnail_info.py  # M662 thumbnail extraction
│       ├── endstop_status.py  # M119 endstop parsing
│       ├── location_info.py   # M114 position parsing
│       └── print_status.py    # M27 print progress parsing
├── api/                       # HTTP API implementation
│   ├── constants/             # Command and endpoint definitions
│   │   ├── commands.py        # Commands enum
│   │   └── endpoints.py       # Endpoints class
│   ├── controls/              # Control modules (used by main client)
│   │   ├── control.py         # Control class
│   │   ├── job_control.py     # JobControl class
│   │   ├── info.py            # Info class
│   │   ├── files.py           # Files class (named 'files' for user API)
│   │   └── temp_control.py    # TempControl class
│   ├── network/               # Network utilities
│   │   ├── utils.py           # NetworkUtils for HTTP requests
│   │   └── fnet_code.py       # FNetCode for authentication
│   ├── filament/              # Filament handling
│   └── misc/                  # Utilities (temperature, scientific notation)
└── models/                    # Pydantic models for API responses
    ├── responses.py           # All HTTP response models
    └── machine_info.py        # Machine state and info models
```
**Dual Protocol Strategy**: HTTP is used for high-level operations (printer status, file listing, job control commands) while TCP/G-code is used for real-time operations (temperature monitoring via `M105`, print progress via `M27`, thumbnails via `M662`).
**Model Detection**: The client sets the `_is_ad5x` flag by checking the printer name for "5M" or "5X", which enables/disables certain API features (LED control, camera, filtration).
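As a minimal sketch of this name-based check (the helper name is hypothetical, not the library's actual implementation):

```python
def looks_like_ad5x(printer_name: str) -> bool:
    """Hypothetical helper mirroring the documented name-based check."""
    name = printer_name.upper()
    # "5M" covers Adventurer 5M / 5M Pro; "5X" covers the AD5X
    return "5M" in name or "5X" in name
```

For example, `looks_like_ad5x("Adventurer 5M Pro")` matches, while an Adventurer 3 would not.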
**Parser Pattern**: TCP responses are parsed by specialized parser classes in `tcp/parsers/` that extract structured data from text responses (e.g., `M105` returns text like `T0:25/0 T1:25/0 B:25/0`, which `TempInfo` parses).
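The response shape above can be decomposed with a small regex; this is only a sketch of the pattern, not the library's `TempInfo` implementation in `tcp/parsers/temp_info.py`:

```python
import re
from typing import Dict, Tuple

def parse_m105(line: str) -> Dict[str, Tuple[float, float]]:
    """Sketch of an M105 parser: maps each heater token like "T0:25/0"
    to a (current, target) temperature pair."""
    temps = {}
    for name, current, target in re.findall(r"(\w+):([\d.]+)\s*/\s*([\d.]+)", line):
        temps[name] = (float(current), float(target))
    return temps
```

For example, `parse_m105("T0:25/0 T1:25/0 B:25/0")` yields current/target pairs for `T0`, `T1`, and `B`.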
```bash
# Create virtual environment
python -m venv .venv

# Activate virtual environment
# Windows:
.venv\Scripts\activate
# Linux/Mac:
source .venv/bin/activate

# Install development dependencies
pip install -e ".[dev]"

# Or install all optional dependencies:
pip install -e ".[all]"
```

```bash
# Run all tests
pytest

# Run tests with verbose output
pytest -v

# Run tests with coverage
pytest --cov=flashforge --cov-report=html --cov-report=term

# Run specific test file
pytest tests/test_parsers.py

# Run tests matching pattern
pytest -k "test_temp"

# Skip slow/integration tests
pytest -m "not slow and not integration"

# Run only network tests
pytest -m network

# Alternative: Use the test runner script
python tests/run_tests.py
```

```bash
# Format code with Black (line length: 100)
black flashforge/ tests/

# Lint with Ruff
ruff check flashforge/ tests/

# Type check with mypy (strict mode enabled)
mypy flashforge/

# Run all pre-commit hooks
pre-commit run --all-files
```

IMPORTANT: Releases are managed through the GitHub Actions workflow, not manual PyPI uploads.
- **Prepare the release locally:**

  ```bash
  # Update version in pyproject.toml (e.g., 1.0.2 -> 1.0.3)
  # Update CHANGELOG.md with new version section and changes
  git add pyproject.toml CHANGELOG.md
  git commit -m "chore: bump version to X.Y.Z"
  git push
  ```

- **Trigger the GitHub Actions workflow:**
  - Go to: https://github.com/GhostTypes/ff-5mp-api-py/actions
  - Click the "Publish Release" workflow
  - Click the "Run workflow" button
  - Enter the version number (e.g., `1.0.3`)
  - Click the green "Run workflow" button

- **The workflow automatically:**
  - Validates the version format (X.Y.Z)
  - Verifies the version in `pyproject.toml` matches the input
  - Checks the tag doesn't already exist
  - Creates and pushes git tag `vX.Y.Z`
  - Builds the package with Hatchling
  - Verifies the build with `twine check`
  - Creates a GitHub Release with an auto-generated changelog
  - Publishes to PyPI using the `PYPI_API_TOKEN` secret

Publishing uses GitHub Secrets (not `.pypirc`):

- Secret name: `PYPI_API_TOKEN`
- Location: Repository Settings → Secrets and variables → Actions
- Format: PyPI API token (starts with `pypi-`)
- Fallback: the workflow gracefully skips the PyPI upload if the secret is not configured
```bash
# Clean previous builds
rm -rf dist/ build/ *.egg-info

# Build package
python -m build

# Check distribution
twine check dist/*

# DO NOT manually upload to PyPI - use the workflow instead
```

Version Management:

- Current version: 1.0.2 (as of 2025-12-26)
- Package name: `flashforge-python-api`
- PyPI: https://pypi.org/project/flashforge-python-api/
- Build system: Hatchling (defined in `pyproject.toml`)
- **Never manually upload to PyPI** - always use the GitHub Actions workflow
- **Always update CHANGELOG.md before releasing** - the workflow doesn't auto-generate it
- **Version must match** between `pyproject.toml` and the workflow input, or the run will fail
- **Tags are permanent** - the workflow prevents duplicate tags
- **Linear history required** - the workflow creates tags on the current HEAD and makes no version bump commits
Test organization:

- Unit tests: `test_parsers.py`, `test_utility_classes.py` - test individual components
- Integration tests: `test_ad5x_live_integration.py`, `test_5m_pro_live_integration.py` - require an actual printer
- Component tests: `test_client.py`, `test_control.py`, etc. - test control modules

Pytest configuration:

- pytest config in `pyproject.toml` under `[tool.pytest.ini_options]`
- Markers: `slow`, `integration`, `network`
- Async mode: `auto` (pytest-asyncio)
- Test fixtures in `tests/fixtures/` and `tests/conftest.py`
- Printer configuration: `tests/printer_config.py` (for live tests)
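Given the markers and async mode listed above, the pytest section likely has roughly this shape (an illustrative sketch only; the marker descriptions are assumptions, so consult `pyproject.toml` for the actual values):

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
markers = [
    "slow: long-running tests",
    "integration: requires a real printer",
    "network: requires network access",
]
```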
Live integration tests require:

- A networked FlashForge printer
- Printer credentials (IP, serial, check code) configured in `tests/printer_config.py`
- Marking tests with `@pytest.mark.integration` or `@pytest.mark.network`
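A sketch of how such a live test might be marked (the test name and body here are hypothetical):

```python
import pytest

@pytest.mark.integration
@pytest.mark.network
async def test_live_printer_status():
    """Hypothetical live test against a real printer; deselected by
    pytest -m "not integration" so CI runs without hardware."""
    ...
```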
- Use HTTP for: status queries (`get_printer_status`), file listing, job control (start/pause/cancel), printer info
- Use TCP for: real-time temperature (`M105`), print progress (`M27`), endstops (`M119`), thumbnails (`M662`), direct G-code

- HTTP requires: IP address, serial number, check code
- HTTP endpoint construction: `http://{ip}:{port}/...` (port 8898)
- HTTP auth via `FNetCode.generate()` adds `fnetCode` and `serialNumber` to requests
- TCP requires only the IP (port 8899), no auth
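A minimal sketch of the documented request shape; the function name and the `/detail` path are hypothetical placeholders, and the real auth code comes from `FNetCode.generate()` (not reproduced here):

```python
def build_http_request(ip: str, serial: str, fnet_code: str):
    """Hypothetical illustration of the documented URL and auth-field shape.
    '/detail' is a placeholder path, not a confirmed endpoint."""
    url = f"http://{ip}:8898/detail"
    payload = {"serialNumber": serial, "fnetCode": fnet_code}
    return url, payload
```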
All API methods are async and should be awaited:

```python
async with FlashForgeClient(ip, serial, check) as client:
    await client.initialize()  # Required for HTTP session setup
    status = await client.get_printer_status()
    await client.dispose()  # Or rely on the context manager for cleanup
```

- Pydantic models in `models/responses.py` validate all API responses
- Recent fix (v1.0.1): `estimated_time` changed from `int` to `float` for validation
- Mypy strict mode is enabled - all functions must have type hints
Certain features only work on specific models:

- LED control: Adventurer 5M/5X only (check `client.led_control`)
- Filtration: Adventurer 5M Pro only (check `client.filtration_control`)
- HTTP errors: wrapped in aiohttp exceptions
- TCP errors: socket timeouts, connection refused
- Parser errors: invalid response formats from TCP commands
- Always check the `client.initialize()` return value before other operations
- Main docs in the `docs/` directory:
  - `README.md`: Documentation overview
  - `client.md`: FlashForgeClient API reference
  - `models.md`: Pydantic model descriptions
  - `protocols.md`: HTTP vs TCP protocol details
  - `advanced.md`: Advanced usage patterns
  - `api_reference.md`: Complete API listing
- Examples in `examples/`:
  - `discovery_example.py`: Printer discovery usage
  - `tcp_client_example.py`: Direct TCP client usage
  - `unified_client_example.py`: Main client usage
  - `complete_feature_demo.py`: Comprehensive feature demonstration
Full Support (HTTP + TCP):
- FlashForge AD5X
- FlashForge Adventurer 5M / 5M Pro
- FlashForge Adventurer 4
Partial Support (TCP only):
- FlashForge Adventurer 3
Core runtime (required):

- `aiohttp>=3.8.0` - Async HTTP client
- `pydantic>=2.0.0` - Data validation and models
- `netifaces>=0.11.0` - Network interface enumeration for discovery
- `requests>=2.31.0` - Sync HTTP (used in some utilities)

Development (optional `[dev]`):

- `pytest>=7.0.0`, `pytest-asyncio>=0.21.0`, `pytest-cov>=4.0.0`
- `black>=23.0.0`, `ruff>=0.1.0`, `mypy>=1.0.0`
- `pre-commit>=3.0.0`

Imaging (optional `[imaging]`):

- `pillow>=10.0.0` - For thumbnail image processing

Python version: Requires Python 3.8+
- Always call `await client.initialize()` before using the main FlashForgeClient (it sets up the HTTP session)
- Model detection depends on the printer name response - early operations may not have full capability info
- The TCP keep-alive runs as a background task - call `dispose()` or use the context manager to clean it up
- Temperature queries via TCP (`client.tcp_client.get_temp_info()`) return parsed objects, not raw values
- Thumbnail extraction (`M662`) can be slow and returns large payloads - use with caution
- File uploads for AD5X models take different parameters than older models (see `AD5XUploadParams`)
- **Version bump must be manual** - the workflow does NOT automatically update `pyproject.toml` or `CHANGELOG.md`
- **Workflow validates the version match** - the input version must exactly match the version in `pyproject.toml` or it fails
- **No timestamped versions** - a previous workflow created orphaned commits with timestamped versions (e.g., `v1.0.0-20251122005123`), which caused duplicate changelogs; the current workflow uses clean tags only
- **Changelog duplication** - duplicate PRs in GitHub release notes mean there is a tag on an orphaned commit outside the main branch lineage; delete the orphaned tag to fix it
- **Linear git history required** - the workflow creates tags on the current HEAD without making commits; all version bumps must be committed to `main` before running the workflow
- **PyPI token is required** - without the `PYPI_API_TOKEN` secret, the workflow completes but skips the PyPI upload