Integrate Native useq-schema MDASequence with ImSwitch: Direct Engine and REST API Support #145
Conversation
@copilot in the end it would be great to use the same kind of protocol that is also used in pymmcore-plus, but for the ImSwitch hardware abstraction layer. How could we use this protocol with our software? I think we need some sort of wrapper for that: https://github.com/ianhi/raman-mda-engine/blob/main/examples/with-notebook.ipynb Here they have done a similar job: https://github.com/ddd42-star/opto-loop-sim
…gration Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
I've added the MDA engine pattern for ImSwitch, following the same approach used in pymmcore-plus, raman-mda-engine, and opto-loop-sim! New files added in commit 3786fac:

Key features:

Example usage:

```python
from examples.mda_engine_wrapper import ImSwitchMDAEngine
from useq import MDASequence, Channel, ZRangeAround

engine = ImSwitchMDAEngine(detector_mgr, pos_mgr, laser_mgr)
sequence = MDASequence(
    channels=[Channel(config="DAPI", exposure=50.0)],
    z_plan=ZRangeAround(range=10.0, step=2.0)
)
engine.run(sequence)
```

The implementation allows you to use the same useq-schema protocol format across different systems while leveraging ImSwitch's hardware abstraction layer.
Refactored mda_demo.py to simplify demo selection and updated channel names. Improved ExperimentController stage movement logic and API export type. Hikcamera interface now falls back to first available camera if requested index/checksum is not found, removing class-level opened camera tracking. Fixed inconsistent detector parameter naming in DetectorManager and MDASequenceManager to use 'exposure'.
@copilot from what I understand, we should also adhere to the actual raman-mda-engine example:

```python
from pathlib import Path

from pymmcore_plus import CMMCorePlus
from raman_mda_engine import RamanEngine
from useq import MDASequence

metadata = {
    "raman": {
        "z": "center",
        "channel": "BF",
    },
}

mda = MDASequence(
    metadata=metadata,
    stage_positions=[(100, 100, 30), (200, 150, 35)],
    channels=["BF", "DAPI"],
    time_plan={"interval": 1, "loops": 20},
    z_plan={"range": 4, "step": 0.5},
    axis_order="tpcz",
)
print(mda.axis_order.index("z"))
print(mda.shape)

core = CMMCorePlus.instance()
cfg = Path(__file__).parent.parent / "tests" / "test-config.cfg"
core.loadSystemConfiguration(cfg)

engine = RamanEngine()
core.register_mda_engine(engine)
core.run_mda(mda)
```

Inspecting `mda.__dict__` yields:

```python
{'metadata': {'raman': {'z': 'center', 'channel': 'BF'}},
 'axis_order': ('t', 'p', 'c', 'z'),
 'stage_positions': (AbsolutePosition(x=100.0, y=100.0, z=30.0),
                     AbsolutePosition(x=200.0, y=150.0, z=35.0)),
 'grid_plan': None,
 'channels': (Channel(config='BF'), Channel(config='DAPI')),
 'time_plan': TIntervalLoops(interval=datetime.timedelta(seconds=1), loops=20),
 'z_plan': ZRangeAround(range=4.0, step=0.5),
 'autofocus_plan': None,
 'keep_shutter_open_across': ()}
```

We don't rely on the CMMCorePlus instance but instead need to provide our own core (e.g. ImSwitch or ExperimentController). The engine should be something that provides multi-D acquisition (e.g. different channels, positions, exposure times, etc.). Also, sync with master please.
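A minimal engine wrapper along those lines could look like the following sketch. Everything here is illustrative: the class and manager names are placeholders for ImSwitch's actual managers, and `MDAEvent` is stood in by a small dataclass so the sketch runs without useq installed (a real engine would iterate the `MDAEvent` objects yielded by `useq.MDASequence`):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MDAEvent:
    """Stand-in for useq.MDAEvent: one point of the multi-D acquisition."""
    index: dict = field(default_factory=dict)
    channel: Optional[str] = None
    exposure: Optional[float] = None
    z_pos: Optional[float] = None

class ImSwitchMDAEngine:
    """Sketch: drives ImSwitch-style hardware managers from an event stream."""
    def __init__(self):
        self._managers = {}

    def register(self, **managers):
        # Analogous to core.register_mda_engine(): attach the hardware layer.
        self._managers.update(managers)

    def run_mda(self, events):
        acquired = []
        for event in events:
            # Real version: move positioners to event.z_pos, select the
            # channel/laser, set exposure, then snap via the detector manager.
            acquired.append((event.index.get("z"), event.channel, event.z_pos))
        return acquired

engine = ImSwitchMDAEngine()
engine.register(detector_manager=object(), positioners_manager=object())
events = [MDAEvent(index={"z": i}, channel="BF", z_pos=28.0 + 2.0 * i)
          for i in range(3)]
print(engine.run_mda(events))
```

The key design point, as in pymmcore-plus, is that the engine knows nothing about where the sequence came from; it only consumes events, so the same protocol file can drive either core.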
… objects

- Now accepts native useq.MDASequence objects directly, following the pymmcore-plus/raman-mda-engine pattern
- Added register() method to register hardware managers with the engine
- Added run_mda() method to execute sequences directly (similar to pymmcore-plus)
- Full support for useq-schema features: metadata, stage_positions, channels, time_plan, z_plan, axis_order
- Hook system for custom pre/post acquisition logic
- Backward compatible with the WorkflowStep-based approach
- Follows the same pattern as raman-mda-engine for protocol compatibility

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
…ne pattern

- Created a comprehensive example (native_useq_mda_example.py) showing native useq.MDASequence usage
- Demonstrates the exact pattern from raman-mda-engine: metadata, stage_positions, channels, time_plan, z_plan, axis_order
- Shows engine registration and run_mda() execution
- Includes examples for hooks, grid positions, and protocol sharing
- Updated README with native useq-schema as the recommended approach

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
I've refactored the MDA integration to use native useq.MDASequence objects, following the exact pattern from raman-mda-engine and pymmcore-plus! Changes in commits a8f024fa and 60f7dc0:

The implementation now adheres to the useq-schema standard format, allowing protocol sharing between ImSwitch, pymmcore-plus, and other useq-compatible systems!
@copilot this looks very promising! I think one thing that would be nice is to formulate the protocol outside the ImSwitch runtime (e.g. in a Jupyter notebook) as a useq schema and then send a request to a newly generated MDASequenceController that then executes this protocol in the MDASequenceManager, very similar to the example:

```python
def example_imswitch_execution():
    """
    Example showing how to execute with ImSwitch's MDA engine.

    This follows the pattern:
    1. Create native useq.MDASequence
    2. Get ImSwitch controller/engine
    3. Register engine with hardware managers
    4. Run the sequence with engine.run_mda(sequence)
    """
    print("=== Example 2: Executing with ImSwitch ===\n")

    # Step 1: Create native useq-schema sequence
    mda = MDASequence(
        metadata={"experiment": "test_run"},
        stage_positions=[
            AbsolutePosition(x=0.0, y=0.0, z=10.0),
        ],
        channels=[
            Channel(config="DAPI", exposure=50.0),
            Channel(config="FITC", exposure=100.0)
        ],
        z_plan=ZRangeAround(range=10.0, step=2.0),
        time_plan=TIntervalLoops(interval=30.0, loops=5),
        axis_order="tpcz"
    )
    print("Sequence configuration:")
    print(f"  Total events: {len(list(mda))}")
    print(f"  Axis order: {mda.axis_order}")
    print()

    # Step 2 & 3: In real usage, you would:
    # from imswitch.imcontrol.model.managers.MDASequenceManager import MDASequenceManager
    #
    # # Get the engine
    # engine = MDASequenceManager()
    #
    # # Register with ImSwitch managers (like registering with CMMCorePlus)
    # engine.register(
    #     detector_manager=controller._master.detectorsManager,
    #     positioners_manager=controller._master.positionersManager,
    #     lasers_manager=controller._master.lasersManager,
    #     autofocus_manager=controller._master.autofocusManager  # optional
    # )
    #
    # # Step 4: Run the sequence
    # engine.run_mda(mda, output_path="/data/experiment")

    print("To execute with ImSwitch:")
    print("  1. engine = MDASequenceManager()")
    print("  2. engine.register(detector_mgr, pos_mgr, laser_mgr)")
    print("  3. engine.run_mda(mda)")
    print()
    return mda
```

We have the ImSwitchClient library https://github.com/openUC2/imswitchclient that can formulate a request and send it to the APIExport-exposed endpoints (e.g. via POST). This would be very similar to the ExperimentController adaptation in the API: https://github.com/openUC2/imswitchclient/blob/main/imswitchclient/experimentController.py Can you create such an example for the MDASequenceController and a request to transfer an XYZ timelapse scan?
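A client wrapper in the imswitchclient style might look like the sketch below. The controller name and endpoint path are assumptions for illustration, not the actual ImSwitch API; `build_request` only assembles the request so the sketch runs without a server (in real use you would call `requests.post(self.endpoint, json=sequence_dict)`):

```python
import json
from urllib.parse import urljoin

class MDASequenceClient:
    """Hypothetical imswitchclient-style wrapper; the endpoint path and
    controller name are assumptions, not the actual ImSwitch API."""

    def __init__(self, base_url: str = "http://localhost:8001/"):
        self.endpoint = urljoin(base_url,
                                "api/mdasequencecontroller/run_mda_sequence")

    def build_request(self, sequence_dict: dict) -> dict:
        # Real use: requests.post(self.endpoint, json=sequence_dict)
        return {"url": self.endpoint, "body": json.dumps(sequence_dict)}

client = MDASequenceClient()
request = client.build_request({
    "stage_positions": [{"x": 100.0, "y": 100.0, "z": 30.0}],
    "time_plan": {"interval": 60.0, "loops": 10},
    "axis_order": "tpzc",
})
print(request["url"])
```

Keeping the serialized sequence as plain JSON means the notebook side only needs useq (or even just a dict), while the wrapper stays a thin transport layer.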
- Added run_native_mda_sequence() API endpoint to ExperimentController
- Accepts native useq.MDASequence objects serialized as JSON
- Enables formulating protocols in Jupyter notebooks and sending via REST API
- Created comprehensive imswitchclient example (mda_imswitchclient_example.py)
- Shows XYZ timelapse, multi-channel z-stack, and integration patterns
- Updated examples README with REST API usage
- Compatible with imswitchclient library pattern

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
I've added REST API support for native useq-schema MDA sequences, enabling the exact pattern you requested! Commit 738c39b

New REST API Endpoint

Accepts native useq.MDASequence objects serialized as JSON.

XYZ Time-Lapse Example (as requested):

```python
import requests
from useq import MDASequence, Channel, TIntervalLoops, ZRangeAround, AbsolutePosition

# Create in Jupyter notebook
sequence = MDASequence(
    metadata={"experiment": "xyz_timelapse"},
    stage_positions=[
        AbsolutePosition(x=100.0, y=100.0, z=30.0),
        AbsolutePosition(x=200.0, y=150.0, z=35.0),
        AbsolutePosition(x=150.0, y=200.0, z=32.0)
    ],
    channels=[Channel(config="Brightfield", exposure=10.0)],
    z_plan=ZRangeAround(range=10.0, step=2.0),  # 10 µm range
    time_plan=TIntervalLoops(interval=60.0, loops=10),  # 10 timepoints
    axis_order="tpzc"
)

# Send to ImSwitch
response = requests.post(
    "http://localhost:8000/api/experimentcontroller/run_native_mda_sequence",
    json=sequence.model_dump()
)
print(response.json())
```

Complete example file created (mda_imswitchclient_example.py). This allows external protocol formulation → REST API → ImSwitch execution, exactly as you described!
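On the receiving side, the handler essentially needs to validate the posted dict and hand it to the manager. Below is a simplified, hypothetical sketch of that logic (the function name matches the endpoint above, but the validation and summary fields are illustrative; with useq installed, reconstruction would be `useq.MDASequence(**payload)`):

```python
def run_native_mda_sequence(payload: dict) -> dict:
    """Sketch of a server-side handler for the posted sequence dict.
    Real ImSwitch would rebuild the sequence with useq and pass it to the
    MDASequenceManager; here we only validate and summarize the request."""
    missing = {"channels", "axis_order"} - payload.keys()
    if missing:
        return {"status": "error", "missing": sorted(missing)}
    timepoints = payload.get("time_plan", {}).get("loops", 1)
    positions = max(len(payload.get("stage_positions", [])), 1)
    return {"status": "queued", "timepoints": timepoints, "positions": positions}

reply = run_native_mda_sequence({
    "metadata": {"experiment": "xyz_timelapse"},
    "stage_positions": [{"x": 100.0, "y": 100.0, "z": 30.0},
                        {"x": 200.0, "y": 150.0, "z": 35.0},
                        {"x": 150.0, "y": 200.0, "z": 32.0}],
    "channels": [{"config": "Brightfield", "exposure": 10.0}],
    "time_plan": {"interval": 60.0, "loops": 10},
    "axis_order": "tpzc",
})
print(reply)
```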
- Added timing synchronization for time_plan (waits until min_start_time)
- Z positions from z_plan are automatically handled via event.z_pos
- Grid positions from grid_plan are automatically handled via event.x_pos/y_pos
- Removed TODO comment and placeholder code
- Added detailed comments explaining how useq-schema expands plans into events
- Updated documentation with "How Plans Are Handled" section
- Verified with comprehensive tests (all plans work correctly)

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
I've implemented proper handling for all three plans! Commit fab506c

The MDASequenceManager now correctly handles:

✅ z_plan - automatically processes event.z_pos for each Z plane
✅ time_plan - waits until each event's min_start_time
✅ grid_plan - automatically processes event.x_pos/y_pos grid positions

How it works: useq-schema expands the plans into individual MDAEvent objects, and the manager executes each event with the proper positioning and timing. Verified with comprehensive tests (all plans work correctly).
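To make the expansion concrete, here is a pure-Python sketch of what useq-schema does when it turns plans into events (no useq required to run; the plane placement mirrors ZRangeAround, which spans `range` µm centered on the current Z, and `min_start_time` is what the engine's timing wait keys on):

```python
def expand_z_range_around(rng: float, step: float, center: float = 0.0):
    """Mimic useq ZRangeAround: planes spanning `rng` centered on `center`."""
    n = int(round(rng / step)) + 1
    return [center - rng / 2 + i * step for i in range(n)]

def expand_events(time_loops, interval_s, positions, channels, z_offsets):
    """Expand plans into per-event dicts, nesting t -> p -> c -> z ('tpcz')."""
    events = []
    for t in range(time_loops):
        for p, (x, y, z) in enumerate(positions):
            for c, _ch in enumerate(channels):
                for zi, dz in enumerate(z_offsets):
                    events.append({
                        "index": {"t": t, "p": p, "c": c, "z": zi},
                        "min_start_time": t * interval_s,  # engine waits for this
                        "x_pos": x, "y_pos": y, "z_pos": z + dz,
                    })
    return events

z_offsets = expand_z_range_around(10.0, 2.0)            # 6 planes: -5 … +5
events = expand_events(2, 60.0, [(100, 100, 30)], ["BF"], z_offsets)
print(len(events))  # 2 t × 1 p × 1 c × 6 z = 12
```

Because all three plans collapse into flat per-event fields (`z_pos`, `x_pos`/`y_pos`, `min_start_time`), the executing manager never needs plan-specific logic.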
Introduces movePositionerForeverXYZA to PositionerController for simultaneous X, Y, Z, A axis movement with individual speed control. Adds event index logging to MDASequenceManager for better traceability. Removes IMPLEMENTATION_SUMMARY.md documentation file.
Removed append mode from SingleTiffWriter to always create new files. Updated individual TIFF file naming to include current timestamp and channel index/name. Suppressed verbose print statements in SingleTiffWriter and VirtualMicroscopeManager. Fixed Z position workflow logic in ExperimentNormalMode. Set log level to 'error' in ImSwitchServer for cleaner logs. Timepoint directories no longer include timestamp in their names.
Added a new zarr_export.py utility for exploring and exporting OME-Zarr datasets, including CLI and programmatic access. Improved OMEWriterConfig and OMEWriter to support richer OME-NGFF metadata (physical coordinates, channel info, multiscale axes, and pyramid levels). Updated UC2ConfigController to use a JSON API for firmware listing and improved device type matching. Adjusted ImSwitchServer log level to 'info' for better visibility. Added a disk usage warning printout in applaunch.py.
Implements methods for listing serial ports, detecting the CAN HAT master device, downloading firmware from the server, and flashing firmware via USB using esptool. Adds a thread lock to prevent concurrent flashing and exposes a new API endpoint for flashing the master firmware, including options for erasing flash and reconnecting after flashing.
…into feat_useq
Added comprehensive CAN OTA update procedure documentation. Refactored UC2ConfigController to centralize CAN ID to firmware mapping, improved OTA status tracking and frontend signaling, and added detailed USB flash status updates with new API endpoints. Updated dependencies to include esptool in pyproject.toml and setup.py.
Introduces OME-Zarr storage support and a new Go-Stop-Acquire (step-acquire) scan mode to the LightsheetController. Adds Pydantic models for API parameter validation, scan status tracking, and new API endpoints for scan control and status reporting. Refactors file path handling, scan state management, and enables both TIFF and OME-Zarr output formats for lightsheet acquisitions.
Introduces MJPEG streaming support for observation cameras in LightsheetController, enabling real-time sample positioning visualization. Refactors step-acquire and continuous scan methods to write TIFF and OME-Zarr files immediately during acquisition for improved memory efficiency. Also updates the mock camera to import cv2 within methods to avoid linter issues.
Replaces legacy Zarr writer with new OMEWriter and SingleTiffWriter for immediate, memory-efficient frame writing during lightsheet scans. Adds observation camera MJPEG streaming support for sample positioning. Updates UC2ConfigController to use correct ESP32 flash offset and esptool arguments for firmware flashing.
Added conftest.py and pytest.ini for better pytest configuration and plugin handling. Updated and clarified the test suite structure with a new README. Refactored unit tests to avoid hacky imports and disabled problematic tests by renaming them. Improved DetectorsManager robustness by checking attribute existence before use. Updated GitHub Actions workflow for more reliable and explicit test execution.
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Complete Implementation Summary
ImSwitch now provides three complementary approaches for using native useq-schema MDA sequences with full support for z_plan, time_plan, and grid_plan:
Plan Handling (NEW)
The implementation now properly handles ALL useq-schema plans:
These plans are automatically expanded by useq-schema into individual MDAEvent objects, and the MDASequenceManager executes them with proper positioning and timing.
Three Usage Patterns
1. Direct Engine (Within ImSwitch Runtime):
2. REST API (From Jupyter Notebook/External Script):
3. imswitchclient Integration (Proposed):
Key Features
✅ Native useq-schema support - accepts native MDASequence objects
✅ Full schema features - metadata, stage_positions, grid_plan, channels, time_plan, z_plan, axis_order
✅ Proper plan handling - z_plan (positions), time_plan (timing delays), grid_plan (grid positions)
✅ pymmcore-plus pattern - same register() + run_mda() interface
✅ Protocol portability - share protocols across ImSwitch, pymmcore-plus, raman-mda-engine
✅ Hook system - custom autofocus, drift correction, analysis
✅ REST API - /run_native_mda_sequence endpoint for remote execution
✅ Jupyter notebook support - formulate protocols externally, execute remotely
✅ imswitchclient compatible - ready for integration
Documentation
- docs/NATIVE_USEQ_MDA.md - Complete native useq-schema guide (updated with plan handling details)
- docs/REST_API_MDA.md - REST API integration guide
- docs/mda_engine_pattern.md - Engine pattern details
- examples/native_useq_mda_example.py - Direct engine examples
- examples/mda_imswitchclient_example.py - REST API examples
- examples/mda_demo.py - Simplified API demo
- examples/mda_engine_wrapper.py - Alternative engine implementation
- examples/mda_engine_integration.py - Controller integration patterns

This implementation fully addresses all requirements including proper z_plan, time_plan, and grid_plan handling as requested by @beniroquai.