Copilot AI commented Aug 22, 2025

  • Sync with master branch
  • Refactor MDASequenceManager to accept native useq-schema objects
  • Implement register() and run_mda() methods following pymmcore-plus pattern
  • Add support for all useq-schema features (metadata, stage_positions, grid_plan, channels, time_plan, z_plan, axis_order)
  • Add hook system for custom acquisition logic
  • Create comprehensive example demonstrating native useq-schema usage
  • Update documentation for new pattern
  • Add implementation summary document
  • Add REST API endpoint for native useq-schema MDA sequences
  • Create imswitchclient integration example
  • Add REST API documentation
  • Implement proper z_plan, time_plan, and grid_plan handling
  • Test with actual ImSwitch hardware setup (user testing required)

Complete Implementation Summary

ImSwitch now provides three complementary approaches for using native useq-schema MDA sequences with full support for z_plan, time_plan, and grid_plan:

Plan Handling (NEW)

The implementation now properly handles ALL useq-schema plans:

  • z_plan: Automatically generates events with correct Z positions (z_pos field)
  • time_plan: Waits until min_start_time before executing each event (proper time-lapse delays)
  • grid_plan: Automatically generates events with correct XY grid positions (x_pos, y_pos fields)

These plans are automatically expanded by useq-schema into individual MDAEvent objects, and the MDASequenceManager executes them with proper positioning and timing.
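Because the expansion is deterministic, it can be sanity-checked offline. The helper below is an illustrative re-derivation of how a `ZRangeAround`-style `z_plan` turns into per-event Z offsets; it is not the actual useq-schema implementation, just a sketch of the same arithmetic:

```python
def expand_z_range_around(z_range, step):
    """Sketch of ZRangeAround expansion: symmetric offsets around the
    current Z, spaced by `step` over a total span of `z_range` microns.
    (Illustrative only -- useq-schema does this internally when a
    MDASequence is iterated into MDAEvent objects.)"""
    half = z_range / 2.0
    n_slices = int(round(z_range / step)) + 1
    return [-half + i * step for i in range(n_slices)]

# z_plan={"range": 4, "step": 0.5} -> 9 slices from -2.0 to +2.0 um
offsets = expand_z_range_around(4.0, 0.5)
```

Roughly speaking, each offset ends up on its own MDAEvent as the `z_pos` field, which the engine then passes to the positioner.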

Three Usage Patterns

1. Direct Engine (Within ImSwitch Runtime):

from imswitch.imcontrol.model.managers.MDASequenceManager import MDASequenceManager

engine = MDASequenceManager()
engine.register(detector_mgr, pos_mgr, laser_mgr)
engine.run_mda(sequence)

2. REST API (From Jupyter Notebook/External Script):

import requests
from useq import MDASequence

sequence = MDASequence(...)
response = requests.post(
    "http://localhost:8000/api/experimentcontroller/run_native_mda_sequence",
    json=sequence.model_dump()
)

3. imswitchclient Integration (Proposed):

from imswitchclient import ImSwitchClient

client = ImSwitchClient('localhost', 8000)
result = client.experimentController.run_mda_sequence(sequence)

Key Features

  • Native useq-schema support - accepts native MDASequence objects
  • Full schema features - metadata, stage_positions, grid_plan, channels, time_plan, z_plan, axis_order
  • Proper plan handling - z_plan (positions), time_plan (timing delays), grid_plan (grid positions)
  • pymmcore-plus pattern - same register() + run_mda() interface
  • Protocol portability - share protocols across ImSwitch, pymmcore-plus, raman-mda-engine
  • Hook system - custom autofocus, drift correction, analysis
  • REST API - /run_native_mda_sequence endpoint for remote execution
  • Jupyter notebook support - formulate protocols externally, execute remotely
  • imswitchclient compatible - ready for integration

Documentation

  • docs/NATIVE_USEQ_MDA.md - Complete native useq-schema guide (updated with plan handling details)
  • docs/REST_API_MDA.md - REST API integration guide
  • docs/mda_engine_pattern.md - Engine pattern details
  • examples/native_useq_mda_example.py - Direct engine examples
  • examples/mda_imswitchclient_example.py - REST API examples
  • examples/mda_demo.py - Simplified API demo
  • examples/mda_engine_wrapper.py - Alternative engine implementation
  • examples/mda_engine_integration.py - Controller integration patterns

This implementation fully addresses all requirements including proper z_plan, time_plan, and grid_plan handling as requested by @beniroquai.



Copilot AI and others added 2 commits August 22, 2025 19:52
Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Evaluate MDASequence in Experimentcontroller" to "Integrate useq-schema MDASequence with ImSwitch ExperimentController" Aug 22, 2025
Copilot AI requested a review from beniroquai August 22, 2025 19:55
@beniroquai
Collaborator

@copilot in the end it would be great to use the same kind of protocol that is also used in pymmcore-plus, but for the ImSwitch hardware abstraction layer. How could we use this protocol with our software? I think we need some sort of wrapper for that: https://github.com/ianhi/raman-mda-engine/blob/main/examples/with-notebook.ipynb

Here they have done a similar job https://github.com/ddd42-star/opto-loop-sim

…gration

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Author

Copilot AI commented Dec 9, 2025

@copilot in the end it would be great to use the same kind of protocol that is also used in pymmcore-plus but for the ImSwitch hardware abstraction layer. How could we use this protocol with our so...

I've added the MDA engine pattern for ImSwitch, following the same approach used in pymmcore-plus, raman-mda-engine, and opto-loop-sim!

New files added in commit 3786fac:

  1. examples/mda_engine_wrapper.py - Complete MDA engine implementation that uses ImSwitch's HAL directly
  2. docs/mda_engine_pattern.md - Comprehensive guide on using useq-schema with ImSwitch HAL
  3. examples/mda_engine_integration.py - Integration examples showing how to add this to controllers

Key features:

  • Direct hardware control via ImSwitch managers (same pattern as pymmcore-plus with Micro-Manager)
  • Hook system for custom pre/post acquisition logic (autofocus, drift correction, etc.)
  • Protocol compatibility - the same MDASequence definition works with both ImSwitch and pymmcore-plus
  • Choose between high-level REST API or direct engine pattern based on your needs

Example usage:

from examples.mda_engine_wrapper import ImSwitchMDAEngine
from useq import MDASequence, Channel, ZRangeAround

engine = ImSwitchMDAEngine(detector_mgr, pos_mgr, laser_mgr)
sequence = MDASequence(
    channels=[Channel(config="DAPI", exposure=50.0)],
    z_plan=ZRangeAround(range=10.0, step=2.0)
)
engine.run(sequence)

The implementation allows you to use the same useq-schema protocol format across different systems while leveraging ImSwitch's hardware abstraction layer.

Copilot AI changed the title from "Integrate useq-schema MDASequence with ImSwitch ExperimentController" to "Integrate useq-schema MDASequence with ImSwitch: REST API and Direct HAL Engine Pattern" Dec 9, 2025
- Refactored mda_demo.py to simplify demo selection and updated channel names.
- Improved ExperimentController stage movement logic and API export type.
- Hikcamera interface now falls back to the first available camera if the requested index/checksum is not found, removing class-level opened-camera tracking.
- Fixed inconsistent detector parameter naming in DetectorManager and MDASequenceManager to use 'exposure'.
@beniroquai
Collaborator

@copilot from what I understand, we should also adhere to the actual MDASequence from useq.
I have the following example from the raman people. They use this scheme, and I think we should adhere to this format in order to create a reproducible protocol:

from pathlib import Path

from pymmcore_plus import CMMCorePlus
from useq import MDASequence

from raman_mda_engine import RamanEngine

metadata = {
    "raman": {
        "z": "center",
        "channel": "BF",
    },
}
mda = MDASequence(
    metadata=metadata,
    stage_positions=[(100, 100, 30), (200, 150, 35)],
    channels=["BF", "DAPI"],
    time_plan={"interval": 1, "loops": 20},
    z_plan={"range": 4, "step": 0.5},
    axis_order="tpcz",
)
print(mda.axis_order.index("z"))
print(mda.shape)

core = CMMCorePlus.instance()

cfg = Path(__file__).parent.parent / "tests" / "test-config.cfg"
core.loadSystemConfiguration(cfg)

engine = RamanEngine()
core.register_mda_engine(engine)
core.run_mda(mda)
mda.__dict__
{'metadata': {'raman': {...}}, 'axis_order': ('t', 'p', 'c', 'z'), 'stage_positions': (AbsolutePosition(x=100.0, y=100.0, z=30.0), AbsolutePosition(x=200.0, y=150.0, z=35.0)), 'grid_plan': None, 'channels': (Channel(config='BF'), Channel(config='DAPI')), 'time_plan': TIntervalLoops(interval=datetime.timedelta(seconds=1), loops=20), 'z_plan': ZRangeAround(range=4.0, step=0.5), 'autofocus_plan': None, 'keep_shutter_open_across': ()}

We don't rely on the CMMCorePlus instance but instead need to provide our own core (e.g. ImSwitch or ExperimentController). The Engine should be something that provides multi-D acquisition (e.g. different channels, positions, exposure times, etc.).

Also, sync with master please.

beniroquai and others added 3 commits December 21, 2025 13:55
… objects

- Now accepts native useq.MDASequence objects directly following the pymmcore-plus/raman-mda-engine pattern
- Added register() method to register hardware managers with the engine
- Added run_mda() method to execute sequences directly (similar to pymmcore-plus)
- Full support for useq-schema features: metadata, stage_positions, channels, time_plan, z_plan, axis_order
- Hook system for custom pre/post acquisition logic
- Backward compatible with WorkflowStep-based approach
- Follows the same pattern as raman-mda-engine for protocol compatibility

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
…ne pattern

- Created comprehensive example (native_useq_mda_example.py) showing native useq.MDASequence usage
- Demonstrates EXACT pattern from raman-mda-engine: metadata, stage_positions, channels, time_plan, z_plan, axis_order
- Shows engine registration and run_mda() execution
- Includes examples for hooks, grid positions, and protocol sharing
- Updated README with native useq-schema as the recommended approach

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Author

Copilot AI commented Dec 21, 2025

@copilot from what I understand, we should also adhere to the actual MDASequence from useq.
I have the following example from the raman people. They use this scheme and I think we should adhere to...

I've refactored the MDA integration to use native useq.MDASequence objects following the EXACT pattern from raman-mda-engine and pymmcore-plus!

Changes in commits a8f024fa and 60f7dc0:

  1. Refactored MDASequenceManager to be a native useq-schema engine:

    • Added register() method to register hardware managers (like CMMCorePlus.register_mda_engine())
    • Added run_mda(sequence) method to execute native MDASequence objects directly
    • Full support for all useq-schema features: metadata, stage_positions, grid_plan, channels, time_plan, z_plan, autofocus_plan, axis_order, keep_shutter_open_across
  2. Created comprehensive example (examples/native_useq_mda_example.py) showing:

    from useq import MDASequence, Channel, TIntervalLoops, ZRangeAround, AbsolutePosition
    
    # Native useq-schema object (SAME as raman-mda-engine example)
    mda = MDASequence(
        metadata={"raman": {"z": "center", "channel": "BF"}},
        stage_positions=[
            AbsolutePosition(x=100.0, y=100.0, z=30.0),
            AbsolutePosition(x=200.0, y=150.0, z=35.0)
        ],
        channels=[Channel(config="BF"), Channel(config="DAPI")],
        time_plan=TIntervalLoops(interval=1, loops=20),
        z_plan=ZRangeAround(range=4.0, step=0.5),
        axis_order="tpcz"
    )
    
    # Register and run (same pattern as raman-mda-engine)
    engine = MDASequenceManager()
    engine.register(detector_mgr, pos_mgr, laser_mgr)
    engine.run_mda(mda)
  3. Synced with master branch

The implementation now adheres to the useq-schema standard format, allowing protocol sharing between ImSwitch, pymmcore-plus, and other useq-compatible systems!

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Copilot AI changed the title from "Integrate useq-schema MDASequence with ImSwitch: REST API and Direct HAL Engine Pattern" to "Integrate Native useq-schema MDASequence with ImSwitch Following pymmcore-plus Pattern" Dec 21, 2025
@beniroquai
Collaborator

@copilot this looks very promising! I think one thing that would be nice is to formulate the protocol outside the imswitch runtime (e.g. in a jupyter notebook) as a useq schema and then send a request to a newly generated MDASequenceController that then executes this protocol in the MDASequenceManager, very similar to the example:

from useq import MDASequence, Channel, TIntervalLoops, ZRangeAround, AbsolutePosition

def example_imswitch_execution():
    """
    Example showing how to execute with ImSwitch's MDA engine.
    
    This follows the pattern:
    1. Create native useq.MDASequence
    2. Get ImSwitch controller/engine
    3. Register engine with hardware managers
    4. Run the sequence with engine.run_mda(sequence)
    """
    print("=== Example 2: Executing with ImSwitch ===\n")
    
    # Step 1: Create native useq-schema sequence
    mda = MDASequence(
        metadata={"experiment": "test_run"},
        stage_positions=[
            AbsolutePosition(x=0.0, y=0.0, z=10.0),
        ],
        channels=[
            Channel(config="DAPI", exposure=50.0),
            Channel(config="FITC", exposure=100.0)
        ],
        z_plan=ZRangeAround(range=10.0, step=2.0),
        time_plan=TIntervalLoops(interval=30.0, loops=5),
        axis_order="tpcz"
    )
    
    print("Sequence configuration:")
    print(f"  Total events: {len(list(mda))}")
    print(f"  Axis order: {mda.axis_order}")
    print()
    
    # Step 2 & 3: In real usage, you would:
    #   from imswitch.imcontrol.model.managers.MDASequenceManager import MDASequenceManager
    #   
    #   # Get the engine
    #   engine = MDASequenceManager()
    #   
    #   # Register with ImSwitch managers (like registering with CMMCorePlus)
    #   engine.register(
    #       detector_manager=controller._master.detectorsManager,
    #       positioners_manager=controller._master.positionersManager,
    #       lasers_manager=controller._master.lasersManager,
    #       autofocus_manager=controller._master.autofocusManager  # optional
    #   )
    #   
    #   # Step 4: Run the sequence
    #   engine.run_mda(mda, output_path="/data/experiment")
    
    print("To execute with ImSwitch:")
    print("  1. engine = MDASequenceManager()")
    print("  2. engine.register(detector_mgr, pos_mgr, laser_mgr)")
    print("  3. engine.run_mda(mda)")
    print()
    
    return mda

We have the ImSwitchClient library https://github.com/openUC2/imswitchclient that can formulate a request and send it to the APIExport-exposed endpoints (e.g. POST). This would be very similar to the ExperimentController adaptation in the API: https://github.com/openUC2/imswitchclient/blob/main/imswitchclient/experimentController.py

Can you create such an example for the MDASequenceController and a request to transfer an XYZ timelapse scan?

- Added run_native_mda_sequence() API endpoint to ExperimentController
- Accepts native useq.MDASequence objects serialized as JSON
- Enables formulating protocols in Jupyter notebooks and sending via REST API
- Created comprehensive imswitchclient example (mda_imswitchclient_example.py)
- Shows XYZ timelapse, multi-channel z-stack, and integration patterns
- Updated examples README with REST API usage
- Compatible with imswitchclient library pattern

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Author

Copilot AI commented Dec 21, 2025

@copilot this looks very promising! I think one thing that would be nice is to formulate the protocol outside the imswitch runtime (e.g. in a jupyter notebook) as a useq schema and then send a request...

I've added REST API support for native useq-schema MDA sequences, enabling the exact pattern you requested! Commit 738c39b

New REST API Endpoint

POST /api/experimentcontroller/run_native_mda_sequence

Accepts native useq.MDASequence objects serialized as JSON, allowing you to formulate protocols in Jupyter notebooks and send them to ImSwitch for execution.

XYZ Time-Lapse Example (as requested)

import requests
from useq import MDASequence, Channel, TIntervalLoops, ZRangeAround, AbsolutePosition

# Create in Jupyter notebook
sequence = MDASequence(
    metadata={"experiment": "xyz_timelapse"},
    stage_positions=[
        AbsolutePosition(x=100.0, y=100.0, z=30.0),
        AbsolutePosition(x=200.0, y=150.0, z=35.0),
        AbsolutePosition(x=150.0, y=200.0, z=32.0)
    ],
    channels=[Channel(config="Brightfield", exposure=10.0)],
    z_plan=ZRangeAround(range=10.0, step=2.0),  # 10µm range
    time_plan=TIntervalLoops(interval=60.0, loops=10),  # 10 timepoints
    axis_order="tpzc"
)

# Send to ImSwitch
response = requests.post(
    "http://localhost:8000/api/experimentcontroller/run_native_mda_sequence",
    json=sequence.model_dump()
)
print(response.json())
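One practical caveat when posting the sequence: pydantic's plain `model_dump()` can leave Python-native values (such as a `timedelta` for the `time_plan` interval) in the dict, which the stdlib JSON encoder used by `requests` rejects. The sketch below demonstrates the failure mode with a hypothetical `to_json_safe` helper; in practice `model_dump(mode="json")` is the usual fix (pydantic may encode durations differently, e.g. as ISO-8601 strings):

```python
import json
from datetime import timedelta

def to_json_safe(value):
    """Hypothetical fallback encoder: convert a timedelta into seconds.
    (pydantic's model_dump(mode="json") does this conversion for you,
    possibly with a different duration encoding.)"""
    if isinstance(value, timedelta):
        return value.total_seconds()
    raise TypeError(f"not JSON serializable: {value!r}")

# A time_plan interval may survive a plain model_dump() as a timedelta:
payload = {"time_plan": {"interval": timedelta(seconds=60), "loops": 10}}
body = json.dumps(payload, default=to_json_safe)
```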

Complete Example File

Created examples/mda_imswitchclient_example.py with:

  • XYZ time-lapse scan
  • Multi-channel Z-stack
  • Integration with imswitchclient pattern
  • Raw requests usage (no imswitchclient needed)

Run with: python examples/mda_imswitchclient_example.py

This allows external protocol formulation → REST API → ImSwitch execution, exactly as you described!

- Added timing synchronization for time_plan (waits until min_start_time)
- Z positions from z_plan are automatically handled via event.z_pos
- Grid positions from grid_plan are automatically handled via event.x_pos/y_pos
- Removed TODO comment and placeholder code
- Added detailed comments explaining how useq-schema expands plans into events
- Updated documentation with "How Plans Are Handled" section
- Verified with comprehensive tests (all plans work correctly)

Co-authored-by: beniroquai <4345528+beniroquai@users.noreply.github.com>
Author

Copilot AI commented Dec 22, 2025

@copilot again, this looks very promising, however the z_plan, time_plan, grid_plan handling are missing in the current version. Can you integrate them?

I've implemented proper handling for all three plans! Commit fab506c

The MDASequenceManager now correctly handles:

✅ z_plan - Automatically processes event.z_pos from useq-schema (e.g., ZRangeAround generates events with correct Z positions)

✅ time_plan - Waits until event.min_start_time before executing each event (e.g., TIntervalLoops with interval=5s properly delays between timepoints)

✅ grid_plan - Automatically processes event.x_pos and event.y_pos from useq-schema (e.g., GridRowsColumns generates events with correct grid positions)

How it works:
useq-schema automatically expands these high-level plans into individual MDAEvent objects with the appropriate fields set. The engine then executes each event with proper positioning and timing.
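The execute-each-event loop described above can be sketched as follows, with duck-typed events standing in for MDAEvent and hypothetical `move_xy`/`move_z`/`snap` callbacks standing in for the ImSwitch manager calls (none of these names are the real MDASequenceManager API):

```python
import time
from types import SimpleNamespace

class SketchMDAEngine:
    """Illustrative engine loop: honor min_start_time (time_plan), then
    apply x_pos/y_pos (grid_plan / stage_positions) and z_pos (z_plan)
    before acquiring. Callback names are hypothetical stand-ins."""

    def __init__(self, move_xy, move_z, snap):
        self.move_xy, self.move_z, self.snap = move_xy, move_z, snap

    def run_mda(self, events):
        t0 = time.monotonic()
        for event in events:
            min_start = getattr(event, "min_start_time", None)
            if min_start is not None:  # time_plan: wait until the event's start offset
                remaining = min_start - (time.monotonic() - t0)
                if remaining > 0:
                    time.sleep(remaining)
            if getattr(event, "x_pos", None) is not None:  # grid_plan / stage_positions
                self.move_xy(event.x_pos, event.y_pos)
            if getattr(event, "z_pos", None) is not None:  # z_plan
                self.move_z(event.z_pos)
            self.snap(event)

# Duck-typed events mimicking what useq-schema would generate:
log = []
engine = SketchMDAEngine(
    move_xy=lambda x, y: log.append(("xy", x, y)),
    move_z=lambda z: log.append(("z", z)),
    snap=lambda e: log.append(("snap",)),
)
engine.run_mda([
    SimpleNamespace(min_start_time=0.0, x_pos=100.0, y_pos=100.0, z_pos=-2.0),
    SimpleNamespace(min_start_time=0.0, x_pos=100.0, y_pos=100.0, z_pos=0.0),
])
```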

Verified with tests:

  • z_plan: 5 events with correct Z positions (-2, -1, 0, 1, 2 µm)
  • time_plan: Proper 1s delays between timepoints
  • grid_plan: 2×2 grid with correct XY positions
  • Combined: All plans work together (24 events from 2 timepoints × 2 positions × 2 channels × 3 Z-slices)
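The combined event count is simply the product of the axis sizes; a one-line sanity check using the sizes from the test run above:

```python
from math import prod

# t=2 timepoints, p=2 positions, c=2 channels, z=3 slices, as in the test run
axis_sizes = {"t": 2, "p": 2, "c": 2, "z": 3}
total_events = prod(axis_sizes.values())  # should match len(list(sequence)) in useq
```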

Introduces movePositionerForeverXYZA to PositionerController for simultaneous X, Y, Z, A axis movement with individual speed control. Adds event index logging to MDASequenceManager for better traceability. Removes IMPLEMENTATION_SUMMARY.md documentation file.
Removed append mode from SingleTiffWriter to always create new files. Updated individual TIFF file naming to include current timestamp and channel index/name. Suppressed verbose print statements in SingleTiffWriter and VirtualMicroscopeManager. Fixed Z position workflow logic in ExperimentNormalMode. Set log level to 'error' in ImSwitchServer for cleaner logs. Timepoint directories no longer include timestamp in their names.
Added a new zarr_export.py utility for exploring and exporting OME-Zarr datasets, including CLI and programmatic access. Improved OMEWriterConfig and OMEWriter to support richer OME-NGFF metadata (physical coordinates, channel info, multiscale axes, and pyramid levels). Updated UC2ConfigController to use a JSON API for firmware listing and improved device type matching. Adjusted ImSwitchServer log level to 'info' for better visibility. Added a disk usage warning printout in applaunch.py.
Implements methods for listing serial ports, detecting the CAN HAT master device, downloading firmware from the server, and flashing firmware via USB using esptool. Adds a thread lock to prevent concurrent flashing and exposes a new API endpoint for flashing the master firmware, including options for erasing flash and reconnecting after flashing.
Added comprehensive CAN OTA update procedure documentation. Refactored UC2ConfigController to centralize CAN ID to firmware mapping, improved OTA status tracking and frontend signaling, and added detailed USB flash status updates with new API endpoints. Updated dependencies to include esptool in pyproject.toml and setup.py.
Introduces OME-Zarr storage support and a new Go-Stop-Acquire (step-acquire) scan mode to the LightsheetController. Adds Pydantic models for API parameter validation, scan status tracking, and new API endpoints for scan control and status reporting. Refactors file path handling, scan state management, and enables both TIFF and OME-Zarr output formats for lightsheet acquisitions.
Introduces MJPEG streaming support for observation cameras in LightsheetController, enabling real-time sample positioning visualization. Refactors step-acquire and continuous scan methods to write TIFF and OME-Zarr files immediately during acquisition for improved memory efficiency. Also updates the mock camera to import cv2 within methods to avoid linter issues.
Replaces legacy Zarr writer with new OMEWriter and SingleTiffWriter for immediate, memory-efficient frame writing during lightsheet scans. Adds observation camera MJPEG streaming support for sample positioning. Updates UC2ConfigController to use correct ESP32 flash offset and esptool arguments for firmware flashing.
Added conftest.py and pytest.ini for better pytest configuration and plugin handling. Updated and clarified the test suite structure with a new README. Refactored unit tests to avoid hacky imports and disabled problematic tests by renaming them. Improved DetectorsManager robustness by checking attribute existence before use. Updated GitHub Actions workflow for more reliable and explicit test execution.
@beniroquai beniroquai marked this pull request as ready for review January 5, 2026 19:30
Copilot AI review requested due to automatic review settings January 5, 2026 19:30
Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
