Conversation
Enable `--parallel N` for the Appium driver. All N sessions hit the same Appium URL with identical capabilities — the server (local or cloud) allocates devices. Cloud providers (Sauce Labs, BrowserStack) get per-session result reporting.

Changes:
- `determineExecutionMode`: generate virtual IDs for Appium (like browser)
- `createAppiumWorkers`: create N sessions against same URL
- `executeAppiumParallel`: orchestrate workers via existing `ParallelRunner`
- Per-worker cloud provider detection and reporting
- Remove "parallel not yet supported" error for Appium
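A minimal sketch of the virtual-ID step described above, assuming a hypothetical `appiumWorkerIDs` helper and an `appium-worker-N` naming scheme (the actual `determineExecutionMode` code may differ):

```go
package main

import "fmt"

// appiumWorkerIDs sketches the "virtual IDs for Appium" step: one
// synthetic device ID per parallel session, mirroring the browser
// driver. The name and the "appium-worker-%d" format are assumptions
// for illustration, not the PR's actual code.
func appiumWorkerIDs(n int) []string {
	ids := make([]string, 0, n)
	for i := 1; i <= n; i++ {
		ids = append(ids, fmt.Sprintf("appium-worker-%d", i))
	}
	return ids
}

func main() {
	fmt.Println(appiumWorkerIDs(3)) // [appium-worker-1 appium-worker-2 appium-worker-3]
}
```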
Hi @omnarayan. Thanks for this PR. It's an important improvement when using maestro-runner with Appium. I tested this PR on the Sauce Labs platform. Here is the command I ran:

`maestro-runner --driver appium`

My main expectation is that one YAML file should run on one device.

I hope this makes sense.
Don't create more Appium sessions than there are flows to run. With --parallel 3 and 2 flows, only 2 sessions are created instead of wasting a third device.
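The cap described above boils down to a min(), sketched here with an illustrative `sessionCount` helper (not the PR's actual code):

```go
package main

import "fmt"

// sessionCount caps worker sessions at the number of flows, so
// --parallel 3 with only 2 flows opens 2 Appium sessions instead
// of wasting a third device. Illustrative helper name.
func sessionCount(parallel, flows int) int {
	if flows < parallel {
		return flows
	}
	return parallel
}

func main() {
	fmt.Println(sessionCount(3, 2))  // 2
	fmt.Println(sessionCount(2, 10)) // 2
}
```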
@eyaly, really appreciate you taking the time to test on Sauce Labs and putting together such detailed feedback — it made tracking this down much easier!

Issue 2 and Issue 3: the fix is pushed to this PR. Would love it if you could re-test when you get a chance — your feedback has been invaluable! 🙏
Thanks. For Issue 3 (`--parallel 1`), where the number of YAML files is greater than the number of parallel executions, from my perspective: even without defining the parallel parameter, all YAML files should run sequentially on the same device (or different devices), but each with its own Appium job ID. This would result in multiple executions on Sauce Labs. The parallel parameter defines how many executions run at the same time.

For example, currently in this scenario (3 YAML files with `--parallel 2`):
- 2 YAML files run on the same device with the same Appium job ID
- The third YAML file runs on a second device in parallel

I hope this makes sense.
@eyaly, I respect your view on this, and honestly I think you're right — there's no wrong answer here, just two different ways of looking at it. The reason I went with this design (and I did add a
That said, I'll admit point 1 isn't exactly airtight, and I can see it being argued the other way. At the end of the day, we probably need to support both behaviors — the real question is just which one becomes the default. And to be clear: I'm not trying to make a final call here or act like I have the authority to close this out. I don't. This is still very much on the table — just wanted to share where my head was at when I built it this way.
Hi @omnarayan - I can see your points. With Appium, both approaches are actually used.

With Android Espresso, you define the number of shards, and the tests are split and executed across that number of Android devices (so that matches your approach).

From what I've observed with maestro-runner, if I have 10 YAML files and use `--parallel 2`, the files are evenly distributed between the two devices (5 YAML files per device). This is great :-)

One improvement I can add later for Sauce Labs executions: adding the YAML file name to the logs when execution starts. Currently, if 5 YAML files are executed with the same Appium job, it's difficult to tell when one YAML file ends and another begins.

If both approaches are technically possible to implement on your side, you could keep the current behavior as the default, and it would be great to have an optional parameter that allows running each YAML file in a new Appium session.

Thanks,
Populate DeviceName, DeviceID, OSVersion from Appium session caps in GetPlatformInfo. Add SessionID to report.Device (omitempty, only shows for Appium). Session ID now appears in parallel console output, per-flow detail section, and JSON/HTML reports. No impact on non-Appium drivers.
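A hedged sketch of how session capabilities might be mapped onto the report's device fields. The struct shape follows the JSON shown later in this thread; the `deviceFromCaps` helper and the exact capability keys (`appium:udid`, `appium:deviceName`, `platformName`, `appium:platformVersion`) are standard W3C/Appium names assumed here, not lifted from the PR:

```go
package main

import "fmt"

// Device mirrors the report shape described in the commit: SessionID
// carries omitempty so it only appears for Appium runs.
type Device struct {
	ID        string `json:"id"`
	Name      string `json:"name"`
	Platform  string `json:"platform"`
	OSVersion string `json:"osVersion"`
	SessionID string `json:"sessionId,omitempty"`
}

// deviceFromCaps pulls device details out of the capability map an
// Appium server returns for an active session. Illustrative only.
func deviceFromCaps(sessionID string, caps map[string]any) Device {
	str := func(k string) string {
		if v, ok := caps[k].(string); ok {
			return v
		}
		return ""
	}
	return Device{
		ID:        str("appium:udid"),
		Name:      str("appium:deviceName"),
		Platform:  str("platformName"),
		OSVersion: str("appium:platformVersion"),
		SessionID: sessionID,
	}
}

func main() {
	d := deviceFromCaps("example-session", map[string]any{
		"appium:udid":            "11171JEC200939",
		"platformName":           "android",
		"appium:platformVersion": "13",
	})
	fmt.Println(d.ID, d.OSVersion) // 11171JEC200939 13
}
```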
@eyaly Thanks for the quality-of-life improvement suggestions! Here's what we've implemented — please review.

**Changes Implemented**

**1. YAML filename and flow name in log**

Added across all relevant log output locations: the parallel execution log and the per-flow output. The JSON report now includes device & session details:

```json
"device": {
  "id": "11171JEC200939",
  "name": "11171JEC200939",
  "platform": "android",
  "osVersion": "13",
  "sessionId": "3a9c2fd1-d35e-4cbd-8f54-61a0203a177b",
  "isSimulator": true
}
```

**2. Per-YAML Appium session (optional parameter)**

The option to run each YAML file in a new Appium session already exists in the YAML config. Would you also like it exposed as a CLI argument?

**On Scheduling: Pull-Based vs. Pre-Assignment**

Since you raised it: maestro-runner uses a pull-based scheduler, not static pre-assignment. This avoids the classic pitfalls of static sharding. So while it may look like files are evenly distributed (e.g. 5 per device with `--parallel 2`), that balance emerges from workers pulling the next flow as they become free.

**On Android Espresso Sharding**

You're right that Espresso's sharding pre-assigns tests to a fixed number of shards up front. For context: at DeviceLab, we've implemented pull-based scheduling for Espresso, Appium, and other frameworks for exactly this reason — it consistently yields better device utilisation regardless of test duration variance.
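The pull-based scheduling model mentioned above can be sketched with a shared channel that workers drain as they free up. This illustrates the scheduling idea only, not the real `ParallelRunner`:

```go
package main

import (
	"fmt"
	"sync"
)

// runPullBased hands flows to workers from a shared queue: a worker
// takes the next flow when it finishes its current one, so slow flows
// never hold back an idle device. Returns which worker ran which flows.
func runPullBased(flows []string, workers int) map[int][]string {
	queue := make(chan string, len(flows))
	for _, f := range flows {
		queue <- f
	}
	close(queue)

	var mu sync.Mutex
	assigned := make(map[int][]string)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for f := range queue { // pull until the queue is drained
				mu.Lock()
				assigned[id] = append(assigned[id], f)
				mu.Unlock()
			}
		}(w)
	}
	wg.Wait()
	return assigned
}

func main() {
	got := runPullBased([]string{"a.yaml", "b.yaml", "c.yaml"}, 2)
	total := 0
	for _, fs := range got {
		total += len(fs)
	}
	fmt.Println(total) // 3
}
```

The per-worker split depends on timing, but every flow runs exactly once; with static sharding, the split would be fixed before any flow started.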
**Summary**

- Enables `--parallel N` for the Appium driver — all N sessions hit the same `--appium-url` with identical capabilities; the server (local or cloud) allocates devices
- Uses the existing `ParallelRunner` work-queue infrastructure — no changes to the executor layer

**Usage**
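A representative invocation, assembled from the flags named in this PR (`--driver appium`, `--appium-url`, `--parallel`); the server URL is a placeholder and other arguments are omitted:

```shell
# 3 parallel sessions against one Appium server; the server allocates devices
maestro-runner --driver appium \
  --appium-url http://localhost:4723 \
  --parallel 3
```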
**Test plan**

- `go build ./...` passes
- `go test ./pkg/cli/... ./pkg/executor/...` passes
- `--parallel 2` — flows distributed across both devices
- Sequential (`--parallel 1` / no flag) — unchanged behavior

**Notes**
- iOS needs a distinct `appium:udid` per session — the Appium XCUITest driver doesn't auto-distribute across booted simulators with identical caps
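Because identical caps won't fan out across simulators, each worker's capability set needs its own `appium:udid`. A sketch of that per-worker cloning, with illustrative names and made-up simulator UDIDs:

```go
package main

import "fmt"

// capsPerWorker clones the base capabilities once per worker and pins
// each clone to one simulator UDID, so the XCUITest driver targets a
// distinct booted simulator per session. Illustrative helper only.
func capsPerWorker(base map[string]any, udids []string) []map[string]any {
	out := make([]map[string]any, 0, len(udids))
	for _, udid := range udids {
		caps := make(map[string]any, len(base)+1)
		for k, v := range base { // shallow copy of the shared caps
			caps[k] = v
		}
		caps["appium:udid"] = udid
		out = append(out, caps)
	}
	return out
}

func main() {
	workers := capsPerWorker(
		map[string]any{"platformName": "iOS"},
		[]string{"SIM-A", "SIM-B"},
	)
	fmt.Println(len(workers), workers[1]["appium:udid"]) // 2 SIM-B
}
```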