feat: support whisper with openvino #417
Open
ziyuanguo1998 wants to merge 2 commits into
```python
# Get arguments
output_dir: str = oliveJson["output_dir"]
cache_dir: str = oliveJson["cache_dir"]
config_pass = oliveJson["passes"]["aitkpython"]
```

```python
# The cwd is model project folder
history_folder = os.path.dirname(args.config)
```

```python
# Licensed under the MIT License.
# --------------------------------------------------------------------------
import os
import sys
```
Pull request overview
This PR adds an Intel OpenVINO workflow for the openai/whisper-large-v3-turbo recipe under the AITK/Foundry Toolkit structure, alongside updates to the central model list/docs to advertise Intel CPU/GPU/NPU support.
Changes:
- Add an OpenVINO AitkPython workflow (`ov_workflow.json` + `ov_workflow.py`) plus an inference notebook sample.
- Add OpenVINO conversion/encapsulation Olive configs and supporting conversion scripts/configs in the recipe folder.
- Register Intel runtimes for this model in `.aitk/configs/model_list.json` and `.aitk/docs/guide/ModelList.md`, and bump the recipe version.
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| openai-whisper-large-v3-turbo/aitk/whisper_large_v3_turbo_encapsulate.json | Adds OpenVINO encapsulation Olive config (currently has inconsistent system/EP vs target_device). |
| openai-whisper-large-v3-turbo/aitk/whisper_large_v3_turbo_default_ov_npu.json | Adds OpenVINO optimum conversion Olive config (currently has inconsistent system/EP vs intended OpenVINO NPU conversion). |
| openai-whisper-large-v3-turbo/aitk/qnn_evaluate.py | Removes an unused import. |
| openai-whisper-large-v3-turbo/aitk/ov_workflow.py | Adds the AitkPython entrypoint for OpenVINO conversion/eval (currently ignores runtime/output_dir). |
| openai-whisper-large-v3-turbo/aitk/ov_workflow.json.config | Adds the ModelParameter config for the OpenVINO workflow (currently includes a non-Olive JSON in oliveFile, and a mismatched runtime path). |
| openai-whisper-large-v3-turbo/aitk/ov_workflow.json | Adds the AitkPython Olive workflow wrapper (currently has output_dir inconsistent with produced artifacts). |
| openai-whisper-large-v3-turbo/aitk/ov_workflow_inference_sample.ipynb | Adds an OpenVINO inference example notebook. |
| openai-whisper-large-v3-turbo/aitk/model_project.config | Registers the new ov_workflow.json template and bumps model version. |
| openai-whisper-large-v3-turbo/aitk/info.yml | Registers the OpenVINO recipe entry and bumps version (currently has invalid YAML indentation for devices). |
| openai-whisper-large-v3-turbo/aitk/convert_whisper_to_ovir.py | Adds a conversion+encapsulation helper script for Whisper → OpenVINO IR + encapsulated ONNX. |
| openai-whisper-large-v3-turbo/aitk/audio_processor_config_default.json | Adds the default audio feature extraction config used by the OpenVINO workflow. |
| .aitk/requirements/requirements-IntelNPU-WP.txt | Adds a new runtime requirements set for IntelNPU with WP feature. |
| .aitk/docs/guide/ModelList.md | Updates docs to list Intel CPU/GPU/NPU support for Whisper. |
| .aitk/configs/model_list.json | Updates the model registry to include IntelCPU/IntelGPU/IntelNPU runtimes and bumps version. |
Comments suppressed due to low confidence (1)
openai-whisper-large-v3-turbo/aitk/ov_workflow.json.config:30
- The runtime parameter path points to `systems.local_system...`, but `ov_workflow.json` uses `systems.target_system`. This will be rewritten by sanitize and can break runtime selection until then; update it to `systems.target_system.accelerators.0.device`.
```json
],
"path": "systems.local_system.accelerators.0.device",
"values": [
    "cpu",
    "npu",
```
| "$schema": "https://github.com/microsoft/olive-recipes/raw/refs/heads/main/.aitk/configs/config_schema.json", | ||
| "name": "Convert to Intel CPU/NPU/GPU", | ||
| "oliveFile": { | ||
| "audio_processor_config_default.json": "OpenVINO/audio_processor_config_default.json", |
| "evaluate_input_model": false, | ||
| "target": "target_system", | ||
| "clean_cache": false, | ||
| "output_dir": "model/whisper_ov", |
Comment on lines +10 to +12

```yaml
- npu
- cpu
- gpu
```
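The file summary above notes that `info.yml` has invalid YAML indentation for this list. A plausible corrected shape, assuming the items sit under a `devices` key as the summary implies, would be:

```yaml
devices:
  - npu
  - cpu
  - gpu
```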
Comment on lines +11 to +13

```json
"device": "cpu",
"execution_providers": [
    "CPUExecutionProvider"
```
Comment on lines +11 to +13

```json
"device": "cpu",
"execution_providers": [
    "CPUExecutionProvider"
```
Comment on lines +25 to +29

```python
# Get arguments
output_dir: str = oliveJson["output_dir"]
cache_dir: str = oliveJson["cache_dir"]
config_pass = oliveJson["passes"]["aitkpython"]
```
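The field reads shown in the snippet above can be collected in one small helper; a minimal, self-contained sketch (the `load_workflow_args` name and returned dict shape are illustrative — only the JSON keys come from the snippet):

```python
import json


def load_workflow_args(config_path: str) -> dict:
    # Parse the Olive workflow JSON and pull out the fields the
    # AitkPython entrypoint consumes (keys as in the snippet above).
    with open(config_path, encoding="utf-8") as f:
        olive_json = json.load(f)
    return {
        "output_dir": olive_json["output_dir"],
        "cache_dir": olive_json["cache_dir"],
        "config_pass": olive_json["passes"]["aitkpython"],
    }
```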
Comment on lines +12 to +16

```python
parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True, help="path to input config file")
parser.add_argument("--model_config", help="path to input model config file")
parser.add_argument("--runtime", required=True, help="runtime")
return parser.parse_args()
```
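The parser above can be exercised directly by handing an argv list to `parse_args`; a minimal sketch (the argument values are made up, and accepting an optional argv list instead of always reading `sys.argv` is a testability tweak, not part of the PR):

```python
import argparse


def parse_args(argv=None):
    # Same CLI surface as the snippet above; argv=None falls back
    # to sys.argv when run as a script.
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="path to input config file")
    parser.add_argument("--model_config", help="path to input model config file")
    parser.add_argument("--runtime", required=True, help="runtime")
    return parser.parse_args(argv)


args = parse_args(["--config", "ov_workflow.json", "--runtime", "IntelNPU"])
```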
Comment on lines +50 to +51

```python
subprocess.run([sys.executable, "convert_whisper_to_ovir.py",
                "--enable_npu_ws", "True"],
```
xieofxie reviewed May 13, 2026
```text
urllib3==2.6.3
zipp==3.23.0
wasdk-Microsoft.Windows.AI.MachineLearning==1.8.251106002
wasdk-Microsoft.Windows.ApplicationModel.DynamicDependency.Bootstrap==1.8.251106002
```
No description provided.