Commit d0efaed

Merge pull request #145 from Shenhan11/fix/doc-issues
Optimize document titles and navigation display
2 parents 1dfa70b + b69e557 · commit d0efaed

152 files changed

Lines changed: 427 additions & 289 deletions

docs/developers/dynamic-mig.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -1,9 +1,8 @@
 ---
-title: Dynamic MIG Implementation
+title: NVIDIA GPU MPS and MIG dynamic slice plugin
+linktitle: Dynamic MIG Implementation
 ---
 
-## NVIDIA GPU MPS and MIG dynamic slice plugin
-
 ## Special Thanks
 
 This feature will not be implemented without the help of @sailorvii.
```

docs/developers/mindmap.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -2,6 +2,4 @@
 title: HAMi mind map
 ---
 
-## Mind map
-
 ![HAMi VGPU mind map showing project structure and components](../resources/HAMI-VGPU-mind-map-English.png)
```

docs/developers/protocol.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -2,8 +2,6 @@
 title: Protocol design
 ---
 
-## Protocol Implementation
-
 ### Device Registration
 
 <img src="https://github.com/Project-HAMi/HAMi/raw/master/docs/develop/imgs/protocol_register.png" width="600px" alt="HAMi device registration protocol diagram showing node annotation process" />
```

docs/developers/scheduling.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -15,7 +15,7 @@ use can set Pod annotation to change this default policy, use `hami.io/node-sche
 
 This is a GPU cluster, having two node, the following story takes this cluster as a prerequisite.
 
-![scheduler-policy-story.png](../resources/scheduler-policy-story.png)
+![HAMi scheduler policy story diagram, showing node and GPU resource distribution](../resources/scheduler-policy-story.png)
 
 #### Story 1
 
@@ -83,7 +83,7 @@ GPU spread, use different GPU cards when possible, egs:
 
 ### Node-scheduler-policy
 
-![node-scheduler-policy-demo.png](../resources/node-scheduler-policy-demo.png)
+![HAMi node scheduler policy diagram, showing Binpack and Spread node selection](../resources/node-scheduler-policy-demo.png)
 
 #### Binpack
 
@@ -131,7 +131,7 @@ So, in `Spread` policy we can select `Node2`.
 
 ### GPU-scheduler-policy
 
-![gpu-scheduler-policy-demo.png](../resources/gpu-scheduler-policy-demo.png)
+![HAMi GPU scheduler policy diagram, comparing Binpack and Spread scores on each card](../resources/gpu-scheduler-policy-demo.png)
 
 #### Binpack
 
```
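
As context for the renamed images above: this doc describes node-level and GPU-level `Binpack`/`Spread` policies selected via Pod annotations. A minimal sketch of such a Pod, assuming the full annotation keys are `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` (the first key is truncated in the hunk header above), with an illustrative container image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
  annotations:
    # Assumed full keys; the diff context truncates at `hami.io/node-sche`
    hami.io/node-scheduler-policy: "binpack"  # pack workloads onto fewer nodes
    hami.io/gpu-scheduler-policy: "spread"    # spread across GPU cards within a node
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04  # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1
```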

docs/installation/how-to-use-volcano-vgpu.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 ---
-linktitle: Volcano vGPU
 title: Volcano vGPU device plugin for Kubernetes
+linktitle: Use Volcano vGPU
 ---
 
 :::note
```

docs/userguide/ascend-device/device-template.md

Lines changed: 3 additions & 0 deletions

````diff
@@ -2,6 +2,9 @@
 title: Ascend device template
 ---
 
+Ascend device templates define how a physical Ascend card is sliced into virtual instances that HAMi can schedule.
+Each template describes the available memory, AI cores and optional CPU resources for a given card model.
+When a Pod requests Ascend resources, HAMi selects a suitable template according to the requested memory and compute.
 
 ```yaml
 vnpus:
````
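
For readers of this hunk, the `vnpus` list the new prose refers to might look like the sketch below. The field names and values (`chipName`, `memoryAllocatable`, `aiCore`, `templates`, and the `vir*` slice names) follow HAMi's usual Ascend config convention but are assumptions here, not part of this diff:

```yaml
vnpus:
  - chipName: 910B              # physical card model this template set applies to
    commonWord: Ascend910A      # short name used in resource requests
    resourceName: huawei.com/Ascend910A
    resourceMemoryName: huawei.com/Ascend910A-memory
    memoryAllocatable: 32768    # MiB available for slicing (illustrative value)
    aiCore: 30                  # AI cores on the physical card (illustrative value)
    templates:
      - name: vir02             # smallest slice: 2 AI cores
        memory: 2184
        aiCore: 2
      - name: vir04
        memory: 4369
        aiCore: 4
```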

docs/userguide/configure.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -1,9 +1,8 @@
 ---
-title: Configuration
+title: Global Config
+linktitle: Configuration
 ---
 
-## Global Config
-
 ## Device Configs: ConfigMap
 
 :::note
```

docs/userguide/hygon-device/specify-device-core-usage.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -1,9 +1,8 @@
 ---
-title: Allocate device core usage
+title: Allocate device core to container
+linktitle: Allocate device core usage
 ---
 
-## Allocate device core to container
-
 Allocate a percentage of device core resources by specify resource `hygon.com/dcucores`.
 Optional, each unit of `hygon.com/dcucores` equals to 1% device cores.
 
```
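
The `hygon.com/dcucores` resource described in this hunk is requested per container. A sketch of such a Pod, assuming `hygon.com/dcunum` is the companion whole-device count resource (it is not shown in this diff) and using an illustrative image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dcu-cores-demo
spec:
  containers:
    - name: app
      image: ubuntu:22.04        # illustrative image
      resources:
        limits:
          hygon.com/dcunum: 1    # one DCU (assumed companion resource)
          hygon.com/dcucores: 60 # 60% of the device's cores, per the doc above
```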

docs/userguide/hygon-device/specify-device-uuid-to-use.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -2,8 +2,6 @@
 title: Assign to certain device
 ---
 
-## Assign to certain device type
-
 Sometimes a task may wish to run on a certain DCU, it can fill the `hygon.com/use-gpuuuid` field in pod annotation. HAMi scheduler will try to fit in device with that uuid.
 
 For example, a task with the following annotation will be assigned to the device with uuid `DCU-123456`
```
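
The annotation described in this hunk sits in Pod metadata. A sketch using the `DCU-123456` uuid from the doc's own example (the container image and the `hygon.com/dcunum` count resource are illustrative assumptions, not part of this diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dcu-pinned-demo
  annotations:
    hygon.com/use-gpuuuid: "DCU-123456"  # scheduler tries to place the task on this device
spec:
  containers:
    - name: app
      image: ubuntu:22.04      # illustrative image
      resources:
        limits:
          hygon.com/dcunum: 1  # assumed DCU count resource, not shown in this hunk
```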

docs/userguide/kueue/how-to-use-kueue.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -2,8 +2,6 @@
 title: How to use kueue on HAMi
 ---
 
-## Using Kueue with HAMi
-
 This guide will help you use Kueue to manage HAMi vGPU resources, including enabling Deployment support, configuring ResourceTransformation, and creating workloads that request vGPU resources.
 
 ## Prerequisites
```
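
The guide summary in this hunk mentions Deployments whose vGPU requests are managed by Kueue. A hedged sketch of such a workload, assuming a LocalQueue named `user-queue` exists and that the cluster exposes `nvidia.com/gpu` plus `nvidia.com/gpumem` (names follow HAMi's usual resource scheme; none of this appears in the diff itself):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vgpu-demo
spec:
  replicas: 1
  selector:
    matchLabels: {app: vgpu-demo}
  template:
    metadata:
      labels:
        app: vgpu-demo
        kueue.x-k8s.io/queue-name: user-queue  # standard Kueue queue label; queue name assumed
    spec:
      containers:
        - name: cuda
          image: nvidia/cuda:12.4.0-base-ubuntu22.04  # illustrative image
          resources:
            limits:
              nvidia.com/gpu: 1       # one vGPU slice
              nvidia.com/gpumem: 2000 # MiB of device memory (assumed HAMi resource)
```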
