tutorials/docs/etl/expose.md (+1 -1)

@@ -109,6 +109,6 @@ Go to the Kubernetes Resource Manager component (available from dashboard) and g
 The platform by default support exposing the methods at the subdomains of ``services.<platform-domain>``, where platform-domain is the domain of the platform instance.
-
+
 *Save* and, after a few moments, you will be able to call the API at the address you defined! If you set *Authentication* to *Basic*, don't forget that you have to provide the credentials.
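The exposed endpoint can then be called like any HTTP API. As a rough sketch (the subdomain, path, and credentials below are placeholders, not values taken from the tutorial), a request with HTTP Basic authentication can be built with the Python standard library:

```python
import base64
import json
import urllib.request

# Placeholder values: substitute the subdomain you configured under
# services.<platform-domain> and the credentials you set in the dashboard.
BASE_URL = "https://my-service.services.example.com"
USER, PASSWORD = "user", "secret"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request carrying an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request("/predict", {"data": [[1.0, 2.0]]})
# Against a live deployment: urllib.request.urlopen(req)
```

If the request is sent without the `Authorization` header, a Basic-protected endpoint will answer with 401 Unauthorized.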
tutorials/docs/flower/flower.md (+1 -1)

@@ -9,7 +9,7 @@ The platform support this approach natively integrating the Flower framework in
 - support creating a federation, with central Superlink node and a set of client Supernodes distributed potentially outside of the platform in a secure manner (with TLS verification and client authentication)
 - activate the training procedures defined with the server coordination code and client training code managed by the platform.
 
-See more details in the description of the corresponding [Flower runtime](../../runtimes/fl.md).
+See more details in the description of the corresponding [Flower runtime](../../../runtimes/fl/).
 
 This tutorial demonstrates how to use the Flower FL framework for execution of federated learning tasks. The tutorial is based on official Pandas example of Flower framework.
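The Superlink/Supernode split described above boils down to clients computing summaries of their local data and a central node aggregating those summaries without ever seeing the raw records. A toy illustration of that pattern in plain Python (deliberately not the Flower API, which exchanges parameters over gRPC with TLS):

```python
# Toy federated-mean sketch: each "Supernode" summarizes its private data
# and the "Superlink" combines the summaries. Illustrative only.

def client_update(values):
    """Local computation on one client's private data: (sum, count)."""
    return sum(values), len(values)

def server_aggregate(updates):
    """Combine (sum, count) pairs from all clients into a global mean."""
    total = sum(s for s, _ in updates)
    count = sum(n for _, n in updates)
    return total / count

updates = [client_update([1.0, 2.0]), client_update([3.0, 5.0, 7.0])]
global_mean = server_aggregate(updates)  # → 3.6
```

The raw values never leave the clients; only the aggregates travel to the central node, which is the core privacy property federated learning relies on.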
tutorials/docs/ml/deploy.md (+1 -1)

@@ -130,6 +130,6 @@ Go to the Kubernetes Resource Manager component (available from dashboard) and g
 The platform by default support exposing the methods at the subdomains of ``services.<platform-domain>``, where platform-domain is the domain of the platform instance.
-
+
 *Save* and, after a few moments, you will be able to call the API at the address you defined! If you set *Authentication* to *Basic*, don't forget that you have to provide the credentials.

-Please note the use of the ``profile`` parameter. As the LLM models require specific hardware (GPU in particular), it is necessary to specify the HW requirements as described in the [Configuring Kubernetes executions](../../tasks/kubernetes-resources.md) section. In particular, it is possible to rely on the predefined resource templates of the platform deployment.
+Please note the use of the ``profile`` parameter. As the LLM models require specific hardware (GPU in particular), it is necessary to specify the HW requirements as described in the [Configuring Kubernetes executions](../../../tasks/kubernetes-resources/) section. In particular, it is possible to rely on the predefined resource templates of the platform deployment.
 As in other scenarios, you need to wait a bit for the service to become available.
 Once the service becomes available, it is possible to make the calls:

 As in case of classification models, the LLM models require specific hardware (GPU in particular), it is necessary
-to specify the HW requirements as described in the [Configuring Kubernetes executions](../../tasks/kubernetes-resources.md) section. In particular, it is possible to rely on the predefined resource templates of the platform deployment.
+to specify the HW requirements as described in the [Configuring Kubernetes executions](../../../tasks/kubernetes-resources/) section. In particular, it is possible to rely on the predefined resource templates of the platform deployment.
 Once the service becomes available, it is possible to make the calls. For example, for the completion requests:

 Please note the use of the ``profile`` parameter. As the LLM models require specific hardware (GPU in particular), it is necessary
-to specify the HW requirements as described in the [Configuring Kubernetes executions](../../tasks/kubernetes-resources.md) section. In particular, it is possible to rely on the predefined resource templates of the platform deployment. Also, in case of large models the default disk space may be insufficient and an extra volume should be configured for the underlying deployment.
+to specify the HW requirements as described in the [Configuring Kubernetes executions](../../../tasks/kubernetes-resources) section. In particular, it is possible to rely on the predefined resource templates of the platform deployment. Also, in case of large models the default disk space may be insufficient and an extra volume should be configured for the underlying deployment.
 Once the service becomes available, it is possible to make the calls:
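As a sketch of such a completion call: LLM deployments of this kind typically expose an OpenAI-compatible HTTP API, so a completion request is a JSON POST to a `/v1/completions` path. The URL and model name below are placeholders, not values from this tutorial:

```python
import json
import urllib.request

# Placeholder endpoint and model name; substitute the address and the
# name under which your model was actually deployed.
URL = "https://my-llm.services.example.com/v1/completions"

payload = {
    "model": "my-model",          # deployed model name (assumed)
    "prompt": "Once upon a time",
    "max_tokens": 32,
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With a live service: body = urllib.request.urlopen(req).read()
```

The response is a JSON object whose generated text sits under the `choices` field, as in the OpenAI completions format.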
tutorials/docs/mlllm/llmkubeai.md (+1 -1)

@@ -10,7 +10,7 @@ For what concerns LLM tasks, currently KubeAI runtime allows for deploying the m
 To accomplish this, it is possible to use one of the KubeAI-supported runtimes, namely [vLLM](https://docs.vllm.ai/), [OLlama](https://ollama.com/), and [Infinity](https://michaelfeil.eu/infinity). in case of vLLM also adapters are supported.
 
-For details about the specification, see the corresponding section of [Modelserve](../../runtimes/modelserve.md) reference.
+For details about the specification, see the corresponding section of [Modelserve](../../../runtimes/modelserve) reference.
tutorials/docs/mlmlflow/deploy.md (+1 -1)

@@ -60,6 +60,6 @@ Go to the Kubernetes Resource Manager component (available from dashboard) and g
 The platform by default support exposing the methods at the subdomains of ``services.<platform-domain>``, where platform-domain is the domain of the platform instance.
-
+
 *Save* and, after a few moments, you will be able to call the API at the address you defined! If you set *Authentication* to *Basic*, don't forget that you have to provide the credentials.
tutorials/docs/mlsklearn/deploy.md (+1 -1)

@@ -47,6 +47,6 @@ Go to the Kubernetes Resource Manager component (available from dashboard) and g
 The platform by default support exposing the methods at the subdomains of ``services.<platform-domain>``, where platform-domain is the domain of the platform instance.
-
+
 *Save* and, after a few moments, you will be able to call the API at the address you defined! If you set *Authentication* to *Basic*, don't forget that you have to provide the credentials.