{{< summary-bar feature_name="Compose model runner" >}}
Docker Model Runner can be integrated with Docker Compose to run AI models as part of your multi-container applications.
This lets you define and run AI-powered applications alongside your other services.
## Prerequisites
- Docker Compose v2.35 or later
- Docker Desktop 4.41 or later
- Docker Desktop for Mac with Apple Silicon or Docker Desktop for Windows with NVIDIA GPU
- [Docker Model Runner enabled in Docker Desktop](/manuals/desktop/features/model-runner.md#enable-docker-model-runner)
## Provider services

Compose lets you declare a service as a model provider, which specifies the AI model required by your application. For example:

```yaml
services:
  chat:
    build: .
    depends_on:
      - ai-runner

  ai-runner:
    provider:
      type: model
      options:
        model: ai/smollm2
```
Notice the dedicated `provider` attribute in the `ai-runner` service.
This attribute specifies that the service is a model provider and lets you define options such as the name of the model to be used.
There is also a `depends_on` attribute in the `chat` service.
This attribute specifies that the `chat` service depends on the `ai-runner` service.
This means that the `ai-runner` service is started before the `chat` service, so that model information can be injected into the `chat` service.
## How it works
During the `docker compose up` process, Docker Model Runner automatically pulls and runs the specified model.
It also sends Compose the model tag name and the URL to access the model runner.
This information is then passed to services that declare a dependency on the model provider.
In the example above, the `chat` service receives two environment variables prefixed by the service name:
- `AI-RUNNER_URL` with the URL to access the model runner
- `AI-RUNNER_MODEL` with the model name, which can be passed along with the URL to request the model
This lets the `chat` service interact with the model and use it for its own purposes.
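As an illustration, here is a minimal sketch (not taken from the Compose documentation) of how a dependent service could build a request from the injected variables. The runner URL value and the `chat/completions` endpoint path are assumptions, based on the model runner exposing an OpenAI-compatible API.

```python
import json
import urllib.request

# Simulated environment for illustration; in a real container Compose
# injects these variables, and the URL below is a hypothetical value.
env = {
    "AI-RUNNER_URL": "http://model-runner.docker.internal/engines/v1/",
    "AI-RUNNER_MODEL": "ai/smollm2",
}

# Build an OpenAI-style chat completion request against the runner URL.
endpoint = env["AI-RUNNER_URL"].rstrip("/") + "/chat/completions"
payload = {
    "model": env["AI-RUNNER_MODEL"],
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Sending would be urllib.request.urlopen(request); omitted here because
# the runner is only reachable from inside the Compose network.
print(endpoint)
```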
## Reference

- [Docker Model Runner documentation](/manuals/desktop/features/model-runner.md)