I found the Docker instructions in the README.md useful, in particular the differences between the Docker variants, such as ffmpeg and CUDA support. However, this section was removed in v1.7.4, and I would vote to bring it back.

This pull request adds that section back.

README.md: 32 additions & 0 deletions

@@ -360,6 +360,38 @@ Run the inference examples as usual, for example:

- If you have trouble with Ascend NPU device, please create a issue with **[CANN]** prefix/tag.
- If you run successfully with your Ascend NPU device, please help update the table `Verified devices`.

## Docker

### Prerequisites

- Docker must be installed and running on your system.
- Create a folder to store big models & intermediate files (ex. /whisper/models)

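Assuming you go with the example path from the bullet above, creating that folder is a one-liner; the path itself is only a suggestion, any writable folder works:

```shell
# create a local folder for models and intermediate files
# (/whisper/models is just the example path from the prerequisites)
mkdir -p /whisper/models
```
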
### Images

We have two Docker images available for this project:

1. `ghcr.io/ggerganov/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggerganov/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)

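As a small sketch, both tags can be pre-pulled like any other image (optional, since `docker run` pulls on demand):

```shell
# pre-pull both image variants listed above
docker pull ghcr.io/ggerganov/whisper.cpp:main
docker pull ghcr.io/ggerganov/whisper.cpp:main-cuda
```

Note that the `main-cuda` variant also needs GPU access at run time, typically by adding `--gpus all` to `docker run` with the NVIDIA Container Toolkit installed on the host.
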
### Usage

```shell
380
+
# download model and persist it in a local folder
381
+
docker run -it --rm \
382
+
-v path/to/models:/models \
383
+
whisper.cpp:main "./models/download-ggml-model.sh base /models"
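The same pattern can then be used to run a transcription inside the container. The sketch below assumes the `ggml-base` model downloaded above, an audio folder mounted from the host, and that the image's CLI is invoked as `whisper-cli`; adjust the binary name and paths to your setup:

```shell
# transcribe a mounted audio file with the model downloaded above
# (host paths, the audio file name, and the `whisper-cli` binary name are assumptions)
docker run -it --rm \
  -v path/to/models:/models \
  -v path/to/audios:/audios \
  whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f /audios/jfk.wav"
```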