README.md: 8 additions & 4 deletions
```diff
@@ -60,8 +60,8 @@ This image provides various versions that are available via tags. Please read th
 | Tag | Available | Description |
 | :----: | :----: |--- |
 | latest | ✅ | Stable releases |
-| gpu | ✅ | Releases with Nvidia GPU support |
-| gpu-legacy | ✅ | Legacy releases with Nvidia GPU support for pre-Turing cards |
+| gpu | ✅ | Releases with Nvidia GPU support (amd64 only) |
+| gpu-legacy | ✅ | Legacy releases with Nvidia GPU support for pre-Turing cards (amd64 only) |

 ## Application Setup
```
```diff
@@ -96,6 +96,7 @@ services:
       - TZ=Etc/UTC
       - WHISPER_MODEL=tiny-int8
       - LOCAL_ONLY= #optional
+      - USE_TRANSFORMERS= #optional
       - WHISPER_BEAM=1 #optional
       - WHISPER_LANG=en #optional
     volumes:
```
```diff
@@ -115,6 +116,7 @@ docker run -d \
   -e TZ=Etc/UTC \
   -e WHISPER_MODEL=tiny-int8 \
   -e LOCAL_ONLY= `#optional` \
+  -e USE_TRANSFORMERS= `#optional` \
   -e WHISPER_BEAM=1 `#optional` \
   -e WHISPER_LANG=en `#optional` \
   -p 10300:10300 \
```
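For transformers mode specifically, the new flag combines with the existing ones roughly like this (a sketch only, not a sample from this PR; the image name and the model id `openai/whisper-tiny` are illustrative assumptions):

```shell
# Hypothetical invocation: USE_TRANSFORMERS set to any value makes the
# container read WHISPER_MODEL as a HuggingFace transformers model id
# instead of a faster-whisper model name.
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e USE_TRANSFORMERS=true \
  -e WHISPER_MODEL=openai/whisper-tiny \
  -p 10300:10300 \
  -v /path/to/config:/config \
  lscr.io/linuxserver/faster-whisper:latest
```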
```diff
@@ -133,8 +135,9 @@ Containers are configured using parameters passed at runtime (such as those abov
 |`-e PUID=1000`| for UserID - see below for explanation |
 |`-e PGID=1000`| for GroupID - see below for explanation |
 |`-e TZ=Etc/UTC`| specify a timezone to use, see this [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). |
-|`-e WHISPER_MODEL=tiny-int8`| Whisper model that will be used for transcription. From `tiny`, `base`, `small` and `medium`, all with `-int8` compressed variants |
+|`-e WHISPER_MODEL=tiny-int8`| Whisper model that will be used for transcription. From [here](https://github.com/home-assistant/addons/blob/master/whisper/config.yaml#L25), smaller models also have `-int8` compressed variants |
 |`-e LOCAL_ONLY=`| If set to `true`, or any other value, the container will not attempt to download models from HuggingFace and will only use locally-provided models. |
+|`-e USE_TRANSFORMERS=`| If set to `true`, or any other value, the container will interpret `WHISPER_MODEL` as a HuggingFace transformers model id. |
 |`-e WHISPER_BEAM=1`| Number of candidates to consider simultaneously during transcription. |
 |`-e WHISPER_LANG=en`| Language that you will speak to the add-on. |
 |`-v /config`| Local path for Whisper config files. |
```
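Both `LOCAL_ONLY` and `USE_TRANSFORMERS` use "set to `true`, or any other value" semantics, i.e. any non-empty string enables the behavior. A minimal sketch of that kind of check (hypothetical; this is not the container's actual init script):

```shell
# Hypothetical toggle check: any non-empty USE_TRANSFORMERS enables
# transformers mode; empty or unset keeps the default faster-whisper mode.
USE_TRANSFORMERS="${USE_TRANSFORMERS:-}"
if [ -n "$USE_TRANSFORMERS" ]; then
  MODE="transformers"
else
  MODE="faster-whisper"
fi
echo "$MODE"
```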
```diff
@@ -302,7 +305,8 @@ Once registered you can define the dockerfile to use with `-f Dockerfile.aarch64

 ## Versions

-* **20.08.25:** - Add gpu-legacy branch for Pascal & earlier cards.
+* **07.09.25:** - Add support for transformers models.
+* **20.08.25:** - Add gpu-legacy branch for pre-Turing cards.
 * **10.08.25:** - Add support for local-only mode.
 * **05.12.24:** - Build from Github releases rather than Pypi.
```
```diff
-  - {env_var: "WHISPER_MODEL", env_value: "tiny-int8", desc: "Whisper model that will be used for transcription. From `tiny`, `base`, `small` and `medium`, all with `-int8` compressed variants", env_options: ["tiny-int8", "tiny", "base-int8", "base", "small-int8", "small", "medium-int8"]}
+  - {env_var: "WHISPER_MODEL", env_value: "tiny-int8", desc: "Whisper model that will be used for transcription. From [here](https://github.com/home-assistant/addons/blob/master/whisper/config.yaml#L25), smaller models also have `-int8` compressed variants"}
   - {env_var: "LOCAL_ONLY", env_value: "", desc: "If set to `true`, or any other value, the container will not attempt to download models from HuggingFace and will only use locally-provided models."}
+  - {env_var: "USE_TRANSFORMERS", env_value: "", desc: "If set to `true`, or any other value, the container will interpret `WHISPER_MODEL` as a HuggingFace transformers model id."}
   - {env_var: "WHISPER_BEAM", env_value: "1", desc: "Number of candidates to consider simultaneously during transcription."}
   - {env_var: "WHISPER_LANG", env_value: "en", desc: "Language that you will speak to the add-on."}
   readonly_supported: true
```
```diff
@@ -85,6 +86,7 @@ init_diagram: |
   "faster-whisper:gpu" <- Base Images
 # changelog
 changelogs:
+  - {date: "07.09.25:", desc: "Add support for transformers models."}
   - {date: "20.08.25:", desc: "Add gpu-legacy branch for pre-Turing cards."}
   - {date: "10.08.25:", desc: "Add support for local-only mode."}
   - {date: "05.12.24:", desc: "Build from Github releases rather than Pypi."}
```