README.md (4 additions, 1 deletion)
@@ -97,6 +97,7 @@ services:
       - TZ=Etc/UTC
       - WHISPER_MODEL=tiny-int8
       - LOCAL_ONLY= #optional
+      - USE_TRANSFORMERS= #optional
       - WHISPER_BEAM=1 #optional
       - WHISPER_LANG=en #optional
     volumes:
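For context, the new variable slots in alongside the existing `environment` entries. A minimal sketch of the full compose service (the image name, container name, volume path, and port mapping are assumptions for illustration, not taken from this diff):

```yaml
services:
  whisper:
    image: lscr.io/linuxserver/faster-whisper:latest  # assumed image name
    container_name: whisper                           # assumed
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WHISPER_MODEL=tiny-int8
      - LOCAL_ONLY= #optional
      - USE_TRANSFORMERS= #optional
      - WHISPER_BEAM=1 #optional
      - WHISPER_LANG=en #optional
    volumes:
      - ./config:/config   # assumed host path
    ports:
      - 10300:10300
    restart: unless-stopped
```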
@@ -116,6 +117,7 @@ docker run -d \
   -e TZ=Etc/UTC \
   -e WHISPER_MODEL=tiny-int8 \
   -e LOCAL_ONLY= `#optional` \
+  -e USE_TRANSFORMERS= `#optional` \
   -e WHISPER_BEAM=1 `#optional` \
   -e WHISPER_LANG=en `#optional` \
   -p 10300:10300 \
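The same change expressed as a complete `docker run` invocation; a sketch only (the container name, host config path, and image name are assumptions, not confirmed by this diff):

```shell
docker run -d \
  --name=whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e LOCAL_ONLY= `#optional` \
  -e USE_TRANSFORMERS= `#optional` \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest
```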
@@ -134,8 +136,9 @@ Containers are configured using parameters passed at runtime (such as those above)
 |`-e PUID=1000`| for UserID - see below for explanation |
 |`-e PGID=1000`| for GroupID - see below for explanation |
 |`-e TZ=Etc/UTC`| specify a timezone to use, see this [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). |
-|`-e WHISPER_MODEL=tiny-int8`| Whisper model that will be used for transcription. From [here](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/utils.py#L12-L31), all with `-int8` compressed variants |
+|`-e WHISPER_MODEL=tiny-int8`| Whisper model that will be used for transcription. From [here](https://github.com/home-assistant/addons/blob/master/whisper/config.yaml#L25), smaller models also have `-int8` compressed variants |
 |`-e LOCAL_ONLY=`| If set to `true`, or any other value, the container will not attempt to download models from HuggingFace and will only use locally-provided models. |
+|`-e USE_TRANSFORMERS=`| If set to `true`, or any other value, the container will interpret `WHISPER_MODEL` as a HuggingFace transformers model id. |
 |`-e WHISPER_BEAM=1`| Number of candidates to consider simultaneously during transcription. |
 |`-e WHISPER_LANG=en`| Language that you will speak to the add-on. |
 |`-v /config`| Local path for Whisper config files. |
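The table describes how `LOCAL_ONLY` and the new `USE_TRANSFORMERS` change the meaning of `WHISPER_MODEL`. A hypothetical sketch of that decision logic (the function and return values are illustrative; this is not the container's actual init script):

```python
def resolve_model_source(env: dict) -> str:
    """Sketch of how the documented variables could interact.

    Per the table, setting LOCAL_ONLY or USE_TRANSFORMERS to `true`
    (or any other non-empty value) enables the corresponding behavior.
    """
    model = env.get("WHISPER_MODEL", "tiny-int8")
    use_transformers = bool(env.get("USE_TRANSFORMERS"))
    local_only = bool(env.get("LOCAL_ONLY"))

    if use_transformers:
        # WHISPER_MODEL is treated as a HuggingFace transformers model id.
        return f"transformers:{model}"
    if local_only:
        # No downloads from HuggingFace; only locally-provided models are used.
        return f"local:{model}"
    # Default: faster-whisper model name, e.g. tiny-int8.
    return f"faster-whisper:{model}"
```

Note that, as documented, the flags are presence-based: `USE_TRANSFORMERS=false` would still count as "set", so the variable should be left empty to disable it.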
-  - {env_var: "WHISPER_MODEL", env_value: "tiny-int8", desc: "Whisper model that will be used for transcription. From [here](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/utils.py#L12-L31), all with `-int8` compressed variants"}
+  - {env_var: "WHISPER_MODEL", env_value: "tiny-int8", desc: "Whisper model that will be used for transcription. From [here](https://github.com/home-assistant/addons/blob/master/whisper/config.yaml#L25), smaller models also have `-int8` compressed variants"}
   - {env_var: "LOCAL_ONLY", env_value: "", desc: "If set to `true`, or any other value, the container will not attempt to download models from HuggingFace and will only use locally-provided models."}
+  - {env_var: "USE_TRANSFORMERS", env_value: "", desc: "If set to `true`, or any other value, the container will interpret `WHISPER_MODEL` as a HuggingFace transformers model id."}
   - {env_var: "WHISPER_BEAM", env_value: "1", desc: "Number of candidates to consider simultaneously during transcription."}
   - {env_var: "WHISPER_LANG", env_value: "en", desc: "Language that you will speak to the add-on."}