
[BUG] RuntimeError: cuDNN failed with status CUDNN_STATUS_EXECUTION_FAILED #47

@eximius313

Description

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When I start the container, I get:

wyoming-whisper-gpu  | ───────────────────────────────────────
wyoming-whisper-gpu  |
wyoming-whisper-gpu  |       ██╗     ███████╗██╗ ██████╗
wyoming-whisper-gpu  |       ██║     ██╔════╝██║██╔═══██╗
wyoming-whisper-gpu  |       ██║     ███████╗██║██║   ██║
wyoming-whisper-gpu  |       ██║     ╚════██║██║██║   ██║
wyoming-whisper-gpu  |       ███████╗███████║██║╚██████╔╝
wyoming-whisper-gpu  |       ╚══════╝╚══════╝╚═╝ ╚═════╝
wyoming-whisper-gpu  |
wyoming-whisper-gpu  |    Brought to you by linuxserver.io
wyoming-whisper-gpu  | ───────────────────────────────────────
wyoming-whisper-gpu  |
wyoming-whisper-gpu  | To support LSIO projects visit:
wyoming-whisper-gpu  | https://www.linuxserver.io/donate/
wyoming-whisper-gpu  |
wyoming-whisper-gpu  | ───────────────────────────────────────
wyoming-whisper-gpu  | GID/UID
wyoming-whisper-gpu  | ───────────────────────────────────────
wyoming-whisper-gpu  |
wyoming-whisper-gpu  | User UID:    1000
wyoming-whisper-gpu  | User GID:    1000
wyoming-whisper-gpu  | ───────────────────────────────────────
wyoming-whisper-gpu  | Linuxserver.io version: v2.5.0-ls86
wyoming-whisper-gpu  | Build-date: 2025-07-27T06:52:14+00:00
wyoming-whisper-gpu  | ───────────────────────────────────────
wyoming-whisper-gpu  |
wyoming-whisper-gpu  | [custom-init] No custom files found, skipping...
wyoming-whisper-gpu  | INFO:__main__:Ready

But when Home Assistant makes a request, this error occurs:

wyoming-whisper-gpu  | INFO:faster_whisper:Processing audio with duration 00:02.270
wyoming-whisper-gpu  | ERROR:asyncio:Task exception was never retrieved
wyoming-whisper-gpu  | future: <Task finished name='wyoming event handler' coro=<AsyncEventHandler.run() done, defined at /lsiopy/lib/python3.12/site-packages/wyoming/server.py:31> exception=RuntimeError('cuDNN failed with status CUDNN_STATUS_EXECUTION_FAILED')>
wyoming-whisper-gpu  | Traceback (most recent call last):
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/wyoming/server.py", line 41, in run
wyoming-whisper-gpu  |     if not (await self.handle_event(event)):
wyoming-whisper-gpu  |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/wyoming_faster_whisper/handler.py", line 76, in handle_event
wyoming-whisper-gpu  |     text = " ".join(segment.text for segment in segments)
wyoming-whisper-gpu  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/wyoming_faster_whisper/handler.py", line 76, in <genexpr>
wyoming-whisper-gpu  |     text = " ".join(segment.text for segment in segments)
wyoming-whisper-gpu  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/faster_whisper/transcribe.py", line 1148, in generate_segments
wyoming-whisper-gpu  |     encoder_output = self.encode(segment)
wyoming-whisper-gpu  |                      ^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/faster_whisper/transcribe.py", line 1358, in encode
wyoming-whisper-gpu  |     return self.model.encode(features, to_cpu=to_cpu)
wyoming-whisper-gpu  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  | RuntimeError: cuDNN failed with status CUDNN_STATUS_EXECUTION_FAILED
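
A note on why the error surfaces inside the handler's `" ".join(...)` line rather than at the transcribe call: faster-whisper returns segments as a lazy generator, so the GPU encode (and any cuDNN failure) only runs when the segments are iterated. A minimal sketch of this behavior, using hypothetical stand-ins rather than the real faster-whisper API:

```python
# Sketch: faster-whisper yields segments lazily, so GPU work (and any
# cuDNN failure) happens during iteration, not when transcribe() is called.
# FakeSegment and fake_transcribe are hypothetical stand-ins.

class FakeSegment:
    def __init__(self, text):
        self.text = text

def fake_transcribe():
    """Stand-in for model.transcribe(): yields segments lazily."""
    yield FakeSegment("hello")
    # The per-segment GPU encode happens here; simulate a cuDNN failure.
    raise RuntimeError("cuDNN failed with status CUDNN_STATUS_EXECUTION_FAILED")

segments = fake_transcribe()  # no error yet: nothing has executed
try:
    # The error only surfaces once the handler iterates the generator,
    # which is exactly the join line seen in the traceback above.
    text = " ".join(segment.text for segment in segments)
except RuntimeError as err:
    print(err)
```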

Expected Behavior

The request is transcribed without errors.

Steps To Reproduce

  1. Use the docker-compose.yml below
  2. Run docker compose up
  3. Make a request from Home Assistant

Environment

- OS: Ubuntu 24.04.2 LTS
- How the docker service was installed:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo groupadd docker
sudo usermod -aG docker $USER
sudo systemctl enable docker

together with 

sudo apt-get install nvidia-container-toolkit nvidia-container-toolkit-base libnvidia-container-tools libnvidia-container1
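
One way to sanity-check that the NVIDIA runtime is actually wired into Docker (these are standard Docker/NVIDIA commands; the CUDA image tag is only an example, and the last check assumes ldconfig is present in the image):

```shell
# List the runtimes Docker knows about; "nvidia" should appear.
docker info --format '{{json .Runtimes}}'

# Run nvidia-smi inside a throwaway CUDA container (image tag is an example).
docker run --rm --runtime=nvidia --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Check whether the faster-whisper container can see the cuDNN libraries.
docker exec wyoming-whisper-gpu ldconfig -p | grep -i cudnn
```

If the first two succeed but the grep returns nothing, the failure is inside the image's CUDA/cuDNN stack rather than the host setup.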

CPU architecture

x86-64

Docker creation

x-env: &service-env
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities:
              - gpu
              - utility
              - compute

services:
  wyoming-whisper-gpu:
    <<: *service-env
    image: lscr.io/linuxserver/faster-whisper:gpu
    container_name: wyoming-whisper-gpu
    restart: unless-stopped
    ports:
      - 10300:10300
    volumes:
      - ./whisperdata:/data
    environment:
      - PUID=1000
      - PGID=1000
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1
      - WHISPER_LANG=pl
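
One subtlety worth noting in this compose file: the YAML merge key (`<<:`) performs a shallow merge, and keys defined on the service itself take precedence over the anchor's. Because `wyoming-whisper-gpu` defines its own `environment:` list, it likely replaces the anchor's list entirely, dropping `NVIDIA_VISIBLE_DEVICES=all`. A rough illustration of the override semantics in plain Python, with dicts standing in for the YAML mappings:

```python
# Shallow-merge semantics of the YAML merge key (`<<:`), illustrated with
# plain dicts: keys set on the service itself override the anchor's keys,
# and list values are replaced wholesale, never combined.
anchor = {
    "environment": ["NVIDIA_VISIBLE_DEVICES=all"],
    "runtime": "nvidia",
}
service = {
    "environment": ["PUID=1000", "PGID=1000", "WHISPER_MODEL=tiny-int8"],
}

merged = {**anchor, **service}  # service keys win

print(merged["environment"])  # NVIDIA_VISIBLE_DEVICES=all is gone
print(merged["runtime"])      # runtime survives: the service didn't set it
```

If that is the cause, moving `NVIDIA_VISIBLE_DEVICES=all` into the service's own `environment:` list would avoid the override.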

Container logs

wyoming-whisper-gpu  | INFO:__main__:Ready
wyoming-whisper-gpu  | Connection to localhost (127.0.0.1) 10300 port [tcp/*] succeeded!
wyoming-whisper-gpu  | [ls.io-init] done.
wyoming-whisper-gpu  | INFO:faster_whisper:Processing audio with duration 00:01.300
wyoming-whisper-gpu  | ERROR:asyncio:Task exception was never retrieved
wyoming-whisper-gpu  | future: <Task finished name='wyoming event handler' coro=<AsyncEventHandler.run() done, defined at /lsiopy/lib/python3.12/site-packages/wyoming/server.py:31> exception=RuntimeError('cuDNN failed with status CUDNN_STATUS_EXECUTION_FAILED')>
wyoming-whisper-gpu  | Traceback (most recent call last):
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/wyoming/server.py", line 41, in run
wyoming-whisper-gpu  |     if not (await self.handle_event(event)):
wyoming-whisper-gpu  |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/wyoming_faster_whisper/handler.py", line 76, in handle_event
wyoming-whisper-gpu  |     text = " ".join(segment.text for segment in segments)
wyoming-whisper-gpu  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/wyoming_faster_whisper/handler.py", line 76, in <genexpr>
wyoming-whisper-gpu  |     text = " ".join(segment.text for segment in segments)
wyoming-whisper-gpu  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/faster_whisper/transcribe.py", line 1148, in generate_segments
wyoming-whisper-gpu  |     encoder_output = self.encode(segment)
wyoming-whisper-gpu  |                      ^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  |   File "/lsiopy/lib/python3.12/site-packages/faster_whisper/transcribe.py", line 1358, in encode
wyoming-whisper-gpu  |     return self.model.encode(features, to_cpu=to_cpu)
wyoming-whisper-gpu  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wyoming-whisper-gpu  | RuntimeError: cuDNN failed with status CUDNN_STATUS_EXECUTION_FAILED

Metadata

Assignees: none
Labels: none
Type: none
Status: Done
Milestone: none
Relationships: none
Development: no branches or pull requests