Popular repositories
ipex-llm (Python, forked from intel/ipex-llm)
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
dynamo (Rust, forked from ai-dynamo/dynamo)
A datacenter-scale distributed inference serving framework
vllm (Python, forked from vllm-project/vllm)
A high-throughput and memory-efficient inference and serving engine for LLMs
enhancements (forked from ai-dynamo/enhancements)
Enhancement proposals and architecture decisions

