This project demonstrates parameter-efficient fine-tuning of large Vision-Language Models (VLMs), specifically Qwen2-VL-7B-Instruct, using LoRA (Low-Rank Adaptation) and 4-bit quantization.
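Since the description names LoRA with 4-bit quantization on Qwen2-VL-7B-Instruct, a minimal configuration sketch may help. This is an illustrative QLoRA-style setup using `transformers`, `peft`, and `bitsandbytes`; the hyperparameters (`r`, `lora_alpha`, `target_modules`) are assumptions, not the project's actual values.

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization config (NF4 with bf16 compute) for memory-efficient loading
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the base VLM with quantized weights
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter config -- rank, scaling, and target modules are assumed values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Wrap the model so only the low-rank adapter weights are trainable
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

With this setup, the frozen base model stays in 4-bit precision while only the small LoRA adapter matrices receive gradient updates, which is what makes 7B-scale fine-tuning feasible on a single GPU.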
Topics: deep-learning, vlm, llm, generative-ai, llm-training, agentic-ai, vlm-inference, vlm-training, vlm-finetuning

Updated: Mar 15, 2026 · Jupyter Notebook