Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., a local PC with iGPU and NPU, or discrete GPUs such as Arc, Flex, and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
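As a quick illustration of the HuggingFace integration, the sketch below loads a checkpoint with low-bit (INT4) weight quantization and runs generation on an Intel GPU. It is a minimal sketch, assuming ipex-llm's XPU dependencies are installed and an XPU device is available; the model path used here is a placeholder for any HuggingFace-format checkpoint.

```python
# Minimal sketch of ipex-llm's HuggingFace-style API.
# Assumes ipex-llm is installed with XPU support; the model path is a placeholder.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # any HF-format checkpoint

# Load with INT4 weight quantization, then move the model to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
model = model.to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path)

with torch.inference_mode():
    inputs = tokenizer("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The drop-in `AutoModelForCausalLM` wrapper is what lets existing HuggingFace code adopt the XPU acceleration with only the import and device changes shown above.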