Intel NPU Collection
Latest SOTA models supported on Intel NPU (6 items)
Run DeepSeek-r1-distill-qwen-1.5B optimized for Intel NPUs with nexaSDK.
1. Install nexaSDK and create a free account at sdk.nexa.ai.
2. Activate your device with your access token:
   nexa config set license '<access_token>'
3. Run the model on NPU in one line:
   nexa infer NexaAI/deepSeek-r1-distill-qwen-1.5B-intel-npu
DeepSeek-R1-Distill-Qwen-1.5B is a distilled variant of DeepSeek-R1, built on the Qwen-1.5B architecture.
It compresses the reasoning and instruction-following capabilities of larger DeepSeek models into an ultra-lightweight 1.5B parameter model—ideal for fast, efficient deployment on constrained devices while retaining strong performance for its size.
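The card does not describe the distillation procedure itself; as background, distillation typically trains the small student model to match a larger teacher's output distribution. A minimal sketch of the temperature-scaled soft-target loss (the function name and example logits below are illustrative, not taken from this model's training recipe):

```python
import math

def softmax(logits, t=1.0):
    # Temperature t > 1 softens the distribution, exposing the
    # teacher's relative preferences among non-top tokens.
    exps = [math.exp(x / t) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl(teacher_logits, student_logits, t=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by t^2 so gradients keep a consistent magnitude.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return (t * t) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy per-token example: the student is penalized for diverging
# from the teacher's distribution over the vocabulary.
loss = distill_kl([2.0, 0.5, -1.0], [1.5, 0.8, -0.5])
```

In practice this soft-target term is usually combined with the standard cross-entropy loss on ground-truth tokens; the result is a 1.5B-parameter model that inherits much of the teacher's reasoning behavior.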
Input: Text prompts including natural language queries, tasks, or code snippets.
Output: Direct responses—answers, explanations, or code—without extra reasoning annotations.