How to Install DeepSeek Locally with Ollama on Ubuntu 24.04
Running AI models like DeepSeek on your own machine is a great way to explore AI capabilities without relying on cloud services. In this guide, we’ll show you how to install DeepSeek locally using Ollama on Ubuntu 24.04 and set up a Web UI for an easy-to-use interface.
Key Takeaways
- Install DeepSeek locally using Ollama on Ubuntu 24.04.
- Set up a Web UI for a better user experience.
- Requires at least 8GB RAM (16GB recommended).
- No cloud dependency; run AI models fully offline.
What Are DeepSeek and Ollama?
DeepSeek is a powerful AI language model used for answering questions, generating text, and other natural language tasks.
Ollama is a platform that simplifies the process of running large language models locally by managing and interacting with models like DeepSeek.
A web UI provides a graphical, browser-based way to interact with DeepSeek, making it easier to use than the command line alone.
Prerequisites
Before you begin, ensure you have the following:
- Ubuntu 24.04 installed.
- A stable internet connection.
- At least 8GB RAM (16GB+ recommended for better performance).
- Basic knowledge of the terminal.
Step 1: Install Python and Git
First, update your system to make sure all packages are up to date:
sudo apt update && sudo apt upgrade -y
Next, install Python (version 3.8 or higher) and Git:
Ubuntu typically ships with Python preinstalled, but verify that you're running Python 3.8 or newer to ensure compatibility with modern applications and libraries.
sudo apt install python3
Python's package manager, pip, handles dependencies. You'll need it later in this guide to install Open WebUI.
sudo apt install python3-pip
Git helps you download and manage code that’s stored on GitHub. It’s the key tool you need to get started with any GitHub project.
sudo apt install git
Check if they are installed:
python3 --version
pip3 --version
git --version
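As an optional sanity check, the Python version requirement above can also be verified from Python itself; this short snippet is just a convenience, not part of the install:

```python
import sys

# The guide assumes Python 3.8 or newer; fail loudly if the
# interpreter is older than that.
assert sys.version_info >= (3, 8), "Python 3.8+ is required"
print(f"Python {sys.version_info.major}.{sys.version_info.minor} OK")
```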
Step 2: Install Ollama for DeepSeek
Ollama makes it easy to run DeepSeek. Install it using the following command:
curl -fsSL https://ollama.com/install.sh | sh
Verify the installation:
ollama --version
Start the Ollama service and enable it to launch automatically at boot:
sudo systemctl start ollama
sudo systemctl enable ollama
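To confirm the service is actually up, you can check that Ollama's API port is accepting connections. This small sketch assumes Ollama's default port, 11434:

```python
import socket

def ollama_listening(host="127.0.0.1", port=11434, timeout=2.0):
    """Return True if something is accepting connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama API reachable:", ollama_listening())
```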
DeepSeek AI Models Overview
Main Model
DeepSeek-R1 (671B)
ollama run deepseek-r1:671b
Note: the full 671B-parameter model requires server-class hardware; most users should choose one of the distilled models below.
Distilled Models
DeepSeek’s innovative approach transfers knowledge from larger models to smaller ones, creating more efficient AI models. These distilled versions often outperform traditionally trained smaller models.
Available Models and Commands
1.5B Model (Qwen-based)
ollama run deepseek-r1:1.5b
7B Model (Qwen-based)
ollama run deepseek-r1:7b
8B Model (Llama-based)
ollama run deepseek-r1:8b
14B Model (Qwen-based)
ollama run deepseek-r1:14b
32B Model (Qwen-based)
ollama run deepseek-r1:32b
70B Model (Llama-based)
ollama run deepseek-r1:70b
Each model inherits reasoning capabilities from DeepSeek-R1, making them powerful options for various AI tasks.
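As a rough guide to which model your machine can handle, a quantized model's weights occupy approximately parameters × bits-per-weight ÷ 8 bytes, plus extra for the KV cache and runtime. The estimator below uses that back-of-the-envelope formula with an assumed 4-bit quantization; the actual download sizes listed on the Ollama library page will differ somewhat:

```python
def approx_model_gb(params_billion, bits_per_weight=4):
    """Rough weight size in GB: params * bits / 8, ignoring runtime overhead."""
    return params_billion * bits_per_weight / 8

# Rough weight footprints for the distilled model sizes listed above.
for size in (1.5, 7, 8, 14, 32, 70):
    print(f"deepseek-r1 {size}B ~ {approx_model_gb(size):.1f} GB of weights (4-bit)")
```

By this estimate, the 7B model fits comfortably in the 8GB minimum from the prerequisites, while 32B and above call for far more memory.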
Step 3: Download and Run DeepSeek Model
For this tutorial, given the hardware requirements mentioned above, we'll run the 7B distilled model:
ollama run deepseek-r1:7b
This process may take some time, depending on your internet speed.
Once completed, check if the model is available:
ollama list
You should see deepseek-r1:7b in the list.
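Beyond the interactive `ollama run` prompt, the model can also be queried programmatically: the Ollama server listens on port 11434 and exposes a REST endpoint at `/api/generate`. The sketch below assumes the deepseek-r1:7b model pulled above and uses only the Python standard library:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt, model="deepseek-r1:7b"):
    """Build a non-streaming generate request for Ollama's REST API."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def ask(prompt):
    """Send the prompt to the local model; report if the server is down."""
    try:
        with urllib.request.urlopen(build_request(prompt), timeout=300) as resp:
            return json.load(resp)["response"]
    except OSError as exc:
        return f"Ollama not reachable: {exc}"

print(ask("Explain recursion in one sentence."))
```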
Step 4: Run DeepSeek in a Web UI
The Ollama Web UI offers a more visual and intuitive way to work with DeepSeek models, compared to using command-line instructions. This browser-based interface makes it easier to chat with and control your Ollama models through a simple point-and-click experience.
To interact with DeepSeek through a web interface, we’ll install Open WebUI.
Create a Virtual Environment
sudo apt install python3-venv
python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate
Install and Run Open WebUI
pip install open-webui
open-webui serve
Now, open your browser and go to http://localhost:8080 to access the web interface.
In the UI, select the DeepSeek model and start using it for text generation and other tasks.
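If you'd rather script against the model alongside the UI, Ollama also exposes an OpenAI-compatible chat endpoint at `/v1/chat/completions` on the same port. A minimal sketch, again assuming the deepseek-r1:7b model from Step 3:

```python
import json
import urllib.request

def chat(messages, model="deepseek-r1:7b", base="http://localhost:11434"):
    """Send a chat request to Ollama's OpenAI-compatible endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    req = urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (only works while the Ollama service is running):
# print(chat([{"role": "user", "content": "Say hello in one word."}]))
```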
Running DeepSeek on Cloud Platforms
If you prefer to run DeepSeek on the cloud, consider these options:
- Linode – Affordable, high-performance cloud hosting for deploying AI models.
- Google Cloud (GCP) – GPU-supported virtual machines for better AI performance.
Begin Your AI Development
You’ve successfully installed DeepSeek on Ubuntu 24.04 using Ollama. Now, you can interact with the AI model either through the terminal or a user-friendly web interface. Try it out and explore the power of local AI models!
Now that you have DeepSeek running locally, you might be interested in discovering other AI tools that can enhance your development workflow. Check out our post on Best Coding AI for Developers and Teams to explore a curated selection of AI-powered development tools that can boost your productivity.
Reference:
- https://ollama.com/library/deepseek-r1
- https://github.com/deepseek-ai/DeepSeek-R1