How to install Ollama to run local AI models on Windows 11

Run AI models locally on Windows 11 with the help of Ollama, a popular open-source tool to install and manage LLMs without cloud dependency.

Ollama install on Windows 11 / Image: Mauro Huculak
  • To install Ollama on Windows 11, open Command Prompt as an administrator and run the winget install --id Ollama.Ollama command.
  • Once installed, use the ollama pull <model> command to download LLMs like Gemma, Llama, and DeepSeek, and the ollama run <model> command to run them locally.
  • Ollama works independently of the cloud and supports running models through a command-line interface or a custom interface.

On Windows 11, you can install Ollama to download and run multiple AI models locally on your computer, and in this guide, I’ll outline the steps to complete this configuration.

Although Windows 11 comes with many built-in AI models for specific hardware, such as Copilot+ PCs, these are typically small and specialized models. Therefore, the primary way to interface with AI Large Language Models (LLMs) is through online chatbots, such as Microsoft Copilot, OpenAI ChatGPT, and Google Gemini, among others.

The caveat with using these LLMs is that they process all requests in the cloud, which is not something everyone wants to do. This is where Ollama becomes a valuable tool.

Ollama is an open-source tool that allows you to run Large Language Models directly on your local computer running Windows 11, 10, or another platform. It’s designed to make the process of downloading, running, and managing these AI models simple for individual users, developers, and researchers.

In this guide, I’ll outline the steps to set up one of the easiest tools to run LLMs locally on your computer.

Install Ollama on Windows 11 to run AI models locally

To install Ollama locally on Windows 11, follow these steps:

  1. Open Start on Windows 11.

  2. Search for Command Prompt (or Terminal), right-click the top result, and choose the Run as administrator option.

  3. (Option 1) Type this command to install the official Ollama tool and press Enter:

    winget install --id Ollama.Ollama

    Ollama winget install command

  4. Click the Finish button.

  5. (Option 2) Type this command to uninstall Ollama from Windows 11 and press Enter:

    winget uninstall --id Ollama.Ollama
    Quick tip: It’s recommended to uninstall any AI model from the computer before removing Ollama.

Once you complete the steps, Ollama will run in the background, and you can then use the ollama command-line tool to interact with it from Command Prompt or PowerShell.

In addition to using the Windows Package Manager (winget) to install this tool, you can always get the Ollama installer from its official page or this GitHub page. 
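Typically, the installer sets Ollama to start in the background automatically when you sign in. If the service ever isn’t running, you can start it manually from Command Prompt or PowerShell with this command (leave the window open while you use the tool, and press Ctrl + C to stop it):

    ollama serve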

Install and run AI LLMs with Ollama on Windows 11

Before diving into the installation, open the Ollama library page to decide which AI model you want to install.

On the page, you will find a list of the available models from DeepSeek, Google (gemma), Meta (llama), and others.

You’ll also notice that each model has notations like 1b, 4b, 12b, etc., which indicate the number of parameters in billions. A higher number of parameters indicates a larger and potentially more capable model. However, they’ll also require more capable hardware.

If you’re just starting out, you should download the smallest version available (for example, “1b”).
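For instance, Gemma 3 is published in several sizes, and the size tag becomes part of the command you’ll run in the steps below (the tags shown here match the notations from the library page):

    ollama pull gemma3:1b
    ollama pull gemma3:4b

The smaller the tag, the faster the download and the lighter the hardware requirements, which makes the 1b version a good first test.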

To install an AI model using Ollama on Windows 11, follow these steps:

  1. Open Start.

  2. Search for Command Prompt (or Terminal), right-click the top result, and choose the Run as administrator option.

  3. (Optional) Type this command to check the Ollama version and press Enter:

    ollama --version

    Ollama check version command

  4. (Optional) Type this command to confirm that Ollama is running and press Enter:

    curl http://localhost:11434

    Quick note: If the service is running, the command returns an “Ollama is running” message.
  5. (Option 1) Type this command to install and run the AI model locally on your computer and press Enter:

    ollama pull gemma3:1b

    Ollama install AI model command

    Quick note: This command downloads Google Gemma 3, which includes one billion parameters. In your command, replace gemma3:1b with the name and tag of the model you want to install, such as deepseek-r1:1.5b.
  6. (Optional) Type this command to view the installed models and press Enter:

    ollama list
  7. (Option 2) Type this command to uninstall an AI model from your computer and press Enter:

    ollama rm gemma3:1b
  8. Type this command to confirm that the model is no longer on your computer and press Enter:

    ollama list
  9. Type this command to run an AI model installed using Ollama and press Enter:

    ollama run gemma3:1b

    Ollama run AI model command

    In the command, replace gemma3:1b with the model you want to run.

  10. (Optional) Type this command to show all the running processes of Ollama and press Enter:

    ollama ps
  11. (Optional) Type this command to show details of a specific model, such as configuration and parameters, and press Enter:

    ollama show gemma3:1b

    Ollama show AI model details

    In the command, replace gemma3:1b with the model whose details you want to view.

After you complete the steps, you can start using the LLM through the command-line interface.

You can also use the ollama --help command to view other available commands, and the ollama run --help command to view the options available when running a model.
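For example, a short session with the Gemma model installed above looks like this (the question and reply are illustrative, and you can type /bye to end the session):

    ollama run gemma3:1b
    >>> Why is the sky blue?
    The sky appears blue because air molecules scatter blue light from the
    sun more strongly than other colors.
    >>> /bye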

FAQs about Ollama on Windows 11

If you still have questions about Ollama, the following FAQs will help clarify different aspects of this tool. You can always post your questions in the comments below.

Do you need specific AI hardware to run Ollama?

No. The basic AI models you download with Ollama run on relatively modern hardware, and you don’t need a Neural Processing Unit (NPU).

Ollama can also run models directly on the processor, but a graphics card is highly recommended.

What are the Ollama system requirements for Windows 11?

The system requirements to install Ollama on Windows 11 or 10 are moderate:

  • Processor: Intel or AMD x86-64.
  • Memory: 8GB minimum, but 16GB or more recommended.
  • Storage: 10GB of free space.
  • Graphics: Integrated or dedicated GPU.

Although the requirements aren’t significant, I recommend a modern multi-core processor, at least 32GB of RAM, more than 256GB of free space on an SSD (NVMe preferred), and a graphics card like Nvidia’s RTX 30xx series. However, you can always use an equivalent AMD Radeon GPU.

Also, for advanced models, a minimum of 4GB of VRAM is recommended.

The bigger the model, the more resources will be required, meaning that the machine will need more robust hardware.

Does Ollama use virtualization to run on Windows 11?

No, Ollama does not create a virtual machine itself. However, it does create an isolated environment on the system to run LLMs. This environment includes all the necessary components, such as model weights (the pre-trained data) and configuration files.
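On Windows, this environment lives under your user profile by default, which means you can inspect the downloaded model weights yourself (this is the default path, and it changes if you set the OLLAMA_MODELS environment variable):

    dir %USERPROFILE%\.ollama\models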

It’s important to note that Ollama no longer requires WSL2. It runs natively on the latest version of Windows 11 or 10.

Do the Ollama models only operate through a command-line interface?

No, even though the command line is a primary way to manage and quickly interact with Ollama models, it’s not the only way.

Ollama is highly extensible, providing a robust API and community client libraries that allow you to build custom applications and user interfaces; Open WebUI is one popular example.
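As a minimal sketch of that API, this command sends a prompt to the Gemma model installed earlier directly from Command Prompt, using the generate endpoint on Ollama’s default port (setting stream to false returns the full answer as a single JSON object instead of token-by-token chunks):

    curl http://localhost:11434/api/generate -d "{\"model\": \"gemma3:1b\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"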

About the author

Mauro Huculak is a Windows How-To Expert who founded Pureinfotech in 2010. With over 22 years as a technology writer and IT Specialist, Mauro specializes in Windows, software, and cross-platform systems such as Linux, Android, and macOS.

Certifications: Microsoft Certified Solutions Associate (MCSA), Cisco Certified Network Professional (CCNP), VMware Certified Professional (VCP), and CompTIA A+ and Network+.

Mauro is a recognized Microsoft MVP and has also been a long-time contributor to Windows Central.

You can follow him on YouTube, Threads, BlueSky, X (Twitter), LinkedIn and About.me. Email him at [email protected].