
How to run Deepseek with LM Studio

DeepSeek has been gaining a lot of popularity recently, but it is a Chinese model and its website has been taken down by a cyber attack, so let's take a technical look at how to run DeepSeek locally on your machine using LM Studio. Read the complete blog to learn more.

DeepSeek R1 has been getting a lot of attention for quite some time because of its low running cost and the accuracy it can provide. But the model is owned by a Chinese company, and we don't want to send our data to China. We will take a technical look at how to run the DeepSeek R1 model locally on your system.

To get started, download and install LM Studio. It is similar to Ollama, which also helps you run LLM models locally. Ollama is mostly CLI-based, but with LM Studio you get the option to chat with the model through a UI, just like we do with ChatGPT.

What is LM Studio?

LM Studio is a desktop application that runs on macOS, Windows, and Linux. With the help of LM Studio, a user can run Large Language Models (LLMs) on their local machine. It provides a user-friendly interface for managing models and communicating with them. All of this can be done without any coding knowledge!

What is DeepSeek R1?

DeepSeek R1 is a state-of-the-art Large Language Model known for its strong performance across a variety of tasks, including text generation, question answering, and code completion, compared to other models on the market. Because the model is open source, we can run it locally and even modify it if we want. By running it locally on your machine, we save cost and keep our data private compared to using DeepSeek's web UI. Moreover, DeepSeek has been going through a DDoS attack that has made its website unusable.

Prerequisites

  • A computer running Windows, macOS, or Linux.
  • At least 16GB of RAM (32GB+ recommended for optimal performance)
  • At least 10GB of free disk space, depending on the model you choose

Setting up LM Studio

  1. Go to the LM Studio website and download the version as per your operating system.
  2. Install the app by following the installation steps.
  3. Launch LM Studio
  4. Click on the "Search" icon (the 4th option on the left sidebar).
  5. Type "DeepSeek R1" into the search bar and download the model that best suits your PC configuration. Once the download finishes, load the model in the app by pressing the load-model button that appears after the model has downloaded successfully.
  6. Note - choose the right model for your machine's configuration, or the model's response time will be very slow.
  7. After that, you can chat with the model just like a normal chat. Response time will vary with your system specs.
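Beyond the chat UI, LM Studio can also expose the loaded model through a local OpenAI-compatible server (started from the app's Developer/Server tab, listening on port 1234 by default). Here is a minimal sketch of talking to it from Python; the model name `deepseek-r1-distill-qwen-7b` is an assumption, so use whatever identifier LM Studio shows for the model you actually downloaded:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI-compatible chat completions API,
# by default at this address once you start the server in the app.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(prompt, model="deepseek-r1-distill-qwen-7b", temperature=0.7):
    """Build the JSON payload for a chat completion request.

    The model identifier here is a placeholder -- match it to the name
    LM Studio displays for your downloaded model.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask(prompt):
    """Send the prompt to the locally running model and return its reply."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Explain recursion in one sentence."))
```

Since the server mimics the OpenAI API shape, any OpenAI-compatible client library should also work by pointing its base URL at `http://localhost:1234/v1`.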

What have we done just now?

The DeepSeek R1 model is open source, which means anyone can use it for free. Because these models need GPU power to do their work, hosted access is fairly expensive, and to enhance the user experience these services usually store user data on their servers. If we use the model online through any platform, we have to pay money and also send them our data. To avoid both, we have downloaded the model and run it on our own machine using our own GPU and CPU power, and the app's data stays on our machine, so we don't have to worry about privacy either.

If you are new to this whole AI thing, make sure to read some of our older blogs too.

If you want to access this from anywhere over the internet, you can either set up a VPN to your home network and connect through it, or host the server in the cloud.

You can also tweak this model a little, for example by adjusting the temperature of the content it generates in LM Studio's settings.
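If you're curious what the temperature knob actually does: samplers divide the model's raw token scores (logits) by the temperature before converting them to probabilities, so low values sharpen the distribution and high values flatten it. A toy sketch of that effect (the logits here are made-up numbers for three candidate tokens):

```python
import math


def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities.

    Dividing by the temperature before the softmax is how samplers apply it:
    T < 1 sharpens the distribution (more deterministic output),
    T > 1 flattens it (more varied output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 2.0)
print(cold[0], hot[0])  # the top token dominates more at low temperature
```

In practice: a low temperature (around 0.1-0.3) suits factual Q&A, while a higher one (around 0.8-1.2) gives more creative, varied output.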

Before loading any model, remember that the results from the model running on your machine and the model hosted by the company will differ because of the lower parameter count. The company runs its highest-parameter model on top-tier hardware, while the version you run locally may be far less capable. The fewer the parameters, the less capable the model will sound.
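To pick a model size that fits your machine, a rough rule of thumb (not LM Studio's exact numbers) is: weight memory ≈ parameter count × bits per weight ÷ 8, plus some overhead for the context cache and runtime buffers. A quick back-of-the-envelope sketch:

```python
def approx_model_ram_gb(params_billion, bits_per_weight=4):
    """Rough memory footprint for running a quantized model.

    Rule of thumb only: weights take params * bits / 8 bytes, and we add
    ~20% for the KV cache and runtime overhead. Real usage varies with
    quantization format and context length.
    """
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb * 1.2, 1)


for size in (1.5, 7, 14, 32, 70):
    print(f"{size}B params @ 4-bit ~ {approx_model_ram_gb(size)} GB RAM")
```

By this estimate, a 4-bit 7B model needs roughly 4-5 GB, which is why the smaller distilled variants are the realistic choice on a 16GB machine.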

