
Jake Cannell serving vLLM on Vast notebook

Created by Dimitri McDaniel
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook shows how to serve a large language model on Vast's GPU platform using the popular open source inference framework [vLLM](https://github.com/vllm-project/vllm). `vLLM` is particularly good at high-throughput serving, for multi user or high load use-cases, and is one of the most popular serving frameworks today.\n",
    "\n",
    "The commands in this notebook can be run here, or copied and pasted into your terminal (Minus the `!` or the `%%bash`). At the end, we will include a way to query your `vLLM` service in either python or with a curl request for the terminal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "#In an environment of your choice\n",
    "pip install --upgrade vastai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "powershell"
    }
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Here we will set our api key\n",
    "vastai set api-key <Your-API-Key-Here>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we are going to look for GPU's on vast. The model that we are using is going to be very small, but to allow for easilly swapping out the model you desire, we will select machines that:\n",
    "1. Have GPU's with Ampere or newer architecture\n",
    "2. Have at least 24gb of GPU RAM (to run 13B parameter LLMs)\n",
    "3. One GPU as `vLLM` primarilly serves one copy of a model.\n",
    "4. Have a static IP address to route requests to\n",
    "5. Have direct port counts available (greater than 1) to enable port reservations\n",
    "6. Use Cuda 12.4 or higher due to `vLLM`'s base image"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "powershell"
    }
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "vastai search offers 'compute_cap >= 800 gpu_ram >= 24 num_gpus = 1 static_ip=true direct_port_count > 1 cuda_vers >= 12.4' \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copy and Paste the id of a machine that you would like to choose below for `<instance-id>`.\n",
    "We will activate this instance with the `vLLM-OpenAI` template. This template gives us a docker image that uses `vLLM` behind an OpenAI Compatible server. This means that it can slide in to any application that uses the openAI api. All you need to change in your app is the `base_url` and the `model_id` to the model that you are using so that the requests are properly routed to your model.\n",
    "\n",
    "This command also exposes the port 8000 in the docker container, the default openAI server port, and tells the docker container to automatically download and serve the `stabilityai/stablelm-2-zephyr-1_6b`. You can change the model by using any HuggingFace model ID. We chose this because it is fast to download and start playing with.\n",
    "\n",
    "We use vast's `--args` command to funnel the rest of the command to the container, in this case `--model stabilityai/stablelm-2-zephyr-1_6b`, which `vLLM` uses to download the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "powershell"
    }
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "vastai create instance <instance-id> --image vllm/vllm-openai:latest --env '-p 8000:8000' --disk 40 --args --model stabilityai/stablelm-2-zephyr-1_6b"
   ]
  },
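  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "While the instance starts up, you can optionally keep an eye on it from the CLI instead of the web console. The cell below is a small sketch: `vastai show instances` lists your instances and their status, and `vastai logs <Instance-ID>` prints recent container logs so you can see when the model has finished downloading (availability of the `logs` subcommand may depend on your CLI version). Replace `<Instance-ID>` with the ID returned by the create command above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# List your instances and their current status\n",
    "vastai show instances\n",
    "\n",
    "# Print recent logs for the instance (replace <Instance-ID>);\n",
    "# the `logs` subcommand may not exist in older vastai CLI versions\n",
    "vastai logs <Instance-ID>"
   ]
  },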
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we need to verify that our setup is working. We first need to wait for our machine to download the image and the model and start serving. This will take a few minutes. The logs will show you when it's done. \n",
    "\n",
    "Then, at the top of the instance, there is a button with an IP address in it. Click this and a panel will show up of the ip address and the forwarded ports. \n",
    "You should see something like: \n",
    "```\n",
    "Open Ports\n",
    "XX.XX.XXX.XX:YYYY -> 8000/tcp\n",
    "``` \n",
    "Copy and paste the IP address and the port in the curl command below.\n",
    "\n",
    "This curl command sends and OpenAI compatible request to your vLLM server. You should see the response if everything is setup correctly. "
   ]
  },
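  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before sending a completion request, you can optionally confirm the server is up by listing the models it serves. The OpenAI-compatible server exposes a `GET /v1/models` endpoint; this sketch assumes you have filled in your instance's IP address and forwarded port as described above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Optional readiness check: lists the models served by the OpenAI-compatible server.\n",
    "# If the server is still starting up, this will fail; wait a bit and retry.\n",
    "curl http://<Instance-IP-Address>:<Port>/v1/models"
   ]
  },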
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "powershell"
    }
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "# This request assumes you haven't changed the model. If you did, fill it in the \"model\" value in the payload json below\n",
    "curl -X POST http://<Instance-IP-Address>:<Port>/v1/completions -H \"Content-Type: application/json\"  -d '{\"model\" : \"stabilityai/stablelm-2-zephyr-1_6b\", \"prompt\": \"Hello, how are you?\", \"max_tokens\": 50}'\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This next cell replicates exactly the same request but in the python requests library. \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "headers = {\n",
    "    'Content-Type': 'application/json',\n",
    "}\n",
    "\n",
    "json_data = {\n",
    "    'model': 'stabilityai/stablelm-2-zephyr-1_6b',\n",
    "    'prompt': 'Hello, how are you?',\n",
    "    'max_tokens': 50,\n",
    "}\n",
    "\n",
    "response = requests.post('http://<Instance-IP-Address>:<Port>/v1/completions', headers=headers, json=json_data)\n",
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you're looking to build off of this more, we recommend checking out the [OpenAI sdk](https://github.com/openai/openai-python), which we will use here for easier interaction with the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "pip install openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "# Modify OpenAI's API key and API base to use vLLM's's API server.\n",
    "openai_api_key = \"EMPTY\"\n",
    "openai_api_base = \"http://<Instance-IP-Address>:<Port>/v1\"\n",
    "client = OpenAI(\n",
    "    api_key=openai_api_key,\n",
    "    base_url=openai_api_base,\n",
    ")\n",
    "completion = client.completions.create(model=\"stabilityai/stablelm-2-zephyr-1_6b\",\n",
    "                                      prompt=\"Hello, how are you?\",\n",
    "                                      max_tokens=50)\n",
    "print(\"Completion result:\", completion)"
   ]
  },
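  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The server also exposes the OpenAI chat endpoint (`/v1/chat/completions`), which applies the model's chat template for you. The sketch below assumes `stabilityai/stablelm-2-zephyr-1_6b` ships a chat template (it is a chat-tuned model, so this should be the case); if you swapped in a base model without one, stick to the plain completions endpoint above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(\n",
    "    api_key=\"EMPTY\",\n",
    "    base_url=\"http://<Instance-IP-Address>:<Port>/v1\",\n",
    ")\n",
    "\n",
    "# Chat-style request: the server formats the messages with the model's chat template\n",
    "chat_completion = client.chat.completions.create(\n",
    "    model=\"stabilityai/stablelm-2-zephyr-1_6b\",\n",
    "    messages=[{\"role\": \"user\", \"content\": \"Hello, how are you?\"}],\n",
    "    max_tokens=50,\n",
    ")\n",
    "print(chat_completion.choices[0].message.content)"
   ]
  },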
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Advanced vLLM Usage: Quantized Llama-3-70b-Instruct\n",
    "Now that we've spun up a model on vLLM, we can get into more complicated deployments. We'll work on serving this specific quantized Llama-3 70B [model](https://huggingface.co/casperhansen/llama-3-70b-instruct-awq).\n",
    "With this quantized model, we can easilly serve this model on on 4 4090 GPU's.\n",
    "\n",
    "Overall, A few things need to change:\n",
    "1. The model string need to change to our new model.\n",
    "2. We're going to use 4 GPU's\n",
    "3. We need to provision much more space on our system to be able to download the full set of weights. 100 GB in this case should be fine\n",
    "4. We need to set up tensor parallelism inside vLLM to split up the model across these 4 gpus. \n",
    "5. We need to let vLLM know that this is a quantized model\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "vastai search offers 'compute_cap >= 800 gpu_ram >= 24 num_gpus = 4 static_ip=true direct_port_count > 1 cuda_vers >= 12.4' \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will make a similar search as before, but include parameters to ensure at least 4 GPUs.\n",
    "\n",
    "In our instance creation, we will increase our disk usage to 100GB.\n",
    "\n",
    "Then, we will tell vllm to: 1. use the specific model, 2. split across 4 GPU's, and 3. Let it know that it is in fact a quantized model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "vastai create instance <Instance-ID> --image vllm/vllm-openai:latest --env '-p 8000:8000' --disk 100 --args --model casperhansen/llama-3-70b-instruct-awq --tensor-parallel-size 4  --quantization awq "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# This request assumes you haven't changed the model. If you did, fill it in the \"model\" value in the payload json below\n",
    "curl -X POST http://<Instance-IP-Address>:<Port>/v1/completions -H \"Content-Type: application/json\"  -d '{\"model\" : \"casperhansen/llama-3-70b-instruct-awq\", \"prompt\": \"Hello, how are you?\", \"max_tokens\": 50}'\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "headers = {\n",
    "    'Content-Type': 'application/json',\n",
    "}\n",
    "\n",
    "json_data = {\n",
    "    'model': 'casperhansen/llama-3-70b-instruct-awq',\n",
    "    'prompt': 'Hello, how are you?',\n",
    "    'max_tokens': 50,\n",
    "}\n",
    "\n",
    "response = requests.post('http://<Instance-IP-Address>:<Port>/v1/completions', headers=headers, json=json_data)\n",
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And with the OpenAI SDK:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "# Modify OpenAI's API key and API base to use vLLM's's API server.\n",
    "openai_api_key = \"EMPTY\"\n",
    "openai_api_base = \"http://<Instance-IP-Address>:<Port>/v1\"\n",
    "client = OpenAI(\n",
    "    api_key=openai_api_key,\n",
    "    base_url=openai_api_base,\n",
    ")\n",
    "completion = client.completions.create(model=\"casperhansen/llama-3-70b-instruct-awq\",\n",
    "                                      prompt=\"Hello, how are you?\",\n",
    "                                      max_tokens=50)\n",
    "print(\"Completion result:\", completion)"
   ]
  }
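  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a large model like this, you may want tokens streamed back as they are generated rather than waiting for the full completion. The sketch below uses the OpenAI SDK's `stream=True` option against the same completions endpoint; it assumes the instance IP and port are filled in as before."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(\n",
    "    api_key=\"EMPTY\",\n",
    "    base_url=\"http://<Instance-IP-Address>:<Port>/v1\",\n",
    ")\n",
    "\n",
    "# stream=True returns an iterator of chunks instead of a single response\n",
    "stream = client.completions.create(\n",
    "    model=\"casperhansen/llama-3-70b-instruct-awq\",\n",
    "    prompt=\"Hello, how are you?\",\n",
    "    max_tokens=50,\n",
    "    stream=True,\n",
    ")\n",
    "for chunk in stream:\n",
    "    print(chunk.choices[0].text, end=\"\", flush=True)"
   ]
  }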
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
