Accessing an LLM Offline Using Anaconda AI Navigator
The objective of this assignment is to familiarize students with running a large language model (LLM) locally using Anaconda AI Navigator’s built-in graphical user interface (GUI). Students will install Anaconda AI Navigator, test the Phi-3-Mini-4K-Instruct model (or another small model) via the GUI chat interface, configure the built-in API server, and demonstrate its functionality as a basic chatbot application. This lab emphasizes leveraging Anaconda’s out-of-the-box tools for secure, local AI deployment without additional coding.
Students will download and install Anaconda AI Navigator, a desktop application providing a GUI for interacting with LLMs locally. Using its built-in features, they will download and test the Phi-3-Mini-4K-Instruct model (a 3.8B-parameter model with a 4K-token context window) or another small model from the library. Students will explore the GUI chat interface by testing the model with at least three distinct prompts directly in the “Chat” tab. They will also ask the same question several times, modifying the inference settings (e.g., temperature) before each run and noting how the responses differ.

Next, they will configure the built-in API server (via the “API Server” tab) to expose the model at a local endpoint (e.g., http://localhost:8080), verify its status in Anaconda AI Navigator, and then send requests to that endpoint to test the LLM. This assignment focuses on using Anaconda AI Navigator’s native tools—no external Python coding or libraries beyond what’s provided are required—highlighting a user-friendly approach to local LLM deployment.
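As an optional check beyond the GUI, students who are comfortable with Python can query the local endpoint directly. The sketch below is a minimal, hedged example: it assumes the API server is running at http://localhost:8080 and exposes an OpenAI-style /v1/chat/completions route (common for llama.cpp-based servers, but verify the exact path and port shown in the API Server tab). The model name and prompt are illustrative placeholders; only the Python standard library is used.

```python
import json
import urllib.request

# Assumption: endpoint path and port as shown in the "API Server" tab.
URL = "http://localhost:8080/v1/chat/completions"

# Illustrative request body; the model name is a placeholder and may differ
# from what your local server expects.
payload = {
    "model": "Phi-3-mini-4k-instruct",
    "messages": [
        {"role": "user", "content": "Explain tokenization in one sentence."}
    ],
    "temperature": 0.7,
}

def ask(url=URL):
    """Send the chat request and return the model's reply (or an error note)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            body = json.load(resp)
            return body["choices"][0]["message"]["content"]
    except OSError as err:
        # Server not running, wrong port, or wrong path.
        return f"Request failed: {err}"

if __name__ == "__main__":
    print(ask())
```

If the request fails, confirm in the API Server tab that the server is started and that the port matches the URL above.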
