Local LLMs for Research

Level: Introductory · Duration: 90 minutes · Topics: AI, Python

About This Workshop

This workshop teaches you how to run open-weight large language models (LLMs) locally on your computer for research applications. You'll learn to use Ollama (a simple, powerful tool for running models such as Llama, Qwen, and Mistral) to perform text classification, quality evaluation, and batch processing of research data, all without cloud APIs or usage limits.
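As a taste of what the workshop covers, the sketch below shows zero-shot text classification against a locally running Ollama server via its HTTP API. The model name `llama3.2`, the label set, and the prompt wording are illustrative assumptions, not workshop specifics; Ollama must already be running (`ollama serve`) with the model pulled.

```python
# Minimal sketch: zero-shot text classification via Ollama's local HTTP API.
# Assumes Ollama is serving at the default address with a pulled model;
# the model name "llama3.2" is an assumption for illustration.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_prompt(text: str, labels: list[str]) -> str:
    """Build a constrained classification prompt that asks for a label only."""
    return (
        "Classify the following text into exactly one of these labels: "
        f"{', '.join(labels)}.\nRespond with the label only.\n\nText: {text}"
    )


def classify(text: str, labels: list[str], model: str = "llama3.2") -> str:
    """Send one classification request to a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(text, labels),
        "stream": False,  # return the full response as a single JSON object
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

Batch processing is then a loop over `classify` for each row of a dataset, which is one of the patterns the workshop walks through.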

Get Started
Quick Info
Duration: 90 minutes
Level: Introductory
Materials: GitHub Repository