Ollama
A lightweight runtime tool for downloading, running, and managing open-source LLMs in local environments
#Ollama #local LLM runtime #self-hosted model tool #local AI tool
What is Ollama?
Ollama is a local runtime that helps developers quickly download and run open-source LLMs on personal or internal machines.
Why is it widely used?
Its setup is simple and CLI-first: pulling and running a model takes only a couple of commands, which makes local AI pilots easy to start and iterate on.
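Beyond the CLI, Ollama serves a REST API on localhost:11434 by default, so a pilot can script against a local model directly. The sketch below drives that API from Python using only the standard library; the model name "llama3" is a placeholder for whatever model you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running server and a pulled model):
#   generate("llama3", "Explain what a local LLM runtime is in one sentence.")
```

Start the server with `ollama serve` (or the desktop app) and fetch a model with `ollama pull` before calling `generate`.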
Operational considerations
Model size must fit within available memory and GPU resources, and team deployments should add model versioning and access-control practices.
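For sizing, a useful back-of-the-envelope rule (generic arithmetic, not anything Ollama-specific) is that model weights alone occupy roughly parameter count times bits-per-weight divided by 8 bytes, with KV cache and activations adding overhead on top:

```python
def estimated_weight_gib(params_billion: float, bits_per_weight: int) -> float:
    """Rough lower bound on memory for model weights: params x bits / 8 bytes.

    Runtime overhead (KV cache, activations, context length) comes on top,
    so treat the result as a floor, not a full budget.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / (1024 ** 3)


# A 7B model at 4-bit quantization needs roughly 3.3 GiB just for weights;
# the same model at 16-bit needs about 13 GiB.
```

This is why a quantized 7B model runs comfortably on a typical laptop while a full-precision one may not.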
Related terms
Agent Orchestration (AI Infrastructure): An operating approach that coordinates multiple AI agents and tools under shared routing and control policies
Agentic Coding (AI Productivity & Collaboration): A development style where AI agents handle multi-step coding tasks beyond simple code completion
AGI, or Artificial General Intelligence (Natural Language Processing): A hypothetical AI system capable of performing any intellectual task a human can
AI Agent (Natural Language Processing): An autonomous AI system that can plan, use tools, and take actions to achieve goals
AI App Store (AI Business, Funding & Market): A platform for discovering, installing, and monetizing apps or agents built on top of AI models
AI Chip Export Controls (AI Ethics & Policy): Trade control frameworks that restrict cross-border transfer of high-end AI semiconductors for national security reasons