AI models
Take local control of multiple AI models
OLLAMA
Taking his tentative first step towards world domination, Tam Hanna discovers how to control all the AI LLMs he needs.
Credit: https://ollama.com
OUR EXPERT Tam Hanna has found his customers seeking quick demonstrations of various LLMs. Ollama is a handy tool for achieving this goal.
QUICK TIP
Ollama is not limited to working on Linux. Should you prefer to run your LLMs on a macOS machine, visit https://ollama.com/download/mac to download the Apple version of the component. Should you need Windows, there’s a preview version at https://ollama.com/download/windows.
Large language models, or LLMs for short, are among the most interesting applications of artificial intelligence; few fields of business don’t profit from LLM usage. While most LLMs today are at least quasi-open source, getting them to run efficiently can be an experience not dissimilar to the challenge of herding cats.
Ollama aims to provide a unified evaluation surface that gives developers, researchers and experimenters easy access to a variety of LLMs. In principle, the system is an abstraction layer that sits between the application and the language model. An application program or a developer interacts with the Ollama system, which then marshals the various commands to the underlying LLMs.
Ollama, however, does not limit itself to experimentation. The models it manages can also be exposed through APIs and other interfaces. If a system can be made to run against one of these local models, developers can save the (often eye-wateringly high) fees charged by model providers such as OpenAI.
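As a sketch of what such an interface looks like: by default, Ollama serves a REST API on local port 11434, to which a prompt can be sent with nothing more than curl. The model name llama3 below is an assumption – substitute any model you have previously fetched with ollama pull.

```shell
#!/bin/sh
# Build the JSON request body for Ollama's /api/generate endpoint.
# "llama3" is an assumed model name -- use whatever you have pulled.
PAYLOAD='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# POST the prompt to the local Ollama service (default port 11434).
# With "stream": false, the reply arrives as a single JSON object.
curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```

Because the endpoint speaks plain JSON over HTTP, any programming language with an HTTP client can drive it the same way – which is precisely what makes swapping Ollama in for a paid provider attractive.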
Due to this, having an Ollama instance ready to run is a rewarding exercise not only for people interested in evaluating the various AI models. This article will show you how to get the system running on a Linux machine with Ubuntu 22.04 LTS.
Getting started with the Ollama environment is a multi-step process. First, run the download and installation script by entering the following command into a terminal emulator:
$ curl -fsSL https://ollama.com/install.sh | sh
Ollama integrates deeply into the workstation’s operating system; in addition to launching a service, it creates a new user group. Because of that, the installation script requires superuser privileges – when prompted for your administrator password, enter it to ensure the installation runs unmolested. When done, the program shows Ollama status information similar to that displayed in the screenshot (above).