Using Agents and Tools

IMPORTANT

Not all LLMs support function calling and the use of tools. Please see the compatibility section for more information.

As outlined by Andrew Ng in Agentic Design Patterns Part 3, Tool Use, LLMs can act as agents by leveraging external tools. Andrew notes some common examples such as web searching or code execution that have obvious benefits when using LLMs.

In the plugin, tools are simply context and actions that are shared with an LLM via a system prompt. The LLM can act as an agent by requesting tools via the chat buffer which in turn orchestrates their use within Neovim. Agents and tools can be added as a participant to the chat buffer by using the @ key.

IMPORTANT

The agentic use of some tools in the plugin results in you, the developer, acting as the human-in-the-loop and approving their use.

How Tools Work

Tools make use of an LLM's function calling ability. All tools in CodeCompanion follow OpenAI's function calling specification, here.

When a tool is added to the chat buffer, the plugin instructs the LLM to return structured JSON conforming to the schema defined for each tool. The chat buffer parses the LLM's response and detects the tool use before triggering the agent/init.lua file. The agent triggers a series of events, which sees tools added to a queue and worked through sequentially, with their output shared back to the LLM via the chat buffer. Depending on the tool, flags may be inserted on the chat buffer for later processing.
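As an illustration, a tool call returned by an OpenAI-compatible model typically takes the following shape (the tool name and arguments below are hypothetical, not the plugin's actual schema):

```json
{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "cmd_runner",
    "arguments": "{\"cmd\": \"pytest\"}"
  }
}
```

The plugin parses the `function.name` and JSON-encoded `arguments` to decide which tool to queue and with what parameters.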

An outline of the architecture can be seen here.

Community Tools

There is also a thriving ecosystem of user-created tools:

  • VectorCode - A code repository indexing tool to supercharge your LLM experience
  • mcphub.nvim - A powerful Neovim plugin for managing MCP (Model Context Protocol) servers

The section of the discussion forums dedicated to user-created tools can be found here.

@cmd_runner

The @cmd_runner tool enables an LLM to execute commands on your machine, subject to your authorization. For example:

```md
Can you use the @cmd_runner tool to run my test suite with `pytest`?
```

```md
Use the @cmd_runner tool to install any missing libraries in my project
```

Some commands do not write any data to stdout which means the plugin can't pass the output of the execution to the LLM. When this occurs, the tool will instead share the exit code.

The LLM is specifically instructed to detect if you're running a test suite, and if so, to insert a flag in its request. This is then detected and the outcome of the test is stored in the corresponding flag on the chat buffer. This makes it ideal for workflows to hook into.

@create_file

NOTE

By default, this tool requires user approval before it can be executed

Create a file within the current working directory:

```md
Can you create some test fixtures using the @create_file tool?
```
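If you'd prefer to skip the confirmation dialog for this tool, the approval requirement can typically be toggled in the tool's opts. This is a sketch, assuming your version of the plugin exposes a `requires_approval` option on the tool:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      tools = {
        ["create_file"] = {
          opts = {
            requires_approval = false, -- assumed option: skip the confirmation dialog
          },
        },
      },
    },
  },
})
```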

@file_search

This tool enables an LLM to search for files in the current working directory by glob pattern. It will return a list of relative paths for any matching files.

```md
Use the @file_search tool to list all the lua files in my project
```

@grep_search

IMPORTANT

This tool requires ripgrep to be installed

This tool enables an LLM to search for text within files in the current working directory. For every match, the output ((unknown):{line number} {relative filepath}) will be shared with the LLM:

```md
Use the @grep_search tool to find all occurrences of `buf_add_message`?
```

@insert_edit_into_file

NOTE

By default, when editing files, this tool requires user approval before it can be executed

This tool can edit buffers and files for code changes from an LLM:

```md
Use the @insert_edit_into_file tool to refactor the code in #buffer
```

```md
Can you apply the suggested changes to the buffer with the @insert_edit_into_file tool?
```

@next_edit_suggestion

Inspired by Copilot Next Edit Suggestion, the @next_edit_suggestion tool gives the LLM the ability to show the user where the next edit is. The LLM can only suggest edits in files or buffers that have been shared with it as context.

The jump action can be customised in the opts table:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      tools = {
        ["next_edit_suggestion"] = {
          opts = {
            --- the default is to open in a new tab, and reuse existing tabs
            --- where possible
            ---@type string|fun(path: string):integer?
            jump_action = 'tabnew',
          },
        }
      }
    }
  }
})
```

The jump_action can be a VimScript command (as a string), or a Lua function that accepts the path to the file and optionally returns the window ID. The window ID is needed if you want the LLM to point you to a specific line in the file.
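For example, a custom jump_action might open the file in a vertical split and return the window ID so the LLM can point to a specific line. This is a sketch; adapt the window command to your preferred layout:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      tools = {
        ["next_edit_suggestion"] = {
          opts = {
            -- open the file in a vertical split and return the window ID
            ---@param path string
            ---@return integer
            jump_action = function(path)
              vim.cmd.vsplit(path)
              return vim.api.nvim_get_current_win()
            end,
          },
        },
      },
    },
  },
})
```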

@read_file

This tool can read the contents of a specific file in the current working directory. This can be useful for an LLM to gain wider context of files that haven't been shared with it.

@web_search

The @web_search tool enables an LLM to search the web for a specific query. This can be useful to supplement an LLM's knowledge cut-off date with more up-to-date information.

```md
Can you use the @web_search tool to tell me the latest version of Neovim?
```

Currently, the tool uses Tavily and you'll need to ensure that an API key has been set accordingly, as per the adapter.
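One way to wire this up is to extend the adapter in your config. This is a sketch, assuming the plugin's standard adapter-extension pattern and that the `tavily` adapter reads its key from the `api_key` env entry:

```lua
require("codecompanion").setup({
  adapters = {
    tavily = function()
      return require("codecompanion.adapters").extend("tavily", {
        env = {
          api_key = "TAVILY_API_KEY", -- name of the environment variable holding your key
        },
      })
    end,
  },
})
```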

Tool Groups

CodeCompanion comes with two built-in tool groups:

  • @full_stack_dev - Contains cmd_runner, create_file, read_file, and insert_edit_into_file tools
  • @files - Contains create_file, read_file, and insert_edit_into_file tools

When you include a tool group in your chat (e.g., @files), all tools within that group become available to the LLM. By default, all the tools in the group will be shown as a single <group>name</group> reference in the chat buffer.

If you want to show all tools as references in the chat buffer, set the collapse_tools option to false:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      tools = {
        groups = {
          ["files"] = {
            opts = {
              collapse_tools = false, -- Shows all tools in the group as individual references
            },
          },
        },
      }
    }
  }
})
```
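A custom group can be defined in the same groups table. The group name, description, and tool list below are hypothetical, and the exact schema may differ between plugin versions:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      tools = {
        groups = {
          ["docs_writer"] = { -- hypothetical group name
            description = "Tools for reading and editing files",
            tools = { "read_file", "insert_edit_into_file" },
          },
        },
      },
    },
  },
})
```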

Approvals

Some tools, such as @cmd_runner, require the user to approve any actions before they can be executed. If the tool requires approval, a vim.fn.confirm dialog will prompt you for a response.

Useful Tips

Combining Tools

Consider combining tools for complex tasks:

```md
@full_stack_dev I want to play Snake. Can you create the game for me in Python and install any packages you need. Let's save it to ~/Code/Snake. When you've finished writing it, can you open it so I can play?
```

Automatic Tool Mode

The plugin allows you to run tools on autopilot. This automatically approves any tool use instead of prompting the user, disables any diffs, submits error and success messages, and automatically saves any buffers that the agent has edited. Set the global variable vim.g.codecompanion_auto_tool_mode to enable this, or set it to nil to disable it. Alternatively, the gta keymap toggles the feature from within the chat buffer.
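For example, to flip the mode from your config or on the fly:

```lua
-- enable automatic tool mode (auto-approve tools, disable diffs, auto-save buffers)
vim.g.codecompanion_auto_tool_mode = true

-- disable it again
vim.g.codecompanion_auto_tool_mode = nil
```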

Compatibility

Below is the tool use status of various adapters and models in CodeCompanion:

| Adapter | Model | Supported | Notes |
| --- | --- | --- | --- |
| Anthropic | claude-3-opus-20240229 | ✅ | |
| Anthropic | claude-3-5-haiku-20241022 | ✅ | |
| Anthropic | claude-3-5-sonnet-20241022 | ✅ | |
| Anthropic | claude-3-7-sonnet-20250219 | ✅ | |
| Copilot | gpt-4o | ✅ | |
| Copilot | gpt-4.1 | ✅ | |
| Copilot | o1 | ✅ | |
| Copilot | o3-mini | ✅ | |
| Copilot | o4-mini | ✅ | |
| Copilot | claude-3-5-sonnet | ✅ | |
| Copilot | claude-3-7-sonnet | ✅ | |
| Copilot | claude-3-7-sonnet-thought | ❌ | Doesn't support function calling |
| Copilot | gemini-2.0-flash-001 | ✅ | |
| Copilot | gemini-2.5-pro | ✅ | |
| DeepSeek | deepseek-chat | ✅ | |
| DeepSeek | deepseek-reasoner | ❌ | Doesn't support function calling |
| Gemini | Gemini-2.0-flash | ✅ | |
| Gemini | Gemini-2.5-pro-exp-03-25 | ✅ | |
| Gemini | gemini-2.5-flash-preview | ✅ | |
| GitHub Models | All | ❌ | Not supported yet |
| Huggingface | All | ❌ | Not supported yet |
| Mistral | All | ❌ | Not supported yet |
| Novita | All | ❌ | Not supported yet |
| Ollama | All | ❌ | Is currently broken |
| OpenAI Compatible | All | — | Dependent on the model and provider |
| OpenAI | gpt-3.5-turbo | ✅ | |
| OpenAI | gpt-4.1 | ✅ | |
| OpenAI | gpt-4 | ✅ | |
| OpenAI | gpt-4o | ✅ | |
| OpenAI | gpt-4o-mini | ✅ | |
| OpenAI | o1-2024-12-17 | ✅ | |
| OpenAI | o1-mini-2024-09-12 | ❌ | Doesn't support function calling |
| OpenAI | o3-mini-2025-01-31 | ✅ | |
| xAI | All | ❌ | Not supported yet |

Released under the MIT License.