
fix(ollama): add Gemma to tool support heuristic for Ollama and LM Studio #12158

Open
octo-patch wants to merge 1 commit into continuedev:main from octo-patch:fix/gemma-tool-support-ollama-lmstudio

Conversation

Contributor

octo-patch commented Apr 17, 2026

Fixes #12131

Problem

Gemma 3/4 models running via Ollama or LM Studio are not recognized as supporting tool calling, so Continue never offers tools to them. This affects users who run Gemma models locally through either provider.

The root cause: the Ollama provider's static model heuristic in toolSupport.ts is missing "gemma". The LM Studio provider delegates to Ollama's heuristic, so both are affected.

Without this entry, modelSupportsNativeTools() returns false for any Gemma model, and tools are never offered or sent — even if the model's Ollama template includes .Tools support (the secondary check added in #11670).
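To make the failure mode concrete, here is a minimal sketch of how such a static prefix heuristic behaves. This is illustrative, not Continue's actual code: the function name modelSupportsNativeTools comes from the PR description, but the list contents (other than "gemma") and the matching logic are assumptions.

```typescript
// Illustrative sketch of a static tool-support heuristic; the real list
// lives in Continue's toolSupport.ts. Entries other than "gemma" are
// assumed here for demonstration.
const OLLAMA_TOOL_SUPPORT_PREFIXES = [
  "llama3",
  "qwen",
  "mistral",
  "gemma", // <-- the entry this PR adds
];

function modelSupportsNativeTools(modelName: string): boolean {
  const normalized = modelName.toLowerCase();
  // Without a matching entry, the model is pre-filtered out and tools
  // are never offered, regardless of what its template supports.
  return OLLAMA_TOOL_SUPPORT_PREFIXES.some((prefix) =>
    normalized.includes(prefix),
  );
}
```

Before this change, a model name like "gemma3:12b" matched nothing in the list, so the function returned false and tools were never sent.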

Note: the openai provider already lists Gemma as supported, and the existing test suite confirms this (lines 109-113 in toolSupport.test.ts).

Solution

Add "gemma" to the Ollama supported-models list with a reference to Google's function calling documentation:
https://ai.google.dev/gemma/docs/capabilities/function-calling

For Ollama, the /api/show template check (from #11670) will still act as a secondary gate — if the specific Gemma model's template doesn't include .Tools, tools will be skipped. This PR only removes the pre-filter that was blocking Gemma entirely.
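The secondary gate described above can be sketched as follows. The /api/show endpoint and its template response field are part of Ollama's documented REST API; the function names and the exact substring check are assumptions for illustration, not Continue's implementation.

```typescript
// Hedged sketch of the secondary per-model gate from #11670: query
// Ollama's /api/show endpoint and only offer tools when the model's Go
// template actually references .Tools.
function templateReferencesTools(template: string | undefined): boolean {
  return template?.includes(".Tools") ?? false;
}

async function ollamaTemplateSupportsTools(
  modelName: string,
  baseUrl = "http://localhost:11434",
): Promise<boolean> {
  const res = await fetch(`${baseUrl}/api/show`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: modelName }),
  });
  if (!res.ok) return false;
  const data = (await res.json()) as { template?: string };
  return templateReferencesTools(data.template);
}
```

So even with "gemma" in the static list, a Gemma variant whose template omits .Tools is still filtered out by this check.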

Testing

Added test cases for Gemma models in both ollama and lmstudio provider sections. All 83 tests pass.

Tests: 83 passed, 83 total

Summary by cubic

Enable native tool calling for Gemma 3/4 models when run via ollama or lmstudio. Adds "gemma" to the Ollama tool-support heuristic so Continue offers tools; LM Studio inherits this, and the existing /api/show template check still gates per-model support.

Written for commit 990aa9e.

…udio (fixes continuedev#12131)

Gemma 3/4 support function calling per Google's documentation:
https://ai.google.dev/gemma/docs/capabilities/function-calling

The Ollama provider's static heuristic was missing "gemma", preventing
Continue from offering tools to Gemma models served via Ollama or LM
Studio. Without this, modelSupportsNativeTools() returns false and
tools are never sent, even when the model's Ollama template includes
.Tools support.

This fix adds "gemma" to the supported model list, allowing the
secondary /api/show template check (added in continuedev#11670) to make the final
determination for Ollama, and enabling tool use for Gemma models in
LM Studio as well.
octo-patch requested a review from a team as a code owner April 17, 2026 02:26
octo-patch requested review from sestinj and removed request for a team April 17, 2026 02:26
dosubot bot added the size:XS label (This PR changes 0-9 lines, ignoring generated files.) Apr 17, 2026
@github-actions
Contributor


Thank you for your submission; we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a pull request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


octo-patch seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.

Contributor

cubic-dev-ai bot left a comment


No issues found across 2 files


Labels

size:XS This PR changes 0-9 lines, ignoring generated files.

Projects

Status: Todo

Development

Successfully merging this pull request may close these issues.

Qwen 3.5 and Gemma 4 can't call tools

1 participant