
chore(adapters): move common options to model options #499

Open

harry-whorlow wants to merge 9 commits into main from move-baseoptions-to-model-options

Conversation

Contributor

@harry-whorlow harry-whorlow commented Apr 23, 2026

Summary by CodeRabbit

  • New Features

    • Added temperature and topP sampling control options for Gemini model configuration.
  • Refactor

    • Restructured how temperature, max tokens, and top_p parameters are handled across AI providers. These settings now flow through model-scoped provider options rather than top-level configuration fields.
  • Tests

    • Updated e2e smoke test routing configuration.

Contributor

coderabbitai Bot commented Apr 23, 2026

📝 Walkthrough

This PR refactors how temperature, topP, and maxTokens are handled across multiple AI provider adapters. Instead of being explicitly mapped from top-level TextOptions into request parameters, these values are now treated as model-scoped provider options sourced from modelOptions. A new generated route tree file is also added for smoke test e2e routing.
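As a concrete caller-side illustration of the change (a minimal sketch; the model name is a placeholder, and the exact option shapes come from each adapter's TextOptions and modelOptions types):

// Before: sampling controls as top-level TextOptions fields
const before = {
  model: 'some-model',
  temperature: 0.2,
  topP: 0.9,
  maxTokens: 1024,
}

// After: the same controls move under model-scoped modelOptions, using each
// provider's native key names (e.g. top_p / max_tokens for OpenAI-shaped
// providers, topP / maxOutputTokens for Gemini)
const after = {
  model: 'some-model',
  modelOptions: {
    temperature: 0.2,
    top_p: 0.9,
    max_tokens: 1024,
  },
}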

Changes

Cohort / File(s): Summary

  • Anthropic Adapter Configuration
    packages/typescript/ai-anthropic/src/adapters/text.ts
    Treats temperature and top_p as model-scoped options in validKeys, copies them from modelOptions into validProviderOptions, and derives defaultMaxTokens from modelOptions?.max_tokens instead of options.maxTokens.
  • Gemini Adapter & Configuration
    packages/typescript/ai-gemini/src/adapters/text.ts, packages/typescript/ai-gemini/src/text/text-provider-options.ts
    Removes the direct mapping of temperature, topP, and maxOutputTokens from options in the adapter; adds optional temperature and topP properties to the GeminiCommonConfigOptions interface with sampling-related documentation.
  • Grok Adapter
    packages/typescript/ai-grok/src/adapters/text.ts
    Adjusts the modelOptions type cast to exclude only tools and input; removes the explicit mapping of temperature, maxTokens, and topP to request parameters.
  • Groq Adapter
    packages/typescript/ai-groq/src/adapters/text.ts
    Tightens the modelOptions type cast to exclude only 'tools'; removes direct forwarding of temperature, maxTokens, and topP into the streaming request object.
  • Ollama Adapter & Metadata
    packages/typescript/ai-ollama/src/adapters/text.ts, packages/typescript/ai-ollama/src/meta/models-meta.ts
    Removes synthesis of ollamaOptions from generic TextOptions parameters; updates request option typing from the custom OllamaOptions to Ollama's exported Options type.
  • OpenAI Adapter
    packages/typescript/ai-openai/src/adapters/text.ts
    Tightens the modelOptions type cast to omit only 'tools' | 'metadata' | 'input', allowing temperature, max tokens, and top-p to be provided via modelOptions.
  • OpenRouter Adapter
    packages/typescript/ai-openrouter/src/adapters/text.ts
    Extracts variant from modelOptions and composes the model string with a variant suffix; removes the conditional top-level temperature, maxCompletionTokens, and topP fields.
  • Smoke Tests Routing
    packages/typescript/smoke-tests/e2e/src/routeTree.gen.ts
    Auto-generated TanStack Router route-construction file defining route instances with concrete IDs/paths, route type interfaces, and router configuration with SSR enabled.
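Most of the adapter changes above converge on the same request-building pattern. A minimal sketch of that pattern follows; the types and names here are illustrative stand-ins, not the actual adapter code:

// Forward validated modelOptions into the provider request, keeping the
// required fields last so caller-supplied options cannot clobber them.
interface TextOptionsSketch {
  model: string
  modelOptions?: Record<string, unknown>
}

function buildStreamingRequest(
  options: TextOptionsSketch,
  messages: Array<{ role: string; content: string }>,
) {
  const modelOptions = options.modelOptions ?? {}
  return {
    ...modelOptions, // temperature, top_p, max_tokens, ...
    model: options.model, // required fields win over modelOptions
    messages,
    stream: true,
  }
}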

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 From options they migrate and flow,
To modelOptions, down below,
Temperature hops through each provider's way,
A refactored config dance today! 🌟

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Description check: ⚠️ Warning. The pull request has no description from the author and is missing all required template sections, including Changes, Checklist items, and the Release Impact declaration. Resolution: add a comprehensive pull request description explaining the changes, confirm completion of the checklist items, and declare the release impact (changeset requirement).
✅ Passed checks (4 passed)
  • Title check: ✅ Passed. The title accurately describes the main objective of the pull request: moving common TextOptions (temperature, topP, maxTokens) from top-level handling to model-scoped provider options across multiple AI adapter packages.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.



Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.



@harry-whorlow harry-whorlow force-pushed the move-baseoptions-to-model-options branch from 61a42c2 to 2663f0f on April 23, 2026 15:15
Contributor

github-actions Bot commented Apr 23, 2026

🚀 Changeset Version Preview

11 package(s) bumped directly, 22 bumped as dependents.

🟥 Major bumps

Package Version Reason
@tanstack/ai-fal 0.6.17 → 1.0.0 Changeset
@tanstack/ai-gemini 0.9.1 → 1.0.0 Changeset
@tanstack/ai-grok 0.6.8 → 1.0.0 Changeset
@tanstack/ai-openai 0.8.1 → 1.0.0 Changeset
@tanstack/ai-openrouter 0.8.1 → 1.0.0 Changeset
@tanstack/ai-react 0.7.15 → 1.0.0 Changeset
@tanstack/ai-solid 0.6.19 → 1.0.0 Changeset
@tanstack/ai-svelte 0.6.19 → 1.0.0 Changeset
@tanstack/ai-vue 0.6.19 → 1.0.0 Changeset
@tanstack/ai-anthropic 0.8.1 → 1.0.0 Dependent
@tanstack/ai-code-mode 0.1.7 → 1.0.0 Dependent
@tanstack/ai-code-mode-skills 0.1.7 → 1.0.0 Dependent
@tanstack/ai-elevenlabs 0.1.7 → 1.0.0 Dependent
@tanstack/ai-event-client 0.2.7 → 1.0.0 Dependent
@tanstack/ai-groq 0.1.7 → 1.0.0 Dependent
@tanstack/ai-isolate-node 0.1.7 → 1.0.0 Dependent
@tanstack/ai-isolate-quickjs 0.1.7 → 1.0.0 Dependent
@tanstack/ai-ollama 0.6.9 → 1.0.0 Dependent
@tanstack/ai-preact 0.6.19 → 1.0.0 Dependent
@tanstack/ai-react-ui 0.6.1 → 1.0.0 Dependent
@tanstack/ai-solid-ui 0.6.1 → 1.0.0 Dependent

🟨 Minor bumps

Package Version Reason
@tanstack/ai 0.13.0 → 0.14.0 Changeset
@tanstack/ai-client 0.7.14 → 0.8.0 Changeset

🟩 Patch bumps

Package Version Reason
@tanstack/ai-code-mode-models-eval 0.0.10 → 0.0.11 Dependent
@tanstack/ai-devtools-core 0.3.24 → 0.3.25 Dependent
@tanstack/ai-isolate-cloudflare 0.1.7 → 0.1.8 Dependent
@tanstack/ai-vue-ui 0.1.30 → 0.1.31 Dependent
@tanstack/preact-ai-devtools 0.1.28 → 0.1.29 Dependent
@tanstack/react-ai-devtools 0.2.28 → 0.2.29 Dependent
@tanstack/solid-ai-devtools 0.2.28 → 0.2.29 Dependent
ts-svelte-chat 0.1.36 → 0.1.37 Dependent
ts-vue-chat 0.1.36 → 0.1.37 Dependent
vanilla-chat 0.0.34 → 0.0.35 Dependent


nx-cloud Bot commented Apr 23, 2026

View your CI Pipeline Execution ↗ for commit 5f0ee08

Command Status Duration Result
nx run-many --targets=build --exclude=examples/** ✅ Succeeded 1s View ↗

☁️ Nx Cloud last updated this comment at 2026-04-23 15:24:47 UTC


pkg-pr-new Bot commented Apr 23, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@499

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@499

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@499

@tanstack/ai-code-mode

npm i https://pkg.pr.new/@tanstack/ai-code-mode@499

@tanstack/ai-code-mode-skills

npm i https://pkg.pr.new/@tanstack/ai-code-mode-skills@499

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@499

@tanstack/ai-elevenlabs

npm i https://pkg.pr.new/@tanstack/ai-elevenlabs@499

@tanstack/ai-event-client

npm i https://pkg.pr.new/@tanstack/ai-event-client@499

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@499

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@499

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@499

@tanstack/ai-groq

npm i https://pkg.pr.new/@tanstack/ai-groq@499

@tanstack/ai-isolate-cloudflare

npm i https://pkg.pr.new/@tanstack/ai-isolate-cloudflare@499

@tanstack/ai-isolate-node

npm i https://pkg.pr.new/@tanstack/ai-isolate-node@499

@tanstack/ai-isolate-quickjs

npm i https://pkg.pr.new/@tanstack/ai-isolate-quickjs@499

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@499

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@499

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@499

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@499

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@499

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@499

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@499

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@499

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@499

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@499

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@499

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@499

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@499

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@499

commit: 5f0ee08

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
packages/typescript/ai-groq/src/adapters/text.ts (1)

409-442: ⚠️ Potential issue | 🟠 Major

Forward validated modelOptions into the Groq request.

modelOptions is validated but not spread into the request, so provider options moved under modelOptions have no effect.

🐛 Proposed fix
     return {
+      ...(modelOptions ?? {}),
       model: options.model,
       messages,
       tools,
       stream: true,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 409 - 442, the
code validates modelOptions but never forwards them into the Groq request;
update the request builder that returns the object containing model, messages,
tools, stream to also merge in the validated modelOptions (spread modelOptions
into the returned object) so provider-specific options in modelOptions are
applied; locate the block using modelOptions, validateTextProviderOptions,
convertToolsToProviderFormat, and convertMessageToGroq and add the modelOptions
spread to that returned payload.
packages/typescript/ai-grok/src/adapters/text.ts (1)

419-457: ⚠️ Potential issue | 🟠 Major

Forward validated modelOptions into the Grok request.

modelOptions is validated but never included in the returned params, so moved fields such as temperature, top_p, and max_tokens are silently dropped.

🐛 Proposed fix
     return {
+      ...(modelOptions ?? {}),
       model: options.model,
       messages,
       tools: tools as Array<OpenAI_SDK.Chat.Completions.ChatCompletionTool>,
       stream: true,
       stream_options: { include_usage: true },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 419 - 457, the
validated modelOptions (variable modelOptions from the text request builder) are
not being forwarded into the returned Grok request params, so include/merge the
properties of modelOptions (already validated by validateTextProviderOptions)
into the returned object for the request (while preserving existing keys like
model, messages, tools, stream, and stream_options); ensure you forward only the
allowed fields (modelOptions was typed as Omit<InternalTextProviderOptions,
'tools' | 'input'>) and maintain the types expected by the API used with
convertToolsToProviderFormat and convertMessageToGrok.
packages/typescript/ai-ollama/src/adapters/text.ts (1)

549-558: ⚠️ Potential issue | 🟠 Major

Align modelOptions with the payload shape before nesting it under options.

The adapter now treats modelOptions as Ollama runtime options, but the public fallback type is still ChatRequest; callers can pass { options: { temperature: 0.2 } }, which the adapter then sends as options.options, a nesting Ollama ignores.

🐛 Proposed direction
  private mapCommonOptionsToOllama(options: TextOptions): ChatRequest {
     const model = options.model
-    const modelOptions = options.modelOptions as
-      | OllamaTextProviderOptions
-      | undefined
+    const modelOptions = options.modelOptions as Partial<OllamaTextProviderOptions> | undefined
 
     return {
       model,
       options: modelOptions,

Either change ResolveModelOptions<TModel>/generated Ollama model options to represent Partial<Options> directly, or destructure full request-shaped modelOptions and pass only its runtime options fields into ChatRequest.options.
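For the second route, a minimal hypothetical sketch (toOllamaRequest is an illustrative helper, assuming the ChatRequest and Options types exported by the ollama package):

import type { ChatRequest, Options } from 'ollama'

// Forward only the nested runtime options, so the payload becomes
// { options: { temperature, ... } } rather than { options: { options: ... } }.
function toOllamaRequest(
  model: string,
  messages: ChatRequest['messages'],
  modelOptions?: { options?: Partial<Options> },
): ChatRequest {
  const { options: runtimeOptions } = modelOptions ?? {}
  return { model, messages, options: runtimeOptions }
}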

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 549 - 558,
the adapter is nesting the caller's full ChatRequest-shaped modelOptions under
options (resulting in options.options), so update mapCommonOptionsToOllama to
extract/align the runtime options before constructing the Ollama ChatRequest: in
mapCommonOptionsToOllama (and where OllamaTextProviderOptions is referenced)
destructure modelOptions to pull the runtime fields (e.g., const { options:
runtimeOptions, ... } = modelOptions || {}) or map ResolveModelOptions<TModel>
to Partial<Options>, then pass runtimeOptions as the top-level
ChatRequest.options (not the whole modelOptions object) so Ollama receives the
correct payload shape.
packages/typescript/ai-openai/src/adapters/text.ts (1)

816-843: ⚠️ Potential issue | 🟠 Major

Bridge top-level generation options that are passed to the adapter but not forwarded to OpenAI.

TextOptions exposes temperature, topP, and maxTokens, and the chat activity passes these to adapter.chatStream(), but mapTextOptionsToOpenAI() only forwards provider-shaped modelOptions. These standard fields are silently dropped. Map them to OpenAI-compatible names and spread before modelOptions so they can be overridden by model-specific options if needed.

Example fix
     const tools = options.tools
       ? convertToolsToProviderFormat(options.tools)
       : undefined
 
+    const commonGenerationOptions = {
+      ...(options.temperature !== undefined
+        ? { temperature: options.temperature }
+        : {}),
+      ...(options.topP !== undefined ? { top_p: options.topP } : {}),
+      ...(options.maxTokens !== undefined
+        ? { max_output_tokens: options.maxTokens }
+        : {}),
+    }
+
     const requestParams: Omit<
       OpenAI_SDK.Responses.ResponseCreateParams,
       'stream'
     > = {
       model: options.model,
       metadata: options.metadata,
       instructions: options.systemPrompts?.join('\n'),
+      ...commonGenerationOptions,
       ...modelOptions,
       input,
       tools,
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 816 - 843,
mapTextOptionsToOpenAI is dropping top-level TextOptions fields (temperature,
topP, maxTokens) so bridge-level settings never reach OpenAI; modify
mapTextOptionsToOpenAI to read options.temperature, options.topP and
options.maxTokens, build OpenAI-compatible keys (temperature, top_p, max_tokens)
and include them in requestParams before spreading modelOptions so
model-specific values can override them; keep existing conversion of
messages/tools and preserve validateTextProviderOptions and input handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@terminalOutput`:
- Around line 1-48: The committed terminal build log (terminalOutput) is a
non-source artifact and should be removed from the PR; delete the terminalOutput
file and add its name (or a suitable pattern like /terminalOutput or
/dist-logs/) to .gitignore to prevent future commits. Also, to silence the
vite-tsconfig-paths warning from examples/ts-svelte-chat/tsconfig.json, either
run svelte-kit sync from the example (add "postinstall": "cd
examples/ts-svelte-chat && svelte-kit sync" to package.json scripts) or
configure the tsconfigPaths plugin to set ignoreConfigErrors: true (look for the
vite-tsconfig-paths plugin configuration in your Vite config).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f7a2fd92-82c1-4873-8411-c3891d5d92b5

📥 Commits

Reviewing files that changed from the base of the PR and between 54523f5 and 61a42c2.

📒 Files selected for processing (11)
  • packages/typescript/ai-anthropic/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-grok/src/adapters/text.ts
  • packages/typescript/ai-groq/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-openai/src/adapters/text.ts
  • packages/typescript/ai-openrouter/src/adapters/text.ts
  • packages/typescript/smoke-tests/e2e/src/routeTree.gen.ts
  • terminalOutput
💤 Files with no reviewable changes (1)
  • packages/typescript/ai-gemini/src/adapters/text.ts

Comment on lines 25 to +33
export interface GeminiCommonConfigOptions {
  /**
   * Configuration options for model generation and outputs.
   * Controls the randomness of the output. Range: [0.0, 2.0]. Higher values produce more random output.
   */
  temperature?: number
  /**
   * Nucleus sampling probability threshold. Range: (0.0, 1.0). Alter this or temperature, not both.
   */
  topP?: number
Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed (outputs omitted):

#!/bin/bash
set -euo pipefail

# Expect: adapter no longer contains a direct maxTokens -> maxOutputTokens mapping,
# and provider option types should expose maxOutputTokens if modelOptions is now the path.
rg -nP --type=ts -C3 '\bmaxTokens\b|\bmaxOutputTokens\b' packages/typescript/ai-gemini packages/typescript/ai/src

cat -n packages/typescript/ai-gemini/src/text/text-provider-options.ts | head -100
fd -t f "text.ts" packages/typescript/ai-gemini/src/adapters --exec cat -n {} \; | head -150
rg -n "mapCommonOptionsToGemini" packages/typescript/ai-gemini/src --context 20 -A 30
rg -n "ExternalTextProviderOptions" packages/typescript/ai-gemini/src
sed -n '253,260p' packages/typescript/ai-gemini/src/text/text-provider-options.ts
cat -n packages/typescript/ai-gemini/src/text/text-provider-options.ts | tail -50
rg -n "maxTokens" packages/typescript/ai-gemini/src/adapters/text.ts -B 2 -A 2
grep -n "maxOutputTokens" packages/typescript/ai-gemini/src/text/text-provider-options.ts
grep -n "maxTokens" packages/typescript/ai-gemini/tests/gemini-adapter.test.ts -B 3 -A 3
rg -n "maxTokens\|maxOutputTokens" packages/typescript/ai-gemini/src/adapters/ -B 2 -A 2
rg -n "maxTokens" packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | grep -A 5 "expect.*maxOutputTokens"
cat -n packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | sed -n '104,125p'
cat -n packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | sed -n '200,230p'
rg -n "class BaseTextAdapter\|chatStream" packages/typescript/ai/src/adapters -B 2 -A 15 | head -80
find packages -name "*.ts" -type f | xargs grep -l "class BaseTextAdapter" | head -5
fd -t f "index.ts" packages/typescript/ai/src/adapters | head -10
cat -n packages/typescript/ai/src/activities/chat/adapter.ts | head -150
rg -n "maxTokens" packages/typescript/ai/src/activities/chat/adapter.ts -B 3 -A 3
rg -n "mapCommonOptionsToGemini" packages/typescript/ai-gemini/src/adapters/text.ts -B 5 -A 40 | grep -A 40 "private mapCommonOptionsToGemini"
rg -n "temperature\|topP\|maxTokens" packages/typescript/ai/src/activities/chat/adapter.ts | head -20
rg -n "maxTokens" packages/typescript/ai-gemini/src/ --type ts -B 2 -A 2 | grep -v test | grep -v ".test.ts"
rg -n "\.maxTokens" packages/typescript/ai/src/activities/chat/ --type ts -B 2 -A 2 | head -30
cat -n packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | sed -n '95,130p'
rg -n "maxTokens.*maxOutputTokens\|maxOutputTokens.*maxTokens" packages/typescript/ai-gemini/ --type ts
rg -n "const.*maxOutputTokens\|maxOutputTokens.*=" packages/typescript/ai-gemini/src --type ts -B 2 -A 2
cat -n packages/typescript/ai/src/activities/chat/index.ts | sed -n '570,620p'
rg -n "maxTokens.*maxOutputTokens" packages/typescript/ai-gemini/ -B 5 -A 5
cat -n packages/typescript/ai-gemini/src/adapters/text.ts | sed -n '800,835p'
rg -n "temperature\|topP" packages/typescript/ai-gemini/src/adapters/text.ts -B 2 -A 2
fd -t f "test*.ts" packages/typescript/ai-gemini | xargs grep -l "maxTokens.*maxOutputTokens" | head -1


Add maxOutputTokens to GeminiCommonConfigOptions to support token limiting through modelOptions.

The adapter's mapCommonOptionsToGemini method spreads modelOptions directly into Gemini's config object (line 814). Since temperature and topP are now part of GeminiCommonConfigOptions, the provider-native field maxOutputTokens should be exposed alongside them for type-safe token-limit control through modelOptions.

🔧 Proposed fix
 export interface GeminiCommonConfigOptions {
   /**
    * Controls the randomness of the output. Range: [0.0, 2.0]. Higher values produce more random output.
    */
   temperature?: number
   /**
    * Nucleus sampling probability threshold. Range: (0.0, 1.0). Alter this or temperature, not both.
    */
   topP?: number
+  /**
+   * Maximum number of tokens that can be generated in the response.
+   */
+  maxOutputTokens?: number
   /**
    * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.
    */
   stopSequences?: Array<string>
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export interface GeminiCommonConfigOptions {
  /**
   * Controls the randomness of the output. Range: [0.0, 2.0]. Higher values produce more random output.
   */
  temperature?: number
  /**
   * Nucleus sampling probability threshold. Range: (0.0, 1.0). Alter this or temperature, not both.
   */
  topP?: number
  /**
   * Maximum number of tokens that can be generated in the response.
   */
  maxOutputTokens?: number
  /**
   * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.
   */
  stopSequences?: Array<string>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-gemini/src/text/text-provider-options.ts` around lines
25 - 33, the GeminiCommonConfigOptions interface is missing the provider-native
maxOutputTokens field so type-safe token limits passed via modelOptions are not
recognized; add an optional maxOutputTokens?: number property to
GeminiCommonConfigOptions so when mapCommonOptionsToGemini spreads modelOptions
into Gemini's config (mapCommonOptionsToGemini, modelOptions) the token limit is
typed and passed through correctly.

Comment thread on terminalOutput (outdated)
Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)

449-491: ⚠️ Potential issue | 🔴 Critical

Critical: modelOptions is validated but never forwarded to the Grok request.

After removing the explicit temperature / max_tokens / top_p mappings, modelOptions is read and validated on lines 452–461 but is never spread into the returned ChatCompletionCreateParamsStreaming on lines 484–490. Net effect: temperature, top_p, max_tokens, and any other provider options passed via modelOptions are silently dropped for Grok — the PR's intent (route generation controls through modelOptions) is effectively a no-op here.

Every other adapter in this PR forwards them (OpenAI ...modelOptions, OpenRouter ...restModelOptions, Ollama options: modelOptions, Anthropic ...validProviderOptions). Grok needs the same.

🐛 Proposed fix
     return {
+      ...modelOptions,
       model: options.model,
       messages,
       tools: tools as Array<OpenAI_SDK.Chat.Completions.ChatCompletionTool>,
       stream: true,
       stream_options: { include_usage: true },
     }

Spread order matches OpenRouter: user-provided modelOptions can't clobber the required model / messages / tools / stream fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 449 - 491,
mapTextOptionsToGrok currently validates modelOptions but never includes them in
the returned ChatCompletionCreateParamsStreaming, so provider-level settings
like temperature/max_tokens are dropped; update mapTextOptionsToGrok to spread
the validated modelOptions into the returned object (e.g., ...modelOptions)
while ensuring the required fields (model, messages, tools, stream,
stream_options) remain last so user options can't clobber them; reference
function names/values: mapTextOptionsToGrok, modelOptions,
validateTextProviderOptions, messages, tools, and the returned
ChatCompletionCreateParamsStreaming shape.
🧹 Nitpick comments (2)
packages/typescript/ai-anthropic/src/adapters/text.ts (1)

283-333: Changes look correct; minor nit on || 1024.

Adding 'temperature' / 'top_p' to validKeys correctly routes them through validProviderOptions and into requestParams via the spread on line 330. The thinkingBudget / maxTokens interaction still holds.

Nit: modelOptions?.max_tokens || 1024 will also fall back to 1024 when max_tokens is explicitly 0. 0 isn't a sensible request anyway, but using ?? better reflects the intent ("default only when unspecified").

Optional tweak
-    const defaultMaxTokens = modelOptions?.max_tokens || 1024
+    const defaultMaxTokens = modelOptions?.max_tokens ?? 1024
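A two-line illustration of the difference when max_tokens is explicitly 0:

const maxTokens = 0
console.log(maxTokens || 1024) // 1024: || falls back on any falsy value, including 0
console.log(maxTokens ?? 1024) // 0: ?? falls back only on null or undefined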
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 283 -
333, the fallback for max tokens should only apply when modelOptions?.max_tokens
is undefined/null, not when it's 0: change the defaultMaxTokens calculation to
use the nullish coalescing operator (use modelOptions?.max_tokens ?? 1024) in
the code that sets defaultMaxTokens so the rest of the logic in maxTokens and
requestParams (variables: defaultMaxTokens, maxTokens, requestParams) behaves as
intended.
packages/typescript/ai-openai/src/adapters/text.ts (1)

847-877: Minor: spread ordering can let modelOptions override model/instructions.

modelOptions is typed with Omit<..., 'tools' | 'metadata' | 'input'> and spread between the explicit model/metadata/instructions and input/tools. metadata, tools, and input are safe (Omit'd and/or re-set after the spread), but model and instructions are not Omit'd, so if a caller (or a downstream internal path) ever puts a model or instructions key into modelOptions, it would silently override options.model / the joined systemPrompts.

Consider either extending the Omit to exclude 'model' | 'instructions', or spreading ...modelOptions first and then letting the explicit fields win:

Proposed adjustment
     const modelOptions = options.modelOptions as
-      | Omit<InternalTextProviderOptions, 'tools' | 'metadata' | 'input'>
+      | Omit<
+          InternalTextProviderOptions,
+          'tools' | 'metadata' | 'input' | 'model' | 'instructions'
+        >
       | undefined

or

     const requestParams: Omit<
       OpenAI_SDK.Responses.ResponseCreateParams,
       'stream'
     > = {
+      ...modelOptions,
       model: options.model,
       metadata: options.metadata,
       instructions: options.systemPrompts?.join('\n'),
-      ...modelOptions,
       input,
       tools,
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 847 - 877,
the mapTextOptionsToOpenAI function can let modelOptions override options.model
and the computed instructions because modelOptions is spread between explicit
fields; update the code so explicit model/instructions always win: either change
the modelOptions type declaration to Omit<InternalTextProviderOptions, 'tools' |
'metadata' | 'input' | 'model' | 'instructions'> or move the spread of
modelOptions earlier (spread ...modelOptions first, then set model:
options.model and instructions: options.systemPrompts?.join('\n') afterwards) so
requestParams cannot be overridden by modelOptions; reference
mapTextOptionsToOpenAI, modelOptions and requestParams when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7ed86eb6-5112-4841-8445-590c624f64e8

📥 Commits

Reviewing files that changed from the base of the PR and between 61a42c2 and 5f0ee08.

📒 Files selected for processing (9)
  • packages/typescript/ai-anthropic/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-grok/src/adapters/text.ts
  • packages/typescript/ai-groq/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/adapters/text.ts
  • packages/typescript/ai-ollama/src/meta/models-meta.ts
  • packages/typescript/ai-openai/src/adapters/text.ts
  • packages/typescript/ai-openrouter/src/adapters/text.ts
💤 Files with no reviewable changes (1)
  • packages/typescript/ai-gemini/src/adapters/text.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/typescript/ai-groq/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
