chore(adapters): move common options to model options #499
harry-whorlow wants to merge 9 commits into main from
Conversation
📝 Walkthrough

This PR refactors how temperature, topP, and maxTokens are handled across multiple AI provider adapters. Instead of being explicitly mapped from top-level TextOptions into request parameters, these values are now treated as model-scoped provider options sourced from modelOptions. A new generated route tree file is also added for smoke-test e2e routing.
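As a rough illustration of the shape change described above, here is a minimal sketch; the simplified interfaces and the snake_case provider keys are assumptions for illustration, not the library's actual generics.

```ts
// Minimal sketch of the refactor: common generation options move from the top
// level of TextOptions into a provider-scoped modelOptions object. Field names
// and shapes are illustrative assumptions, not the real @tanstack/ai types.
interface TextOptionsBefore {
  model: string
  temperature?: number
  topP?: number
  maxTokens?: number
}

interface TextOptionsAfter {
  model: string
  // Each adapter is expected to spread these into its provider request payload.
  modelOptions?: {
    temperature?: number
    top_p?: number
    max_tokens?: number
  }
}

const before: TextOptionsBefore = { model: 'some-model', temperature: 0.2, maxTokens: 256 }
const after: TextOptionsAfter = {
  model: 'some-model',
  modelOptions: { temperature: 0.2, max_tokens: 256 },
}

console.log(before, after)
```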
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
⚠️ Warning: Review ran into problems

🔥 Problems: Git: Failed to clone repository. Please run the

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
61a42c2 to 2663f0f (Compare)
🚀 Changeset Version Preview: 11 package(s) bumped directly, 22 bumped as dependents.

🟥 Major bumps
🟨 Minor bumps
🟩 Patch bumps
View your CI Pipeline Execution ↗ for commit 5f0ee08
☁️ Nx Cloud last updated this comment at
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
commit:
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/typescript/ai-groq/src/adapters/text.ts (1)
409-442: ⚠️ Potential issue | 🟠 Major: Forward validated `modelOptions` into the Groq request.

`modelOptions` is validated but not spread into the request, so provider options moved under `modelOptions` have no effect.

🐛 Proposed fix
```diff
 return {
+  ...(modelOptions ?? {}),
   model: options.model,
   messages,
   tools,
   stream: true,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 409-442: the code validates modelOptions but never forwards them into the Groq request; update the request builder that returns the object containing model, messages, tools, stream to also merge in the validated modelOptions (spread modelOptions into the returned object) so provider-specific options in modelOptions are applied; locate the block using modelOptions, validateTextProviderOptions, convertToolsToProviderFormat, and convertMessageToGroq and add the modelOptions spread to that returned payload.

packages/typescript/ai-grok/src/adapters/text.ts (1)
419-457: ⚠️ Potential issue | 🟠 Major: Forward validated `modelOptions` into the Grok request.

`modelOptions` is validated but never included in the returned params, so moved fields such as `temperature`, `top_p`, and `max_tokens` are silently dropped.

🐛 Proposed fix
```diff
 return {
+  ...(modelOptions ?? {}),
   model: options.model,
   messages,
   tools: tools as Array<OpenAI_SDK.Chat.Completions.ChatCompletionTool>,
   stream: true,
   stream_options: { include_usage: true },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 419-457: the validated modelOptions (variable modelOptions from the text request builder) are not being forwarded into the returned Grok request params, so include/merge the properties of modelOptions (already validated by validateTextProviderOptions) into the returned object for the request (while preserving existing keys like model, messages, tools, stream, and stream_options); ensure you forward only the allowed fields (modelOptions was typed as Omit<InternalTextProviderOptions, 'tools' | 'input'>) and maintain the types expected by the API used with convertToolsToProviderFormat and convertMessageToGrok.
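For reference, a generic sketch of the "spread provider options, required fields last" pattern these two findings ask for; the names below are illustrative, not the Groq/Grok adapters' real internals.

```ts
// Hedged sketch of forwarding validated provider options into a request payload
// while keeping required fields authoritative. ProviderOptions and buildRequest
// are hypothetical names, not the adapters' actual code.
type ProviderOptions = {
  temperature?: number
  top_p?: number
  max_tokens?: number
}

function buildRequest(
  model: string,
  messages: Array<unknown>,
  modelOptions?: ProviderOptions,
) {
  return {
    ...(modelOptions ?? {}), // provider options first, so they can't clobber required fields
    model,
    messages,
    stream: true as const,
  }
}

// Usage: temperature/top_p/max_tokens flow through; model/messages/stream stay fixed.
console.log(buildRequest('example-model', [], { temperature: 0.1, max_tokens: 64 }))
```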
packages/typescript/ai-ollama/src/adapters/text.ts (1)

549-558: ⚠️ Potential issue | 🟠 Major: Align `modelOptions` with the payload shape before nesting it under `options`.

The adapter now treats `modelOptions` as Ollama runtime options, but the public fallback type is still `ChatRequest`; callers can pass `{ options: { temperature: 0.2 } }`, which this sends as `options.options`, and Ollama will ignore it.

🐛 Proposed direction
```diff
 private mapCommonOptionsToOllama(options: TextOptions): ChatRequest {
   const model = options.model
-  const modelOptions = options.modelOptions as
-    | OllamaTextProviderOptions
-    | undefined
+  const modelOptions = options.modelOptions as Partial<OllamaTextProviderOptions> | undefined

   return {
     model,
     options: modelOptions,
```

Either change `ResolveModelOptions<TModel>` / the generated Ollama model options to represent `Partial<Options>` directly, or destructure the full request-shaped `modelOptions` and pass only its runtime `options` fields into `ChatRequest.options`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 549-558: the adapter is nesting the caller's full ChatRequest-shaped modelOptions under options (resulting in options.options), so update mapCommonOptionsToOllama to extract/align the runtime options before constructing the Ollama ChatRequest: in mapCommonOptionsToOllama (and where OllamaTextProviderOptions is referenced) destructure modelOptions to pull the runtime fields (e.g., const { options: runtimeOptions, ... } = modelOptions || {}) or map ResolveModelOptions<TModel> to Partial<Options>, then pass runtimeOptions as the top-level ChatRequest.options (not the whole modelOptions object) so Ollama receives the correct payload shape.
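A small sketch of the "unwrap the nested options key" direction suggested here; the type names and accepted shapes are assumptions for illustration only.

```ts
// Hedged sketch: accept either a runtime-options object or a ChatRequest-shaped
// object with a nested `options` key, and always emit a flat ChatRequest.options.
// OllamaRuntimeOptions / toOllamaChatRequest are hypothetical names.
interface OllamaRuntimeOptions {
  temperature?: number
  top_p?: number
  num_predict?: number
}

interface RequestShapedOptions {
  options?: OllamaRuntimeOptions
}

function toOllamaChatRequest(
  model: string,
  modelOptions?: OllamaRuntimeOptions | RequestShapedOptions,
) {
  // Unwrap a nested `options` key if the caller passed a request-shaped object,
  // so the payload never becomes options.options (which Ollama would ignore).
  const runtime =
    modelOptions && 'options' in modelOptions ? modelOptions.options : modelOptions
  return { model, options: runtime }
}

console.log(toOllamaChatRequest('llama3', { options: { temperature: 0.2 } }))
console.log(toOllamaChatRequest('llama3', { temperature: 0.2 }))
```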
packages/typescript/ai-openai/src/adapters/text.ts (1)

816-843: ⚠️ Potential issue | 🟠 Major: Bridge top-level generation options that are passed to the adapter but not forwarded to OpenAI.

`TextOptions` exposes `temperature`, `topP`, and `maxTokens`, and the chat activity passes these to `adapter.chatStream()`, but `mapTextOptionsToOpenAI()` only forwards provider-shaped `modelOptions`. These standard fields are silently dropped. Map them to OpenAI-compatible names and spread before `modelOptions` so they can be overridden by model-specific options if needed.

Example fix
```diff
 const tools = options.tools
   ? convertToolsToProviderFormat(options.tools)
   : undefined

+const commonGenerationOptions = {
+  ...(options.temperature !== undefined
+    ? { temperature: options.temperature }
+    : {}),
+  ...(options.topP !== undefined ? { top_p: options.topP } : {}),
+  ...(options.maxTokens !== undefined
+    ? { max_output_tokens: options.maxTokens }
+    : {}),
+}
+
 const requestParams: Omit<
   OpenAI_SDK.Responses.ResponseCreateParams,
   'stream'
 > = {
   model: options.model,
   metadata: options.metadata,
   instructions: options.systemPrompts?.join('\n'),
+  ...commonGenerationOptions,
   ...modelOptions,
   input,
   tools,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 816 - 843, mapTextOptionsToOpenAI is dropping top-level TextOptions fields (temperature, topP, maxTokens) so bridge-level settings never reach OpenAI; modify mapTextOptionsToOpenAI to read options.temperature, options.topP and options.maxTokens, build OpenAI-compatible keys (temperature, top_p, max_tokens) and include them in requestParams before spreading modelOptions so model-specific values can override them; keep existing conversion of messages/tools and preserve validateTextProviderOptions and input handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-gemini/src/text/text-provider-options.ts`:
- Around line 25-33: The GeminiCommonConfigOptions interface is missing the
provider-native maxOutputTokens field so type-safe token limits passed via
modelOptions are not recognized; add an optional maxOutputTokens?: number
property to GeminiCommonConfigOptions so when mapCommonOptionsToGemini spreads
modelOptions into Gemini's config (mapCommonOptionsToGemini, modelOptions) the
token limit is typed and passed through correctly.
In `@terminalOutput`:
- Around line 1-48: The committed terminal build log (terminalOutput) is a
non-source artifact and should be removed from the PR; delete the terminalOutput
file and add its name (or a suitable pattern like /terminalOutput or
/dist-logs/) to .gitignore to prevent future commits. Also, to silence the
vite-tsconfig-paths warning from examples/ts-svelte-chat/tsconfig.json either
run svelte-kit sync from the example (add "postinstall": "cd
examples/ts-svelte-chat && svelte-kit sync" to package.json scripts) or
configure the tsconfigPaths plugin to set ignoreConfigErrors: true (look for the
vite-tsconfig-paths plugin configuration in your Vite config).
---
Outside diff comments:
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 419-457: The validated modelOptions (variable modelOptions from
the text request builder) are not being forwarded into the returned Grok request
params, so include/merge the properties of modelOptions (already validated by
validateTextProviderOptions) into the returned object for the request (while
preserving existing keys like model, messages, tools, stream, and
stream_options); ensure you forward only the allowed fields (modelOptions was
typed as Omit<InternalTextProviderOptions, 'tools' | 'input'>) and maintain the
types expected by the API used with convertToolsToProviderFormat and
convertMessageToGrok.
In `@packages/typescript/ai-groq/src/adapters/text.ts`:
- Around line 409-442: The code validates modelOptions but never forwards them
into the Groq request; update the request builder that returns the object
containing model, messages, tools, stream to also merge in the validated
modelOptions (spread modelOptions into the returned object) so provider-specific
options in modelOptions are applied; locate the block using modelOptions,
validateTextProviderOptions, convertToolsToProviderFormat, and
convertMessageToGroq and add the modelOptions spread to that returned payload.
In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 549-558: The adapter is nesting the caller's full
ChatRequest-shaped modelOptions under options (resulting in options.options), so
update mapCommonOptionsToOllama to extract/align the runtime options before
constructing the Ollama ChatRequest: in mapCommonOptionsToOllama (and where
OllamaTextProviderOptions is referenced) destructure modelOptions to pull the
runtime fields (e.g., const { options: runtimeOptions, ... } = modelOptions ||
{}) or map ResolveModelOptions<TModel> to Partial<Options>, then pass
runtimeOptions as the top-level ChatRequest.options (not the whole modelOptions
object) so Ollama receives the correct payload shape.
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 816-843: mapTextOptionsToOpenAI is dropping top-level TextOptions
fields (temperature, topP, maxTokens) so bridge-level settings never reach
OpenAI; modify mapTextOptionsToOpenAI to read options.temperature, options.topP
and options.maxTokens, build OpenAI-compatible keys (temperature, top_p,
max_tokens) and include them in requestParams before spreading modelOptions so
model-specific values can override them; keep existing conversion of
messages/tools and preserve validateTextProviderOptions and input handling.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f7a2fd92-82c1-4873-8411-c3891d5d92b5
📒 Files selected for processing (11)
packages/typescript/ai-anthropic/src/adapters/text.ts
packages/typescript/ai-gemini/src/adapters/text.ts
packages/typescript/ai-gemini/src/text/text-provider-options.ts
packages/typescript/ai-grok/src/adapters/text.ts
packages/typescript/ai-groq/src/adapters/text.ts
packages/typescript/ai-ollama/src/adapters/text.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
packages/typescript/ai-openai/src/adapters/text.ts
packages/typescript/ai-openrouter/src/adapters/text.ts
packages/typescript/smoke-tests/e2e/src/routeTree.gen.ts
terminalOutput
💤 Files with no reviewable changes (1)
- packages/typescript/ai-gemini/src/adapters/text.ts
```ts
export interface GeminiCommonConfigOptions {
  /**
   * Configuration options for model generation and outputs.
   * Controls the randomness of the output. Range: [0.0, 2.0]. Higher values produce more random output.
   */
  temperature?: number
  /**
   * Nucleus sampling probability threshold. Range: (0.0, 1.0). Alter this or temperature, not both.
   */
  topP?: number
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Expect: adapter no longer contains a direct maxTokens -> maxOutputTokens mapping,
# and provider option types should expose maxOutputTokens if modelOptions is now the path.
rg -nP --type=ts -C3 '\bmaxTokens\b|\bmaxOutputTokens\b' packages/typescript/ai-gemini packages/typescript/ai/src
```

Length of output: 11787

🏁 Script executed: `cat -n packages/typescript/ai-gemini/src/text/text-provider-options.ts | head -100` (length of output: 5727)

🏁 Script executed: `fd -t f "text.ts" packages/typescript/ai-gemini/src/adapters --exec cat -n {} \; | head -150` (length of output: 5836)

🏁 Script executed: `rg -n "mapCommonOptionsToGemini" packages/typescript/ai-gemini/src --context 20 -A 30` (length of output: 8331)

🏁 Script executed: `rg -n "ExternalTextProviderOptions" packages/typescript/ai-gemini/src` (length of output: 447)

🏁 Script executed: `sed -n '253,260p' packages/typescript/ai-gemini/src/text/text-provider-options.ts` (length of output: 299)

🏁 Script executed: `cat -n packages/typescript/ai-gemini/src/text/text-provider-options.ts | tail -50` (length of output: 1928)

🏁 Script executed: `rg -n "maxTokens" packages/typescript/ai-gemini/src/adapters/text.ts -B 2 -A 2` (length of output: 37)

🏁 Script executed: `grep -n "maxOutputTokens" packages/typescript/ai-gemini/src/text/text-provider-options.ts` (length of output: 370)

🏁 Script executed: `grep -n "maxTokens" packages/typescript/ai-gemini/tests/gemini-adapter.test.ts -B 3 -A 3` (length of output: 492)

🏁 Script executed: `rg -n "maxTokens\|maxOutputTokens" packages/typescript/ai-gemini/src/adapters/ -B 2 -A 2` (length of output: 37)

🏁 Script executed: `rg -n "maxTokens" packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | grep -A 5 "expect.*maxOutputTokens"` (length of output: 37)

🏁 Script executed: `cat -n packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | sed -n '104,125p'` (length of output: 795)

🏁 Script executed: `cat -n packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | sed -n '200,230p'` (length of output: 1315)

🏁 Script executed: `rg -n "class BaseTextAdapter\|chatStream" packages/typescript/ai/src/adapters -B 2 -A 15 | head -80` (length of output: 130)

🏁 Script executed: `find packages -name "*.ts" -type f | xargs grep -l "class BaseTextAdapter" | head -5` (length of output: 108)

🏁 Script executed: `fd -t f "index.ts" packages/typescript/ai/src/adapters | head -10` (length of output: 177)

🏁 Script executed: `cat -n packages/typescript/ai/src/activities/chat/adapter.ts | head -150` (length of output: 5884)

🏁 Script executed: `rg -n "maxTokens" packages/typescript/ai/src/activities/chat/adapter.ts -B 3 -A 3` (length of output: 37)

🏁 Script executed: `rg -n "mapCommonOptionsToGemini" packages/typescript/ai-gemini/src/adapters/text.ts -B 5 -A 40 | grep -A 40 "private mapCommonOptionsToGemini"` (length of output: 1576)

🏁 Script executed: `rg -n "temperature\|topP\|maxTokens" packages/typescript/ai/src/activities/chat/adapter.ts | head -20` (length of output: 37)

🏁 Script executed: `rg -n "maxTokens" packages/typescript/ai-gemini/src/ --type ts -B 2 -A 2 | grep -v test | grep -v ".test.ts"` (length of output: 37)

🏁 Script executed: `rg -n "\.maxTokens" packages/typescript/ai/src/activities/chat/ --type ts -B 2 -A 2 | head -30` (length of output: 1522)

🏁 Script executed: `cat -n packages/typescript/ai-gemini/tests/gemini-adapter.test.ts | sed -n '95,130p'` (length of output: 1352)

🏁 Script executed: `rg -n "maxTokens.*maxOutputTokens\|maxOutputTokens.*maxTokens" packages/typescript/ai-gemini/ --type ts` (length of output: 37)

🏁 Script executed: `rg -n "const.*maxOutputTokens\|maxOutputTokens.*=" packages/typescript/ai-gemini/src --type ts -B 2 -A 2` (length of output: 37)

🏁 Script executed: `cat -n packages/typescript/ai/src/activities/chat/index.ts | sed -n '570,620p'` (length of output: 1964)

🏁 Script executed: `rg -n "maxTokens.*maxOutputTokens" packages/typescript/ai-gemini/ -B 5 -A 5` (length of output: 37)

🏁 Script executed: `cat -n packages/typescript/ai-gemini/src/adapters/text.ts | sed -n '800,835p'` (length of output: 1395)

🏁 Script executed: `rg -n "temperature\|topP" packages/typescript/ai-gemini/src/adapters/text.ts -B 2 -A 2` (length of output: 37)

🏁 Script executed: `fd -t f "test*.ts" packages/typescript/ai-gemini | xargs grep -l "maxTokens.*maxOutputTokens" | head -1` (length of output: 37)

(All scripts ran against repository TanStack/ai.)
Add maxOutputTokens to GeminiCommonConfigOptions to support token limiting through modelOptions.
The adapter's mapCommonOptionsToGemini method spreads modelOptions directly into Gemini's config object (line 814). Since temperature and topP are now part of GeminiCommonConfigOptions, the provider-native field maxOutputTokens should be exposed alongside them for type-safe token-limit control through modelOptions.
🔧 Proposed fix

```diff
 export interface GeminiCommonConfigOptions {
   /**
    * Controls the randomness of the output. Range: [0.0, 2.0]. Higher values produce more random output.
    */
   temperature?: number
   /**
    * Nucleus sampling probability threshold. Range: (0.0, 1.0). Alter this or temperature, not both.
    */
   topP?: number
+  /**
+   * Maximum number of tokens that can be generated in the response.
+   */
+  maxOutputTokens?: number
   /**
    * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.
    */
   stopSequences?: Array<string>
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export interface GeminiCommonConfigOptions {
  /**
   * Controls the randomness of the output. Range: [0.0, 2.0]. Higher values produce more random output.
   */
  temperature?: number
  /**
   * Nucleus sampling probability threshold. Range: (0.0, 1.0). Alter this or temperature, not both.
   */
  topP?: number
  /**
   * Maximum number of tokens that can be generated in the response.
   */
  maxOutputTokens?: number
  /**
   * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.
   */
  stopSequences?: Array<string>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-gemini/src/text/text-provider-options.ts` around lines
25 - 33, The GeminiCommonConfigOptions interface is missing the provider-native
maxOutputTokens field so type-safe token limits passed via modelOptions are not
recognized; add an optional maxOutputTokens?: number property to
GeminiCommonConfigOptions so when mapCommonOptionsToGemini spreads modelOptions
into Gemini's config (mapCommonOptionsToGemini, modelOptions) the token limit is
typed and passed through correctly.
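For context, a rough usage sketch of what this would enable at the call site; the config spreading shown below is a simplified assumption of what mapCommonOptionsToGemini does, not its actual implementation.

```ts
// Illustrative only: once maxOutputTokens exists on GeminiCommonConfigOptions,
// a typed token limit can ride along with temperature/topP via modelOptions.
const modelOptions = {
  temperature: 0.7,
  topP: 0.95,
  maxOutputTokens: 1024, // the provider-native limit this comment asks to expose
  stopSequences: ['END'],
}

// Simplified stand-in for the adapter spreading modelOptions into Gemini's config.
const config = { ...modelOptions }
console.log(config.maxOutputTokens) // 1024
```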
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)
449-491: ⚠️ Potential issue | 🔴 Critical: `modelOptions` is validated but never forwarded to the Grok request.

After removing the explicit `temperature`/`max_tokens`/`top_p` mappings, `modelOptions` is read and validated on lines 452–461 but is never spread into the returned `ChatCompletionCreateParamsStreaming` on lines 484–490. Net effect: `temperature`, `top_p`, `max_tokens`, and any other provider options passed via `modelOptions` are silently dropped for Grok; the PR's intent (route generation controls through `modelOptions`) is effectively a no-op here.

Every other adapter in this PR forwards them (OpenAI `...modelOptions`, OpenRouter `...restModelOptions`, Ollama `options: modelOptions`, Anthropic `...validProviderOptions`). Grok needs the same.

🐛 Proposed fix
```diff
 return {
+  ...modelOptions,
   model: options.model,
   messages,
   tools: tools as Array<OpenAI_SDK.Chat.Completions.ChatCompletionTool>,
   stream: true,
   stream_options: { include_usage: true },
 }
```

Spread order matches OpenRouter: user-provided `modelOptions` can't clobber the required `model`/`messages`/`tools`/`stream` fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 449 - 491, mapTextOptionsToGrok currently validates modelOptions but never includes them in the returned ChatCompletionCreateParamsStreaming, so provider-level settings like temperature/max_tokens are dropped; update mapTextOptionsToGrok to spread the validated modelOptions into the returned object (e.g., ...modelOptions) while ensuring the required fields (model, messages, tools, stream, stream_options) remain last so user options can't clobber them; reference function names/values: mapTextOptionsToGrok, modelOptions, validateTextProviderOptions, messages, tools, and the returned ChatCompletionCreateParamsStreaming shape.
🧹 Nitpick comments (2)
packages/typescript/ai-anthropic/src/adapters/text.ts (1)
283-333: Changes look correct; minor nit on `|| 1024`.

Adding `'temperature'`/`'top_p'` to `validKeys` correctly routes them through `validProviderOptions` and into `requestParams` via the spread on line 330. The `thinkingBudget`/`maxTokens` interaction still holds.

Nit: `modelOptions?.max_tokens || 1024` will also fall back to 1024 when `max_tokens` is explicitly `0`. `0` isn't a sensible request anyway, but using `??` better reflects the intent ("default only when unspecified").

Optional tweak
```diff
-const defaultMaxTokens = modelOptions?.max_tokens || 1024
+const defaultMaxTokens = modelOptions?.max_tokens ?? 1024
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 283-333: the fallback for max tokens should only apply when modelOptions?.max_tokens is undefined/null, not when it's 0: change the defaultMaxTokens calculation to use the nullish coalescing operator (use modelOptions?.max_tokens ?? 1024) in the code that sets defaultMaxTokens so the rest of the logic in maxTokens and requestParams (variables: defaultMaxTokens, maxTokens, requestParams) behaves as intended.
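A one-line illustration of the difference this nit is about, in plain TypeScript independent of the adapter code:

```ts
// || replaces any falsy value (including 0); ?? only replaces null/undefined.
const explicitZero: number | undefined = 0

const withOr = explicitZero || 1024      // 1024: 0 is falsy, so || falls back
const withNullish = explicitZero ?? 1024 // 0: ?? keeps the explicit value

console.log(withOr, withNullish)
```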
packages/typescript/ai-openai/src/adapters/text.ts (1)

847-877: Minor: spread ordering can let `modelOptions` override `model`/`instructions`.

`modelOptions` is typed with `Omit<..., 'tools' | 'metadata' | 'input'>` and spread between the explicit `model`/`metadata`/`instructions` and `input`/`tools`. `metadata`, `tools`, and `input` are safe (Omit'd and/or re-set after the spread), but `model` and `instructions` are not Omit'd, so if a caller (or a downstream internal path) ever puts a `model` or `instructions` key into `modelOptions`, it would silently override `options.model` / the joined `systemPrompts`.

Consider either extending the `Omit` to exclude `'model' | 'instructions'`, or spreading `...modelOptions` first and then letting the explicit fields win:

Proposed adjustment
```diff
 const modelOptions = options.modelOptions as
-  | Omit<InternalTextProviderOptions, 'tools' | 'metadata' | 'input'>
+  | Omit<
+      InternalTextProviderOptions,
+      'tools' | 'metadata' | 'input' | 'model' | 'instructions'
+    >
   | undefined
```

or

```diff
 const requestParams: Omit<
   OpenAI_SDK.Responses.ResponseCreateParams,
   'stream'
 > = {
+  ...modelOptions,
   model: options.model,
   metadata: options.metadata,
   instructions: options.systemPrompts?.join('\n'),
-  ...modelOptions,
   input,
   tools,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 847 - 877, The mapTextOptionsToOpenAI function can let modelOptions override options.model and the computed instructions because modelOptions is spread between explicit fields; update the code so explicit model/instructions always win: either change the modelOptions type declaration to Omit<InternalTextProviderOptions, 'tools' | 'metadata' | 'input' | 'model' | 'instructions'> or move the spread of modelOptions earlier (spread ...modelOptions first, then set model: options.model and instructions: options.systemPrompts?.join('\n') afterwards) so requestParams cannot be overridden by modelOptions; reference mapTextOptionsToOpenAI, modelOptions and requestParams when making the change.
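To make the ordering concern concrete, here is a tiny standalone example of object-spread precedence; it uses made-up values and is unrelated to the actual adapter types.

```ts
// In an object literal, later properties win; so a spread placed after explicit
// fields can silently override them, while a spread placed first cannot.
const modelOptions = { model: 'from-model-options', temperature: 0.3 }

const spreadLast = { model: 'explicit-model', ...modelOptions }  // model: 'from-model-options'
const spreadFirst = { ...modelOptions, model: 'explicit-model' } // model: 'explicit-model'

console.log(spreadLast.model, spreadFirst.model)
```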
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 449-491: mapTextOptionsToGrok currently validates modelOptions but
never includes them in the returned ChatCompletionCreateParamsStreaming, so
provider-level settings like temperature/max_tokens are dropped; update
mapTextOptionsToGrok to spread the validated modelOptions into the returned
object (e.g., ...modelOptions) while ensuring the required fields (model,
messages, tools, stream, stream_options) remain last so user options can't
clobber them; reference function names/values: mapTextOptionsToGrok,
modelOptions, validateTextProviderOptions, messages, tools, and the returned
ChatCompletionCreateParamsStreaming shape.
---
Nitpick comments:
In `@packages/typescript/ai-anthropic/src/adapters/text.ts`:
- Around line 283-333: The fallback for max tokens should only apply when
modelOptions?.max_tokens is undefined/null, not when it's 0: change the
defaultMaxTokens calculation to use the nullish coalescing operator (use
modelOptions?.max_tokens ?? 1024) in the code that sets defaultMaxTokens so the
rest of the logic in maxTokens and requestParams (variables: defaultMaxTokens,
maxTokens, requestParams) behaves as intended.
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 847-877: The mapTextOptionsToOpenAI function can let modelOptions
override options.model and the computed instructions because modelOptions is
spread between explicit fields; update the code so explicit model/instructions
always win: either change the modelOptions type declaration to
Omit<InternalTextProviderOptions, 'tools' | 'metadata' | 'input' | 'model' |
'instructions'> or move the spread of modelOptions earlier (spread
...modelOptions first, then set model: options.model and instructions:
options.systemPrompts?.join('\n') afterwards) so requestParams cannot be
overridden by modelOptions; reference mapTextOptionsToOpenAI, modelOptions and
requestParams when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7ed86eb6-5112-4841-8445-590c624f64e8
📒 Files selected for processing (9)
packages/typescript/ai-anthropic/src/adapters/text.ts
packages/typescript/ai-gemini/src/adapters/text.ts
packages/typescript/ai-gemini/src/text/text-provider-options.ts
packages/typescript/ai-grok/src/adapters/text.ts
packages/typescript/ai-groq/src/adapters/text.ts
packages/typescript/ai-ollama/src/adapters/text.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
packages/typescript/ai-openai/src/adapters/text.ts
packages/typescript/ai-openrouter/src/adapters/text.ts
💤 Files with no reviewable changes (1)
- packages/typescript/ai-gemini/src/adapters/text.ts
✅ Files skipped from review due to trivial changes (1)
- packages/typescript/ai-groq/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
Summary by CodeRabbit
New Features
Refactor
Tests