106 Commits

Xinlu Lai
52cb635512
Merge pull request #135 from RadonX/responses-api-adaptation
feat(openai): Responses API integration with adapter pattern
2025-11-15 20:32:17 +08:00
Radon Co
3d7f81242b test(responses-api): restructure test suite layout 2025-11-11 23:03:31 -08:00
Radon Co
14f9892bb5 feat(responses-api-adapter): Enhanced Tool Call Conversion
WHAT: Enhanced tool call handling in Responses API adapter with better validation, error handling, and test coverage

WHY: The adapter lacked robust tool call parsing and validation, leading to potential issues with malformed tool calls and incomplete test coverage. We needed to improve error handling and add comprehensive tests for real tool call scenarios.

HOW: Enhanced tool call result parsing with defensive null checking; improved assistant tool call parsing with proper validation; enhanced response tool call parsing with better structure and support for multiple tool call types; added validation for streaming tool call handling; updated tests to validate real tool call parsing from API; added multi-turn conversation test with tool result injection

Testing: All 3 integration tests pass with real API calls. Validated tool call parsing and tool result conversion working correctly. Real tool call detected and parsed successfully.
2025-11-11 12:58:30 -08:00
Radon Co
25adc80161 prompt(queryOpenAI): Separate adapter context from API execution
WHAT: Refactored queryOpenAI to prepare adapter context outside withRetry and execute API calls inside withRetry

WHY: The previous implementation mixed adapter preparation and execution, causing type confusion and state management issues

HOW: Created AdapterExecutionContext and QueryResult types, moved adapter context creation before withRetry block, wrapped all API calls (Responses API, Chat Completions, and legacy) inside withRetry with unified return structure, added normalizeUsage() helper to handle token field variations, ensured responseId and content are properly preserved through the unified return path
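In outline, the separation this commit describes might look like the following sketch (the type names AdapterExecutionContext, QueryResult, and normalizeUsage come from the commit message; the bodies and field names are illustrative assumptions, not Kode's actual code):

```typescript
// Hypothetical shapes: context is prepared ONCE, outside the retry loop.
type AdapterExecutionContext = {
  useResponsesAPI: boolean;
  request: Record<string, unknown>;
};

// Unified return structure for all three API paths.
type QueryResult = {
  content: string;
  responseId?: string;
  usage: { inputTokens: number; outputTokens: number };
};

// Different endpoints report token counts under different field names
// (input_tokens/output_tokens vs prompt_tokens/completion_tokens);
// normalize them into one shape.
function normalizeUsage(raw: Record<string, number | undefined>) {
  return {
    inputTokens: raw.input_tokens ?? raw.prompt_tokens ?? 0,
    outputTokens: raw.output_tokens ?? raw.completion_tokens ?? 0,
  };
}

// Minimal retry wrapper: only the API call runs inside it, never the
// adapter preparation, so retries cannot duplicate context-building work.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

The point of the restructure is that a retry re-runs only `fn()`, with the already-built context captured in its closure.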
2025-11-11 00:49:01 -08:00
Radon Co
8288378dbd refactor(claude.ts): Extract adapter path before withRetry for clean separation
Problem: Mixed return types from withRetry callback caused content loss when
adapter returned AssistantMessage but outer code expected ChatCompletion.

Solution: Restructured queryOpenAI to separate adapter and legacy paths:
- Adapter path (responsesAPI): Direct execution, early return, no withRetry
- Legacy path (chat_completions): Uses withRetry for retry logic

Benefits:
- No type confusion - adapter path never enters withRetry
- Clean separation of concerns - adapters handle format, legacy handles retry
- Streaming-ready architecture for future async generator implementation
- Content displays correctly in CLI (fixed empty content bug)
- All 14 tests pass (52 assertions)

Additional changes:
- Added StreamingEvent type to base adapter for future async generators
- Updated UnifiedResponse to support both string and array content
- Added comments explaining architectural decisions and future improvements
- Fixed content loss bug in responses API path
2025-11-10 23:51:09 -08:00
Radon Co
c8ecba04d8 fix: Return AssistantMessage early to prevent content loss
Prevents adapter responses from being overwritten with empty content.
Adds early return check when response.type === 'assistant' to preserve
correctly formatted content from the adapter path.

All tests pass, CLI content now displays correctly.
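A minimal sketch of the early-return guard this commit describes (the surrounding types are simplified stand-ins, not Kode's real ones, and the discriminant on the legacy shape is an assumption for illustration):

```typescript
type AssistantMessage = { type: "assistant"; content: string };
type ChatCompletion = {
  type: "completion";
  choices: { message: { content: string } }[];
};

function finalizeResponse(response: AssistantMessage | ChatCompletion): string {
  // Early return: adapter output is already formatted, so passing it
  // through the legacy extraction below would overwrite its content
  // with an empty value.
  if (response.type === "assistant") {
    return response.content;
  }
  // Legacy path: pull content out of the Chat Completions shape.
  return response.choices[0]?.message?.content ?? "";
}
```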
2025-11-09 23:47:53 -08:00
Radon Co
34cd4e250d feat(responsesAPI): Implement async generator streaming for real-time UI updates
WHAT:
- Refactored ResponsesAPIAdapter to support async generator streaming pattern
- Added parseStreamingResponse() method that yields StreamingEvent incrementally
- Maintained backward compatibility with parseStreamingResponseBuffered() method
- Updated UnifiedResponse type to support both string and array content formats

WHY:
- Aligns Responses API adapter with Kode's three-level streaming architecture (Provider → Query → REPL)
- Enables real-time UI updates with text appearing progressively instead of all at once
- Supports TTFT (Time-To-First-Token) tracking for performance monitoring
- Matches Chat Completions streaming implementation pattern for consistency
- Resolves architectural mismatch between adapter pattern and streaming requirements

HOW:
- responsesAPI.ts: Implemented async *parseStreamingResponse() yielding events (message_start, text_delta, tool_request, usage, message_stop)
- base.ts: Added StreamingEvent type definition and optional parseStreamingResponse() to base class
- modelCapabilities.ts: Updated UnifiedResponse.content to accept string | Array<{type, text?, [key]: any}>
- parseResponse() maintains backward compatibility by calling buffered version
- All 14 tests pass with no regressions
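The async-generator pattern named above can be sketched roughly as follows (event names follow the commit message; payload shapes and the chunk source are assumptions for illustration):

```typescript
type StreamingEvent =
  | { type: "message_start" }
  | { type: "text_delta"; text: string }
  | { type: "usage"; outputTokens: number }
  | { type: "message_stop" };

// Yields events incrementally so the UI can render text as it arrives,
// instead of waiting for the whole response.
async function* parseStreamingResponse(
  chunks: AsyncIterable<string> | Iterable<string>,
): AsyncGenerator<StreamingEvent> {
  yield { type: "message_start" };
  let tokens = 0;
  for await (const chunk of chunks) {
    tokens++;
    yield { type: "text_delta", text: chunk };
  }
  yield { type: "usage", outputTokens: tokens };
  yield { type: "message_stop" };
}

// Buffered fallback (backward compatibility): drain the generator and
// join the text deltas into one string.
async function parseStreamingResponseBuffered(
  chunks: Iterable<string>,
): Promise<string> {
  let out = "";
  for await (const ev of parseStreamingResponse(chunks)) {
    if (ev.type === "text_delta") out += ev.text;
  }
  return out;
}
```

A consumer can time the first `text_delta` against request start to get the TTFT metric the commit mentions.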
2025-11-09 23:14:16 -08:00
Radon Co
be6477cca7 feat: Fix CLI crash and add OpenAI Responses API integration
WHAT: Fix critical CLI crash with content.filter() error and implement OpenAI Responses API integration with comprehensive testing

WHY: CLI was crashing with 'TypeError: undefined is not an object (evaluating "content.filter")' when using OpenAI models, preventing users from making API calls. Additionally needed proper Responses API support with reasoning tokens.

HOW:
• Fix content extraction from OpenAI response structure in legacy path
• Add JSON/Zod schema detection in responsesAPI adapter
• Create comprehensive test suite for both integration and production scenarios
• Document the new adapter architecture and usage

CRITICAL FIXES:
• claude.ts: Extract content from response.choices[0].message.content instead of undefined response.content
• responsesAPI.ts: Detect if schema is already JSON (has 'type' property) vs Zod schema before conversion
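The schema-detection heuristic in the second fix might look like this sketch (the `_def` marker is how Zod schemas expose their internals; treating its absence plus a top-level `type` property as "already JSON" is an assumption drawn from the commit message, not Kode's actual code):

```typescript
// A plain JSON Schema object already carries a top-level "type"
// property, while a Zod schema instance carries a _def descriptor.
function isJsonSchema(schema: unknown): boolean {
  return (
    typeof schema === "object" &&
    schema !== null &&
    "type" in schema &&
    !("_def" in schema)
  );
}

// Pass the schema through untouched when it is already JSON; otherwise
// it would be handed to a Zod-to-JSON-Schema converter (omitted here).
function toToolParameters(schema: unknown): unknown {
  if (isJsonSchema(schema)) return schema;
  throw new Error("not a JSON schema: would convert from Zod here");
}
```

Without the guard, a pre-built JSON schema would be fed to the Zod converter and mangled, which matches the failure mode the fix targets.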

FILES:
• src/services/claude.ts - Critical bug fix for OpenAI response content extraction
• src/services/adapters/responsesAPI.ts - Robust schema detection for tool parameters
• src/test/integration-cli-flow.test.ts - Integration tests for full flow
• src/test/chat-completions-e2e.test.ts - End-to-end Chat Completions compatibility tests
• src/test/production-api-tests.test.ts - Production API tests with environment configuration
• docs/develop/modules/openai-adapters.md - New adapter system documentation
• docs/develop/README.md - Updated development documentation
2025-11-09 18:41:29 -08:00
Radon Co
7069893d14 feat(responses-api): Support OpenAI Responses API with proper parameter mapping
WHAT: Add support for OpenAI Responses API in Kode CLI adapter
WHY: Enable GPT-5 and similar models that require Responses API instead of Chat Completions; fix HTTP 400 errors and schema conversion failures
HOW: Fixed tool format to use flat structure matching API spec; added missing critical parameters (include array, parallel_tool_calls, store, tool_choice); implemented robust schema conversion handling both Zod and pre-built JSON schemas; added array-based content parsing for Anthropic compatibility; created comprehensive integration tests exercising the full claude.ts flow
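The flat tool mapping referred to here can be sketched as below (Chat Completions nests the definition under a `function` key, while the Responses API expects the same fields at the top level; the converter is illustrative, not the adapter's actual code):

```typescript
type ChatCompletionsTool = {
  type: "function";
  function: { name: string; description?: string; parameters?: unknown };
};

// Flat structure matching the Responses API spec.
type ResponsesTool = {
  type: "function";
  name: string;
  description?: string;
  parameters?: unknown;
};

// Hoist the nested fields to the top level; sending the nested shape
// to the Responses API is what produced the HTTP 400 errors.
function toResponsesTool(tool: ChatCompletionsTool): ResponsesTool {
  return {
    type: "function",
    name: tool.function.name,
    description: tool.function.description,
    parameters: tool.function.parameters,
  };
}
```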

AFFECTED FILES:
- src/services/adapters/responsesAPI.ts: Complete adapter implementation
- src/services/openai.ts: Simplified request handling
- src/test/integration-cli-flow.test.ts: New integration test suite
- src/test/responses-api-e2e.test.ts: Enhanced with production test capability

VERIFICATION:
- Integration tests pass: bun test src/test/integration-cli-flow.test.ts
- Production tests: PRODUCTION_TEST_MODE=true bun test src/test/responses-api-e2e.test.ts
2025-11-09 14:22:43 -08:00
Radon Co
3c9b0ec9d1 prompt(api): Add OpenAI Responses API support with SSE streaming
WHAT: Implement Responses API adapter with full SSE streaming support to enable Kode CLI working with GPT-5 and other models that require OpenAI Responses API format

WHY: GPT-5 and newer models use OpenAI Responses API (different from Chat Completions) which returns streaming SSE responses. Kode CLI needed a conversion layer to translate between Anthropic API format and OpenAI Responses API format for seamless model integration

HOW: Created ResponsesAPIAdapter that converts Anthropic UnifiedRequestParams to Responses API format (instructions, input array, max_output_tokens, stream=true), added SSE parser to collect streaming chunks and convert back to UnifiedResponse format. Fixed ModelAdapterFactory to properly select Responses API for GPT-5 models. Updated parseResponse to async across all adapters. Added production tests validating end-to-end conversion with actual API calls
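A rough sketch of the SSE collection step described here (SSE payloads arrive as `data: {...}` lines ending in `data: [DONE]`; the `response.output_text.delta` event shape is an assumption for illustration, and a real parser would also handle `event:` lines and partial chunks):

```typescript
// Collect the text deltas out of a raw SSE body into one string.
function collectSSEText(raw: string): string {
  let text = "";
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;
    const event = JSON.parse(payload);
    if (
      event.type === "response.output_text.delta" &&
      typeof event.delta === "string"
    ) {
      text += event.delta;
    }
  }
  return text;
}
```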
2025-11-09 01:29:04 -08:00
Xinlu Lai
a4c3f16c2b
not coding, kode is everything 2025-11-05 03:09:11 +08:00
Xinlu Lai
e50c6f52f6
Merge pull request #107 from majiang213/main
fix: Fixed the issue where the build script would not create cli.js
2025-10-10 01:28:02 +08:00
Xinlu Lai
893112e43c
Merge pull request #110 from Mriris/main
fix: Ollama on Windows
2025-10-10 01:26:37 +08:00
若辰
f934cfa62e
Merge branch 'shareAI-lab:main' into main 2025-10-08 13:14:03 +08:00
Xinlu Lai
1b3b0786ca
Merge pull request #106 from wxhzzsf/wxh/qeury-anthropic-native-mcp-input-schema-fix
fix: add mcp support for anthropic native tool input schema
2025-10-07 13:48:52 +08:00
若辰
f486925c06 fix(PersistentShell): remove login shell arguments for MSYS and ensure correct CWD updates 2025-10-06 14:05:14 +08:00
若辰
ab0d3f26f3 fix(PersistentShell): improve command execution for WSL and MSYS environments 2025-10-06 13:47:28 +08:00
若辰
70f0d6b109 feat(ModelSelector): model context 2025-10-06 13:18:05 +08:00
若辰
451362256c fix(ModelSelector): ollama 2025-10-05 23:59:44 +08:00
MJ
fd1d6e385d fix: Fixed the issue where the build script would not create cli.js 2025-09-29 18:28:28 +08:00
Xiaohan Wang
ce8c8dad63 fix: add mcp support for anthropic native tool input schema 2025-09-29 16:55:43 +08:00
CrazyBoyM
b847352101 refactor: use tsconfig aliases throughout 2025-09-20 15:14:39 +08:00
CrazyBoyM
61a8ce0d22 clean code 2025-09-20 15:14:38 +08:00
CrazyBoyM
59dce97350 fix & remove ugly code 2025-09-20 15:14:38 +08:00
CrazyBoyM
d4abb2abee clean code 2025-09-20 15:14:38 +08:00
Xinlu Lai
78b49355cd
Merge pull request #89 from glide-the/update_build_docker
chore(docker): update Dockerfile to copy built application from dist …
2025-09-14 15:02:03 +08:00
glide-the
fbb2db6963 chore(docker): update Dockerfile to copy built application from dist and set entrypoint correctly
- Adjusted COPY command to reference the new dist directory
- Updated entrypoint to use the correct path for cli.js
2025-09-13 22:52:39 +08:00
CrazyBoyM
d0d1dca009 upgrade for win & remove some un-use code 2025-09-12 00:44:15 +08:00
CrazyBoyM
6bbaa6c559 fix win 2025-09-11 09:57:38 +08:00
CrazyBoyM
b0d9f58d76 fix(multi-edit): pass structuredPatch to update UI; harden FileEditToolUpdatedMessage against undefined patches
- MultiEditTool: compute diff via getPatch and render standard update message
- FileEditToolUpdatedMessage: treat missing structuredPatch as empty to avoid crash
- Preserves existing feature and UI while eliminating reduce-on-undefined
2025-09-10 15:33:58 +08:00
CrazyBoyM
a7af9834ef chore(update): disable auto-update flow; keep version-check banner only
- Remove AutoUpdater UI and NPM prefix/permissions flows
- Simplify Doctor to passive health-check
- Keep only getLatestVersion/assertMinVersion/update suggestions
- Clean REPL/PromptInput to avoid extra renders and flicker
- No hardcoding; no auto-install; docstrings tidy
2025-09-10 14:22:33 +08:00
Xinlu Lai
53234ba2d9
Merge pull request #67 from xiaomao87/fix-dockerfile
Dockerfile: fix runtime error `Module not found "/app/src/entrypoints/cli.tsx"`
2025-09-03 00:49:12 +08:00
Xinlu Lai
20111a0a26
Update README.zh-CN.md 2025-09-03 00:48:03 +08:00
Xinlu Lai
f857fb9577
Update README.md 2025-09-03 00:47:22 +08:00
dev
da15ca5a5f Dockerfile: fix runtime error Module not found "/app/src/entrypoints/cli.tsx" 2025-09-02 16:22:22 +08:00
Xinlu Lai
f23787f50d
Merge pull request #60 from burncloud/feat/burncloud
 feat: add BurnCloud as new AI model provider
2025-09-01 02:59:33 +08:00
Wei
32a9badecd feat: add new model provider BurnCloud; fix default models 2025-08-30 16:01:14 +08:00
Xinlu Lai
5471b2d5fc
Update README.md 2025-08-30 01:38:51 +08:00
Xinlu Lai
97684a3208
Update README.md 2025-08-30 01:29:04 +08:00
CrazyBoyM
612234cdcc docs: add Windows support announcement to README files
Added update log section announcing Windows support via Git Bash, Unix subsystems,
or WSL (Windows Subsystem for Linux) for all Windows users.

- Added Update Log section to English README
- Added 更新日志 section to Chinese README
- Dated announcement as 2025-08-29
2025-08-29 02:22:14 +08:00
CrazyBoyM
92acc3a002 feat: transition from AGPLv3 to Apache 2.0 license
This is a major license change to better encourage open source contributions
and accelerate the global advancement of AI agent development.

Changes:
- Replace AGPLv3 LICENSE file with Apache 2.0 license text
- Update package.json license field from ISC to Apache-2.0
- Add prominent announcement in both README files about the license change
- Update license badges in README files

The Apache 2.0 license allows free use in both open source and commercial projects
with only attribution requirements, removing barriers to innovation and collaboration.
2025-08-29 02:15:44 +08:00
Xinlu Lai
c8fcc90911
Merge pull request #61 from geoffyli/add-web-tools
feat: Add WebSearchTool and URLFetcherTool for web content access
2025-08-29 01:45:21 +08:00
Xinlu Lai
9e21c5f48f
Merge pull request #62 from bing-zhub/fix_custom_claude
fix: claude streaming tool use and add cache control;
2025-08-29 01:05:08 +08:00
Bing Zhu
c45e28a806 feat: claude support cache control; 2025-08-28 23:11:31 +08:00
Bing Zhu
2875af774e fix: claude tool use msg seq; 2025-08-28 23:11:31 +08:00
Bing Zhu
7d2e0e3832 fix: claude streaming tool use; 2025-08-28 23:11:17 +08:00
Yulong Li
e3d903e7bc feat: Add WebSearchTool and URLFetcherTool for web content access
- Add WebSearchTool with DuckDuckGo integration for web search
  - Provides titles, snippets, and links for current information
- Add URLFetcherTool for AI-powered web content analysis
  - Fetches and converts HTML content to markdown
  - Processes content using AI with user-provided prompts
  - Includes 15-minute caching for efficiency
  - Uses queryQuick for fast content analysis
- Register both tools in the tools registry
- Update documentation to reflect new web capabilities
2025-08-28 17:50:02 +08:00
Wei
efe00eee3b feat: add new model provider BurnCloud 2025-08-28 17:04:34 +08:00
Xinlu Lai
fd5ed25230
Merge pull request #55 from MrCatAI/win-usage-issue
feat: enhance PersistentShell to support shell detection
2025-08-28 16:37:27 +08:00
mrcat
0151defb21 feat: enhance PersistentShell with robust PATH splitting for Windows and POSIX
- Added a new function to split PATH entries correctly for both Windows and POSIX environments.
- Improved shell detection logic to utilize the new PATH splitting function, ensuring accurate path handling across different platforms.
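The splitting function described above could be sketched like this (the function name and signature are hypothetical; in Node the delimiter is also available directly as `path.delimiter`):

```typescript
// Windows separates PATH entries with ";", POSIX with ":". Splitting on
// the wrong delimiter breaks shell detection, since Windows paths such
// as "C:\..." themselves contain ":".
function splitPathEntries(pathValue: string, isWindows: boolean): string[] {
  const delimiter = isWindows ? ";" : ":";
  return pathValue.split(delimiter).filter((entry) => entry.length > 0);
}
```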
2025-08-28 11:24:36 +08:00