4e10b42b30
feat: modular non-intrusive Amazon Bedrock support
2025-03-20 12:15:34 +08:00
Sheng Fan
94e2ab7c86
fix(llm): accept empty choices as valid response and handle that case gracefully
2025-03-19 14:09:46 +08:00
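The fix above treats a response whose `choices` list is empty as a valid (empty) reply instead of an error. A minimal sketch of that handling, assuming a dict-shaped response; the helper name is illustrative, not the project's actual code:

```python
def extract_reply(response):
    """Return the assistant text, or None when the provider sends an
    empty `choices` array (accepted gracefully rather than raised on)."""
    choices = response.get("choices") or []
    if not choices:
        # Empty choices: valid response with no content.
        return None
    return choices[0].get("message", {}).get("content")
```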
Sheng Fan
4ea7f8e988
merge: main from upstream
2025-03-19 13:34:43 +08:00
liangxinbing
7b38dd7fbc
update format_messages
2025-03-19 13:24:12 +08:00
Sheng Fan
3d5b09222e
Merge branch 'main' of https://github.com/mannaandpoem/OpenManus
2025-03-18 16:28:40 +08:00
Sheng Fan
cf7d6c1207
chore(app): Update error logging to use exception details
2025-03-18 13:36:15 +08:00
Sheng Fan
ca612699ec
refactor(app): explicitly specify LLM request parameters to allow typing
2025-03-18 11:53:47 +08:00
Sheng Fan
aa512fac6e
refactor(app): Complete exception logging in LLM.ask
2025-03-18 11:46:35 +08:00
zyren123
f474290395
Merge branch 'mannaandpoem:main' into main
2025-03-18 09:46:10 +08:00
liangxinbing
91d14a3a47
update llm, schema, BaseTool and BaseAgent
2025-03-18 02:31:39 +08:00
zhiyuanRen
11d1bd7729
format change for precommit purpose
2025-03-17 21:39:36 +08:00
zhiyuanRen
6dcd2ca064
fix: replace chinese comment with english version
2025-03-17 21:36:04 +08:00
liangxinbing
fb0d1c02a6
add TokenCounter and ask_with_images
2025-03-17 21:30:04 +08:00
zhiyuanRen
10ecc91e5e
print the token usage of each step's prompt and completion, as well as the cumulative total consumed so far; useful for analyzing resource usage.
2025-03-16 21:47:46 +08:00
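The commit above reports per-step prompt/completion token usage alongside a running total. A minimal sketch of such an accumulator; the class and attribute names are assumptions, and the repository's actual `TokenCounter` may differ:

```python
class TokenTracker:
    """Track prompt/completion tokens per step plus a running total."""

    def __init__(self):
        self.total_prompt = 0
        self.total_completion = 0

    def record(self, prompt_tokens, completion_tokens):
        # Accumulate, then report this step and the cumulative totals.
        self.total_prompt += prompt_tokens
        self.total_completion += completion_tokens
        return (
            f"step: prompt={prompt_tokens}, completion={completion_tokens}; "
            f"total: prompt={self.total_prompt}, completion={self.total_completion}"
        )
```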
mannaandpoem
337adf011c
Merge pull request #635 from a-holm/avoid-Validation-error-when-using-gemini
...
fix(llm): improve message handling to support LLMs without content/tool_calls
2025-03-15 17:11:19 +08:00
a-holm
60268f1696
reformat with precommit
2025-03-15 09:48:52 +01:00
liangxinbing
65a3898592
format code and remove max_input_tokens for ToolCallAgent
2025-03-15 14:43:07 +08:00
mannaandpoem
3bb8f8fe71
Merge pull request #642 from matengm1/bug/fix-temperature-defaulting
...
Fix temperature using default if 0
2025-03-15 14:16:46 +08:00
Matt Eng
49ccd72815
Reformat
2025-03-14 21:41:43 -07:00
Matt Eng
b17c9d31a9
Fix temperature using default if 0
2025-03-14 20:39:23 -07:00
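"Fix temperature using default if 0" points at a classic Python pitfall: applying a default with `or` treats a legitimate temperature of 0 as falsy and silently replaces it. A sketch of the bug and the fix, with illustrative names rather than the project's actual code:

```python
DEFAULT_TEMPERATURE = 1.0

def resolve_temperature_buggy(temperature=None):
    # BUG: 0 is falsy, so `or` swaps a valid 0 for the default.
    return temperature or DEFAULT_TEMPERATURE

def resolve_temperature_fixed(temperature=None):
    # Fall back only when the value is truly absent.
    return DEFAULT_TEMPERATURE if temperature is None else temperature
```

With the buggy version, `resolve_temperature_buggy(0)` returns `1.0`; the fixed version preserves `0`.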
a-holm
350b0038ee
fix(llm): improve message handling to support LLMs without content/tool_calls
...
This commit improves the message handling in the LLM class to gracefully handle
messages without 'content' or 'tool_calls' fields. Previously, the system would
raise a ValueError when encountering such messages, causing crashes when working
with models like Google's Gemini that sometimes return messages with different
structures.
Key changes:
- Reordered message processing to check for Message objects first
- Changed validation approach to silently skip malformed messages instead of crashing
- Removed the strict ValueError when content/tool_calls are missing
This change maintains compatibility with correctly formatted messages while
improving robustness when working with various LLM providers.
2025-03-14 21:01:13 +01:00
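The commit body above describes skipping messages that carry neither `content` nor `tool_calls` instead of raising `ValueError`. A hedged sketch of that validation step, assuming dict-shaped messages; the function name echoes the log's `format_messages` but is not guaranteed to match the project's exact code:

```python
def format_messages(messages):
    """Prepare messages for the API, silently dropping entries that
    have neither content nor tool calls (e.g. some Gemini responses)
    instead of raising ValueError and crashing the agent."""
    formatted = []
    for msg in messages:
        if not msg.get("content") and not msg.get("tool_calls"):
            continue  # previously: raise ValueError(...)
        formatted.append(msg)
    return formatted
```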
zhengshuli
9b0b69a5e1
Use the max_input_tokens configuration to constrain the agent’s token usage.
2025-03-14 12:35:26 +08:00
liangxinbing
9c7834eff2
update readme; format code; update config.example.toml
2025-03-14 12:20:59 +08:00
mannaandpoem
e844dfca34
Merge pull request #510 from the0807/feature/o3-mini
...
Support OpenAI Reasoning Models (o1, o3-mini)
2025-03-14 11:47:34 +08:00
Isaac
6b64b98b12
Merge branch 'main' into refactor/standardize-tool-choice-literals
2025-03-12 20:55:52 +08:00
liangxinbing
e6e31a2c13
update timeout to 300
2025-03-12 20:09:23 +08:00
Matt Eng
eac3a6e24e
Standardize literals for role and tool choice type definitions
2025-03-12 00:15:31 -07:00
the0807
983e8f0d4b
Support OpenAI Reasoning Models (o1, o3-mini)
2025-03-12 14:33:32 +09:00
liangxinbing
dc28e9187b
adjust code format
2025-03-08 15:54:23 +08:00
Aria F
ecac3382ec
feat: AzureOpenaiAPI support
2025-03-07 20:55:02 +08:00
liangxinbing
d028e64a98
init project
2025-03-06 22:57:07 +08:00