Merge branch 'main' into refactor/web-search-tool

commit 6ea5a4d1ef
Caique Minhare [Cake] 2025-03-14 15:04:55 -03:00, committed by GitHub
22 changed files with 184 additions and 115 deletions

View File

@@ -127,6 +127,8 @@ We welcome any friendly suggestions and helpful contributions! Just create issue
 Or contact @mannaandpoem via 📧email: mannaandpoem@gmail.com
 
+**Note**: Before submitting a pull request, please use the pre-commit tool to check your changes. Run `pre-commit run --all-files` to execute the checks.
+
 ## Community Group
 Join our networking group on Feishu and share your experience with other developers!
@@ -143,7 +145,7 @@ Join our networking group on Feishu and share your experience with other develop
 Thanks to [anthropic-computer-use](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo)
 and [browser-use](https://github.com/browser-use/browser-use) for providing basic support for this project!
 
-Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT) and [OpenHands](https://github.com/All-Hands-AI/OpenHands).
+Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT), [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [SWE-agent](https://github.com/SWE-agent/SWE-agent).
 
 OpenManus is built by contributors from MetaGPT. Huge thanks to this agent community!

View File

@@ -128,6 +128,8 @@ python run_flow.py
 Or contact @mannaandpoem via 📧 email: mannaandpoem@gmail.com
 
+**Note**: Before submitting a pull request, please use the pre-commit tool to check your changes. Run `pre-commit run --all-files` to execute the checks.
+
 ## Community Group
 Join our networking group on Feishu and share your experience with other developers!
@@ -144,7 +146,7 @@ Join our networking group on Feishu and share your experience with other developers
 Thanks to [anthropic-computer-use](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo)
 and [browser-use](https://github.com/browser-use/browser-use) for providing basic support for this project!
 
-Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT) and [OpenHands](https://github.com/All-Hands-AI/OpenHands).
+Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT), [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [SWE-agent](https://github.com/SWE-agent/SWE-agent).
 
 OpenManus is built by contributors from MetaGPT. Huge thanks to this agent community!

View File

@@ -128,6 +128,8 @@ python run_flow.py
 Or contact @mannaandpoem via 📧 email: mannaandpoem@gmail.com
 
+**Note**: Before submitting a pull request, please use the pre-commit tool to check your changes. Run `pre-commit run --all-files` to execute the checks.
+
 ## Community Group
 Join our Feishu networking group and share your experience with other developers!
@@ -144,7 +146,7 @@ Join our Feishu networking group and share your experience with other developers
 Thanks to [anthropic-computer-use](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo)
 and [browser-use](https://github.com/browser-use/browser-use) for providing basic support for this project!
 
-Additionally, we are deeply grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT) and [OpenHands](https://github.com/All-Hands-AI/OpenHands).
+Additionally, we are deeply grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT), [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [SWE-agent](https://github.com/SWE-agent/SWE-agent).
 
 OpenManus was built by MetaGPT contributors. Deep thanks to this agent community!

View File

@@ -119,7 +119,7 @@ python main.py
 Then input your idea via the terminal!
 
-To try the development version, run:
+To try the unstable development version, run:
 
 ```bash
 python run_flow.py
@@ -131,6 +131,8 @@ python run_flow.py
 Or contact @mannaandpoem via 📧 email: mannaandpoem@gmail.com
 
+**Note**: Before submitting a pull request, please use the pre-commit tool to check your changes. Run `pre-commit run --all-files` to execute the checks.
+
 ## Community Group
 Join our Feishu networking group and share your experience with other developers!
@@ -148,7 +150,7 @@ python run_flow.py
 Special thanks to [anthropic-computer-use](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo)
 and [browser-use](https://github.com/browser-use/browser-use) for providing foundational support for this project!
 
-Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT) and [OpenHands](https://github.com/All-Hands-AI/OpenHands).
+Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT), [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [SWE-agent](https://github.com/SWE-agent/SWE-agent).
 
 OpenManus is built by contributors from the MetaGPT community. Huge thanks to this vibrant community of agent developers!

View File

@@ -6,7 +6,7 @@ from pydantic import BaseModel, Field, model_validator
 from app.llm import LLM
 from app.logger import logger
-from app.schema import AgentState, Memory, Message, ROLE_TYPE
+from app.schema import ROLE_TYPE, AgentState, Memory, Message
 
 
 class BaseAgent(BaseModel, ABC):
@@ -82,7 +82,7 @@ class BaseAgent(BaseModel, ABC):
     def update_memory(
         self,
-        role: ROLE_TYPE, # type: ignore
+        role: ROLE_TYPE,  # type: ignore
         content: str,
         **kwargs,
     ) -> None:

View File

@@ -7,9 +7,8 @@ from app.prompt.manus import NEXT_STEP_PROMPT, SYSTEM_PROMPT
 from app.tool import Terminate, ToolCollection
 from app.tool.browser_use_tool import BrowserUseTool
 from app.tool.file_saver import FileSaver
-from app.tool.web_search import WebSearch
 from app.tool.python_execute import PythonExecute
-from app.config import config
+from app.tool.web_search import WebSearch
 
 
 class Manus(ToolCallAgent):
@@ -40,5 +39,8 @@ class Manus(ToolCallAgent):
     )
 
     async def _handle_special_tool(self, name: str, result: Any, **kwargs):
-        await self.available_tools.get_tool(BrowserUseTool().name).cleanup()
-        await super()._handle_special_tool(name, result, **kwargs)
+        if not self._is_special_tool(name):
+            return
+        else:
+            await self.available_tools.get_tool(BrowserUseTool().name).cleanup()
+            await super()._handle_special_tool(name, result, **kwargs)
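
The new guard changes when browser cleanup runs: only a special tool (i.e. `terminate`) triggers it now, instead of every tool result. A minimal, self-contained sketch of the pattern, using hypothetical stand-in classes rather than the real OpenManus types:

```python
import asyncio


class FakeBrowserTool:
    """Stands in for BrowserUseTool; records whether cleanup ran."""

    name = "browser_use"

    def __init__(self):
        self.cleaned_up = False

    async def cleanup(self):
        self.cleaned_up = True


class FakeAgent:
    special_tool_names = ["terminate"]

    def __init__(self, tool):
        self.tool = tool

    def _is_special_tool(self, name: str) -> bool:
        return name.lower() in self.special_tool_names

    async def _handle_special_tool(self, name: str, result, **kwargs):
        # Same guard as the diff: ordinary tools return early.
        if not self._is_special_tool(name):
            return
        await self.tool.cleanup()


async def main():
    tool = FakeBrowserTool()
    agent = FakeAgent(tool)
    await agent._handle_special_tool("web_search", result=None)
    assert not tool.cleaned_up  # ordinary tools no longer trigger cleanup
    await agent._handle_special_tool("terminate", result=None)
    assert tool.cleaned_up  # the terminate path still cleans up the browser


asyncio.run(main())
```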

View File

@@ -6,7 +6,7 @@ from pydantic import Field, model_validator
 from app.agent.toolcall import ToolCallAgent
 from app.logger import logger
 from app.prompt.planning import NEXT_STEP_PROMPT, PLANNING_SYSTEM_PROMPT
-from app.schema import Message, TOOL_CHOICE_TYPE, ToolCall, ToolChoice
+from app.schema import TOOL_CHOICE_TYPE, Message, ToolCall, ToolChoice
 from app.tool import PlanningTool, Terminate, ToolCollection
@@ -27,7 +27,7 @@ class PlanningAgent(ToolCallAgent):
     available_tools: ToolCollection = Field(
         default_factory=lambda: ToolCollection(PlanningTool(), Terminate())
     )
-    tool_choices: TOOL_CHOICE_TYPE = ToolChoice.AUTO # type: ignore
+    tool_choices: TOOL_CHOICE_TYPE = ToolChoice.AUTO  # type: ignore
     special_tool_names: List[str] = Field(default_factory=lambda: [Terminate().name])
     tool_calls: List[ToolCall] = Field(default_factory=list)
@@ -212,7 +212,7 @@ class PlanningAgent(ToolCallAgent):
             messages=messages,
             system_msgs=[Message.system_message(self.system_prompt)],
             tools=self.available_tools.to_params(),
-            tool_choice=ToolChoice.REQUIRED,
+            tool_choice=ToolChoice.AUTO,
         )
         assistant_msg = Message.from_tool_calls(
             content=response.content, tool_calls=response.tool_calls
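
Here (and in the flow change further down) `tool_choice` drops from REQUIRED to AUTO, so the model may answer in plain text instead of being forced into a tool call. A hedged sketch of what that means against the OpenAI chat completions API (the client setup and model name are assumptions, not from this diff):

```python
from openai import AsyncOpenAI

client = AsyncOpenAI()  # assumes OPENAI_API_KEY is set in the environment


async def ask(messages: list[dict], tools: list[dict]):
    response = await client.chat.completions.create(
        model="gpt-4o",  # placeholder model, not from this diff
        messages=messages,
        tools=tools,
        # "required" forced at least one tool call per turn;
        # "auto" lets the model reply in plain text when no tool fits.
        tool_choice="auto",
    )
    message = response.choices[0].message
    if message.tool_calls:  # the model chose to call a tool
        return "tool", message.tool_calls
    return "text", message.content  # reachable again now that AUTO is used
```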

View File

@@ -1,13 +1,12 @@
 import json
-from typing import Any, List, Literal, Optional, Union
+from typing import Any, List, Optional, Union
 
 from pydantic import Field
 
 from app.agent.react import ReActAgent
 from app.logger import logger
 from app.prompt.toolcall import NEXT_STEP_PROMPT, SYSTEM_PROMPT
-from app.schema import AgentState, Message, ToolCall, TOOL_CHOICE_TYPE, ToolChoice
+from app.schema import TOOL_CHOICE_TYPE, AgentState, Message, ToolCall, ToolChoice
 from app.tool import CreateChatCompletion, Terminate, ToolCollection
@@ -26,7 +25,7 @@ class ToolCallAgent(ReActAgent):
     available_tools: ToolCollection = ToolCollection(
         CreateChatCompletion(), Terminate()
     )
-    tool_choices: TOOL_CHOICE_TYPE = ToolChoice.AUTO # type: ignore
+    tool_choices: TOOL_CHOICE_TYPE = ToolChoice.AUTO  # type: ignore
     special_tool_names: List[str] = Field(default_factory=lambda: [Terminate().name])
     tool_calls: List[ToolCall] = Field(default_factory=list)

View File

@@ -30,8 +30,10 @@ class ProxySettings(BaseModel):
     username: Optional[str] = Field(None, description="Proxy username")
     password: Optional[str] = Field(None, description="Proxy password")
 
+
 class SearchSettings(BaseModel):
-    engine: str = Field(default='Google', description="Search engine the llm to use")
+    engine: str = Field(default="Google", description="Search engine the llm to use")
+
 
 class BrowserSettings(BaseModel):
     headless: bool = Field(False, description="Whether to run browser in headless mode")
@@ -180,7 +182,7 @@ class Config:
     @property
     def browser_config(self) -> Optional[BrowserSettings]:
         return self._config.browser_config
 
     @property
     def search_config(self) -> Optional[SearchSettings]:
         return self._config.search_config
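
The `search_config` property exposes the optional `[search]` table from the TOML config. A sketch of how a caller might select an engine from it; this mirrors, but simplifies, what the `WebSearch` tool does, and the `pick_engine` helper itself is hypothetical:

```python
from app.config import config
from app.tool.search import (
    BaiduSearchEngine,
    DuckDuckGoSearchEngine,
    GoogleSearchEngine,
)

_ENGINES = {
    "google": GoogleSearchEngine(),
    "baidu": BaiduSearchEngine(),
    "duckduckgo": DuckDuckGoSearchEngine(),
}


def pick_engine():
    """Resolve the configured engine, defaulting to Google."""
    # search_config is Optional: it is None when [search] is absent from the TOML.
    name = "Google"
    if config.search_config and config.search_config.engine:
        name = config.search_config.engine
    return _ENGINES.get(name.lower(), _ENGINES["google"])
```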

View File

@@ -124,7 +124,7 @@ class PlanningFlow(BaseFlow):
             messages=[user_message],
             system_msgs=[system_message],
             tools=[self.planning_tool.to_param()],
-            tool_choice=ToolChoice.REQUIRED,
+            tool_choice=ToolChoice.AUTO,
         )
 
         # Process tool calls if present

View File

@@ -12,7 +12,16 @@ from tenacity import retry, stop_after_attempt, wait_random_exponential
 from app.config import LLMSettings, config
 from app.logger import logger  # Assuming a logger is set up in your app
-from app.schema import Message, TOOL_CHOICE_TYPE, ROLE_VALUES, TOOL_CHOICE_VALUES, ToolChoice
+from app.schema import (
+    ROLE_VALUES,
+    TOOL_CHOICE_TYPE,
+    TOOL_CHOICE_VALUES,
+    Message,
+    ToolChoice,
+)
+
+REASONING_MODELS = ["o1", "o3-mini"]
 
 
 class LLM:
@@ -133,27 +142,30 @@ class LLM:
         else:
             messages = self.format_messages(messages)
 
+        params = {
+            "model": self.model,
+            "messages": messages,
+        }
+
+        if self.model in REASONING_MODELS:
+            params["max_completion_tokens"] = self.max_tokens
+        else:
+            params["max_tokens"] = self.max_tokens
+            params["temperature"] = temperature or self.temperature
+
         if not stream:
             # Non-streaming request
-            response = await self.client.chat.completions.create(
-                model=self.model,
-                messages=messages,
-                max_tokens=self.max_tokens,
-                temperature=temperature or self.temperature,
-                stream=False,
-            )
+            params["stream"] = False
+            response = await self.client.chat.completions.create(**params)
+
             if not response.choices or not response.choices[0].message.content:
                 raise ValueError("Empty or invalid response from LLM")
+
             return response.choices[0].message.content
 
         # Streaming request
-        response = await self.client.chat.completions.create(
-            model=self.model,
-            messages=messages,
-            max_tokens=self.max_tokens,
-            temperature=temperature or self.temperature,
-            stream=True,
-        )
+        params["stream"] = True
+        response = await self.client.chat.completions.create(**params)
 
         collected_messages = []
         async for chunk in response:
@@ -187,7 +199,7 @@ class LLM:
         system_msgs: Optional[List[Union[dict, Message]]] = None,
         timeout: int = 300,
         tools: Optional[List[dict]] = None,
-        tool_choice: TOOL_CHOICE_TYPE = ToolChoice.AUTO, # type: ignore
+        tool_choice: TOOL_CHOICE_TYPE = ToolChoice.AUTO,  # type: ignore
         temperature: Optional[float] = None,
         **kwargs,
     ):
@@ -230,16 +242,22 @@ class LLM:
                 raise ValueError("Each tool must be a dict with 'type' field")
 
         # Set up the completion request
-        response = await self.client.chat.completions.create(
-            model=self.model,
-            messages=messages,
-            temperature=temperature or self.temperature,
-            max_tokens=self.max_tokens,
-            tools=tools,
-            tool_choice=tool_choice,
-            timeout=timeout,
+        params = {
+            "model": self.model,
+            "messages": messages,
+            "tools": tools,
+            "tool_choice": tool_choice,
+            "timeout": timeout,
             **kwargs,
-        )
+        }
+
+        if self.model in REASONING_MODELS:
+            params["max_completion_tokens"] = self.max_tokens
+        else:
+            params["max_tokens"] = self.max_tokens
+            params["temperature"] = temperature or self.temperature
+
+        response = await self.client.chat.completions.create(**params)
 
         # Check if response is valid
         if not response.choices or not response.choices[0].message:
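
The same branch on `REASONING_MODELS` now appears in both request paths: o1-family models take `max_completion_tokens` and do not accept the usual `temperature` knob, while other chat models keep the classic parameters. A standalone sketch of the switch (the `build_params` helper is illustrative, not part of the diff):

```python
REASONING_MODELS = ["o1", "o3-mini"]


def build_params(model: str, messages: list[dict], max_tokens: int,
                 temperature: float, stream: bool) -> dict:
    params = {"model": model, "messages": messages, "stream": stream}
    if model in REASONING_MODELS:
        # Reasoning models budget tokens differently and ignore temperature.
        params["max_completion_tokens"] = max_tokens
    else:
        params["max_tokens"] = max_tokens
        params["temperature"] = temperature
    return params


# Example: o3-mini gets max_completion_tokens and no temperature key.
assert "temperature" not in build_params("o3-mini", [], 4096, 0.0, False)
assert "max_tokens" in build_params("gpt-4o", [], 4096, 0.0, False)
```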

View File

@@ -3,25 +3,32 @@ from typing import Any, List, Literal, Optional, Union
 from pydantic import BaseModel, Field
 
 
 class Role(str, Enum):
     """Message role options"""
+
     SYSTEM = "system"
     USER = "user"
-    ASSISTANT = "assistant"
+    ASSISTANT = "assistant"
     TOOL = "tool"
 
+
 ROLE_VALUES = tuple(role.value for role in Role)
 ROLE_TYPE = Literal[ROLE_VALUES]  # type: ignore
 
+
 class ToolChoice(str, Enum):
     """Tool choice options"""
+
     NONE = "none"
     AUTO = "auto"
     REQUIRED = "required"
 
+
 TOOL_CHOICE_VALUES = tuple(choice.value for choice in ToolChoice)
 TOOL_CHOICE_TYPE = Literal[TOOL_CHOICE_VALUES]  # type: ignore
 
 
 class AgentState(str, Enum):
     """Agent execution states"""
@@ -47,7 +54,7 @@ class ToolCall(BaseModel):
 class Message(BaseModel):
     """Represents a chat message in the conversation"""
 
-    role: ROLE_TYPE = Field(...) # type: ignore
+    role: ROLE_TYPE = Field(...)  # type: ignore
     content: Optional[str] = Field(default=None)
     tool_calls: Optional[List[ToolCall]] = Field(default=None)
     name: Optional[str] = Field(default=None)
@@ -104,7 +111,9 @@ class Message(BaseModel):
     @classmethod
     def tool_message(cls, content: str, name, tool_call_id: str) -> "Message":
         """Create a tool message"""
-        return cls(role=Role.TOOL, content=content, name=name, tool_call_id=tool_call_id)
+        return cls(
+            role=Role.TOOL, content=content, name=name, tool_call_id=tool_call_id
+        )
 
     @classmethod
     def from_tool_calls(
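
The `Literal`-from-tuple trick above keeps each enum as the single source of truth for its type alias; the `# type: ignore` is needed because `Literal` formally expects literal values, not a tuple variable, even though it works at runtime. A compact sketch of the idea:

```python
from enum import Enum
from typing import Literal


class Role(str, Enum):
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    TOOL = "tool"


# Derive the validation alias from the enum; adding a Role member
# automatically widens ROLE_TYPE without touching any annotation.
ROLE_VALUES = tuple(role.value for role in Role)
ROLE_TYPE = Literal[ROLE_VALUES]  # type: ignore
```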

View File

@@ -106,7 +106,7 @@ class BrowserUseTool(BaseTool):
     async def _ensure_browser_initialized(self) -> BrowserContext:
         """Ensure browser and context are initialized."""
         if self.browser is None:
-            browser_config_kwargs = {"headless": False}
+            browser_config_kwargs = {"headless": False, "disable_security": True}
 
             if config.browser_config:
                 from browser_use.browser.browser import ProxySettings
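
For context, the changed kwargs dict feeds browser-use's `BrowserConfig`, so the new default is equivalent to the following sketch (assuming the browser-use API version this file already imports `ProxySettings` from):

```python
from browser_use.browser.browser import Browser, BrowserConfig

# Same effect as the new default kwargs dict above.
browser = Browser(BrowserConfig(headless=False, disable_security=True))
```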

View File

@@ -1,4 +1,6 @@
-import threading
+import multiprocessing
+import sys
+from io import StringIO
 from typing import Dict
 
 from app.tool.base import BaseTool
@@ -20,6 +22,20 @@ class PythonExecute(BaseTool):
         "required": ["code"],
     }
 
+    def _run_code(self, code: str, result_dict: dict, safe_globals: dict) -> None:
+        original_stdout = sys.stdout
+        try:
+            output_buffer = StringIO()
+            sys.stdout = output_buffer
+            exec(code, safe_globals, safe_globals)
+            result_dict["observation"] = output_buffer.getvalue()
+            result_dict["success"] = True
+        except Exception as e:
+            result_dict["observation"] = str(e)
+            result_dict["success"] = False
+        finally:
+            sys.stdout = original_stdout
+
     async def execute(
         self,
         code: str,
@@ -35,36 +51,25 @@ class PythonExecute(BaseTool):
         Returns:
             Dict: Contains 'output' with execution output or error message and 'success' status.
         """
-        result = {"observation": ""}
-
-        def run_code():
-            try:
-                safe_globals = {"__builtins__": dict(__builtins__)}
-
-                import sys
-                from io import StringIO
-
-                output_buffer = StringIO()
-                sys.stdout = output_buffer
-
-                exec(code, safe_globals, {})
-
-                sys.stdout = sys.__stdout__
-                result["observation"] = output_buffer.getvalue()
-            except Exception as e:
-                result["observation"] = str(e)
-                result["success"] = False
-
-        thread = threading.Thread(target=run_code)
-        thread.start()
-        thread.join(timeout)
-
-        if thread.is_alive():
-            return {
-                "observation": f"Execution timeout after {timeout} seconds",
-                "success": False,
-            }
-        return result
+        with multiprocessing.Manager() as manager:
+            result = manager.dict({"observation": "", "success": False})
+            if isinstance(__builtins__, dict):
+                safe_globals = {"__builtins__": __builtins__}
+            else:
+                safe_globals = {"__builtins__": __builtins__.__dict__.copy()}
+            proc = multiprocessing.Process(
+                target=self._run_code, args=(code, result, safe_globals)
+            )
+            proc.start()
+            proc.join(timeout)
+
+            # timeout process
+            if proc.is_alive():
+                proc.terminate()
+                proc.join(1)
+                return {
+                    "observation": f"Execution timeout after {timeout} seconds",
+                    "success": False,
+                }
+            return dict(result)
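
A hedged usage sketch of the new process-based sandbox, runnable from the repo root; it assumes `execute` keeps a `timeout` parameter. The practical difference: a busy loop is now terminated with the child process instead of leaving a detached thread running after the timeout.

```python
import asyncio

from app.tool.python_execute import PythonExecute


async def main():
    tool = PythonExecute()
    ok = await tool.execute(code="print(1 + 1)", timeout=5)
    print(ok)    # expected: {'observation': '2\n', 'success': True}
    hung = await tool.execute(code="while True: pass", timeout=2)
    print(hung)  # expected: observation mentions 'Execution timeout after 2 seconds'


asyncio.run(main())
```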

View File

@@ -1,5 +1,5 @@
-from app.tool.search.base import WebSearchEngine
 from app.tool.search.baidu_search import BaiduSearchEngine
+from app.tool.search.base import WebSearchEngine
 from app.tool.search.duckduckgo_search import DuckDuckGoSearchEngine
 from app.tool.search.google_search import GoogleSearchEngine
@@ -9,4 +9,4 @@ __all__ = [
     "BaiduSearchEngine",
     "DuckDuckGoSearchEngine",
     "GoogleSearchEngine",
-]
+]

View File

@@ -1,9 +1,9 @@
 from baidusearch.baidusearch import search
 
 from app.tool.search.base import WebSearchEngine
 
 
 class BaiduSearchEngine(WebSearchEngine):
-    def perform_search(self, query, num_results = 10, *args, **kwargs):
+    def perform_search(self, query, num_results=10, *args, **kwargs):
         """Baidu search engine."""
         return search(query, num_results=num_results)

View File

@@ -1,5 +1,7 @@
 class WebSearchEngine(object):
-    def perform_search(self, query: str, num_results: int = 10, *args, **kwargs) -> list[dict]:
+    def perform_search(
+        self, query: str, num_results: int = 10, *args, **kwargs
+    ) -> list[dict]:
         """
         Perform a web search and return a list of URLs.
@@ -12,4 +14,4 @@ class WebSearchEngine(object):
         Returns:
             List: A list of dict matching the search query.
         """
-        raise NotImplementedError
+        raise NotImplementedError
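
Implementing this interface is just a matter of overriding `perform_search`. A toy engine, hypothetical and for illustration only:

```python
from app.tool.search.base import WebSearchEngine


class EchoSearchEngine(WebSearchEngine):
    """Toy engine returning canned results; handy in tests."""

    def perform_search(
        self, query: str, num_results: int = 10, *args, **kwargs
    ) -> list[dict]:
        return [
            {"title": query, "url": f"https://example.com/{i}"}
            for i in range(num_results)
        ]
```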

View File

@@ -1,9 +1,9 @@
 from duckduckgo_search import DDGS
 
 from app.tool.search.base import WebSearchEngine
 
 
 class DuckDuckGoSearchEngine(WebSearchEngine):
-    async def perform_search(self, query, num_results = 10, *args, **kwargs):
+    async def perform_search(self, query, num_results=10, *args, **kwargs):
         """DuckDuckGo search engine."""
         return DDGS.text(query, num_results=num_results)

View File

@@ -1,8 +1,9 @@
-from app.tool.search.base import WebSearchEngine
 from googlesearch import search
 
+from app.tool.search.base import WebSearchEngine
+
 
 class GoogleSearchEngine(WebSearchEngine):
-    def perform_search(self, query, num_results = 10, *args, **kwargs):
+    def perform_search(self, query, num_results=10, *args, **kwargs):
         """Google search engine."""
         return search(query, num_results=num_results)

View File

@@ -40,7 +40,7 @@ Note: You MUST append a `sleep 0.05` to the end of the command for commands that
             str: The output, and error of the command execution.
         """
         # Split the command by & to handle multiple commands
-        commands = [cmd.strip() for cmd in command.split('&') if cmd.strip()]
+        commands = [cmd.strip() for cmd in command.split("&") if cmd.strip()]
         final_output = CLIResult(output="", error="")
         for cmd in commands:
@@ -61,7 +61,7 @@ Note: You MUST append a `sleep 0.05` to the end of the command for commands that
                 stdout, stderr = await self.process.communicate()
                 result = CLIResult(
                     output=stdout.decode().strip(),
-                    error=stderr.decode().strip()
+                    error=stderr.decode().strip(),
                 )
             except Exception as e:
                 result = CLIResult(output="", error=str(e))
@@ -70,9 +70,13 @@ Note: You MUST append a `sleep 0.05` to the end of the command for commands that
             # Combine outputs
             if result.output:
-                final_output.output += (result.output + "\n") if final_output.output else result.output
+                final_output.output += (
+                    (result.output + "\n") if final_output.output else result.output
+                )
             if result.error:
-                final_output.error += (result.error + "\n") if final_output.error else result.error
+                final_output.error += (
+                    (result.error + "\n") if final_output.error else result.error
+                )
 
         # Remove trailing newlines
         final_output.output = final_output.output.rstrip()
@@ -124,14 +128,10 @@ Note: You MUST append a `sleep 0.05` to the end of the command for commands that
             if os.path.isdir(new_path):
                 self.current_path = new_path
                 return CLIResult(
-                    output=f"Changed directory to {self.current_path}",
-                    error=""
+                    output=f"Changed directory to {self.current_path}", error=""
                 )
             else:
-                return CLIResult(
-                    output="",
-                    error=f"No such directory: {new_path}"
-                )
+                return CLIResult(output="", error=f"No such directory: {new_path}")
         except Exception as e:
             return CLIResult(output="", error=str(e))
@@ -152,7 +152,7 @@ Note: You MUST append a `sleep 0.05` to the end of the command for commands that
             parts = shlex.split(command)
             if any(cmd in dangerous_commands for cmd in parts):
                 raise ValueError("Use of dangerous commands is restricted.")
-        except Exception as e:
+        except Exception:
            # If shlex.split fails, try basic string comparison
            if any(cmd in command for cmd in dangerous_commands):
                raise ValueError("Use of dangerous commands is restricted.")
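
Extracted into a self-contained sketch, the sanitization pattern looks like this: `shlex.split` handles quoting and can raise on malformed input such as unbalanced quotes, which is why the coarse substring fallback exists. The bare `except` in the diff simply drops the unused exception variable. The command list below is an illustrative subset, not the tool's real list:

```python
import shlex

DANGEROUS_COMMANDS = {"rm", "sudo", "shutdown", "reboot"}  # illustrative subset


def sanitize(command: str) -> None:
    """Raise if the command invokes a restricted program."""
    try:
        parts = shlex.split(command)
    except Exception:
        # shlex.split fails on e.g. unbalanced quotes;
        # fall back to a coarse substring check, as the tool does.
        if any(cmd in command for cmd in DANGEROUS_COMMANDS):
            raise ValueError("Use of dangerous commands is restricted.")
        return
    if any(cmd in DANGEROUS_COMMANDS for cmd in parts):
        raise ValueError("Use of dangerous commands is restricted.")


sanitize("ls -la & echo done")   # passes silently
# sanitize("sudo rm -rf /")      # would raise ValueError
```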

View File

@@ -1,10 +1,15 @@
 import asyncio
 from typing import List
 
-from app.tool.base import BaseTool
-from app.config import config
-from app.tool.search import WebSearchEngine, BaiduSearchEngine, GoogleSearchEngine, DuckDuckGoSearchEngine
 from tenacity import retry, stop_after_attempt, wait_exponential
+
+from app.config import config
+from app.tool.base import BaseTool
+from app.tool.search import (
+    BaiduSearchEngine,
+    DuckDuckGoSearchEngine,
+    GoogleSearchEngine,
+    WebSearchEngine,
+)
 
 
 class WebSearch(BaseTool):
     name: str = "web_search"

View File

@@ -1,10 +1,10 @@
 # Global LLM configuration
 [llm]
-model = "claude-3-5-sonnet"
-base_url = "https://api.openai.com/v1"
-api_key = "sk-..."
-max_tokens = 4096
-temperature = 0.0
+model = "claude-3-7-sonnet"  # The LLM model to use
+base_url = "https://api.openai.com/v1"  # API endpoint URL
+api_key = "sk-..."  # Your API key
+max_tokens = 8192  # Maximum number of tokens in the response
+temperature = 0.0  # Controls randomness
 
 # [llm] #AZURE OPENAI:
 # api_type= 'azure'
@@ -15,11 +15,29 @@ temperature = 0.0
 # temperature = 0.0
 # api_version="AZURE API VERSION" #"2024-08-01-preview"
 
+# [llm] #OLLAMA:
+# api_type = 'ollama'
+# model = "llama3.2"
+# base_url = "http://localhost:11434/v1"
+# api_key = "ollama"
+# max_tokens = 4096
+# temperature = 0.0
+
 # Optional configuration for specific LLM models
 [llm.vision]
-model = "claude-3-5-sonnet"
-base_url = "https://api.openai.com/v1"
-api_key = "sk-..."
+model = "claude-3-7-sonnet"  # The vision model to use
+base_url = "https://api.openai.com/v1"  # API endpoint URL for vision model
+api_key = "sk-..."  # Your API key for vision model
+max_tokens = 8192  # Maximum number of tokens in the response
+temperature = 0.0  # Controls randomness for vision model
+
+# [llm.vision] #OLLAMA VISION:
+# api_type = 'ollama'
+# model = "llama3.2-vision"
+# base_url = "http://localhost:11434/v1"
+# api_key = "ollama"
+# max_tokens = 4096
+# temperature = 0.0
 
 # Optional configuration for specific browser configuration
 # [browser]
@@ -46,4 +64,4 @@ api_key = "sk-..."
 # Optional configuration, Search settings.
 # [search]
 # Search engine for agent to use. Default is "Google", can be set to "Baidu" or "DuckDuckGo".
-#engine = "Google"
+#engine = "Google"
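
The example config can be sanity-checked with nothing but the standard library; OpenManus has its own loader in `app/config.py`, so this sketch only shows how the file parses (assumes Python 3.11+ for `tomllib` and that `config/config.toml` exists):

```python
import tomllib  # stdlib in Python 3.11+

with open("config/config.toml", "rb") as f:
    raw = tomllib.load(f)

print(raw["llm"]["model"])                            # e.g. "claude-3-7-sonnet"
print(raw["llm"]["max_tokens"])                       # e.g. 8192
print(raw.get("search", {}).get("engine", "Google"))  # default while [search] stays commented out
```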