diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index 2802891..ad6e44a 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -72,12 +72,13 @@ Agentic Coding should be a collaboration between Human System Design and Agent I
 - Example utility implementation:
   ```python
   # utils/call_llm.py
+  import os
   from openai import OpenAI
-  def call_llm(prompt):
-      client = OpenAI(api_key="YOUR_API_KEY_HERE")
+  def call_llm(prompt):
+      client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
       r = client.chat.completions.create(
-          model="gpt-4o",
+          model=os.environ.get("OPENAI_MODEL", "gpt-4o"),
           messages=[{"role": "user", "content": prompt}]
       )
       return r.choices[0].message.content
   ```
@@ -151,15 +152,24 @@ my_project/
 │   ├── __init__.py
 │   ├── call_llm.py
 │   └── search_web.py
+├── .env
 ├── requirements.txt
 └── docs/
     └── design.md
 ```
 
+- **`.env`**: Stores API keys and configuration. **Never commit this file to version control.**
+  ```
+  OPENAI_API_KEY=your-api-key-here
+  # GEMINI_API_KEY=your-gemini-key-here
+  # ANTHROPIC_API_KEY=your-anthropic-key-here
+  ```
+
 - **`requirements.txt`**: Lists the Python dependencies for the project.
   ```
   PyYAML
   pocketflow
+  python-dotenv
   ```
 
 - **`docs/design.md`**: Contains project documentation for each step above. This should be *high-level* and *no-code*.
@@ -249,24 +259,22 @@ my_project/
 - It's recommended to dedicate one Python file to each API call, for example `call_llm.py` or `search_web.py`.
 - Each file should also include a `main()` function to try that API call
   ```python
-  from google import genai
   import os
+  from openai import OpenAI
 
   def call_llm(prompt: str) -> str:
-      client = genai.Client(
-          api_key=os.getenv("GEMINI_API_KEY", ""),
+      client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
+      r = client.chat.completions.create(
+          model=os.environ.get("OPENAI_MODEL", "gpt-4o"),
+          messages=[{"role": "user", "content": prompt}]
       )
-      model = os.getenv("GEMINI_MODEL", "gemini-2.5-flash")
-      response = client.models.generate_content(model=model, contents=[prompt])
-      return response.text
+      return r.choices[0].message.content
 
   if __name__ == "__main__":
       test_prompt = "Hello, how are you?"
-
-      # First call - should hit the API
       print("Making call...")
-      response1 = call_llm(test_prompt, use_cache=False)
-      print(f"Response: {response1}")
+      response = call_llm(test_prompt)
+      print(f"Response: {response}")
   ```
 
 - **`nodes.py`**: Contains all the node definitions.
@@ -320,6 +328,9 @@ my_project/
 - **`main.py`**: Serves as the project's entry point.
   ```python
   # main.py
+  from dotenv import load_dotenv
+  load_dotenv()
+
   from flow import create_qa_flow
 
   # Example main function
@@ -387,6 +398,15 @@ From there, it’s easy to implement popular design patterns:
 - [Structured Output](./design_pattern/structure.md) formats outputs consistently.
 - [(Advanced) Multi-Agents](./design_pattern/multi_agent.md) coordinate multiple agents.
 
+Additional patterns (see [cookbook examples](https://github.com/The-Pocket/PocketFlow#how-does-pocket-flow-work)):
+
+- **Streaming**: Real-time token-by-token LLM output with user interrupt capability.
+- **MCP (Model Context Protocol)**: Integrate external tool servers as agent actions.
+- **Memory**: Short-term and long-term memory for persistent conversations.
+- **Supervisor**: Add a supervision layer over unreliable agents.
+- **Human-in-the-Loop (HITL)**: Pause flows for human review and feedback.
+- **Majority Vote**: Improve reasoning accuracy by aggregating multiple attempts.
+
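The configuration pattern the hunks above converge on is `os.environ.get(name, default)`: values from `.env` are loaded into the process environment by `python-dotenv`, and previously hard-coded strings become overridable defaults. A minimal stdlib-only sketch of that precedence (the `resolve_model` helper is illustrative, not part of the patch):

```python
import os

def resolve_model(default: str = "gpt-4o") -> str:
    # Same lookup the patch introduces: the env var wins, the default otherwise.
    return os.environ.get("OPENAI_MODEL", default)

# No override set: the hard-coded default is used.
os.environ.pop("OPENAI_MODEL", None)
print(resolve_model())  # gpt-4o

# After load_dotenv() (or an exported variable), the configured value wins.
os.environ["OPENAI_MODEL"] = "gpt-4o-mini"
print(resolve_model())  # gpt-4o-mini
```

Because `load_dotenv()` only populates `os.environ`, the same lookup works whether the value comes from `.env`, a shell export, or a CI secret.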