96 Commits

Author SHA1 Message Date
00a6cf74d7 Merge branch 'main' of github.com:Dark-Alex-17/loki 2026-03-09 14:58:23 -06:00
d35ca352ca chore: Added the new gemini-3.1-pro-preview model to gemini and vertex models 2026-03-09 14:57:39 -06:00
57dc1cb252 docs: created an authorship policy and PR template that requires disclosure of AI assistance in contributions 2026-02-24 17:46:07 -07:00
101a9cdd6e style: Applied formatting to MCP module 2026-02-20 15:28:21 -07:00
c5f52e1efb docs: Updated sisyphus README to always include the execute_command.sh tool 2026-02-20 15:06:57 -07:00
470149b606 docs: Updated the sisyphus system docs to have a pro-tip of configuring an IDE MCP server to improve performance 2026-02-20 15:01:08 -07:00
02062c5a50 docs: Created README docs for the CodeRabbit-style Code reviewer agents 2026-02-20 15:00:32 -07:00
e6e99b6926 feat: Improved MCP server spinup and spindown when switching contexts or settings in the REPL: Modify existing config rather than stopping all servers always and re-initializing if unnecessary 2026-02-20 14:36:34 -07:00
15a293204f fix: Improved sub-agent stdout and stderr output for users to follow 2026-02-20 13:47:28 -07:00
ecf3780aed Update models.yaml with latest OpenRouter data 2026-02-20 12:08:00 -07:00
e798747135 Add script to update models.yaml from OpenRouter 2026-02-20 12:07:59 -07:00
60493728a0 fix: Inject agent variables into environment variables for global tool calls when invoked from agents to modify global tool behavior 2026-02-20 11:38:24 -07:00
25d6370b20 feat: Allow the explore agent to run search queries for understanding docs or API specs 2026-02-19 14:29:02 -07:00
d67f845af5 feat: Allow the oracle to perform web searches for deeper research 2026-02-19 14:26:07 -07:00
920a14cabe fix: Removed the unnecessary execute_commands tool from the oracle agent 2026-02-19 14:18:16 -07:00
58bdd2e584 fix: Added auto_confirm to the coder agent so sub-agent spawning doesn't freeze 2026-02-19 14:15:42 -07:00
ce6f53ad05 feat: Added web search support to the main sisyphus agent to answer user queries 2026-02-19 12:29:07 -07:00
96f8007d53 refactor: Changed the default session name for Sisyphus to temp (to require users to explicitly name sessions they wish to save) 2026-02-19 10:26:52 -07:00
32a55652fe fix: Fixed a bug in the new supervisor and todo built-ins that was causing errors with OpenAI models 2026-02-18 14:52:57 -07:00
2b92e6c98b fix: Added condition to sisyphus to always output a summary to clearly indicate completion 2026-02-18 13:57:51 -07:00
cfa654bcd8 fix: Updated the sisyphus prompt to explicitly tell it to delegate to the coder agent when it wants to write any code at all except for trivial changes 2026-02-18 13:51:43 -07:00
d0f5ae39e2 fix: Added the auto_confirm variable back into sisyphus 2026-02-18 13:42:39 -07:00
2bb8cf5f73 fix: Removed the now unnecessary is_stale_response that was breaking auto-continuing with parallel agents 2026-02-18 13:36:25 -07:00
fbac446859 style: Applied formatting to the function module 2026-02-18 13:20:18 -07:00
f91cf2e346 build: Upgraded to the most recent version of rmcp 2026-02-18 12:28:52 -07:00
b6b33ab7e3 refactor: Updated the sisyphus agent to use the built-in user interaction tools instead of custom bash-based tools 2026-02-18 12:17:35 -07:00
c1902a69d1 feat: Created a CodeRabbit-style code-reviewer agent 2026-02-18 12:16:59 -07:00
812a8e101c docs: Updated the docs to include details on the new agent spawning system and built-in user interaction tools 2026-02-18 12:16:29 -07:00
655ee2a599 fix: Bypassed enabled_tools for user interaction tools so if function calling is enabled at all, the LLM has access to the user interaction tools when in REPL mode 2026-02-18 11:25:25 -07:00
128a8f9a9c feat: Added configuration option in agents to indicate the timeout for user input before proceeding (defaults to 5 minutes) 2026-02-18 11:24:47 -07:00
b1be9443e7 feat: Added support for sub-agents to escalate user interaction requests from any depth to the parent agents for user interactions 2026-02-18 11:06:15 -07:00
7b12c69ebf feat: built-in user interaction tools to remove the need for the list/confirm/etc prompts in prompt tools and to enhance user interactions in Loki 2026-02-18 11:05:43 -07:00
69ad584137 fix: When parallel agents run, only write to stdout from the parent and only display the parent's throbber 2026-02-18 09:59:24 -07:00
313058e70a refactor: Cleaned up some left-over implementation stubs 2026-02-18 09:13:39 -07:00
ea96d9ba3d fix: Forgot to implement support for failing a task and keep all dependents blocked 2026-02-18 09:13:11 -07:00
7884adc7c1 fix: Clean up orphaned sub-agents when the parent agent exits 2026-02-18 09:12:32 -07:00
948466d771 fix: Fixed the bash prompt utils so that they correctly show output when being run by a tool invocation 2026-02-17 17:19:42 -07:00
3894c98b5b feat: Experimental update to sisyphus to use the new parallel agent spawning system 2026-02-17 16:33:08 -07:00
5e9c31595e fix: Forgot to automatically add the bidirectional communication back up to parent agents from sub-agents (i.e. need to be able to check inbox and send messages) 2026-02-17 16:11:35 -07:00
39d9b25e47 feat: Added an agent configuration property that allows auto-injecting sub-agent spawning instructions (when using the built-in sub-agent spawning system) 2026-02-17 15:49:40 -07:00
b86f76ddb9 feat: Auto-dispatch support of sub-agents and support for the teammate pattern between subagents 2026-02-17 15:18:27 -07:00
7f267a10a1 docs: Initial documentation cleanup of parallel agent MVP 2026-02-17 14:30:28 -07:00
cdafdff281 fix: Agent delegation tools were not being passed into the {{__tools__}} placeholder so agents weren't delegating to subagents 2026-02-17 14:19:22 -07:00
60ad83d6d9 feat: Full passive task queue integration for parallelization of subagents 2026-02-17 13:42:53 -07:00
44c03ccf4f feat: Implemented initial scaffolding for built-in sub-agent spawning tool call operations 2026-02-17 11:48:31 -07:00
af933bbb29 feat: Initial models for agent parallelization 2026-02-17 11:27:55 -07:00
1f127ee990 docs: Fixed typos in the Sisyphus documentation 2026-02-16 14:05:51 -07:00
88a9a7709f feat: Added interactive prompting between the LLM and the user in Sisyphus using the built-in Bash utils scripts 2026-02-16 13:57:04 -07:00
github-actions[bot]
e8d92d1b01 chore: bump Cargo.toml to 0.2.0 2026-02-14 01:41:41 +00:00
github-actions[bot]
ddbfd03e75 bump: version 0.1.3 → 0.2.0 [skip ci] 2026-02-14 01:41:29 +00:00
d1c7f09015 feat: Simplified sisyphus prompt to improve functionality 2026-02-13 18:36:10 -07:00
d2f8f995f0 feat: Supported the injection of RAG sources into the prompt, not just via the .sources rag command in the REPL so models can directly reference the documents that supported their responses 2026-02-13 17:45:56 -07:00
5ef9a397ca docs: updated the tools documentation to mention the new fs_read, fs_grep, and fs_glob tools 2026-02-13 16:53:00 -07:00
325ab1f45e docs: updated the default configuration example to have the new fs_read, fs_glob, fs_grep global functions 2026-02-13 16:23:49 -07:00
4cfaa2dc77 docs: Updated the docs to mention the new agents 2026-02-13 15:42:28 -07:00
6abe2c5536 feat: Created the Sisyphus agent to make Loki function like Claude Code, Gemini, Codex, etc. 2026-02-13 15:42:10 -07:00
03cfd59962 feat: Created the Oracle agent to handle high-level architectural decisions and design questions about a given codebase 2026-02-13 15:41:44 -07:00
4d7d5e5e53 feat: Updated the coder agent to be much more task-focused and to be delegated to by Sisyphus 2026-02-13 15:41:11 -07:00
3779b940ae feat: Created the explore agent for exploring codebases to help answer questions 2026-02-13 15:40:46 -07:00
d2e541c5c0 docs: Updated todo-system docs 2026-02-13 15:13:37 -07:00
621c90427c feat: Use the official atlassian MCP server for the jira-helper agent 2026-02-13 14:56:42 -07:00
486001ee85 feat: Created fs_glob to enable more targeted file exploration utilities 2026-02-13 13:31:50 -07:00
c7a2ec084f feat: Created a new tool 'fs_grep' to search a given file's contents for relevant lines to reduce token usage for smaller models 2026-02-13 13:31:20 -07:00
d4e0d48198 feat: Created the new fs_read tool to enable controlled reading of a file 2026-02-13 13:30:53 -07:00
07f23bab5e feat: Let agent level variables be defined to bypass guard protections for tool invocations 2026-02-09 16:45:11 -07:00
b11797ea1c fix: Improved continuation prompt to not make broad todo-items 2026-02-09 15:36:57 -07:00
70c2d411ae fix: Allow auto-continuation to work in agents after a session is compressed and if there are still unfinished items in the to-do list 2026-02-09 15:21:39 -07:00
f82c9aff40 fix: fs_ls and fs_cat outputs should always redirect to "$LLM_OUTPUT" including on errors. 2026-02-09 14:56:55 -07:00
a935add2a7 feat: Implemented a built-in task management system to help smaller LLMs complete larger multistep tasks and minimize context drift 2026-02-09 12:49:06 -07:00
8a37a88ffd feat: Improved tool and MCP invocation error handling by returning stderr to the model when it is available 2026-02-04 12:00:21 -07:00
8f66cac680 feat: Added variable interpolation for conversation starters in agents 2026-02-04 10:51:59 -07:00
0a40ddd2e4 build: Upgraded to the most recent version of gman to fix vault vulnerabilities 2026-02-03 09:24:53 -07:00
d5e0728532 feat: Implemented retry logic for failed tool invocations so the LLM can learn from the result and try again; Also implemented chain loop detection to prevent loops 2026-02-01 17:06:16 -07:00
25c0885dcc fix: Claude tool calls work incorrectly when tool doesn't require any arguments or flags; would provide an empty JSON object or error on no args 2026-02-01 17:05:36 -07:00
f56ed7d005 feat: Added gemini-3-pro to the supported vertexai models 2026-01-30 19:03:41 -07:00
d79e4b9dff Fixed some typos in tool call error messages 2026-01-30 12:25:57 -07:00
cdd829199f build: Created justfile to make life easier 2026-01-27 13:49:36 -07:00
e3c644b8ca docs: Created a CREDITS file to document the history and origins of Loki from the original AIChat project 2026-01-27 13:15:20 -07:00
5cb8070da1 build: Support Claude Opus 4.5 2026-01-26 12:40:06 -07:00
66801b5d07 feat: Added an environment variable that lets users bypass guard operations in bash scripts. This is useful for agent routing 2026-01-23 14:18:52 -07:00
f2de196e22 fix: Fixed a bug where --agent-variable values were not being passed to the agents 2026-01-23 14:15:59 -07:00
2eba530895 feat: Added support for thought-signatures for Gemini 3+ models 2026-01-21 15:11:55 -07:00
3baa3102a3 style: Cleaned up an anyhow error 2025-12-16 14:51:35 -07:00
github-actions[bot]
2d4fad596c bump: version 0.1.2 → 0.1.3 [skip ci] 2025-12-13 20:57:37 +00:00
7259e59d2a ci: Prep for 0.1.3 release 2025-12-13 13:38:09 -07:00
cec04c4597 style: Improved error message for an incompletely configured MCP configuration 2025-12-13 13:37:01 -07:00
github-actions[bot]
a7f5677195 chore: bump Cargo.toml to 0.1.3 2025-12-13 20:28:10 +00:00
github-actions[bot]
6075f0a190 bump: version 0.1.2 → 0.1.3 [skip ci] 2025-12-13 20:27:58 +00:00
15310a9e2c chore: Updated the models 2025-12-11 09:05:41 -07:00
f7df54f2f7 docs: Removed the warning about MCP token usage since that has been fixed 2025-12-05 12:38:15 -07:00
212d4bace4 docs: Fixed an unclosed backtick typo in the Environment Variables docs 2025-12-05 12:37:59 -07:00
f4b3267c89 docs: Fixed typo in vault readme 2025-12-05 11:05:14 -07:00
9eeeb11871 style: Applied formatting 2025-12-03 15:06:50 -07:00
b8db3f689d Merge branch 'main' of github.com:Dark-Alex-17/loki 2025-12-03 14:57:03 -07:00
3b21ce2aa5 feat: Improved MCP implementation to minimize the tokens needed to utilize it so it doesn't quickly overwhelm the token space for a given model 2025-12-03 12:12:51 -07:00
Alex Clarke
9bf4fcd943 ci: Updated the README to be a bit more clear in some sections 2025-11-26 15:53:54 -07:00
77 changed files with 8742 additions and 1579 deletions
@@ -0,0 +1,11 @@
### AI assistance (if any):
- List tools here and files touched by them
### Authorship & Understanding
- [ ] I wrote or heavily modified this code myself
- [ ] I understand how it works end-to-end
- [ ] I can maintain this code in the future
- [ ] No undisclosed AI-generated code was used
- [ ] If AI assistance was used, it is documented below
@@ -1,3 +1,40 @@
## v0.2.0 (2026-02-14)
### Feat
- Simplified sisyphus prompt to improve functionality
- Supported the injection of RAG sources into the prompt, not just via the `.sources rag` command in the REPL so models can directly reference the documents that supported their responses
- Created the Sisyphus agent to make Loki function like Claude Code, Gemini, Codex, etc.
- Created the Oracle agent to handle high-level architectural decisions and design questions about a given codebase
- Updated the coder agent to be much more task-focused and to be delegated to by Sisyphus
- Created the explore agent for exploring codebases to help answer questions
- Use the official atlassian MCP server for the jira-helper agent
- Created fs_glob to enable more targeted file exploration utilities
- Created a new tool 'fs_grep' to search a given file's contents for relevant lines to reduce token usage for smaller models
- Created the new fs_read tool to enable controlled reading of a file
- Let agent level variables be defined to bypass guard protections for tool invocations
- Implemented a built-in task management system to help smaller LLMs complete larger multistep tasks and minimize context drift
- Improved tool and MCP invocation error handling by returning stderr to the model when it is available
- Added variable interpolation for conversation starters in agents
- Implemented retry logic for failed tool invocations so the LLM can learn from the result and try again; Also implemented chain loop detection to prevent loops
- Added gemini-3-pro to the supported vertexai models
- Added an environment variable that lets users bypass guard operations in bash scripts. This is useful for agent routing
- Added support for thought-signatures for Gemini 3+ models
### Fix
- Improved continuation prompt to not make broad todo-items
- Allow auto-continuation to work in agents after a session is compressed and if there are still unfinished items in the to-do list
- fs_ls and fs_cat outputs should always redirect to "$LLM_OUTPUT" including on errors.
- Claude tool calls work incorrectly when tool doesn't require any arguments or flags; would provide an empty JSON object or error on no args
- Fixed a bug where --agent-variable values were not being passed to the agents
## v0.1.3 (2025-12-13)
### Feat
- Improved MCP implementation to minimize the tokens needed to utilize it so it doesn't quickly overwhelm the token space for a given model
## v0.1.2 (2025-11-08)
### Refactor
@@ -48,7 +48,8 @@ cz commit
1. Clone this repo
2. Run `cargo test` to set up hooks
3. Make changes
4. Run the application using `make run` or `cargo run`
4. Run the application using `just run` or `cargo run`
- Install `just` (`cargo install just`) if you haven't already to use the [justfile](./justfile) in this project.
5. Commit changes. This will trigger pre-commit hooks that will run format, test and lint. If there are errors or
warnings from Clippy, please fix them.
6. Push your code to a new branch named after the feature/bug/etc. you're adding. This will trigger pre-push hooks that
@@ -75,6 +76,13 @@ Then, you can run workflows locally without having to commit and see if the GitH
act -W .github/workflows/release.yml --input bump=minor
```
## Authorship Policy
All code in this repository is written and reviewed by humans. AI-generated code (e.g., Copilot, ChatGPT,
Claude, etc.) is not permitted unless explicitly disclosed and approved.
Submissions must certify that the contributor understands and can maintain the code they submit.
## Questions? Reach out to me!
If you encounter any questions while developing Loki, please don't hesitate to reach out to me at
alex.j.tusa@gmail.com. I'm happy to help contributors in any way I can, regardless of whether they're new or experienced!
@@ -0,0 +1,31 @@
# Credits
## AIChat
Loki originally started as a fork of the fantastic
[AIChat CLI](https://github.com/sigoden/aichat). The initial goal was simply
to fix a bug in how MCP servers worked with AIChat, allowing different MCP
servers to be specified per agent. Since then, Loki has evolved far beyond
its original scope and grown into a passion project with a life of its own.
Today, Loki includes first-class MCP server support (for both local and remote
servers), a built-in vault for interpolating secrets in configuration files,
built-in agents and macros, dynamic tab completions, integrated custom
functions (no external `argc` dependency), improved documentation, and much
more with many more ideas planned for the future.
Loki is now developed and maintained as an independent project. Full credit
for the original foundation goes to the developers of the wonderful
AIChat project.
This project is not affiliated with or endorsed by the AIChat maintainers.
AIChat is licensed under the MIT License.
Generated file diff suppressed because it is too large
@@ -1,6 +1,6 @@
[package]
name = "loki-ai"
version = "0.1.2"
version = "0.2.0"
edition = "2024"
authors = ["Alex Clarke <alex.j.tusa@gmail.com>"]
description = "An all-in-one, batteries included LLM CLI Tool"
@@ -88,13 +88,13 @@ duct = "1.0.0"
argc = "1.23.0"
strum_macros = "0.27.2"
indoc = "2.0.6"
rmcp = { version = "0.6.1", features = ["client", "transport-child-process"] }
rmcp = { version = "0.16.0", features = ["client", "transport-child-process"] }
num_cpus = "1.17.0"
rustpython-parser = "0.4.0"
rustpython-ast = "0.4.0"
colored = "3.0.0"
clap_complete = { version = "4.5.58", features = ["unstable-dynamic"] }
gman = "0.2.3"
gman = "0.3.0"
clap_complete_nushell = "4.5.9"
[dependencies.reqwest]
@@ -19,7 +19,6 @@ Coming from [AIChat](https://github.com/sigoden/aichat)? Follow the [migration g
## Quick Links
* [AIChat Migration Guide](./docs/AICHAT-MIGRATION.md): Coming from AIChat? Follow the migration guide to get started.
* [History](#history): A history of how Loki came to be.
* [Installation](#install): Install Loki
* [Getting Started](#getting-started): Get started with Loki by doing first-run setup steps.
* [REPL](./docs/REPL.md): Interactive Read-Eval-Print Loop for conversational interactions with LLMs and Loki.
@@ -36,32 +35,19 @@ Coming from [AIChat](https://github.com/sigoden/aichat)? Follow the [migration g
* [RAG](./docs/RAG.md): Retrieval-Augmented Generation for enhanced information retrieval and generation.
* [Sessions](/docs/SESSIONS.md): Manage and persist conversational contexts and settings across multiple interactions.
* [Roles](./docs/ROLES.md): Customize model behavior for specific tasks or domains.
* [Agents](/docs/AGENTS.md): Leverage AI agents to perform complex tasks and workflows.
* [Agents](/docs/AGENTS.md): Leverage AI agents to perform complex tasks and workflows, including sub-agent spawning, teammate messaging, and user interaction tools.
* [Todo System](./docs/TODO-SYSTEM.md): Built-in task tracking for improved agent reliability with smaller models.
* [Environment Variables](./docs/ENVIRONMENT-VARIABLES.md): Override and customize your Loki configuration at runtime with environment variables.
* [Client Configurations](./docs/clients/CLIENTS.md): Configuration instructions for various LLM providers.
* [Patching API Requests](./docs/clients/PATCHES.md): Learn how to patch API requests for advanced customization.
* [Custom Themes](./docs/THEMES.md): Change the look and feel of Loki to your preferences with custom themes.
---
## History
Loki originally started as a fork of the fantastic [AIChat CLI](https://github.com/sigoden/aichat). The purpose was to
simply fix a bug in how MCP servers worked with AIChat so that I could specify different ones for agents. However, it
has since evolved far beyond that and become a passion project with a life of its own!
Loki now has first class MCP server support (with support for local and remote servers alike), a built-in vault for
interpolating secrets in configuration files, built-in agents, built-in macros, dynamic tab completions, integrated
custom functions (no `argc` dependency), improved documentation, and much more with many more plans for the future!
The original kudos goes out to all the developers of the wonderful AIChat project!
---
* [History](#history): A history of how Loki came to be.
## Prerequisites
Loki requires the following tools to be installed on your system:
* [jq](https://github.com/jqlang/jq)
* `brew install jq`
* [jira (optional)](https://github.com/ankitpokhrel/jira-cli/wiki/Installation) (For the `jira-helper` agent)
* [jira (optional)](https://github.com/ankitpokhrel/jira-cli/wiki/Installation) (For the `query_jira_issues` tool)
* `brew tap ankitpokhrel/jira-cli && brew install jira-cli`
* You'll need to [create a JIRA API token](https://id.atlassian.com/manage-profile/security/api-tokens) for authentication
* Then, save it as an environment variable to your shell profile:
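That shell-profile step can be sketched as follows (a minimal example assuming `JIRA_API_TOKEN`, the environment variable `jira-cli` reads; adjust for your shell and replace the placeholder value with your real token):

```shell
# In ~/.bashrc or ~/.zshrc -- keeps the token out of the repo and your configs
export JIRA_API_TOKEN="your-atlassian-api-token"
```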
@@ -257,5 +243,15 @@ The appearance of Loki can be modified using the following settings:
| `user_agent` | `null` | The name of the `User-Agent` that should be passed in the `User-Agent` header on all requests to model providers |
| `save_shell_history` | `true` | Enables or disables REPL command history |
---
## History
Loki began as a fork of [AIChat CLI](https://github.com/sigoden/aichat) and has since evolved into an independent project.
See [CREDITS.md](./CREDITS.md) for full attribution and background.
---
## Creator
* [Alex Clarke](https://github.com/Dark-Alex-17)
@@ -0,0 +1,447 @@
#!/usr/bin/env bash
# Shared Agent Utilities - Minimal, focused helper functions
set -euo pipefail
#############################
## CONTEXT FILE MANAGEMENT ##
#############################
get_context_file() {
local project_dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
echo "${project_dir}/.loki-context"
}
# Initialize context file for a new task
# Usage: init_context "Task description"
init_context() {
local task="$1"
local project_dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
local context_file
context_file=$(get_context_file)
cat > "${context_file}" <<EOF
## Project: ${project_dir}
## Task: ${task}
## Started: $(date -Iseconds)
### Prior Findings
EOF
}
# Append findings to the context file
# Usage: append_context "agent_name" "finding summary"
append_context() {
local agent="$1"
local finding="$2"
local context_file
context_file=$(get_context_file)
if [[ -f "${context_file}" ]]; then
{
echo ""
echo "[${agent}]:"
echo "${finding}"
} >> "${context_file}"
fi
}
# Read the current context (returns empty string if no context)
# Usage: context=$(read_context)
read_context() {
local context_file
context_file=$(get_context_file)
if [[ -f "${context_file}" ]]; then
cat "${context_file}"
fi
}
# Clear the context file
clear_context() {
local context_file
context_file=$(get_context_file)
rm -f "${context_file}"
}
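# Example (illustrative, not part of the API): a typical context-file
# lifecycle, assuming these functions are sourced and
# LLM_AGENT_VAR_PROJECT_DIR points at a writable directory:
#
#   export LLM_AGENT_VAR_PROJECT_DIR=/tmp/loki-demo
#   mkdir -p "${LLM_AGENT_VAR_PROJECT_DIR}"
#   init_context "Refactor the parser"
#   append_context "explore" "Parser entry point is src/parse.rs"
#   read_context    # prints the task header plus the [explore] finding
#   clear_context   # removes the .loki-context file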
#######################
## PROJECT DETECTION ##
#######################
# Cache file name for detected project info
_LOKI_PROJECT_CACHE=".loki-project.json"
# Read cached project detection if valid
# Usage: _read_project_cache "/path/to/project"
# Returns: cached JSON on stdout (exit 0) or nothing (exit 1)
_read_project_cache() {
local dir="$1"
local cache_file="${dir}/${_LOKI_PROJECT_CACHE}"
if [[ -f "${cache_file}" ]]; then
local cached
cached=$(cat "${cache_file}" 2>/dev/null) || return 1
if echo "${cached}" | jq -e '.type and .build != null and .test != null and .check != null' &>/dev/null; then
echo "${cached}"
return 0
fi
fi
return 1
}
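# Example (illustrative): a cache entry such as
#   {"type":"rust","build":"cargo build","test":"cargo test","check":"cargo check"}
# passes the jq validation above, while an entry missing any of the four keys
# is rejected and triggers fresh detection. Note that an empty-string command
# (e.g. the Python "build" value) still passes, since it is non-null.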
# Write project detection result to cache
# Usage: _write_project_cache "/path/to/project" '{"type":"rust",...}'
_write_project_cache() {
local dir="$1"
local json="$2"
local cache_file="${dir}/${_LOKI_PROJECT_CACHE}"
echo "${json}" > "${cache_file}" 2>/dev/null || true
}
_detect_heuristic() {
local dir="$1"
# Rust
if [[ -f "${dir}/Cargo.toml" ]]; then
echo '{"type":"rust","build":"cargo build","test":"cargo test","check":"cargo check"}'
return 0
fi
# Go
if [[ -f "${dir}/go.mod" ]]; then
echo '{"type":"go","build":"go build ./...","test":"go test ./...","check":"go vet ./..."}'
return 0
fi
# Node.JS/Deno/Bun
if [[ -f "${dir}/deno.json" ]] || [[ -f "${dir}/deno.jsonc" ]]; then
echo '{"type":"deno","build":"deno task build","test":"deno test","check":"deno lint"}'
return 0
fi
if [[ -f "${dir}/package.json" ]]; then
local pm="npm"
{ [[ -f "${dir}/bun.lockb" ]] || [[ -f "${dir}/bun.lock" ]]; } && pm="bun"
[[ -f "${dir}/pnpm-lock.yaml" ]] && pm="pnpm"
[[ -f "${dir}/yarn.lock" ]] && pm="yarn"
echo "{\"type\":\"nodejs\",\"build\":\"${pm} run build\",\"test\":\"${pm} test\",\"check\":\"${pm} run lint\"}"
return 0
fi
# Python
if [[ -f "${dir}/pyproject.toml" ]] || [[ -f "${dir}/setup.py" ]] || [[ -f "${dir}/setup.cfg" ]]; then
local test_cmd="pytest"
local check_cmd="ruff check ."
if [[ -f "${dir}/poetry.lock" ]]; then
test_cmd="poetry run pytest"
check_cmd="poetry run ruff check ."
elif [[ -f "${dir}/uv.lock" ]]; then
test_cmd="uv run pytest"
check_cmd="uv run ruff check ."
fi
echo "{\"type\":\"python\",\"build\":\"\",\"test\":\"${test_cmd}\",\"check\":\"${check_cmd}\"}"
return 0
fi
# JVM (Maven)
if [[ -f "${dir}/pom.xml" ]]; then
echo '{"type":"java","build":"mvn compile","test":"mvn test","check":"mvn verify"}'
return 0
fi
# JVM (Gradle)
if [[ -f "${dir}/build.gradle" ]] || [[ -f "${dir}/build.gradle.kts" ]]; then
local gw="gradle"
[[ -f "${dir}/gradlew" ]] && gw="./gradlew"
echo "{\"type\":\"java\",\"build\":\"${gw} build\",\"test\":\"${gw} test\",\"check\":\"${gw} check\"}"
return 0
fi
# .NET / C#
if compgen -G "${dir}/*.sln" &>/dev/null || compgen -G "${dir}/*.csproj" &>/dev/null; then
echo '{"type":"dotnet","build":"dotnet build","test":"dotnet test","check":"dotnet build -warnaserror"}'
return 0
fi
# C/C++ (CMake)
if [[ -f "${dir}/CMakeLists.txt" ]]; then
echo '{"type":"cmake","build":"cmake --build build","test":"ctest --test-dir build","check":"cmake --build build"}'
return 0
fi
# Ruby
if [[ -f "${dir}/Gemfile" ]]; then
local test_cmd="bundle exec rake test"
grep -q "rspec" "${dir}/Gemfile" 2>/dev/null && test_cmd="bundle exec rspec"
echo "{\"type\":\"ruby\",\"build\":\"\",\"test\":\"${test_cmd}\",\"check\":\"bundle exec rubocop\"}"
return 0
fi
# Elixir
if [[ -f "${dir}/mix.exs" ]]; then
echo '{"type":"elixir","build":"mix compile","test":"mix test","check":"mix credo"}'
return 0
fi
# PHP
if [[ -f "${dir}/composer.json" ]]; then
echo '{"type":"php","build":"","test":"./vendor/bin/phpunit","check":"./vendor/bin/phpstan analyse"}'
return 0
fi
# Swift
if [[ -f "${dir}/Package.swift" ]]; then
echo '{"type":"swift","build":"swift build","test":"swift test","check":"swift build"}'
return 0
fi
# Zig
if [[ -f "${dir}/build.zig" ]]; then
echo '{"type":"zig","build":"zig build","test":"zig build test","check":"zig build"}'
return 0
fi
# Generic build systems (last resort before LLM)
if [[ -f "${dir}/justfile" ]] || [[ -f "${dir}/Justfile" ]]; then
echo '{"type":"just","build":"just build","test":"just test","check":"just lint"}'
return 0
fi
if [[ -f "${dir}/Makefile" ]] || [[ -f "${dir}/makefile" ]] || [[ -f "${dir}/GNUmakefile" ]]; then
echo '{"type":"make","build":"make build","test":"make test","check":"make lint"}'
return 0
fi
return 1
}
# Gather lightweight evidence about a project for LLM analysis
# Usage: _gather_project_evidence "/path/to/project"
# Returns: evidence string on stdout
_gather_project_evidence() {
local dir="$1"
local evidence=""
evidence+="Root files and directories:"$'\n'
evidence+=$(ls -1 "${dir}" 2>/dev/null | head -50)
evidence+=$'\n\n'
evidence+="File extension counts:"$'\n'
evidence+=$(find "${dir}" -type f \
-not -path '*/.git/*' \
-not -path '*/node_modules/*' \
-not -path '*/target/*' \
-not -path '*/dist/*' \
-not -path '*/__pycache__/*' \
-not -path '*/vendor/*' \
-not -path '*/.build/*' \
2>/dev/null \
| sed 's/.*\.//' | sort | uniq -c | sort -rn | head -10)
evidence+=$'\n\n'
local config_patterns=("*.toml" "*.yaml" "*.yml" "*.json" "*.xml" "*.gradle" "*.gradle.kts" "*.cabal" "*.pro" "Makefile" "justfile" "Justfile" "Dockerfile" "Taskfile*" "BUILD" "WORKSPACE" "flake.nix" "shell.nix" "default.nix")
local found_configs=0
for pattern in "${config_patterns[@]}"; do
if [[ ${found_configs} -ge 5 ]]; then
break
fi
local files
files=$(find "${dir}" -maxdepth 1 -name "${pattern}" -type f 2>/dev/null)
while IFS= read -r f; do
if [[ -n "${f}" && ${found_configs} -lt 5 ]]; then
local basename
basename=$(basename "${f}")
evidence+="--- ${basename} (first 30 lines) ---"$'\n'
evidence+=$(head -30 "${f}" 2>/dev/null)
evidence+=$'\n\n'
found_configs=$((found_configs + 1))
fi
done <<< "${files}"
done
echo "${evidence}"
}
# LLM-based project detection fallback
# Usage: _detect_with_llm "/path/to/project"
# Returns: JSON on stdout or empty (exit 1)
_detect_with_llm() {
local dir="$1"
local evidence
evidence=$(_gather_project_evidence "${dir}")
local prompt
prompt=$(cat <<-EOF
Analyze this project directory and determine the project type, primary language, and the correct shell commands to build, test, and check (lint/typecheck) it.
EOF
)
prompt+=$'\n'"${evidence}"$'\n'
prompt+=$(cat <<-EOF
Respond with ONLY a valid JSON object. No markdown fences, no explanation, no extra text.
The JSON must have exactly these 4 keys:
{"type":"<language>","build":"<build command>","test":"<test command>","check":"<lint or typecheck command>"}
Rules:
- "type" must be a single lowercase word (e.g. rust, go, python, nodejs, java, ruby, elixir, cpp, c, zig, haskell, scala, kotlin, dart, swift, php, dotnet, etc.)
- If a command doesn't apply to this project, use an empty string, ""
- Use the most standard/common commands for the detected ecosystem
- If you detect a package manager lockfile, use that package manager (e.g. pnpm over npm)
EOF
)
local llm_response
llm_response=$(loki --no-stream "${prompt}" 2>/dev/null) || return 1
llm_response=$(echo "${llm_response}" | sed 's/^```json//;s/^```//;s/```$//' | tr -d '\n' | sed 's/^[[:space:]]*//')
llm_response=$(echo "${llm_response}" | grep -o '{[^}]*}' | head -1)
if echo "${llm_response}" | jq -e '.type and .build != null and .test != null and .check != null' &>/dev/null; then
echo "${llm_response}" | jq -c '{type: (.type // "unknown"), build: (.build // ""), test: (.test // ""), check: (.check // "")}'
return 0
fi
return 1
}
# Detect project type and return build/test commands
# Uses: cached result -> fast heuristics -> LLM fallback
detect_project() {
local dir="${1:-.}"
local cached
if cached=$(_read_project_cache "${dir}"); then
echo "${cached}" | jq -c '{type, build, test, check}'
return 0
fi
local result
if result=$(_detect_heuristic "${dir}"); then
local enriched
enriched=$(echo "${result}" | jq -c '. + {"_detected_by":"heuristic","_cached_at":"'"$(date -Iseconds)"'"}')
_write_project_cache "${dir}" "${enriched}"
echo "${result}"
return 0
fi
if result=$(_detect_with_llm "${dir}"); then
local enriched
enriched=$(echo "${result}" | jq -c '. + {"_detected_by":"llm","_cached_at":"'"$(date -Iseconds)"'"}')
_write_project_cache "${dir}" "${enriched}"
echo "${result}"
return 0
fi
echo '{"type":"unknown","build":"","test":"","check":""}'
}
######################
## AGENT INVOCATION ##
######################
# Invoke a subagent with optional context injection
# Usage: invoke_agent <agent_name> <prompt> [extra_args...]
invoke_agent() {
local agent="$1"
local prompt="$2"
shift 2
local context
context=$(read_context)
local full_prompt
if [[ -n "${context}" ]]; then
full_prompt="## Orchestrator Context
The orchestrator (sisyphus) has gathered this context from prior work:
<context>
${context}
</context>
## Your Task
${prompt}"
else
full_prompt="${prompt}"
fi
env AUTO_CONFIRM=true loki --agent "${agent}" "$@" "${full_prompt}" 2>&1
}
# Invoke a subagent and capture a summary of its findings
# Usage: result=$(invoke_agent_with_summary "explore" "find auth patterns")
invoke_agent_with_summary() {
local agent="$1"
local prompt="$2"
shift 2
local output
output=$(invoke_agent "${agent}" "${prompt}" "$@")
local summary=""
if echo "${output}" | grep -q "FINDINGS:"; then
summary=$(echo "${output}" | sed -n '/FINDINGS:/,/^[A-Z_]*COMPLETE/p' | grep "^- " | sed 's/^- / - /')
elif echo "${output}" | grep -q "CODER_COMPLETE:"; then
summary=$(echo "${output}" | grep "CODER_COMPLETE:" | sed 's/CODER_COMPLETE: *//')
elif echo "${output}" | grep -q "ORACLE_COMPLETE"; then
summary=$(echo "${output}" | sed -n '/^## Recommendation/,/^## /{/^## Recommendation/d;/^## /d;p}' | sed '/^$/d' | head -10)
fi
# Failsafe: extract up to 5 meaningful lines if no markers found
if [[ -z "${summary}" ]]; then
summary=$(echo "${output}" | grep -v "^$" | grep -v "^#" | grep -v "^\-\-\-" | tail -10 | head -5)
fi
if [[ -n "${summary}" ]]; then
append_context "${agent}" "${summary}"
fi
echo "${output}"
}
###########################
## FILE SEARCH UTILITIES ##
###########################
search_files() {
local pattern="$1"
local dir="${2:-.}"
find "${dir}" -type f -name "${pattern}" \
-not -path '*/target/*' \
-not -path '*/node_modules/*' \
-not -path '*/.git/*' \
-not -path '*/dist/*' \
-not -path '*/__pycache__/*' \
2>/dev/null | head -25
}
get_tree() {
local dir="${1:-.}"
local depth="${2:-3}"
if command -v tree &>/dev/null; then
tree -L "${depth}" --noreport -I 'node_modules|target|dist|.git|__pycache__|*.pyc' "${dir}" 2>/dev/null || find "${dir}" -maxdepth "${depth}" -type f | head -50
else
find "${dir}" -maxdepth "${depth}" -type f \
-not -path '*/target/*' \
-not -path '*/node_modules/*' \
-not -path '*/.git/*' \
2>/dev/null | head -50
fi
}
+36
View File
@@ -0,0 +1,36 @@
# Code Reviewer
A CodeRabbit-style code review orchestrator that coordinates per-file reviews and synthesizes findings into a unified
report.
This agent acts as the manager for the review process, delegating actual file analysis to **[File Reviewer](../file-reviewer/README.md)**
agents while handling coordination and final reporting.
## Features
- 🤖 **Orchestration**: Spawns parallel reviewers for each changed file.
- 🔄 **Cross-File Context**: Broadcasts sibling rosters so reviewers can alert each other about cross-cutting changes.
- 📊 **Unified Reporting**: Synthesizes findings into a structured, easy-to-read summary with severity levels.
-**Parallel Execution**: Runs reviews concurrently for maximum speed.
## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) to see how to configure
them), and modify the agent definition to look like this:
```yaml
# ...
mcp_servers:
- jetbrains # The name of your configured IDE MCP server
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
# - execute_command.sh
# ...
```
+125
View File
@@ -0,0 +1,125 @@
name: code-reviewer
description: CodeRabbit-style code reviewer - spawns per-file reviewers, synthesizes findings
version: 1.0.0
temperature: 0.1
top_p: 0.95
auto_continue: true
max_auto_continues: 20
inject_todo_instructions: true
can_spawn_agents: true
max_concurrent_agents: 10
max_agent_depth: 2
variables:
- name: project_dir
description: Project directory to review
default: '.'
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- execute_command.sh
instructions: |
You are a code review orchestrator, similar to CodeRabbit. You coordinate per-file reviews and produce a unified report.
## Workflow
1. **Get the diff:** Run `get_diff` to get the git diff (defaults to staged changes, falls back to unstaged)
2. **Parse changed files:** Extract the list of files from the diff
3. **Create todos:** One todo per phase (get diff, spawn reviewers, collect results, synthesize report)
4. **Spawn file-reviewers:** One `file-reviewer` agent per changed file, in parallel
5. **Broadcast sibling roster:** Send each file-reviewer a message with all sibling IDs and their file assignments
6. **Collect all results:** Wait for each file-reviewer to complete
7. **Synthesize:** Combine all findings into a CodeRabbit-style report
## Spawning File Reviewers
For each changed file, spawn a file-reviewer with a prompt containing:
- The file path
- The relevant diff hunk(s) for that file
- Instructions to review it
```
agent__spawn --agent file-reviewer --prompt "Review the following diff for <file_path>:
<diff content for this file>
Focus on bugs, security issues, logic errors, and style. Use the severity format (🔴🟡🟢💡).
End with REVIEW_COMPLETE."
```
## Sibling Roster Broadcast
After spawning ALL file-reviewers (collecting their IDs), send each one a message with the roster:
```
agent__send_message --to <agent_id> --message "SIBLING_ROSTER:
- <agent_id_1>: reviewing <file_1>
- <agent_id_2>: reviewing <file_2>
...
Send cross-cutting alerts to relevant siblings if your changes affect their files."
```
## Diff Parsing
Split the diff by file. Each file's diff starts with `diff --git a/<path> b/<path>`. Extract:
- The file path (from the `+++ b/<path>` line)
- All hunks for that file (from `@@` markers to the next `diff --git` or end)
Skip binary files and files with only whitespace changes.
## Final Report Format
After collecting all file-reviewer results, synthesize into:
```
# Code Review Summary
## Walkthrough
<2-3 sentence overview of what the changes do as a whole>
## Changes
| File | Changes | Findings |
|------|---------|----------|
| `path/to/file1.rs` | <brief description> | 🔴 1 🟡 2 🟢 1 |
| `path/to/file2.rs` | <brief description> | 🟢 2 💡 1 |
## Detailed Findings
### `path/to/file1.rs`
<paste file-reviewer's findings here, cleaned up>
### `path/to/file2.rs`
<paste file-reviewer's findings here, cleaned up>
## Cross-File Concerns
<any cross-cutting issues identified by the teammate pattern>
---
*Reviewed N files, found X critical, Y warnings, Z suggestions, W nitpicks*
```
## Edge Cases
- **Single file changed:** Still spawn one file-reviewer (for consistency), skip roster broadcast
- **Too many files (>10):** Group small files (< 20 lines changed) and review them together
- **No changes found:** Report "No changes to review" and exit
- **Binary files:** Skip with a note in the summary
## Rules
1. **Always use `get_diff` first:** Don't assume what changed
2. **Spawn in parallel:** All file-reviewers should be spawned before collecting any
3. **Don't review code yourself:** Delegate ALL review work to file-reviewers
4. **Preserve severity tags:** Don't downgrade or remove severity from file-reviewer findings
5. **Include ALL findings:** Don't summarize away specific issues
## Context
- Project: {{project_dir}}
- CWD: {{__cwd__}}
- Shell: {{__shell__}}
+478
View File
@@ -0,0 +1,478 @@
#!/usr/bin/env bash
set -eo pipefail
# shellcheck disable=SC1090
source "$LLM_PROMPT_UTILS_FILE"
source "$LLM_ROOT_DIR/agents/.shared/utils.sh"
# @env LLM_OUTPUT=/dev/stdout
# @env LLM_AGENT_VAR_PROJECT_DIR=.
# @describe Code review orchestrator tools
_project_dir() {
local dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
(cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}
# @cmd Get git diff for code review. Returns staged changes, or unstaged if nothing is staged, or HEAD~1 diff if working tree is clean.
# @option --base Optional base ref to diff against (e.g., "main", "HEAD~3", a commit SHA)
get_diff() {
local project_dir
project_dir=$(_project_dir)
# shellcheck disable=SC2154
local base="${argc_base:-}"
local diff_output=""
if [[ -n "${base}" ]]; then
diff_output=$(cd "${project_dir}" && git diff "${base}" 2>&1) || true
else
diff_output=$(cd "${project_dir}" && git diff --cached 2>&1) || true
if [[ -z "${diff_output}" ]]; then
diff_output=$(cd "${project_dir}" && git diff 2>&1) || true
fi
if [[ -z "${diff_output}" ]]; then
diff_output=$(cd "${project_dir}" && git diff HEAD~1 2>&1) || true
fi
fi
if [[ -z "${diff_output}" ]]; then
warn "No changes found to review" >> "$LLM_OUTPUT"
return 0
fi
local file_count
file_count=$(echo "${diff_output}" | grep -c '^diff --git' || true)
{
info "Diff contains changes to ${file_count} file(s)"
echo ""
echo "${diff_output}"
} >> "$LLM_OUTPUT"
}
# @cmd Get list of changed files with stats
# @option --base Optional base ref to diff against
get_changed_files() {
local project_dir
project_dir=$(_project_dir)
local base="${argc_base:-}"
local stat_output=""
if [[ -n "${base}" ]]; then
stat_output=$(cd "${project_dir}" && git diff --stat "${base}" 2>&1) || true
else
stat_output=$(cd "${project_dir}" && git diff --cached --stat 2>&1) || true
if [[ -z "${stat_output}" ]]; then
stat_output=$(cd "${project_dir}" && git diff --stat 2>&1) || true
fi
if [[ -z "${stat_output}" ]]; then
stat_output=$(cd "${project_dir}" && git diff --stat HEAD~1 2>&1) || true
fi
fi
if [[ -z "${stat_output}" ]]; then
warn "No changes found" >> "$LLM_OUTPUT"
return 0
fi
{
info "Changed files:"
echo ""
echo "${stat_output}"
} >> "$LLM_OUTPUT"
}
# @cmd Get project structure and type information
get_project_info() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
{
info "Project: ${project_dir}"
echo "Type: $(echo "${project_info}" | jq -r '.type')"
echo ""
get_tree "${project_dir}" 2
} >> "$LLM_OUTPUT"
}
# ARGC-BUILD {
# This block was generated by argc (https://github.com/sigoden/argc).
# Modifying it manually is not recommended
_argc_run() {
if [[ "${1:-}" == "___internal___" ]]; then
_argc_die "error: unsupported ___internal___ command"
fi
if [[ "${OS:-}" == "Windows_NT" ]] && [[ -n "${MSYSTEM:-}" ]]; then
set -o igncr
fi
argc__args=("$(basename "$0" .sh)" "$@")
argc__positionals=()
_argc_index=1
_argc_len="${#argc__args[@]}"
_argc_tools=()
_argc_parse
if [ -n "${argc__fn:-}" ]; then
$argc__fn "${argc__positionals[@]}"
fi
}
_argc_usage() {
cat <<-'EOF'
Code review orchestrator tools
USAGE: <COMMAND>
COMMANDS:
get_diff Get git diff for code review. Returns staged changes, or unstaged if nothing is staged, or HEAD~1 diff if working tree is clean. [aliases: get-diff]
get_changed_files Get list of changed files with stats [aliases: get-changed-files]
get_project_info Get project structure and type information [aliases: get-project-info]
ENVIRONMENTS:
LLM_OUTPUT [default: /dev/stdout]
LLM_AGENT_VAR_PROJECT_DIR [default: .]
EOF
exit
}
_argc_version() {
echo 0.0.0
exit
}
_argc_parse() {
local _argc_key _argc_action
local _argc_subcmds="get_diff, get-diff, get_changed_files, get-changed-files, get_project_info, get-project-info"
while [[ $_argc_index -lt $_argc_len ]]; do
_argc_item="${argc__args[_argc_index]}"
_argc_key="${_argc_item%%=*}"
case "$_argc_key" in
--help | -help | -h)
_argc_usage
;;
--version | -version | -V)
_argc_version
;;
--)
_argc_dash="${#argc__positionals[@]}"
argc__positionals+=("${argc__args[@]:$((_argc_index + 1))}")
_argc_index=$_argc_len
break
;;
get_diff | get-diff)
_argc_index=$((_argc_index + 1))
_argc_action=_argc_parse_get_diff
break
;;
get_changed_files | get-changed-files)
_argc_index=$((_argc_index + 1))
_argc_action=_argc_parse_get_changed_files
break
;;
get_project_info | get-project-info)
_argc_index=$((_argc_index + 1))
_argc_action=_argc_parse_get_project_info
break
;;
help)
local help_arg="${argc__args[$((_argc_index + 1))]:-}"
case "$help_arg" in
get_diff | get-diff)
_argc_usage_get_diff
;;
get_changed_files | get-changed-files)
_argc_usage_get_changed_files
;;
get_project_info | get-project-info)
_argc_usage_get_project_info
;;
"")
_argc_usage
;;
*)
_argc_die "error: invalid value \`$help_arg\` for \`<command>\`"$'\n'" [possible values: $_argc_subcmds]"
;;
esac
;;
*)
_argc_die "error: \`\` requires a subcommand but one was not provided"$'\n'" [subcommands: $_argc_subcmds]"
;;
esac
done
if [[ -n "${_argc_action:-}" ]]; then
$_argc_action
else
_argc_usage
fi
}
_argc_usage_get_diff() {
cat <<-'EOF'
Get git diff for code review. Returns staged changes, or unstaged if nothing is staged, or HEAD~1 diff if working tree is clean.
USAGE: get_diff [OPTIONS]
OPTIONS:
--base <BASE> Optional base ref to diff against (e.g., "main", "HEAD~3", a commit SHA)
-h, --help Print help
ENVIRONMENTS:
LLM_OUTPUT [default: /dev/stdout]
LLM_AGENT_VAR_PROJECT_DIR [default: .]
EOF
exit
}
_argc_parse_get_diff() {
local _argc_key _argc_action
local _argc_subcmds=""
while [[ $_argc_index -lt $_argc_len ]]; do
_argc_item="${argc__args[_argc_index]}"
_argc_key="${_argc_item%%=*}"
case "$_argc_key" in
--help | -help | -h)
_argc_usage_get_diff
;;
--)
_argc_dash="${#argc__positionals[@]}"
argc__positionals+=("${argc__args[@]:$((_argc_index + 1))}")
_argc_index=$_argc_len
break
;;
--base)
_argc_take_args "--base <BASE>" 1 1 "-" ""
_argc_index=$((_argc_index + _argc_take_args_len + 1))
if [[ -z "${argc_base:-}" ]]; then
argc_base="${_argc_take_args_values[0]:-}"
else
_argc_die "error: the argument \`--base\` cannot be used multiple times"
fi
;;
*)
if _argc_maybe_flag_option "-" "$_argc_item"; then
_argc_die "error: unexpected argument \`$_argc_key\` found"
fi
argc__positionals+=("$_argc_item")
_argc_index=$((_argc_index + 1))
;;
esac
done
if [[ -n "${_argc_action:-}" ]]; then
$_argc_action
else
argc__fn=get_diff
if [[ "${argc__positionals[0]:-}" == "help" ]] && [[ "${#argc__positionals[@]}" -eq 1 ]]; then
_argc_usage_get_diff
fi
if [[ -z "${LLM_OUTPUT:-}" ]]; then
export LLM_OUTPUT=/dev/stdout
fi
if [[ -z "${LLM_AGENT_VAR_PROJECT_DIR:-}" ]]; then
export LLM_AGENT_VAR_PROJECT_DIR=.
fi
fi
}
_argc_usage_get_changed_files() {
cat <<-'EOF'
Get list of changed files with stats
USAGE: get_changed_files [OPTIONS]
OPTIONS:
--base <BASE> Optional base ref to diff against
-h, --help Print help
ENVIRONMENTS:
LLM_OUTPUT [default: /dev/stdout]
LLM_AGENT_VAR_PROJECT_DIR [default: .]
EOF
exit
}
_argc_parse_get_changed_files() {
local _argc_key _argc_action
local _argc_subcmds=""
while [[ $_argc_index -lt $_argc_len ]]; do
_argc_item="${argc__args[_argc_index]}"
_argc_key="${_argc_item%%=*}"
case "$_argc_key" in
--help | -help | -h)
_argc_usage_get_changed_files
;;
--)
_argc_dash="${#argc__positionals[@]}"
argc__positionals+=("${argc__args[@]:$((_argc_index + 1))}")
_argc_index=$_argc_len
break
;;
--base)
_argc_take_args "--base <BASE>" 1 1 "-" ""
_argc_index=$((_argc_index + _argc_take_args_len + 1))
if [[ -z "${argc_base:-}" ]]; then
argc_base="${_argc_take_args_values[0]:-}"
else
_argc_die "error: the argument \`--base\` cannot be used multiple times"
fi
;;
*)
if _argc_maybe_flag_option "-" "$_argc_item"; then
_argc_die "error: unexpected argument \`$_argc_key\` found"
fi
argc__positionals+=("$_argc_item")
_argc_index=$((_argc_index + 1))
;;
esac
done
if [[ -n "${_argc_action:-}" ]]; then
$_argc_action
else
argc__fn=get_changed_files
if [[ "${argc__positionals[0]:-}" == "help" ]] && [[ "${#argc__positionals[@]}" -eq 1 ]]; then
_argc_usage_get_changed_files
fi
if [[ -z "${LLM_OUTPUT:-}" ]]; then
export LLM_OUTPUT=/dev/stdout
fi
if [[ -z "${LLM_AGENT_VAR_PROJECT_DIR:-}" ]]; then
export LLM_AGENT_VAR_PROJECT_DIR=.
fi
fi
}
_argc_usage_get_project_info() {
cat <<-'EOF'
Get project structure and type information
USAGE: get_project_info
ENVIRONMENTS:
LLM_OUTPUT [default: /dev/stdout]
LLM_AGENT_VAR_PROJECT_DIR [default: .]
EOF
exit
}
_argc_parse_get_project_info() {
local _argc_key _argc_action
local _argc_subcmds=""
while [[ $_argc_index -lt $_argc_len ]]; do
_argc_item="${argc__args[_argc_index]}"
_argc_key="${_argc_item%%=*}"
case "$_argc_key" in
--help | -help | -h)
_argc_usage_get_project_info
;;
--)
_argc_dash="${#argc__positionals[@]}"
argc__positionals+=("${argc__args[@]:$((_argc_index + 1))}")
_argc_index=$_argc_len
break
;;
*)
argc__positionals+=("$_argc_item")
_argc_index=$((_argc_index + 1))
;;
esac
done
if [[ -n "${_argc_action:-}" ]]; then
$_argc_action
else
argc__fn=get_project_info
if [[ "${argc__positionals[0]:-}" == "help" ]] && [[ "${#argc__positionals[@]}" -eq 1 ]]; then
_argc_usage_get_project_info
fi
if [[ -z "${LLM_OUTPUT:-}" ]]; then
export LLM_OUTPUT=/dev/stdout
fi
if [[ -z "${LLM_AGENT_VAR_PROJECT_DIR:-}" ]]; then
export LLM_AGENT_VAR_PROJECT_DIR=.
fi
fi
}
_argc_take_args() {
_argc_take_args_values=()
_argc_take_args_len=0
local param="$1" min="$2" max="$3" signs="$4" delimiter="$5"
if [[ "$min" -eq 0 ]] && [[ "$max" -eq 0 ]]; then
return
fi
local _argc_take_index=$((_argc_index + 1)) _argc_take_value
if [[ "$_argc_item" == *=* ]]; then
_argc_take_args_values=("${_argc_item##*=}")
else
while [[ $_argc_take_index -lt $_argc_len ]]; do
_argc_take_value="${argc__args[_argc_take_index]}"
if _argc_maybe_flag_option "$signs" "$_argc_take_value"; then
if [[ "${#_argc_take_value}" -gt 1 ]]; then
break
fi
fi
_argc_take_args_values+=("$_argc_take_value")
_argc_take_args_len=$((_argc_take_args_len + 1))
if [[ "$_argc_take_args_len" -ge "$max" ]]; then
break
fi
_argc_take_index=$((_argc_take_index + 1))
done
fi
if [[ "${#_argc_take_args_values[@]}" -lt "$min" ]]; then
_argc_die "error: incorrect number of values for \`$param\`"
fi
if [[ -n "$delimiter" ]] && [[ "${#_argc_take_args_values[@]}" -gt 0 ]]; then
local item values arr=()
for item in "${_argc_take_args_values[@]}"; do
IFS="$delimiter" read -r -a values <<<"$item"
arr+=("${values[@]}")
done
_argc_take_args_values=("${arr[@]}")
fi
}
_argc_maybe_flag_option() {
local signs="$1" arg="$2"
if [[ -z "$signs" ]]; then
return 1
fi
local cond=false
if [[ "$signs" == *"+"* ]]; then
if [[ "$arg" =~ ^\+[^+].* ]]; then
cond=true
fi
elif [[ "$arg" == -* ]]; then
if (( ${#arg} < 3 )) || [[ ! "$arg" =~ ^---.* ]]; then
cond=true
fi
fi
if [[ "$cond" == "false" ]]; then
return 1
fi
local value="${arg%%=*}"
if [[ "$value" =~ [[:space:]] ]]; then
return 1
fi
return 0
}
_argc_die() {
if [[ $# -eq 0 ]]; then
cat
else
echo "$*" >&2
fi
exit 1
}
_argc_run "$@"
# ARGC-BUILD }
+27 -3
View File
@@ -2,6 +2,9 @@
An AI agent that assists you with your coding tasks.
This agent is designed to be delegated to by the **[Sisyphus](../sisyphus/README.md)** agent to implement code specifications. Sisyphus
acts as the coordinator/architect, while Coder handles the implementation details.
## Features
- 🏗️ Intelligent project structure creation and management
@@ -10,7 +13,28 @@ An AI agent that assists you with your coding tasks.
- 🧐 Advanced code analysis and improvement suggestions
- 📊 Precise diff-based file editing for controlled code modifications
## Similar Projects
It can also be used as a standalone tool for direct coding assistance.
- https://github.com/Doriandarko/claude-engineer
- https://github.com/paul-gauthier/aider
## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) to see how to configure
them), and modify the agent definition to look like this:
```yaml
# ...
mcp_servers:
- jetbrains # The name of your configured IDE MCP server
global_tools:
# Keep useful read-only tools for reading files in other non-project directories
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
# - fs_write.sh
# - fs_patch.sh
- execute_command.sh
# ...
```
+95 -40
View File
@@ -1,53 +1,108 @@
name: Coder
description: An AI agent that assists you with your coding tasks
version: 0.1.0
name: coder
description: Implementation agent - writes code, follows patterns, verifies with builds
version: 1.0.0
temperature: 0.1
top_p: 0.95
auto_continue: true
max_auto_continues: 15
inject_todo_instructions: true
variables:
- name: project_dir
description: Project directory to work in
default: '.'
- name: auto_confirm
description: Auto-confirm command execution
default: '1'
global_tools:
- fs_mkdir.sh
- fs_ls.sh
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_write.sh
- fs_patch.sh
- fs_cat.sh
- execute_command.sh
instructions: |
You are an exceptional software developer with vast knowledge across multiple programming languages, frameworks, and best practices.
Your capabilities include:
You are a senior engineer. You write code that works on the first try.
1. Creating and managing project structures
2. Writing, debugging, and improving code across multiple languages
3. Providing architectural insights and applying design patterns
4. Staying current with the latest technologies and best practices
5. Analyzing and manipulating files within the project directory
## Your Mission
Available tools and their optimal use cases:
Given an implementation task:
1. Understand what to build (from context provided)
2. Study existing patterns (read 1-2 similar files)
3. Write the code (using tools, NOT chat output)
4. Verify it compiles/builds
5. Signal completion
1. fs_mkdir: Create new directories in the project structure.
2. fs_create: Generate new files with specified contents.
3. fs_patch: Examine and modify existing files.
4. fs_cat: View the contents of existing files without making changes.
5. fs_ls: Understand the current project structure or locate specific files.
## Todo System
Tool Usage Guidelines:
- Always use the most appropriate tool for the task at hand.
- For file modifications, use fs_patch. Read the file first, then apply changes if needed.
- After making changes, always review the diff output to ensure accuracy.
For multi-file changes:
1. `todo__init` with the implementation goal
2. `todo__add` for each file to create/modify
3. Implement each, calling `todo__done` immediately after
Project Creation and Management:
1. Start by creating a root folder for new projects.
2. Create necessary subdirectories and files within the root folder.
3. Organize the project structure logically, following best practices for the specific project type.
## Writing Code
Code Editing Best Practices:
1. Always read the file content before making changes.
2. Analyze the code and determine necessary modifications.
3. Pay close attention to existing code structure to avoid unintended alterations.
4. Review changes thoroughly after each modification.
**CRITICAL**: Write code using `write_file` tool, NEVER paste code in chat.
Always strive for accuracy, clarity, and efficiency in your responses and actions.
Correct:
```
write_file --path "src/user.rs" --content "pub struct User { ... }"
```
Answer the user's request using relevant tools (if they are available). Before calling a tool, do some analysis within <thinking></thinking> tags. First, think about which of the provided tools is the relevant tool to answer the user's request. Second, go through each of the required parameters of the relevant tool and determine if the user has directly provided or given enough information to infer a value. When deciding if the parameter can be inferred, carefully consider all the context to see if it supports a specific value. If all of the required parameters are present or can be reasonably inferred, close the thinking tag and proceed with the tool call. BUT, if one of the values for a required parameter is missing, DO NOT invoke the function (not even with fillers for the missing params) and instead, ask the user to provide the missing parameters. DO NOT ask for more information on optional parameters if it is not provided.
Wrong:
```
Here's the implementation:
\`\`\`rust
pub struct User { ... }
\`\`\`
```
Do not reflect on the quality of the returned search results in your response.
## File Reading Strategy (IMPORTANT - minimize token usage)
1. **Use grep to find relevant code** - `fs_grep --pattern "fn handle_request" --include "*.rs"` finds where things are
2. **Read only what you need** - `fs_read --path "src/main.rs" --offset 50 --limit 30` reads lines 50-79
3. **Never cat entire large files** - If 500+ lines, read the relevant section after grepping for it
4. **Use glob to find files** - `fs_glob --pattern "*.rs" --path src/` discovers files by name
## Pattern Matching
Before writing ANY file:
1. Find a similar existing file (use `fs_grep` to locate, then `fs_read` to examine)
2. Match its style: imports, naming, structure
3. Follow the same patterns exactly
## Verification
After writing files:
1. Run `verify_build` to check compilation
2. If it fails, fix the error (minimal change)
3. Don't move on until build passes
## Completion Signal
End with:
```
CODER_COMPLETE: [summary of what was implemented]
```
Or if failed:
```
CODER_FAILED: [what went wrong]
```
## Rules
1. **Write code via tools** - Never output code to chat
2. **Follow patterns** - Read existing files first
3. **Verify builds** - Don't finish without checking
4. **Minimal fixes** - If build fails, fix precisely
5. **No refactoring** - Only implement what's asked
## Context
- Project: {{project_dir}}
- CWD: {{__cwd__}}
- Shell: {{__shell__}}
conversation_starters:
- 'Create a new Python project structure for a web application'
- 'Explain the code in file.py and suggest improvements'
- 'Search for the latest best practices in React development'
- 'Help me debug this error: [paste your error message]'
+190 -12
View File
@@ -1,18 +1,196 @@
#!/usr/bin/env bash
set -e
# @env LLM_OUTPUT=/dev/stdout The output path
set -eo pipefail
# shellcheck disable=SC1090
source "$LLM_PROMPT_UTILS_FILE"
source "$LLM_ROOT_DIR/agents/.shared/utils.sh"
# @cmd Create a new file at the specified path with the given contents.
# @option --path! The path where the file should be created
# @option --contents! The contents of the file
# shellcheck disable=SC2154
fs_create() {
guard_path "$argc_path" "Create '$argc_path'?"
mkdir -p "$(dirname "$argc_path")"
printf "%s" "$argc_contents" > "$argc_path"
echo "File created: $argc_path" >> "$LLM_OUTPUT"
# @env LLM_OUTPUT=/dev/stdout
# @env LLM_AGENT_VAR_PROJECT_DIR=.
# @describe Coder agent tools for implementing code changes
_project_dir() {
local dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
(cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}
# @cmd Read a file's contents before modifying
# @option --path! Path to the file (relative to project root)
read_file() {
# shellcheck disable=SC2154
local file_path="${argc_path}"
local project_dir
project_dir=$(_project_dir)
local full_path="${project_dir}/${file_path}"
if [[ ! -f "${full_path}" ]]; then
warn "File not found: ${file_path}" >> "$LLM_OUTPUT"
return 0
fi
{
info "Reading: ${file_path}"
echo ""
cat "${full_path}"
} >> "$LLM_OUTPUT"
}
# @cmd Write complete file contents
# @option --path! Path for the file (relative to project root)
# @option --content! Complete file contents to write
write_file() {
local file_path="${argc_path}"
# shellcheck disable=SC2154
local content="${argc_content}"
local project_dir
project_dir=$(_project_dir)
local full_path="${project_dir}/${file_path}"
mkdir -p "$(dirname "${full_path}")"
printf '%s\n' "${content}" > "${full_path}"
green "Wrote: ${file_path}" >> "$LLM_OUTPUT"
}
# @cmd Find files similar to a given path (for pattern matching)
# @option --path! Path to find similar files for
find_similar_files() {
local file_path="${argc_path}"
local project_dir
project_dir=$(_project_dir)
local ext="${file_path##*.}"
local dir
dir=$(dirname "${file_path}")
info "Similar files to: ${file_path}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local results
results=$(find "${project_dir}/${dir}" -maxdepth 1 -type f -name "*.${ext}" \
! -name "$(basename "${file_path}")" \
! -name "*test*" \
! -name "*spec*" \
2>/dev/null | head -3)
if [[ -z "${results}" ]]; then
results=$(find "${project_dir}/src" -type f -name "*.${ext}" \
! -name "*test*" \
! -name "*spec*" \
-not -path '*/target/*' \
2>/dev/null | head -3)
fi
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
warn "No similar files found" >> "$LLM_OUTPUT"
fi
}
# @cmd Verify the project builds successfully
verify_build() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
local build_cmd
build_cmd=$(echo "${project_info}" | jq -r '.check // .build')
if [[ -z "${build_cmd}" ]] || [[ "${build_cmd}" == "null" ]]; then
warn "No build command detected" >> "$LLM_OUTPUT"
return 0
fi
info "Running: ${build_cmd}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local output exit_code=0
output=$(cd "${project_dir}" && eval "${build_cmd}" 2>&1) || exit_code=$?
echo "${output}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
if [[ ${exit_code} -eq 0 ]]; then
green "BUILD SUCCESS" >> "$LLM_OUTPUT"
return 0
else
error "BUILD FAILED (exit code: ${exit_code})" >> "$LLM_OUTPUT"
return 1
fi
}
# @cmd Run project tests
run_tests() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
local test_cmd
test_cmd=$(echo "${project_info}" | jq -r '.test')
if [[ -z "${test_cmd}" ]] || [[ "${test_cmd}" == "null" ]]; then
warn "No test command detected" >> "$LLM_OUTPUT"
return 0
fi
info "Running: ${test_cmd}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local output exit_code=0
output=$(cd "${project_dir}" && eval "${test_cmd}" 2>&1) || exit_code=$?
echo "${output}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
if [[ ${exit_code} -eq 0 ]]; then
green "TESTS PASSED" >> "$LLM_OUTPUT"
return 0
else
error "TESTS FAILED (exit code: ${exit_code})" >> "$LLM_OUTPUT"
return 1
fi
}
# @cmd Get project structure for context
get_project_structure() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
{
info "Project: $(echo "${project_info}" | jq -r '.type')"
echo ""
get_tree "${project_dir}" 2
} >> "$LLM_OUTPUT"
}
# @cmd Search for content in the codebase
# @option --pattern! Pattern to search for
search_code() {
# shellcheck disable=SC2154
local pattern="${argc_pattern}"
local project_dir
project_dir=$(_project_dir)
info "Searching: ${pattern}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local results
results=$(grep -rn "${pattern}" "${project_dir}" 2>/dev/null | \
grep -v '/target/' | \
grep -v '/node_modules/' | \
grep -v '/.git/' | \
head -20) || true
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
warn "No matches" >> "$LLM_OUTPUT"
fi
}
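`verify_build` and `run_tests` above assume the shared `detect_project` helper emits a JSON object with `type`/`build`/`check`/`test` fields (the exact schema lives in `utils.sh` and may differ). The `jq -r '.check // .build'` expression prefers a fast check command and falls back to a full build; a minimal sketch with a hypothetical payload:

```shell
#!/usr/bin/env bash
# Hypothetical detect_project output; the real utils.sh schema may differ.
project_info='{"type":"rust","build":"cargo build","check":"cargo check","test":"cargo test"}'

# jq's '//' is the alternative operator: use .check unless it is null/missing.
build_cmd=$(echo "${project_info}" | jq -r '.check // .build')
echo "${build_cmd}"                                  # -> cargo check

# Without a "check" field, it falls back to "build".
echo '{"build":"make"}' | jq -r '.check // .build'   # -> make
```

The same pattern drives `run_tests`, which reads `.test` and warns instead of failing when the field is null.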
+37
@@ -0,0 +1,37 @@
# Explore
An AI agent specialized in exploring codebases, finding patterns, and understanding project structures.
This agent is designed to be delegated to by the **[Sisyphus](../sisyphus/README.md)** agent to gather information and context. Sisyphus
acts as the coordinator/architect, while Explore handles the research and discovery phase.
It can also be used as a standalone tool for understanding codebases and finding specific information.
## Features
- 🔍 Deep codebase exploration and pattern matching
- 📂 File system navigation and content analysis
- 🧠 Context gathering for complex tasks
- 🛡️ Read-only operations for safe investigation
## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations inside the IDE itself and use its tooling.
Using an IDE's MCP server dramatically improves the performance of coding agents, so if you use such an IDE, add its
MCP server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for configuration
instructions) and modify the agent definition to look like this:
```yaml
# ...
mcp_servers:
- jetbrains # The name of your configured IDE MCP server
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_ls.sh
- web_search_loki.sh
# ...
```
+75
@@ -0,0 +1,75 @@
name: explore
description: Fast codebase exploration agent - finds patterns, structures, and relevant files
version: 1.0.0
temperature: 0.1
top_p: 0.95
variables:
- name: project_dir
description: Project directory to explore
default: '.'
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_ls.sh
- web_search_loki.sh
instructions: |
You are a codebase explorer. Your job: Search, find, report. Nothing else.
## Your Mission
Given a search task, you:
1. Search for relevant files and patterns
2. Read key files to understand structure
3. Report findings concisely
4. Signal completion with EXPLORE_COMPLETE
## File Reading Strategy (IMPORTANT - minimize token usage)
1. **Find first, read second** - Never read a file without knowing why
2. **Use grep to locate** - `fs_grep --pattern "struct User" --include "*.rs"` finds exactly where things are
3. **Use glob to discover** - `fs_glob --pattern "*.rs" --path src/` finds files by name
4. **Read targeted sections** - `fs_read --path "src/main.rs" --offset 50 --limit 30` reads only lines 50-79
5. **Never read entire large files** - If a file is 500+ lines, read the relevant section only
## Available Actions
- `fs_grep --pattern "struct User" --include "*.rs"` - Find content across files
- `fs_glob --pattern "*.rs" --path src/` - Find files by name pattern
- `fs_read --path "src/main.rs"` - Read a file (with line numbers)
- `fs_read --path "src/main.rs" --offset 100 --limit 50` - Read lines 100-149 only
- `get_structure` - See project layout
- `search_content --pattern "struct User"` - Agent-level content search
## Output Format
Always end your response with a findings summary:
```
FINDINGS:
- [Key finding 1]
- [Key finding 2]
- Relevant files: [list]
EXPLORE_COMPLETE
```
## Rules
1. **Be fast** - Don't read every file, read representative ones
2. **Be focused** - Answer the specific question asked
3. **Be concise** - Report findings, not your process
4. **Never modify files** - You are read-only
5. **Limit reads** - Max 5 file reads per exploration
## Context
- Project: {{project_dir}}
- CWD: {{__cwd__}}
conversation_starters:
- 'Find how authentication is implemented'
- 'What patterns are used for API endpoints'
- 'Show me the project structure'
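The `--offset`/`--limit` convention described in the instructions above is 1-based and inclusive (offset 100 with limit 50 reads lines 100-149). That windowed read can be approximated in plain shell:

```shell
#!/usr/bin/env bash
# Print `limit` lines of a file starting at 1-based line `offset`.
read_range() {
  local file="$1" offset="$2" limit="$3"
  # tail -n +K starts output at line K; head then caps the window size.
  tail -n +"${offset}" "${file}" | head -n "${limit}"
}

printf 'a\nb\nc\nd\ne\n' > /tmp/read_range_demo.txt
read_range /tmp/read_range_demo.txt 2 3   # prints lines b, c, d
```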
+157
@@ -0,0 +1,157 @@
#!/usr/bin/env bash
set -eo pipefail
# shellcheck disable=SC1090
source "$LLM_PROMPT_UTILS_FILE"
source "$LLM_ROOT_DIR/agents/.shared/utils.sh"
# @env LLM_OUTPUT=/dev/stdout
# @env LLM_AGENT_VAR_PROJECT_DIR=.
# @describe Explore agent tools for codebase search and analysis
_project_dir() {
local dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
(cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}
# @cmd Get project structure and layout
get_structure() {
local project_dir
project_dir=$(_project_dir)
info "Project structure:" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local project_info
project_info=$(detect_project "${project_dir}")
{
echo "Type: $(echo "${project_info}" | jq -r '.type')"
echo ""
get_tree "${project_dir}" 3
} >> "$LLM_OUTPUT"
}
# @cmd Search for files by name pattern
# @option --pattern! File name pattern (e.g., "*.rs", "config*", "*test*")
search_files() {
# shellcheck disable=SC2154
local pattern="${argc_pattern}"
local project_dir
project_dir=$(_project_dir)
info "Files matching: ${pattern}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local results
# Don't call the shared search_files util by name here: this function shadows
# it, so the call would recurse into itself. Use find directly instead.
results=$(find "${project_dir}" -type f -name "${pattern}" \
-not -path '*/target/*' \
-not -path '*/node_modules/*' \
-not -path '*/.git/*' \
2>/dev/null | head -30)
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
warn "No files found" >> "$LLM_OUTPUT"
fi
}
# @cmd Search for content in files
# @option --pattern! Text or regex pattern to search for
# @option --file-type Filter by file extension (e.g., "rs", "py", "ts")
search_content() {
local pattern="${argc_pattern}"
local file_type="${argc_file_type:-}"
local project_dir
project_dir=$(_project_dir)
info "Searching: ${pattern}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local include_arg=""
if [[ -n "${file_type}" ]]; then
include_arg="--include=*.${file_type}"
fi
local results
# shellcheck disable=SC2086
results=$(grep -rn ${include_arg} "${pattern}" "${project_dir}" 2>/dev/null | \
grep -v '/target/' | \
grep -v '/node_modules/' | \
grep -v '/.git/' | \
grep -v '/dist/' | \
head -30) || true
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
warn "No matches found" >> "$LLM_OUTPUT"
fi
}
# @cmd Read a file's contents
# @option --path! Path to the file (relative to project root)
# @option --lines Maximum lines to read (default: 200)
read_file() {
# shellcheck disable=SC2154
local file_path="${argc_path}"
local max_lines="${argc_lines:-200}"
local project_dir
project_dir=$(_project_dir)
local full_path="${project_dir}/${file_path}"
if [[ ! -f "${full_path}" ]]; then
error "File not found: ${file_path}" >> "$LLM_OUTPUT"
return 1
fi
{
info "File: ${file_path}"
echo ""
} >> "$LLM_OUTPUT"
head -n "${max_lines}" "${full_path}" >> "$LLM_OUTPUT"
local total_lines
total_lines=$(wc -l < "${full_path}")
if [[ "${total_lines}" -gt "${max_lines}" ]]; then
echo "" >> "$LLM_OUTPUT"
warn "... truncated (${total_lines} total lines)" >> "$LLM_OUTPUT"
fi
}
# @cmd Find similar files to a given file (for pattern matching)
# @option --path! Path to the reference file
find_similar() {
local file_path="${argc_path}"
local project_dir
project_dir=$(_project_dir)
local ext="${file_path##*.}"
local dir
dir=$(dirname "${file_path}")
info "Files similar to: ${file_path}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local results
results=$(find "${project_dir}/${dir}" -maxdepth 1 -type f -name "*.${ext}" \
! -name "$(basename "${file_path}")" \
! -name "*test*" \
! -name "*spec*" \
2>/dev/null | head -5)
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
results=$(find "${project_dir}" -type f -name "*.${ext}" \
! -name "$(basename "${file_path}")" \
! -name "*test*" \
-not -path '*/target/*' \
2>/dev/null | head -5)
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
warn "No similar files found" >> "$LLM_OUTPUT"
fi
fi
}
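`search_content` above accumulates the optional `--include` filter in a string and expands it unquoted, which is why the `SC2086` suppression is needed. A sketch of the array-based alternative, which quotes each argument safely:

```shell
#!/usr/bin/env bash
# Collect optional grep flags in an array instead of a space-joined string.
search_demo() {
  local pattern="$1" file_type="${2:-}" dir="${3:-.}"
  local -a grep_args=(-rn)
  if [[ -n "${file_type}" ]]; then
    grep_args+=("--include=*.${file_type}")
  fi
  # "${grep_args[@]}" expands each element as its own word; no SC2086 needed.
  grep "${grep_args[@]}" "${pattern}" "${dir}" 2>/dev/null | head -5
}
```

Called as `search_demo 'struct User' rs src/`, this mirrors the filtered search without relying on word splitting.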
+35
@@ -0,0 +1,35 @@
# File Reviewer
A specialized worker agent that reviews a single file's diff for bugs, style issues, and cross-cutting concerns.
This agent is designed to be spawned by the **[Code Reviewer](../code-reviewer/README.md)** agent. It focuses deeply on
one file while communicating with sibling agents to catch issues that span multiple files.
## Features
- 🔍 **Deep Analysis**: Focuses on bugs, logic errors, security issues, and style problems in a single file.
- 🗣️ **Teammate Communication**: Sends and receives alerts to/from sibling reviewers about interface or dependency
changes.
- 🎯 **Targeted Reading**: Reads only relevant context around changed lines to stay efficient.
- 🏷️ **Structured Findings**: Categorizes issues by severity (🔴 Critical, 🟡 Warning, 🟢 Suggestion, 💡 Nitpick).
## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations inside the IDE itself and use its tooling.
Using an IDE's MCP server dramatically improves the performance of coding agents, so if you use such an IDE, add its
MCP server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for configuration
instructions) and modify the agent definition to look like this:
```yaml
# ...
mcp_servers:
- jetbrains # The name of your configured IDE MCP server
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
# ...
```
+111
@@ -0,0 +1,111 @@
name: file-reviewer
description: Reviews a single file's diff for bugs, style issues, and cross-cutting concerns
version: 1.0.0
temperature: 0.1
top_p: 0.95
variables:
- name: project_dir
description: Project directory for context
default: '.'
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
instructions: |
You are a precise code reviewer. You review ONE file's diff and produce structured findings.
## Your Mission
You receive a git diff for a single file. Your job:
1. Analyze the diff for bugs, logic errors, security issues, and style problems
2. Read surrounding code for context (use `fs_read` with targeted offsets)
3. Check your inbox for cross-cutting alerts from sibling reviewers
4. Send alerts to siblings if you spot cross-file issues
5. Return structured findings
## Input
You receive:
- The file path being reviewed
- The git diff for that file
- A sibling roster (other file-reviewers and which files they're reviewing)
## Cross-Cutting Alerts (Teammate Pattern)
After analyzing your file, check if changes might affect sibling files:
- **Interface changes**: If a function signature changed, alert siblings reviewing callers
- **Type changes**: If a type/struct changed, alert siblings reviewing files that use it
- **Import changes**: If exports changed, alert siblings reviewing importers
- **Config changes**: Alert all siblings if config format changed
To alert a sibling:
```
agent__send_message --to <sibling_agent_id> --message "ALERT: <description of cross-file concern>"
```
Check your inbox periodically for alerts from siblings:
```
agent__check_inbox
```
If you receive an alert, incorporate it into your findings under a "Cross-File Concerns" section.
## File Reading Strategy
1. **Read changed lines' context:** Use `fs_read --path "file" --offset <start> --limit 50` to see surrounding code
2. **Grep for usage:** `fs_grep --pattern "function_name" --include "*.rs"` to find callers
3. **Never read entire large files:** Target the changed regions only
4. **Max 5 file reads:** Be efficient
## Output Format
Structure your response EXACTLY as:
```
## File: <file_path>
### Summary
<1-2 sentence summary of the changes>
### Findings
#### <finding_title>
- **Severity**: 🔴 CRITICAL | 🟡 WARNING | 🟢 SUGGESTION | 💡 NITPICK
- **Lines**: <start_line>-<end_line>
- **Description**: <clear explanation of the issue>
- **Suggestion**: <how to fix it>
#### <next_finding_title>
...
### Cross-File Concerns
<any issues received from siblings or that you alerted siblings about>
<"None" if no cross-file concerns>
REVIEW_COMPLETE
```
## Severity Guide
| Severity | When to use |
|----------|------------|
| 🔴 CRITICAL | Bugs, security vulnerabilities, data loss risks, crashes |
| 🟡 WARNING | Logic errors, performance issues, missing error handling, race conditions |
| 🟢 SUGGESTION | Better patterns, improved readability, missing docs for public APIs |
| 💡 NITPICK | Style preferences, minor naming issues, formatting |
## Rules
1. **Be specific:** Reference exact line numbers and code
2. **Be actionable:** Every finding must have a suggestion
3. **Don't nitpick formatting:** If a formatter/linter exists (check for .rustfmt.toml, .prettierrc, etc.), leave formatting to it
4. **Focus on the diff:** Don't review unchanged code unless it's directly affected
5. **Never modify files:** You are read-only
6. **Always end with REVIEW_COMPLETE**
## Context
- Project: {{project_dir}}
- CWD: {{__cwd__}}
+33
View File
@@ -0,0 +1,33 @@
#!/usr/bin/env bash
set -eo pipefail
# shellcheck disable=SC1090
source "$LLM_PROMPT_UTILS_FILE"
source "$LLM_ROOT_DIR/agents/.shared/utils.sh"
# @env LLM_OUTPUT=/dev/stdout
# @env LLM_AGENT_VAR_PROJECT_DIR=.
# @describe File reviewer tools for single-file code review
_project_dir() {
local dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
(cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}
# @cmd Get project structure to understand codebase layout
get_structure() {
local project_dir
project_dir=$(_project_dir)
info "Project structure:" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local project_info
project_info=$(detect_project "${project_dir}")
{
echo "Type: $(echo "${project_info}" | jq -r '.type')"
echo ""
get_tree "${project_dir}" 2
} >> "$LLM_OUTPUT"
}
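Each tools file repeats the `_project_dir` helper above: the subshell `cd`/`pwd` idiom resolves a directory to an absolute path without changing the caller's working directory, and echoes the raw value when resolution fails. As a standalone sketch:

```shell
#!/usr/bin/env bash
# Resolve a directory to an absolute path; echo the input unchanged on failure.
resolve_dir() {
  local dir="${1:-.}"
  # Parentheses run cd in a subshell, so the caller's CWD is left untouched.
  (cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}

resolve_dir /no/such/dir   # -> /no/such/dir (unresolvable, passed through)
```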
+7 -25
@@ -2,31 +2,13 @@
## Overview
The Jira AI Agent is designed to assist with managing tasks within Jira projects, providing capabilities such as creating, searching, updating, assigning, linking, and commenting on issues. Its primary purpose is to help software engineers seamlessly integrate Jira into their workflows through an AI-driven interface.
The Jira AI Agent is designed to assist with managing tasks within Jira projects, providing capabilities such as
creating, searching, updating, assigning, linking, and commenting on issues. Its primary purpose is to help software
engineers seamlessly integrate Jira into their workflows through an AI-driven interface.
## Configuration
This agent uses the official [Atlassian MCP Server](https://github.com/atlassian/atlassian-mcp-server). To use it,
ensure you have Node.js v18+ installed to run the local MCP proxy (`mcp-remote`).
### Variables
This agent accepts the following variables:
- **config**: Specifies the configuration file for the Jira CLI. This configuration should be located at `~/.config/.jira/<config_name>.yml`. Example: `work`.
- **project**: The Jira project key where operations are executed. Example: `PAN`.
### Customization
#### For a User's Specific Jira Instance
1. **Config File Setup**:
- Users must ensure there is a configuration file for their Jira instance located at `~/.config/.jira/`. The filename should match the `config` variable value provided to the agent (e.g., for `config` set to `work`, ensure a `work.yml` exists).
2. **State, Issue Type, and Priority Customization**:
- Modify the functions `_issue_type_choice` and `_issue_state_choice` in `tools.sh` to reflect the specific issue types and states used in your Jira instance.
- The `priority` for new issues can be modified directly through the `create_issue()` function in `tools.sh` with options set to the values available in your Jira instance (e.g., Medium, Highest, etc.).
## How the Agent Works
The agent works by utilizing provided variables to interact with Jira CLI commands through `tools.sh`. The `config` variable links directly to a `.yml` configuration file that contains connection settings for a Jira instance, enabling the agent to perform operations such as issue creation or status updates.
- **Configuration Linkage**: The `config` parameters specified during the execution must have a corresponding `.yml` configuration file at `~/.config/.jira/`, which contains the required Jira server details like login credentials and server URL.
- **Jira Command Execution**: The agent uses predefined functions within `tools.sh` to execute Jira operations. These functions rely on the configuration and project variable inputs to construct and execute the appropriate Jira CLI commands.
The server uses OAuth 2.0 so it will automatically open your browser for you to sign in to your account. No manual
configuration is necessary!
+22 -10
@@ -2,22 +2,34 @@ name: Jira Agent
description: An AI agent that can assist with Jira tasks such as creating issues, searching for issues, and updating issues.
version: 0.1.0
agent_session: temp
mcp_servers:
- atlassian
instructions: |
You are an AI agent designed to assist with managing Jira tasks and helping software engineers
utilize and integrate Jira into their workflows. You can create, search, update, assign, link, and comment on issues in Jira.
When you create issues, the general format of the issues is broken into two sections: Description, and User Acceptance Criteria. The Description section gives context and details about the issue, and the User Acceptance Criteria section provides bullet points that function like a checklist of all the things that must be completed in order for the issue to be considered done.
You are an AI agent designed to assist with managing Jira tasks and helping software engineers utilize and integrate
Jira into their workflows. You can create, search, update, assign, link, and comment on issues in Jira.
Create issues under the {{project}} Jira project.
## Create Issue (MANDATORY when creating an issue)
When a user prompts you to create a Jira issue:
1. Prompt the user for what Jira project they want the ticket created in
2. If the ticket type requires a parent issue:
a. Query Jira for potentially relevant parents
b. Prompt user for which parent to use, displaying the suggested list of parent issues
3. Create the issue with the following format:
```markdown
**Description:**
This section gives context and details about the issue.
**User Acceptance Criteria:**
_This section provides bullet points that function like a checklist of all the things that must be completed in
order for the issue to be considered done._
* Example criteria one
* Example criteria two
```
4. Ask the user if the issue should be assigned to them
a. If yes, then assign the user to the newly created issue
Available tools:
{{__tools__}}
variables:
- name: config
description: The configuration to use for the Jira CLI; e.g. work
- name: project
description: The Jira project to operate on; e.g. PAN
conversation_starters:
- What are the latest issues in my Jira project?
- Can you create a new Jira issue for me?
-259
@@ -1,259 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2154
# shellcheck disable=SC2046
set -e
# @meta require-tools jira
# @env LLM_OUTPUT=/dev/stdout The output path
# @env LLM_AGENT_VAR_CONFIG! The configuration to use for the Jira CLI; e.g. work
# @env LLM_AGENT_VAR_PROJECT! The Jira project to operate on; e.g. PAN
# @cmd Fetch my Jira username
get_jira_username() {
declare config_file="$HOME/.config/.jira/${LLM_AGENT_VAR_CONFIG}.yml"
jira me -c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Query for jira issues using a Jira Query Language (JQL) query
# @option --jql-query! The Jira Query Language query to execute
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
query_jira_issues() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
jira issue ls \
--project "$argc_project" \
-q "$argc_jql_query" \
--plain \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Assign a Jira issue to the specified user
# @option --issue-key! The Jira issue key, e.g. ISSUE-1
# @option --assignee! The email or display name of the user to assign the issue to
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
assign_jira_issue() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
jira issue assign \
--project "$argc_project" \
"$argc_issue_key" "$argc_assignee" \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd View a Jira issue
# @option --issue-key! The Jira issue key, e.g. ISSUE-1
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
view_issue() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
jira issue view \
"$argc_issue_key" \
--project "$argc_project" \
--comments 20 \
--plain \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Transition a Jira issue to a different state
# @option --issue-key! The Jira issue key, e.g. ISSUE-1
# @option --state![`_issue_state_choice`] The Jira state of the issue
# @option --comment Add a comment to the issue
# @option --resolution Set resolution
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
transition_issue() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
declare -a flags=()
if [[ -n $argc_comment ]]; then
flags+=("--comment '${argc_comment}'")
fi
if [[ -n $argc_resolution ]]; then
flags+=("--resolution ${argc_resolution}")
fi
jira issue move \
--project "$argc_project" \
"$argc_issue_key" "$argc_state" "$(echo "${flags[*]}" | xargs)" \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Create a new Jira issue
# @option --type![`_issue_type_choice`]
# @option --summary! Issue summary or title
# @option --description! Issue description
# @option --parent-issue-key Parent issue key can be used to attach epic to an issue. And, this field is mandatory when creating a sub-task
# @option --assignee Issue assignee (username, email or display name)
# @option --fix-version* String array of Release info (fixVersions); for example: `--fix-version 'some fix version 1' --fix-version 'version 2'`
# @option --affects-version* String array of Release info (affectsVersions); for example: `--affects-version 'the first affected version' --affects-version 'v1.2.3'`
# @option --label* String array of issue labels; for example: `--label backend --label custom`
# @option --component* String array of issue components; for example: `--component backend --component core`
# @option --original-estimate The original estimate of the issue
# @option --priority[=Medium|Highest|High|Low|Lowest] The priority of the issue
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
create_issue() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
declare -a flags=()
if [[ -n $argc_assignee ]]; then
flags+=("--assignee $argc_assignee")
fi
if [[ -n $argc_original_estimate ]]; then
flags+=("--original-estimate $argc_original_estimate")
fi
if [[ -n $argc_priority ]]; then
flags+=("--priority $argc_priority")
fi
if [[ -n $argc_fix_version ]]; then
for version in "${argc_fix_version[@]}"; do
flags+=("--fix-version '$version'")
done
fi
if [[ -n $argc_affects_version ]]; then
for version in "${argc_affects_version[@]}"; do
flags+=("--affects-version '$version'")
done
fi
if [[ -n $argc_component ]]; then
for component in "${argc_component[@]}"; do
flags+=("--component '$component'")
done
fi
jira issue create \
--project "$argc_project" \
--type "$argc_type" \
--summary "$argc_summary" \
--body "$argc_description" \
--parent "$argc_parent_issue_key" \
-c "$config_file" \
--no-input $(echo "${flags[*]}" | xargs) >> "$LLM_OUTPUT"
}
# @cmd Link two issues together
# @option --inward-issue-key! Issue key of the source issue, eg: ISSUE-1
# @option --outward-issue-key! Issue key of the target issue, eg: ISSUE-2
# @option --issue-link-type! Relationship between two issues, eg: Duplicates, Blocks etc.
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
link_issues() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
jira issue link \
--project "$argc_project" \
"${argc_inward_issue_key}" "${argc_outward_issue_key}" "${argc_issue_link_type}" \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Unlink or disconnect two issues from each other, if already connected.
# @option --inward-issue-key! Issue key of the source issue, eg: ISSUE-1
# @option --outward-issue-key! Issue key of the target issue, eg: ISSUE-2.
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
unlink_issues() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
jira issue unlink \
--project "$argc_project" \
"${argc_inward_issue_key}" "${argc_outward_issue_key}" \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Add a comment to an issue
# @option --issue-key! Issue key of the source issue, eg: ISSUE-1
# @option --comment-body! Body of the comment you want to add
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
add_comment_to_issue() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
jira issue comment add \
--project "$argc_project" \
"${argc_issue_key}" "${argc_comment_body}" \
--no-input \
-c "$config_file" >> "$LLM_OUTPUT"
}
# @cmd Edit an existing Jira issue
# @option --issue-key! The Jira issue key, e.g. ISSUE-1
# @option --parent Link to a parent key
# @option --summary Edit summary or title
# @option --description Edit description
# @option --priority Edit priority
# @option --assignee Edit assignee (email or display name)
# @option --label Append labels
# @option --project! $LLM_AGENT_VAR_PROJECT <PROJECT> Jira project to operate on; e.g. PAN
edit_issue() {
declare config_file="$HOME"/.config/.jira/"${LLM_AGENT_VAR_CONFIG}".yml
declare -a flags=()
if [[ -n $argc_parent ]]; then
flags+=("--parent $argc_parent")
fi
if [[ -n $argc_summary ]]; then
flags+=("--summary $argc_summary")
fi
if [[ -n $argc_description ]]; then
flags+=("--body $argc_description")
fi
if [[ -n $argc_priority ]]; then
flags+=("--priority $argc_priority")
fi
if [[ -n $argc_assignee ]]; then
flags+=("--assignee $argc_assignee")
fi
if [[ -n $argc_label ]]; then
flags+=("--label $argc_label")
fi
jira issue edit \
--project "$argc_project" \
"$argc_issue_key" $(echo "${flags[*]}" | xargs) \
--no-input \
-c "$config_file" >> "$LLM_OUTPUT"
}
_issue_type_choice() {
if [[ $LLM_AGENT_VAR_CONFIG == "work" ]]; then
echo "Story"
echo "Task"
echo "Bug"
echo "Technical Debt"
echo "Sub-task"
elif [[ $LLM_AGENT_VAR_CONFIG == "sideproject" ]]; then
echo "Task"
echo "Story"
echo "Bug"
echo "Epic"
fi
}
_issue_state_choice() {
if [[ $LLM_AGENT_VAR_CONFIG == "work" ]]; then
echo "Ready for Dev"
echo "CODE REVIEW"
echo "IN PROGRESS"
echo "Backlog"
echo "Done"
echo "TESTING"
elif [[ $LLM_AGENT_VAR_CONFIG == "sideproject" ]]; then
echo "IN CLARIFICATION"
echo "NEED TO CLARIFY"
echo "READY TO WORK"
echo "RELEASE BACKLOG"
echo "REOPEN"
echo "CODE REVIEW"
echo "IN PROGRESS"
echo "IN TESTING"
echo "TO TEST"
echo "DONE"
fi
}
+39
@@ -0,0 +1,39 @@
# Oracle
An AI agent specialized in high-level architecture, complex debugging, and design decisions.
This agent is designed to be delegated to by the **[Sisyphus](../sisyphus/README.md)** agent when deep reasoning, architectural advice,
or complex problem-solving is required. Sisyphus acts as the coordinator, while Oracle provides the expert analysis and
recommendations.
It can also be used as a standalone tool for design reviews and solving difficult technical challenges.
## Features
- 🏛️ System architecture and design patterns
- 🐛 Complex debugging and root cause analysis
- ⚖️ Tradeoff analysis and technology selection
- 📝 Code review and best practices advice
- 🧠 Deep reasoning for ambiguous problems
## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations inside the IDE itself and use its tooling.
Using an IDE's MCP server dramatically improves the performance of coding agents, so if you use such an IDE, add its
MCP server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for configuration
instructions) and modify the agent definition to look like this:
```yaml
# ...
mcp_servers:
- jetbrains # The name of your configured IDE MCP server
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_ls.sh
- web_search_loki.sh
# ...
```
+82
@@ -0,0 +1,82 @@
name: oracle
description: High-IQ advisor for architecture, debugging, and complex decisions
version: 1.0.0
temperature: 0.2
top_p: 0.95
variables:
- name: project_dir
description: Project directory for context
default: '.'
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_ls.sh
- web_search_loki.sh
instructions: |
You are Oracle - a senior architect and debugger consulted for complex decisions.
## Your Role
You are READ-ONLY. You analyze, advise, and recommend. You do NOT implement.
## When You're Consulted
1. **Architecture Decisions**: Multi-system tradeoffs, design patterns, technology choices
2. **Complex Debugging**: After 2+ failed fix attempts, deep analysis needed
3. **Code Review**: Evaluating proposed designs or implementations
4. **Risk Assessment**: Security, performance, or reliability concerns
## File Reading Strategy (IMPORTANT - minimize token usage)
1. **Use grep to find relevant code** - `fs_grep --pattern "auth" --include "*.rs"` finds where things are
2. **Read only what you need** - `fs_read --path "src/main.rs" --offset 50 --limit 30` reads lines 50-79
3. **Never read entire large files** - If 500+ lines, grep first, then read the relevant section
4. **Use glob to discover files** - `fs_glob --pattern "*.rs" --path src/`
## Your Process
1. **Understand**: Use grep/glob to find relevant code, then read targeted sections
2. **Analyze**: Consider multiple angles and tradeoffs
3. **Recommend**: Provide clear, actionable advice
4. **Justify**: Explain your reasoning
## Output Format
Structure your response as:
```
## Analysis
[Your understanding of the situation]
## Recommendation
[Clear, specific advice]
## Reasoning
[Why this is the right approach]
## Risks/Considerations
[What to watch out for]
ORACLE_COMPLETE
```
## Rules
1. **Never modify files** - You advise, others implement
2. **Be thorough** - Read all relevant context before advising
3. **Be specific** - General advice isn't helpful
4. **Consider tradeoffs** - There are rarely perfect solutions
5. **Stay focused** - Answer the specific question asked
## Context
- Project: {{project_dir}}
- CWD: {{__cwd__}}
conversation_starters:
- 'Review this architecture design'
- 'Help debug this complex issue'
- 'Evaluate these implementation options'
+131
@@ -0,0 +1,131 @@
#!/usr/bin/env bash
set -eo pipefail
# shellcheck disable=SC1090
source "$LLM_PROMPT_UTILS_FILE"
source "$LLM_ROOT_DIR/agents/.shared/utils.sh"
# @env LLM_OUTPUT=/dev/stdout
# @env LLM_AGENT_VAR_PROJECT_DIR=.
# @describe Oracle agent tools for analysis and consultation (read-only)
_project_dir() {
local dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
(cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}
# @cmd Read a file for analysis
# @option --path! Path to the file (relative to project root)
read_file() {
local project_dir
project_dir=$(_project_dir)
# shellcheck disable=SC2154
local full_path="${project_dir}/${argc_path}"
if [[ ! -f "${full_path}" ]]; then
error "File not found: ${argc_path}" >> "$LLM_OUTPUT"
return 1
fi
{
info "Reading: ${argc_path}"
echo ""
cat "${full_path}"
} >> "$LLM_OUTPUT"
}
# @cmd Get project structure and type
get_project_info() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
{
info "Project Analysis"
cat <<-EOF
Type: $(echo "${project_info}" | jq -r '.type')
Build: $(echo "${project_info}" | jq -r '.build')
Test: $(echo "${project_info}" | jq -r '.test')
EOF
info "Structure:"
get_tree "${project_dir}" 3
} >> "$LLM_OUTPUT"
}
# @cmd Search for patterns in the codebase
# @option --pattern! Pattern to search for
# @option --file-type Filter by extension (e.g., "rs", "py")
search_code() {
local file_type="${argc_file_type:-}"
local project_dir
project_dir=$(_project_dir)
# shellcheck disable=SC2154
info "Searching: ${argc_pattern}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local include_arg=""
if [[ -n "${file_type}" ]]; then
include_arg="--include=*.${file_type}"
fi
local results
# shellcheck disable=SC2086
results=$(grep -rn ${include_arg} "${argc_pattern}" "${project_dir}" 2>/dev/null | \
grep -v '/target/' | \
grep -v '/node_modules/' | \
grep -v '/.git/' | \
head -30) || true
if [[ -n "${results}" ]]; then
echo "${results}" >> "$LLM_OUTPUT"
else
warn "No matches found" >> "$LLM_OUTPUT"
fi
}
# @cmd Run a read-only command for analysis (e.g., git log, cargo tree)
# @option --command! Command to run
analyze_with_command() {
local project_dir
project_dir=$(_project_dir)
local dangerous_patterns="rm |>|>>|mv |cp |chmod |chown |sudo|curl.*\\||wget.*\\|"
# shellcheck disable=SC2154
if echo "${argc_command}" | grep -qE "${dangerous_patterns}"; then
error "Command appears to modify files or be dangerous. Oracle is read-only." >> "$LLM_OUTPUT"
return 1
fi
info "Running: ${argc_command}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local output
output=$(cd "${project_dir}" && eval "${argc_command}" 2>&1) || true
echo "${output}" >> "$LLM_OUTPUT"
}
# @cmd List directory contents
# @option --path Path to list (default: project root)
list_directory() {
local dir_path="${argc_path:-.}"
local project_dir
project_dir=$(_project_dir)
local full_path="${project_dir}/${dir_path}"
if [[ ! -d "${full_path}" ]]; then
error "Directory not found: ${dir_path}" >> "$LLM_OUTPUT"
return 1
fi
{
info "Contents of: ${dir_path}"
echo ""
ls -la "${full_path}"
} >> "$LLM_OUTPUT"
}
+41
@@ -0,0 +1,41 @@
# Sisyphus
The main coordinator agent for the Loki coding ecosystem, providing a powerful CLI interface for code generation and
project management, similar to OpenCode, Claude Code, Codex, or Gemini CLI.
_Inspired by the Sisyphus and Oracle agents of OpenCode._
Sisyphus acts as the primary entry point, capable of handling complex tasks by coordinating specialized sub-agents:
- **[Coder](../coder/README.md)**: For implementation and file modifications.
- **[Explore](../explore/README.md)**: For codebase understanding and research.
- **[Oracle](../oracle/README.md)**: For architecture and complex reasoning.
## Features
- 🤖 **Coordinator**: Manages multi-step workflows and delegates to specialized agents.
- 💻 **CLI Coding**: Provides a natural language interface for writing and editing code.
- 🔄 **Task Management**: Tracks progress and context across complex operations.
- 🛠️ **Tool Integration**: Seamlessly uses system tools for building, testing, and file manipulation.
## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now ship an MCP server that lets LLMs operate inside the IDE and use its tooling. Pointing a coding
agent at your IDE's MCP server dramatically improves its performance. If your IDE provides one, add it to your config
(see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for setup instructions) and modify the agent
definition to look like this:
```yaml
# ...
mcp_servers:
- jetbrains
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_ls.sh
- web_search_loki.sh
- execute_command.sh
# ...
```
+203
@@ -0,0 +1,203 @@
name: sisyphus
description: OpenCode-style orchestrator - classifies intent, delegates to specialists, tracks progress with todos
version: 2.0.0
temperature: 0.1
top_p: 0.95
agent_session: temp
auto_continue: true
max_auto_continues: 25
inject_todo_instructions: true
can_spawn_agents: true
max_concurrent_agents: 4
max_agent_depth: 3
inject_spawn_instructions: true
summarization_threshold: 4000
variables:
- name: project_dir
description: Project directory to work in
default: '.'
- name: auto_confirm
description: Auto-confirm command execution
default: '1'
global_tools:
- fs_read.sh
- fs_grep.sh
- fs_glob.sh
- fs_ls.sh
- web_search_loki.sh
- execute_command.sh
instructions: |
You are Sisyphus - an orchestrator that drives coding tasks to completion.
Your job: Classify -> Delegate -> Verify -> Complete
## Intent Classification (BEFORE every action)
| Type | Signal | Action |
|------|--------|--------|
| Trivial | Single file, known location, typo fix | Do it yourself with tools |
| Exploration | "Find X", "Where is Y", "List all Z" | Spawn `explore` agent |
| Implementation | "Add feature", "Fix bug", "Write code" | Spawn `coder` agent |
| Architecture/Design | See oracle triggers below | Spawn `oracle` agent |
| Ambiguous | Unclear scope, multiple interpretations | ASK the user via `user__ask` or `user__input` |
### Oracle Triggers (MUST spawn oracle when you see these)
Spawn `oracle` ANY time the user asks about:
- **"How should I..."** / **"What's the best way to..."** -- design/approach questions
- **"Why does X keep..."** / **"What's wrong with..."** -- complex debugging (not simple errors)
- **"Should I use X or Y?"** -- technology or pattern choices
- **"How should this be structured?"** -- architecture and organization
- **"Review this"** / **"What do you think of..."** -- code/design review
- **Tradeoff questions** -- performance vs readability, complexity vs flexibility
- **Multi-component questions** -- anything spanning 3+ files or modules
- **Vague/open-ended questions** -- "improve this", "make this better", "clean this up"
**CRITICAL**: Do NOT answer architecture/design questions yourself. You are a coordinator.
Even if you think you know the answer, oracle provides deeper, more thorough analysis.
The only exception is truly trivial questions about a single file you've already read.
### Agent Specializations
| Agent | Use For | Characteristics |
|-------|---------|-----------------|
| explore | Find patterns, understand code, search | Read-only, returns findings |
| coder | Write/edit files, implement features | Creates/modifies files, runs builds |
| oracle | Architecture decisions, complex debugging | Advisory, high-quality reasoning |
## Workflow Examples
### Example 1: Implementation task (explore -> coder, parallel exploration)
User: "Add a new API endpoint for user profiles"
```
1. todo__init --goal "Add user profiles API endpoint"
2. todo__add --task "Explore existing API patterns"
3. todo__add --task "Implement profile endpoint"
4. todo__add --task "Verify with build/test"
5. agent__spawn --agent explore --prompt "Find existing API endpoint patterns, route structures, and controller conventions"
6. agent__spawn --agent explore --prompt "Find existing data models and database query patterns"
7. agent__collect --id <id1>
8. agent__collect --id <id2>
9. todo__done --id 1
10. agent__spawn --agent coder --prompt "Create user profiles endpoint following existing patterns. [Include context from explore results]"
11. agent__collect --id <coder_id>
12. todo__done --id 2
13. run_build
14. run_tests
15. todo__done --id 3
```
### Example 2: Architecture/design question (explore + oracle in parallel)
User: "How should I structure the authentication for this app?"
```
1. todo__init --goal "Get architecture advice for authentication"
2. todo__add --task "Explore current auth-related code"
3. todo__add --task "Consult oracle for architecture recommendation"
4. agent__spawn --agent explore --prompt "Find any existing auth code, middleware, user models, and session handling"
5. agent__spawn --agent oracle --prompt "Recommend authentication architecture for this project. Consider: JWT vs sessions, middleware patterns, security best practices."
6. agent__collect --id <explore_id>
7. todo__done --id 1
8. agent__collect --id <oracle_id>
9. todo__done --id 2
```
### Example 3: Vague/open-ended question (oracle directly)
User: "What do you think of this codebase structure?"
```
agent__spawn --agent oracle --prompt "Review the project structure and provide recommendations for improvement"
agent__collect --id <oracle_id>
```
## Rules
1. **Always classify before acting** - Don't jump into implementation
2. **Create todos for multi-step tasks** - Track your progress
3. **Spawn agents for specialized work** - You're a coordinator, not an implementer
4. **Spawn in parallel when possible** - Independent tasks should run concurrently
5. **Verify after collecting agent results** - Don't trust blindly
6. **Mark todos done immediately** - Don't batch completions
7. **Ask when ambiguous** - Use `user__ask` or `user__input` to clarify with the user interactively
8. **Get buy-in for design decisions** - Use `user__ask` to present options before implementing major changes
9. **Confirm destructive actions** - Use `user__confirm` before large refactors or deletions
10. **Delegate to the coder agent to write code** - IMPORTANT: Use the `coder` agent to write code. Do not try to write code yourself except for trivial changes
11. **Always output a summary of changes when finished** - Make it clear to users that you've completed your tasks
## When to Do It Yourself
- Single-file reads/writes
- Simple command execution
- Trivial changes (typos, renames)
- Quick file searches
## When to NEVER Do It Yourself
- Architecture or design questions -> ALWAYS oracle
- "How should I..." / "What's the best way to..." -> ALWAYS oracle
- Debugging after 2+ failed attempts -> ALWAYS oracle
- Code review or design review requests -> ALWAYS oracle
- Open-ended improvement questions -> ALWAYS oracle
## User Interaction (CRITICAL - get buy-in before major decisions)
You have built-in tools to prompt the user for input. Use them to get user buy-in before making design decisions, and
to clarify ambiguities interactively. **Do NOT guess when you can ask.**
### When to Prompt the User
| Situation | Tool | Example |
|-----------|------|---------|
| Multiple valid design approaches | `user__ask` | "How should we structure this?" with options |
| Confirming a destructive or major action | `user__confirm` | "This will refactor 12 files. Proceed?" |
| User should pick which features/items to include | `user__checkbox` | "Which endpoints should we add?" |
| Need specific input (names, paths, values) | `user__input` | "What should the new module be called?" |
| Ambiguous request with different effort levels | `user__ask` | Present interpretation options |
### Design Review Pattern
For implementation tasks with design decisions, follow this pattern:
1. **Explore** the codebase to understand existing patterns
2. **Formulate** 2-3 design options based on findings
3. **Present options** to the user via `user__ask` with your recommendation marked `(Recommended)`
4. **Confirm** the chosen approach before delegating to `coder`
5. Proceed with implementation
### Rules for User Prompts
1. **Always include (Recommended)** on the option you think is best in `user__ask`
2. **Respect user choices** - never override or ignore a selection
3. **Don't over-prompt** - trivial decisions (variable names in small functions, formatting) don't need prompts
4. **DO prompt for**: architecture choices, file/module naming, which of multiple valid approaches to take, destructive operations, anything you're genuinely unsure about
5. **Confirm before large changes** - if a task will touch 5+ files, confirm the plan first
## Escalation Handling
If you see `pending_escalations` in your tool results, a child agent needs user input and is blocked.
Reply promptly via `agent__reply_escalation` to unblock it. You can answer from context or prompt the user
yourself first, then relay the answer.
## Available Tools
{{__tools__}}
## Context
- Project: {{project_dir}}
- OS: {{__os__}}
- Shell: {{__shell__}}
- CWD: {{__cwd__}}
conversation_starters:
- 'Add a new feature to the project'
- 'Fix a bug in the codebase'
- 'Refactor the authentication module'
- 'Help me understand how X works'
+97
@@ -0,0 +1,97 @@
#!/usr/bin/env bash
set -eo pipefail
# shellcheck disable=SC1090
source "$LLM_PROMPT_UTILS_FILE"
source "$LLM_ROOT_DIR/agents/.shared/utils.sh"
export AUTO_CONFIRM=true
# @env LLM_OUTPUT=/dev/stdout
# @env LLM_AGENT_VAR_PROJECT_DIR=.
# @describe Sisyphus orchestrator tools (project info, build, test)
_project_dir() {
local dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
(cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}
# @cmd Get project information and structure
get_project_info() {
local project_dir
project_dir=$(_project_dir)
info "Project: ${project_dir}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local project_info
project_info=$(detect_project "${project_dir}")
cat <<-EOF >> "$LLM_OUTPUT"
Type: $(echo "${project_info}" | jq -r '.type')
Build: $(echo "${project_info}" | jq -r '.build')
Test: $(echo "${project_info}" | jq -r '.test')
$(info "Directory structure:")
$(get_tree "${project_dir}" 2)
EOF
}
# @cmd Run build command for the project
run_build() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
local build_cmd
build_cmd=$(echo "${project_info}" | jq -r '.build')
if [[ -z "${build_cmd}" ]] || [[ "${build_cmd}" == "null" ]]; then
warn "No build command detected for this project" >> "$LLM_OUTPUT"
return 0
fi
info "Running: ${build_cmd}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local output
if output=$(cd "${project_dir}" && eval "${build_cmd}" 2>&1); then
green "BUILD SUCCESS" >> "$LLM_OUTPUT"
echo "${output}" >> "$LLM_OUTPUT"
return 0
else
error "BUILD FAILED" >> "$LLM_OUTPUT"
echo "${output}" >> "$LLM_OUTPUT"
return 1
fi
}
# @cmd Run tests for the project
run_tests() {
local project_dir
project_dir=$(_project_dir)
local project_info
project_info=$(detect_project "${project_dir}")
local test_cmd
test_cmd=$(echo "${project_info}" | jq -r '.test')
if [[ -z "${test_cmd}" ]] || [[ "${test_cmd}" == "null" ]]; then
warn "No test command detected for this project" >> "$LLM_OUTPUT"
return 0
fi
info "Running: ${test_cmd}" >> "$LLM_OUTPUT"
echo "" >> "$LLM_OUTPUT"
local output
if output=$(cd "${project_dir}" && eval "${test_cmd}" 2>&1); then
green "TESTS PASSED" >> "$LLM_OUTPUT"
echo "${output}" >> "$LLM_OUTPUT"
return 0
else
error "TESTS FAILED" >> "$LLM_OUTPUT"
echo "${output}" >> "$LLM_OUTPUT"
return 1
fi
}
+4
@@ -14,6 +14,10 @@
"GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_GITHUB_TOKEN"
}
},
"atlassian": {
"command": "npx",
"args": ["-y", "mcp-remote@0.1.13", "https://mcp.atlassian.com/v1/sse"]
},
"docker": {
"command": "uvx",
"args": ["mcp-server-docker"]
+1 -1
@@ -10,5 +10,5 @@ set -e
 main() {
   # shellcheck disable=SC2154
-  cat "$argc_path" >> "$LLM_OUTPUT"
+  cat "$argc_path" >> "$LLM_OUTPUT" 2>&1 || echo "No such file or path: $argc_path" >> "$LLM_OUTPUT"
 }
+59
@@ -0,0 +1,59 @@
#!/usr/bin/env bash
set -e
# @describe Find files by glob pattern. Returns matching file paths.
# Use this to discover files before reading them.
# @option --pattern! The glob pattern to match files against (e.g. "**/*.rs", "src/**/*.ts", "*.yaml")
# @option --path The directory to search in (defaults to current working directory)
# @env LLM_OUTPUT=/dev/stdout The output path
MAX_RESULTS=100
main() {
# shellcheck disable=SC2154
local glob_pattern="$argc_pattern"
local search_path="${argc_path:-.}"
if [[ ! -d "$search_path" ]]; then
echo "Error: directory not found: $search_path" >> "$LLM_OUTPUT"
return 1
fi
local results
if command -v fd &>/dev/null; then
results=$(fd --type f --glob "$glob_pattern" "$search_path" \
--exclude '.git' \
--exclude 'node_modules' \
--exclude 'target' \
--exclude 'dist' \
--exclude '__pycache__' \
--exclude 'vendor' \
--exclude '.build' \
2>/dev/null | head -n "$MAX_RESULTS") || true
else
# find -name matches basenames only, so strip any directory components from the pattern
results=$(find "$search_path" -type f -name "${glob_pattern##*/}" \
-not -path '*/.git/*' \
-not -path '*/node_modules/*' \
-not -path '*/target/*' \
-not -path '*/dist/*' \
-not -path '*/__pycache__/*' \
-not -path '*/vendor/*' \
-not -path '*/.build/*' \
2>/dev/null | head -n "$MAX_RESULTS") || true
fi
if [[ -z "$results" ]]; then
echo "No files found matching: $glob_pattern" >> "$LLM_OUTPUT"
return 0
fi
echo "$results" >> "$LLM_OUTPUT"
local count
count=$(echo "$results" | wc -l)
if [[ "$count" -ge "$MAX_RESULTS" ]]; then
printf "\n(Results limited to %s files. Use a more specific pattern.)\n" "$MAX_RESULTS" >> "$LLM_OUTPUT"
fi
}
+71
@@ -0,0 +1,71 @@
#!/usr/bin/env bash
set -e
# @describe Search file contents using regular expressions. Returns matching file paths and lines.
# Use this to find relevant code before reading files. Much faster than reading files to search.
# @option --pattern! The regex pattern to search for in file contents
# @option --path The directory to search in (defaults to current working directory)
# @option --include File pattern to filter by (e.g. "*.rs", "*.{ts,tsx}", "*.py")
# @env LLM_OUTPUT=/dev/stdout The output path
MAX_RESULTS=50
MAX_LINE_LENGTH=2000
main() {
# shellcheck disable=SC2154
local search_pattern="$argc_pattern"
local search_path="${argc_path:-.}"
local include_filter="${argc_include:-}"
if [[ ! -d "$search_path" ]]; then
echo "Error: directory not found: $search_path" >> "$LLM_OUTPUT"
return 1
fi
local grep_args=(-rn --color=never)
grep_args+=(
--exclude-dir='.git'
--exclude-dir='node_modules'
--exclude-dir='target'
--exclude-dir='dist'
--exclude-dir='build'
--exclude-dir='__pycache__'
--exclude-dir='vendor'
--exclude-dir='.build'
--exclude-dir='.next'
--exclude='*.min.js'
--exclude='*.min.css'
--exclude='*.map'
--exclude='*.lock'
--exclude='package-lock.json'
)
if [[ -n "$include_filter" ]]; then
grep_args+=("--include=$include_filter")
fi
local results
results=$(grep "${grep_args[@]}" -E "$search_pattern" "$search_path" 2>/dev/null | head -n "$MAX_RESULTS") || true
if [[ -z "$results" ]]; then
echo "No matches found for: $search_pattern" >> "$LLM_OUTPUT"
return 0
fi
echo "$results" | while IFS= read -r line; do
if [[ ${#line} -gt $MAX_LINE_LENGTH ]]; then
line="${line:0:$MAX_LINE_LENGTH}... (truncated)"
fi
echo "$line"
done >> "$LLM_OUTPUT"
local count
count=$(echo "$results" | wc -l)
if [[ "$count" -ge "$MAX_RESULTS" ]]; then
printf "\n(Results limited to %s matches. Narrow your search with --include or a more specific pattern.)\n" "$MAX_RESULTS" >> "$LLM_OUTPUT"
fi
}
+1 -1
@@ -9,5 +9,5 @@ set -e
 main() {
   # shellcheck disable=SC2154
-  ls -1 "$argc_path" >> "$LLM_OUTPUT"
+  ls -1 "$argc_path" >> "$LLM_OUTPUT" 2>&1 || echo "No such path: $argc_path" >> "$LLM_OUTPUT"
 }
+62
View File
@@ -0,0 +1,62 @@
#!/usr/bin/env bash
set -e
# @describe Read a file with line numbers, offset, and limit. For directories, lists entries.
# Prefer this over fs_cat for controlled reading. Use offset/limit to read specific sections.
# Use the grep tool to find specific content before reading, then read with offset to target the relevant section.
# @option --path! The absolute path to the file or directory to read
# @option --offset The line number to start reading from (1-indexed, default: 1)
# @option --limit The maximum number of lines to read (default: 2000)
# @env LLM_OUTPUT=/dev/stdout The output path
MAX_LINE_LENGTH=2000
MAX_BYTES=51200
main() {
# shellcheck disable=SC2154
local target="$argc_path"
local offset="${argc_offset:-1}"
local limit="${argc_limit:-2000}"
if [[ ! -e "$target" ]]; then
echo "Error: path not found: $target" >> "$LLM_OUTPUT"
return 1
fi
if [[ -d "$target" ]]; then
ls -1 "$target" >> "$LLM_OUTPUT" 2>&1
return 0
fi
local total_lines file_bytes
total_lines=$(wc -l < "$target" 2>/dev/null || echo 0)
file_bytes=$(wc -c < "$target" 2>/dev/null || echo 0)
if [[ "$file_bytes" -gt "$MAX_BYTES" ]] && [[ "$offset" -eq 1 ]] && [[ "$limit" -ge 2000 ]]; then
{
echo "Warning: Large file (${file_bytes} bytes, ${total_lines} lines). Showing first ${limit} lines."
echo "Use --offset and --limit to read specific sections, or use the grep tool to find relevant lines first."
echo ""
} >> "$LLM_OUTPUT"
fi
local end_line=$((offset + limit - 1))
sed -n "${offset},${end_line}p" "$target" 2>/dev/null | {
local line_num=$offset
while IFS= read -r line; do
if [[ ${#line} -gt $MAX_LINE_LENGTH ]]; then
line="${line:0:$MAX_LINE_LENGTH}... (truncated)"
fi
printf "%d: %s\n" "$line_num" "$line"
line_num=$((line_num + 1))
done
} >> "$LLM_OUTPUT"
if [[ "$end_line" -lt "$total_lines" ]]; then
echo "" >> "$LLM_OUTPUT"
echo "(${total_lines} total lines. Use --offset $((end_line + 1)) to read more.)" >> "$LLM_OUTPUT"
fi
}
+15 -13
@@ -121,7 +121,7 @@ _cursor_blink_off() {
 }
 _cursor_to() {
-  echo -en "\033[$1;$2H" >&2
+  echo -en "\033[$1;${2:-1}H" >&2
 }
 # shellcheck disable=SC2154
@@ -133,7 +133,7 @@ _key_input() {
     _read_stdin -rsn2 b
   fi
-  declare input="${a}${b}"
+  declare input="${a}${b:-}"
   case "$input" in
     "${ESC}[A" | "k") echo up ;;
     "${ESC}[B" | "j") echo down ;;
@@ -507,12 +507,14 @@ open_link() {
 guard_operation() {
   if [[ -t 1 ]]; then
-    ans="$(confirm "${1:-Are you sure you want to continue?}")"
-    if [[ "$ans" == 0 ]]; then
-      error "Operation aborted!" 2>&1
-      exit 1
-    fi
+    if [[ -z "$AUTO_CONFIRM" && -z "$LLM_AGENT_VAR_AUTO_CONFIRM" ]]; then
+      ans="$(confirm "${1:-Are you sure you want to continue?}")"
+      if [[ "$ans" == 0 ]]; then
+        error "Operation aborted!" 2>&1
+        exit 1
+      fi
+    fi
   fi
 }
@@ -657,13 +659,13 @@ guard_path() {
   path="$(_to_real_path "$1")"
   confirmation_prompt="$2"
-  if [[ ! "$path" == "$(pwd)"* ]]; then
+  if [[ ! "$path" == "$(pwd)"* && -z "$AUTO_CONFIRM" && -z "$LLM_AGENT_VAR_AUTO_CONFIRM" ]]; then
     ans="$(confirm "$confirmation_prompt")"
     if [[ "$ans" == 0 ]]; then
       error "Operation aborted!" >&2
       exit 1
     fi
   fi
 }
+17
@@ -17,6 +17,23 @@ agent_session: null # Set a session to use when starting the agent.
name: <agent-name> # Name of the agent, used in the UI and logs
description: <description> # Description of the agent, used in the UI
version: 1 # Version of the agent
# Todo System & Auto-Continuation
# These settings help smaller models handle multi-step tasks more reliably.
# See docs/TODO-SYSTEM.md for detailed documentation.
auto_continue: false # Enable automatic continuation when incomplete todos remain
max_auto_continues: 10 # Maximum number of automatic continuations before stopping
inject_todo_instructions: true # Inject the default todo tool usage instructions into the agent's system prompt
continuation_prompt: null # Custom prompt used when auto-continuing (optional; uses default if null)
# Sub-Agent Spawning System
# Enable this agent to spawn and manage child agents in parallel.
# See docs/AGENTS.md for detailed documentation.
can_spawn_agents: false # Enable the agent to spawn child agents
max_concurrent_agents: 4 # Maximum number of agents that can run simultaneously
max_agent_depth: 3 # Maximum nesting depth for sub-agents (prevents runaway spawning)
inject_spawn_instructions: true # Inject the default agent spawning instructions into the agent's system prompt
summarization_model: null # Model to use for summarizing sub-agent output (e.g. 'openai:gpt-4o-mini'); defaults to current model
summarization_threshold: 4000 # Character threshold above which sub-agent output is summarized before returning to parent
escalation_timeout: 300 # Seconds a sub-agent waits for a user interaction response before timing out (default: 5 minutes)
mcp_servers: # Optional list of MCP servers that the agent utilizes
- github # Corresponds to the name of an MCP server in the `<loki-config-dir>/functions/mcp.json` file
global_tools: # Optional list of additional global tools to enable for the agent; i.e. not tools specific to the agent
+10 -2
@@ -41,7 +41,7 @@ vault_password_file: null # Path to a file containing the password for th
 # See the [Tools documentation](./docs/function-calling/TOOLS.md) for more details
 function_calling: true # Enables or disables function calling (Globally).
 mapping_tools: # Alias for a tool or toolset
-  fs: 'fs_cat,fs_ls,fs_mkdir,fs_rm,fs_write'
+  fs: 'fs_cat,fs_ls,fs_mkdir,fs_rm,fs_write,fs_read,fs_glob,fs_grep'
 enabled_tools: null # Which tools to enable by default. (e.g. 'fs,web_search_loki')
 visible_tools: # Which tools are visible to be compiled (and are thus able to be defined in 'enabled_tools')
   # - demo_py.py
@@ -53,6 +53,9 @@ visible_tools: # Which tools are visible to be compiled (and a
   # - fetch_url_via_jina.sh
   - fs_cat.sh
   - fs_ls.sh
+  # - fs_read.sh
+  # - fs_glob.sh
+  # - fs_grep.sh
   # - fs_mkdir.sh
   # - fs_patch.sh
   # - fs_write.sh
@@ -92,7 +95,7 @@ rag_reranker_model: null # Specifies the reranker model used for sorting
 rag_top_k: 5 # Specifies the number of documents to retrieve for answering queries
 rag_chunk_size: null # Defines the size of chunks for document processing in characters
 rag_chunk_overlap: null # Defines the overlap between chunks
-# Defines the query structure using variables like __CONTEXT__ and __INPUT__ to tailor searches to specific needs
+# Defines the query structure using variables like __CONTEXT__, __SOURCES__, and __INPUT__ to tailor searches to specific needs
 rag_template: |
   Answer the query based on the context while respecting the rules. (user query, some textual context and rules, all inside xml tags)
@@ -100,6 +103,10 @@ rag_template: |
   __CONTEXT__
   </context>
+  <sources>
+  __SOURCES__
+  </sources>
   <rules>
   - If you don't know, just say so.
   - If you are not sure, ask for clarification.
@@ -107,6 +114,7 @@ rag_template: |
   - If the context appears unreadable or of poor quality, tell the user then answer as best as you can.
   - If the answer is not in the context but you think you know the answer, explain that to the user then answer with your own knowledge.
   - Answer directly and without using xml tags.
+  - When using information from the context, cite the relevant source from the <sources> section.
   </rules>
   <user_query>
+294 -4
@@ -34,6 +34,19 @@ If you're looking for more example agents, refer to the [built-in agents](../ass
- [Python-Based Agent Tools](#python-based-agent-tools)
- [Bash-Based Agent Tools](#bash-based-agent-tools)
- [5. Conversation Starters](#5-conversation-starters)
- [6. Todo System & Auto-Continuation](#6-todo-system--auto-continuation)
- [7. Sub-Agent Spawning System](#7-sub-agent-spawning-system)
- [Configuration](#spawning-configuration)
- [Spawning & Collecting Agents](#spawning--collecting-agents)
- [Task Queue with Dependencies](#task-queue-with-dependencies)
- [Active Task Dispatch](#active-task-dispatch)
- [Output Summarization](#output-summarization)
- [Teammate Messaging](#teammate-messaging)
- [Runaway Safeguards](#runaway-safeguards)
- [8. User Interaction Tools](#8-user-interaction-tools)
- [Available Tools](#user-interaction-available-tools)
- [Escalation (Sub-Agent to User)](#escalation-sub-agent-to-user)
- [9. Auto-Injected Prompts](#9-auto-injected-prompts)
- [Built-In Agents](#built-in-agents)
<!--toc:end-->
@@ -81,6 +94,19 @@ global_tools: # Optional list of additional global tools
- web_search
- fs
- python
# Todo System & Auto-Continuation (see "Todo System & Auto-Continuation" section below)
auto_continue: false # Enable automatic continuation when incomplete todos remain
max_auto_continues: 10 # Maximum continuation attempts before stopping
inject_todo_instructions: true # Inject todo tool instructions into system prompt
continuation_prompt: null # Custom prompt for continuations (optional)
# Sub-Agent Spawning (see "Sub-Agent Spawning System" section below)
can_spawn_agents: false # Enable spawning child agents
max_concurrent_agents: 4 # Max simultaneous child agents
max_agent_depth: 3 # Max nesting depth (prevents runaway)
inject_spawn_instructions: true # Inject spawning instructions into system prompt
summarization_model: null # Model for summarizing sub-agent output (e.g. 'openai:gpt-4o-mini')
summarization_threshold: 4000 # Char count above which sub-agent output is summarized
escalation_timeout: 300 # Seconds sub-agents wait for escalated user input (default: 5 min)
```
As mentioned previously: Agents utilize function calling to extend a model's capabilities. However, agents operate in
@@ -421,10 +447,274 @@ conversation_starters:
![Example Conversation Starters](./images/agents/conversation-starters.gif)
## 6. Todo System & Auto-Continuation
Loki includes a built-in task tracking system designed to improve the reliability of agents, especially when using
smaller language models. The Todo System helps models:
- Break complex tasks into manageable steps
- Track progress through multi-step workflows
- Automatically continue work until all tasks are complete
### Quick Configuration
```yaml
# agents/my-agent/config.yaml
auto_continue: true # Enable auto-continuation
max_auto_continues: 10 # Max continuation attempts
inject_todo_instructions: true # Include the default todo instructions into prompt
```
### How It Works
1. When `inject_todo_instructions` is enabled, agents receive instructions on using four built-in tools:
- `todo__init`: Initialize a todo list with a goal
- `todo__add`: Add a task to the list
- `todo__done`: Mark a task complete
- `todo__list`: View current todo state
These instructions are a reasonable default that detail how to use Loki's Todo System. If you wish,
you can disable the injection and instead describe how to use the Todo System directly in the
agent's main `instructions`.
2. When `auto_continue` is enabled and the model stops with incomplete tasks, Loki automatically sends a
continuation prompt with the current todo state, nudging the model to continue working.
3. This continues until all tasks are done or `max_auto_continues` is reached.
### When to Use
- Multistep tasks where the model might lose track
- Smaller models that need more structure
- Workflows requiring guaranteed completion of all steps
For complete documentation including all configuration options, tool details, and best practices, see the
[Todo System Guide](./TODO-SYSTEM.md).
## 7. Sub-Agent Spawning System
Loki agents can spawn and manage child agents that run **in parallel** as background tasks inside the same process.
This enables orchestrator-style agents that delegate specialized work to other agents, similar to how tools like
Claude Code or OpenCode handle complex multi-step tasks.
For a working example of an orchestrator agent that uses sub-agent spawning, see the built-in
[sisyphus](../assets/agents/sisyphus) agent. For an example of the teammate messaging pattern with parallel sub-agents,
see the [code-reviewer](../assets/agents/code-reviewer) agent.
### Spawning Configuration
| Setting | Type | Default | Description |
|-----------------------------|---------|---------------|--------------------------------------------------------------------------------|
| `can_spawn_agents` | boolean | `false` | Enable this agent to spawn child agents |
| `max_concurrent_agents` | integer | `4` | Maximum number of child agents that can run simultaneously |
| `max_agent_depth` | integer | `3` | Maximum nesting depth for sub-agents (prevents runaway spawning chains) |
| `inject_spawn_instructions` | boolean | `true` | Inject the default spawning instructions into the agent's system prompt |
| `summarization_model` | string | current model | Model to use for summarizing long sub-agent output (e.g. `openai:gpt-4o-mini`) |
| `summarization_threshold` | integer | `4000` | Character count above which sub-agent output is summarized before returning |
| `escalation_timeout` | integer | `300` | Seconds a sub-agent waits for an escalated user interaction response |
**Example configuration:**
```yaml
# agents/my-orchestrator/config.yaml
can_spawn_agents: true
max_concurrent_agents: 6
max_agent_depth: 2
inject_spawn_instructions: true
summarization_model: openai:gpt-4o-mini
summarization_threshold: 3000
escalation_timeout: 600
```
### Spawning & Collecting Agents
When `can_spawn_agents` is enabled, the agent receives tools for spawning and managing child agents:
| Tool | Description |
|------------------|-------------------------------------------------------------------------|
| `agent__spawn` | Spawn a child agent in the background. Returns an agent ID immediately. |
| `agent__check` | Non-blocking check: is the agent done? Returns `PENDING` or the result. |
| `agent__collect` | Blocking wait: wait for an agent to finish, return its output. |
| `agent__list` | List all spawned agents and their status. |
| `agent__cancel` | Cancel a running agent by ID. |
The core pattern is **Spawn -> Continue -> Collect**:
```
# 1. Spawn agents in parallel (returns IDs immediately)
agent__spawn --agent explore --prompt "Find auth middleware patterns in src/"
agent__spawn --agent explore --prompt "Find error handling patterns in src/"
# 2. Continue your own work while they run
# 3. Check if done (non-blocking)
agent__check --id agent_explore_a1b2c3d4
# 4. Collect results when ready (blocking)
agent__collect --id agent_explore_a1b2c3d4
agent__collect --id agent_explore_e5f6g7h8
```
Any agent defined in your `<loki-config-dir>/agents/` directory can be spawned as a child. Child agents:
- Run in a fully isolated environment (separate session, config, and tools)
- Have their output suppressed from the terminal (no spinner, no tool call logging)
- Return their accumulated output to the parent when collected
### Task Queue with Dependencies
For complex workflows where tasks have ordering requirements, the spawning system includes a dependency-aware
task queue:
| Tool | Description |
|------------------------|-----------------------------------------------------------------------------|
| `agent__task_create` | Create a task with optional dependencies and auto-dispatch agent. |
| `agent__task_list` | List all tasks with their status, dependencies, and assignments. |
| `agent__task_complete` | Mark a task done. Returns newly unblocked tasks and auto-dispatches agents. |
| `agent__task_fail` | Mark a task as failed. Dependents remain blocked. |
```
# Create tasks with dependency ordering
agent__task_create --subject "Explore existing patterns"
agent__task_create --subject "Implement feature" --blocked_by ["task_1"]
agent__task_create --subject "Write tests" --blocked_by ["task_2"]
# Mark tasks complete to unblock dependents
agent__task_complete --task_id task_1
```
### Active Task Dispatch
Tasks can optionally specify an agent to auto-spawn when the task becomes runnable:
```
agent__task_create \
--subject "Implement the auth module" \
--blocked_by ["task_1"] \
--agent coder \
--prompt "Implement auth module based on patterns found in task_1"
```
When `task_1` completes and the dependent task becomes unblocked, an agent is automatically spawned with the
specified prompt. No manual intervention needed. This enables fully automated multi-step pipelines.
### Output Summarization
When a child agent produces long output, it can be automatically summarized before returning to the parent.
This keeps parent context windows manageable.
- If the output exceeds `summarization_threshold` characters (default: 4000), it is sent through an LLM
summarization pass
- The `summarization_model` setting lets you use a cheaper/faster model for summarization (e.g. `gpt-4o-mini`)
- If `summarization_model` is not set, the parent's current model is used
- The summarization preserves all actionable information: code snippets, file paths, error messages, and
concrete recommendations
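The threshold check itself is simple to picture. The following is an illustrative sketch only, not Loki's actual implementation; the `needs_summarization` function name is hypothetical:

```shell
# Illustrative sketch of the summarization decision (not Loki's source code).
summarization_threshold=4000

# Succeeds (returns 0) when the sub-agent output is long enough to summarize.
needs_summarization() {
  local output="$1"
  [ "${#output}" -gt "$summarization_threshold" ]
}

long_output=$(printf 'x%.0s' {1..5000})   # 5000 characters
if needs_summarization "$long_output"; then
  echo "send through summarization pass"
else
  echo "return verbatim"
fi
```

Output under the threshold is returned to the parent verbatim, so short, focused sub-agent prompts avoid the extra LLM call entirely.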
### Teammate Messaging
All agents (including children) automatically receive tools for **direct sibling-to-sibling messaging**:
| Tool | Description |
|-----------------------|-----------------------------------------------------|
| `agent__send_message` | Send a text message to another agent's inbox by ID. |
| `agent__check_inbox` | Drain all pending messages from your inbox. |
This enables coordination patterns where child agents share cross-cutting findings:
```
# Agent A discovers something relevant to Agent B
agent__send_message --id agent_reviewer_b1c2d3e4 --message "Found a security issue in auth.rs line 42"
# Agent B checks inbox before finalizing
agent__check_inbox
```
Messages are routed through the parent's supervisor. A parent can message its children, and children can message
their siblings. For a working example of the teammate pattern, see the built-in
[code-reviewer](../assets/agents/code-reviewer) agent, which spawns file-specific reviewers that share
cross-cutting findings with each other.
### Runaway Safeguards
The spawning system includes built-in safeguards to prevent runaway agent chains:
- **`max_concurrent_agents`:** Caps how many agents can run at once (default: 4). Spawn attempts beyond this
limit return an error asking the agent to wait or cancel existing agents.
- **`max_agent_depth`:** Caps nesting depth (default: 3). A child agent spawning its own child increments the
depth counter. Attempts beyond the limit are rejected.
- **`can_spawn_agents`:** Only agents with this flag set to `true` can spawn children. By default, spawning is
disabled. This means child agents cannot spawn their own children unless you explicitly create them with
`can_spawn_agents: true` in their config.
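For example, to deliberately allow a two-level chain (orchestrator -> sub-orchestrator -> workers), the intermediate agent must itself opt in. A hypothetical child config:

```yaml
# agents/sub-orchestrator/config.yaml (hypothetical)
can_spawn_agents: true   # without this, the child cannot spawn workers
max_concurrent_agents: 2 # keep the fan-out small at depth 1
max_agent_depth: 3       # spawns beyond this depth are still rejected
```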
## 8. User Interaction Tools
Loki includes built-in tools for agents (and the REPL) to interactively prompt the user for input. These tools
are **always available**. No configuration needed. They are automatically injected into every agent and into
REPL mode when function calling is enabled.
### Available User Interaction Tools
| Tool | Description | Returns |
|------------------|-----------------------------------------|----------------------------------|
| `user__ask` | Present a single-select list of options | The selected option string |
| `user__confirm` | Ask a yes/no question | `"yes"` or `"no"` |
| `user__input` | Request free-form text input | The text entered by the user |
| `user__checkbox` | Present a multi-select checkbox list | Array of selected option strings |
**Parameters:**
- `user__ask`: `--question "..." --options ["Option A", "Option B", "Option C"]`
- `user__confirm`: `--question "..."`
- `user__input`: `--question "..."`
- `user__checkbox`: `--question "..." --options ["Option A", "Option B", "Option C"]`
At the top level (depth 0), these tools render interactive terminal prompts directly using arrow-key navigation,
checkboxes, and text input fields.
### Escalation (Sub-Agent to User)
When a **child agent** (depth > 0) calls a `user__*` tool, it cannot prompt the terminal directly. Instead,
the request is **automatically escalated** to the root agent:
1. The child agent calls `user__ask(...)` and **blocks**, waiting for a reply
2. The root agent sees a `pending_escalations` notification in its next tool results
3. The root agent either answers from context or prompts the user itself, then calls
`agent__reply_escalation` to unblock the child
4. The child receives the reply and continues
The escalation timeout is configurable via `escalation_timeout` in the agent's `config.yaml` (default: 300
seconds / 5 minutes). If the timeout expires, the child receives a fallback message asking it to use its
best judgment.
| Tool | Description |
|---------------------------|--------------------------------------------------------------------------|
| `agent__reply_escalation` | Reply to a pending child escalation, unblocking the waiting child agent. |
This tool is automatically available to any agent with `can_spawn_agents: true`.
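Putting the pieces together, an escalation round-trip might look like the following; the escalation ID and the `--id`/`--reply` parameter names shown here are illustrative:

```
# Child agent (depth 1) blocks on a user prompt:
user__ask --question "Which migration strategy should I use?" --options ["expand-contract", "big-bang"]

# Root agent sees the pending escalation in its next tool results,
# answers from context (or prompts the user), then unblocks the child:
agent__reply_escalation --id esc_a1b2c3d4 --reply "expand-contract"
```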
## 9. Auto-Injected Prompts
Loki automatically appends usage instructions to your agent's system prompt for each enabled built-in system.
These instructions are injected into both **static and dynamic instructions** after your own instructions,
ensuring agents always know how to use their available tools.
| System | Injected When | Toggle |
|--------------------|----------------------------------------------------------------|-----------------------------|
| Todo tools | `auto_continue: true` AND `inject_todo_instructions: true` | `inject_todo_instructions` |
| Spawning tools | `can_spawn_agents: true` AND `inject_spawn_instructions: true` | `inject_spawn_instructions` |
| Teammate messaging | Always (all agents) | None (always injected) |
| User interaction | Always (all agents) | None (always injected) |
If you prefer to write your own instructions for a system, set the corresponding `inject_*` flag to `false`
and include your custom instructions in the agent's `instructions` field. The built-in tools will still be
available; only the auto-injected prompt text is suppressed.
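For example, to supply your own spawning guidance instead of the default, a sketch (the instruction text here is purely illustrative):

```yaml
# agents/my-orchestrator/config.yaml
can_spawn_agents: true
inject_spawn_instructions: false # suppress the default spawning prompt text
instructions: |
  You coordinate work by delegating to child agents.
  Prefer spawning the `explore` agent for read-only research and the
  `coder` agent for edits; collect all children before summarizing.
```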
## Built-In Agents
Loki comes packaged with some useful built-in agents:
* `coder`: An agent to assist you with all your coding tasks
* `code-reviewer`: A [CodeRabbit](https://coderabbit.ai)-style code reviewer that spawns per-file reviewers using the teammate messaging pattern
* `demo`: An example agent to use for reference when learning to create your own agents
* `explore`: An agent designed to help you explore and understand your codebase
* `jira-helper`: An agent that assists you with all your Jira-related tasks
* `oracle`: An agent for high-level architecture, design decisions, and complex debugging
* `sisyphus`: A powerhouse orchestrator agent for writing complex code and acting as a natural language interface for your codebase (similar to Claude Code, Gemini CLI, Codex, or OpenCode). Uses sub-agent spawning to delegate to `explore`, `coder`, and `oracle`.
* `sql`: A universal SQL agent that enables you to talk to any relational database in natural language
---
- [Files and Directory Related Variables](#files-and-directory-related-variables)
- [Agent Related Variables](#agent-related-variables)
- [Logging Related Variables](#logging-related-variables)
- [Miscellaneous Variables](#miscellaneous-variables)
<!--toc:end-->
---
You can also customize the location of full agent configurations using the following:
| Environment Variable | Description |
|------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| `<AGENT_NAME>_CONFIG_FILE` | Customize the location of the agent's configuration file; e.g. `SQL_CONFIG_FILE` |
| `<AGENT_NAME>_MODEL` | Customize the `model` used for the agent; e.g `SQL_MODEL` |
| `<AGENT_NAME>_TEMPERATURE` | Customize the `temperature` used for the agent; e.g. `SQL_TEMPERATURE` |
| `<AGENT_NAME>_TOP_P` | Customize the `top_p` used for the agent; e.g. `SQL_TOP_P` |
## Logging Related Variables

The following variables can be used to change the log level of Loki or the location of the log files.
**Pro-Tip:** You can always tail the Loki logs using the `--tail-logs` flag. If you need to disable color output, you
can also pass the `--disable-log-colors` flag as well.
## Miscellaneous Variables
| Environment Variable | Description | Default Value |
|----------------------|--------------------------------------------------------------------------------------------------|---------------|
| `AUTO_CONFIRM` | Bypass all `guard_*` checks in the bash prompt helpers; useful for agent composition and routing | |
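The effect of `AUTO_CONFIRM` can be sketched like this; this is an illustrative stand-in, not the actual helper implementation:

```shell
# Illustrative sketch of how a guard_* helper honors AUTO_CONFIRM.
guard_operation() {
  if [ -n "${AUTO_CONFIRM:-}" ]; then
    # Bypass the interactive prompt entirely, e.g. when one agent
    # routes a request to another and no human is at the terminal.
    echo "auto-confirmed"
    return 0
  fi
  echo "prompting user for confirmation"
}

AUTO_CONFIRM=1 guard_operation
```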
---

When you use RAG in Loki, after Loki performs the lookup for relevant chunks of text, it will add the
retrieved text chunks as context to your query before sending it to the model. The format of this context
is determined by the `rag_template` setting in your global Loki configuration file.
This template utilizes three placeholders:
* `__INPUT__`: The user's actual query
* `__CONTEXT__`: The context retrieved from RAG
* `__SOURCES__`: A numbered list of the source file paths or URLs that the retrieved context came from
These placeholders are replaced with their corresponding values in the template, making up what's actually passed to
the model at query-time. The `__SOURCES__` placeholder enables the model to cite which documents its answer is based on,
which is especially useful when building knowledge-base assistants that need to provide verifiable references.
The default template that Loki uses is the following:
```
Answer the query based on the context while respecting the rules.

<context>
__CONTEXT__
</context>
<sources>
__SOURCES__
</sources>
<rules>
- If you don't know, just say so.
- If you are not sure, ask for clarification.
- If the context appears unreadable or of poor quality, tell the user then answer as best as you can.
- If the answer is not in the context but you think you know the answer, explain that to the user then answer with your own knowledge.
- Answer directly and without using xml tags.
- When using information from the context, cite the relevant source from the <sources> section.
</rules>
<user_query>
__INPUT__
</user_query>
```
You can customize this template by specifying the `rag_template` setting in your global Loki configuration file. Your
template *must* include both the `__INPUT__` and `__CONTEXT__` placeholders in order for it to be valid. The
`__SOURCES__` placeholder is optional. If it is omitted, source references will not be included in the prompt.
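A minimal custom template that asks for inline citations might look like the following; the exact template wording here is illustrative:

```yaml
# config.yaml
rag_template: |
  Answer the question using only the context below.

  <context>
  __CONTEXT__
  </context>

  <sources>
  __SOURCES__
  </sources>

  Cite the matching source number for any fact you use.

  Question: __INPUT__
```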
---
* **Configurable Keybindings:** You can switch between `emacs` style keybindings or `vi` style keybindings
* [**Custom REPL Prompt:**](./REPL-PROMPT.md) You can even customize the REPL prompt to display information about the
current context in the prompt
* **Built-in user interaction tools:** When function calling is enabled in the REPL, the `user__ask`, `user__confirm`,
`user__input`, and `user__checkbox` tools are always available for interactive prompts. These are not injected in the
one-shot CLI mode.
---
### `.exit`

The `.exit` command is used to move between modes in the Loki REPL.
### `.help` - Show the help guide
Just like with any shell or REPL, you sometimes need a little help and want to know what commands are available to you.
That's when you use the `.help` command.
---
# Todo System
Loki's Todo System is a built-in task tracking feature designed to improve the reliability and effectiveness of LLM agents,
especially smaller models. It provides structured task management that helps models:
- Break complex tasks into manageable steps
- Track progress through multistep workflows
- Automatically continue work until all tasks are complete
- Avoid forgetting steps or losing context
![Todo System Example](./images/agents/todo-system.png)
## Quick Links
<!--toc:start-->
- [Why Use the Todo System?](#why-use-the-todo-system)
- [How It Works](#how-it-works)
- [Configuration Options](#configuration-options)
- [Available Tools](#available-tools)
- [Auto-Continuation](#auto-continuation)
- [Best Practices](#best-practices)
- [Example Workflow](#example-workflow)
- [Troubleshooting](#troubleshooting)
<!--toc:end-->
## Why Use the Todo System?
Smaller language models often struggle with:
- **Context drift**: Forgetting earlier steps in a multi-step task
- **Incomplete execution**: Stopping before all work is done
- **Lack of structure**: Jumping between tasks without clear organization
The Loki Todo System addresses these issues by giving the model explicit tools to plan, track, and verify task completion.
The system automatically prompts the model to continue when incomplete tasks remain, ensuring work gets finished.
## How It Works
1. **Planning Phase**: The model initializes a todo list with a goal and adds individual tasks
2. **Execution Phase**: The model works through tasks, marking each done immediately after completion
3. **Continuation Phase**: If incomplete tasks remain, the system automatically prompts the model to continue
4. **Completion**: When all tasks are marked done, the workflow ends naturally
The todo state is preserved across the conversation (and any compressions), and injected into continuation prompts,
keeping the model focused on remaining work.
## Configuration Options
The Todo System is configured per-agent in `<loki-config-dir>/agents/<agent-name>/config.yaml`:
| Setting | Type | Default | Description |
|----------------------------|---------|-------------|---------------------------------------------------------------------------------|
| `auto_continue` | boolean | `false` | Enable the To-Do system for automatic continuation when incomplete todos remain |
| `max_auto_continues` | integer | `10` | Maximum number of automatic continuations before stopping |
| `inject_todo_instructions` | boolean | `true` | Inject the default todo tool usage instructions into the agent's system prompt |
| `continuation_prompt` | string | (see below) | Custom prompt used when auto-continuing |
### Example Configuration
```yaml
# agents/my-agent/config.yaml
model: openai:gpt-4o
auto_continue: true # Enable auto-continuation
max_auto_continues: 15 # Allow up to 15 automatic continuations
inject_todo_instructions: true # Include todo instructions in system prompt
continuation_prompt: | # Optional: customize the continuation prompt
[CONTINUE]
You have unfinished tasks. Proceed with the next pending item.
Do not explain; just execute.
```
### Default Continuation Prompt
If `continuation_prompt` is not specified, the following default is used:
```
[SYSTEM REMINDER - TODO CONTINUATION]
You have incomplete tasks in your todo list. Continue with the next pending item.
Call tools immediately. Do not explain what you will do.
```
## Available Tools
When `inject_todo_instructions` is enabled (the default), agents have access to four built-in todo management tools:
### `todo__init`
Initialize a new todo list with a goal. Clears any existing todos.
**Parameters:**
- `goal` (string, required): The overall goal to achieve when all todos are completed
**Example:**
```json
{"goal": "Refactor the authentication module"}
```
### `todo__add`
Add a new todo item to the list.
**Parameters:**
- `task` (string, required): Description of the todo task
**Example:**
```json
{"task": "Extract password validation into separate function"}
```
**Returns:** The assigned task ID
### `todo__done`
Mark a todo item as done by its ID.
**Parameters:**
- `id` (integer, required): The ID of the todo item to mark as done
**Example:**
```json
{"id": 1}
```
### `todo__list`
Display the current todo list with status of each item.
**Parameters:** None
**Returns:** The full todo list with goal, progress, and item statuses
## Auto-Continuation
When `auto_continue` is enabled, Loki automatically sends a continuation prompt if:
1. The agent's response completes (model stops generating)
2. There are incomplete tasks in the todo list
3. The continuation count hasn't exceeded `max_auto_continues`
4. The response isn't identical to the previous continuation (prevents loops)
### What Gets Injected
Each continuation prompt includes:
- The continuation prompt text (default or custom)
- The current todo list state showing:
- The goal
- Progress (e.g., "3/5 completed")
- Each task with status (✓ done, ○ pending)
**Example continuation context:**
```
[SYSTEM REMINDER - TODO CONTINUATION]
You have incomplete tasks in your todo list. Continue with the next pending item.
Call tools immediately. Do not explain what you will do.
Goal: Refactor the authentication module
Progress: 2/4 completed
✓ 1. Extract password validation into separate function
✓ 2. Add unit tests for password validation
○ 3. Update login handler to use new validation
○ 4. Update registration handler to use new validation
```
### Visual Feedback
During auto-continuation, you'll see a message in your terminal:
```
📋 Auto-continuing (3/10): 2 incomplete todo(s) remain
```
## Best Practices
### For Agent Developers
1. **Enable for complex workflows**: Use `auto_continue: true` for agents that handle multistep tasks
2. **Set reasonable limits**: Adjust `max_auto_continues` based on typical task complexity
3. **Customize the prompt**: If your agent needs specific continuation behavior, set a custom `continuation_prompt`
### For Model Behavior
The injected instructions tell models to:
1. **Always create a todo list before starting work**: This ensures planning happens upfront
2. **Mark each task done immediately after finishing**: Don't batch completions
3. **Add all planned steps before starting**: Complete planning before execution
### When to Use vs. Skip
**Use the Todo System when:**
- Tasks have 3+ distinct steps
- The model might lose track of progress
- You want guaranteed completion of all steps
- Working with smaller/less capable models
**Skip the Todo System when:**
- Single-step, simple tasks
- Interactive Q&A sessions
- The overhead of task tracking isn't worth it
## Example Workflow
Here's how a typical todo-driven workflow looks:
**User prompt:** "Add input validation to the user registration form"
**Model actions:**
```
1. todo__init(goal="Add input validation to user registration form")
2. todo__add(task="Analyze current registration form fields")
3. todo__add(task="Create validation rules for email field")
4. todo__add(task="Create validation rules for password field")
5. todo__add(task="Implement client-side validation")
6. todo__add(task="Add server-side validation")
7. todo__add(task="Write tests for validation logic")
```
**Model executes first task, then:**
```
8. todo__done(id=1)
9. [Proceeds with task 2...]
10. todo__done(id=2)
...
```
**If model stops with incomplete tasks:**
- System automatically sends continuation prompt
- Model sees remaining tasks and continues
- Repeats until all tasks are done or max continuations reached
## Troubleshooting
### Model Not Using Todo Tools
- Verify `inject_todo_instructions: true` in your agent config
- Check that the agent is properly loaded (not just a role)
- Some models may need explicit prompting to use the tools
### Too Many Continuations
- Lower `max_auto_continues` to a reasonable limit
- Check if the model is creating new tasks without completing old ones
- Ensure tasks are appropriately scoped (not too granular)
### Continuation Loop
The system detects when a model's response is identical to its previous continuation response and stops
automatically. If you're seeing loops:
- The model may be stuck; check if a task is impossible to complete
- Consider adjusting the `continuation_prompt` to be more directive
---
## Additional Docs
- [Agents](./AGENTS.md) - Full agent configuration guide
- [Function Calling](./function-calling/TOOLS.md) - How tools work in Loki
- [Sessions](./SESSIONS.md) - How conversation state is managed
---

At the time of writing, the following files support Loki secret injection:

| File | Description | Notes |
|-------------------------|-----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
| `config.yaml` | The main Loki configuration file | Cannot use secret injection on the `vault_password_file` field |
| `functions/mcp.json` | The MCP server configuration file | |
| `<agent>/tools.<py/sh>` | Tool files for agents | Specific configuration and only supported for Agents, not all global tools ([see below](#environment-variable-secret-injection-in-agents)) |
Note that all paths are relative to the Loki configuration directory. The directory varies by system, so you can find yours by
---

### input

Prompt for text input
**Example With Validation:**
```bash
text=$(with_validate 'input "Please enter something:"' validate_present 2>/dev/tty)
```
**Example Without Validation:**
```bash
text=$(input "Please enter something:" 2>/dev/tty)
```
### confirm
Show a confirm dialog with options for yes/no
**Example:**
```bash
confirmed=$(confirm "Do the thing?" 2>/dev/tty)
if [[ $confirmed == "0" ]]; then echo "No"; else echo "Yes"; fi
```
### list

Show a list dialog that can be navigated with the arrow
keys that then returns the chosen option.
**Example:**
```bash
options=("one" "two" "three" "four")
choice=$(list "Select an item" "${options[@]}" 2>/dev/tty)
echo "Your choice: ${options[$choice]}"
```
### checkbox

Show a multi-select checkbox dialog that can be navigated with the arrow
and enter keys that then returns the chosen options.
**Example:**
```bash
options=("one" "two" "three" "four")
checked=$(checkbox "Select one or more items" "${options[@]}" 2>/dev/tty)
echo "Your choices: ${checked}"
```
### password

Prompt for a password with hidden input

**Example With Validation:**

```bash
validate_password() {
  if [[ -z "$1" ]]; then
    echo "Password must not be empty" >&2
    exit 1
  fi
}
pass=$(with_validate 'password "Enter your password"' validate_password 2>/dev/tty)
```
**Example Without Validation:**
```bash
pass="$(password "Enter your password:" 2>/dev/tty)"
```
### editor
@@ -137,7 +137,7 @@ Open the default editor (`$EDITOR`); if none is set, default back to `vi`
**Example:**
```bash
text=$(editor "Please enter something in the editor" 2>/dev/tty)
echo -e "You wrote:\n${text}"
```
### with_validate

Run a prompt with a validation function; the prompt is re-run until the
validation function returns 0.
**Example:**
```bash
# Using the built-in 'validate_present' validator
text=$(with_validate 'input "Please enter something and confirm with enter"' validate_present 2>/dev/tty)
# Using a custom validator; e.g. for password
validate_password() {
  if [[ -z "$1" ]]; then
    echo "Password must not be empty" >&2
    exit 1
  fi
}
pass=$(with_validate 'password "Enter random password"' validate_password 2>/dev/tty)
```
### validate_present
Validate that the prompt returned a value.
**Example:**
```bash
text=$(with_validate 'input "Please enter something and confirm with enter"' validate_present 2>/dev/tty)
```
### detect_os
Detect the current operating system.

### open_link

Open the given link in the user's default browser.

**Example:**

```bash
open_link https://www.google.com
```
### guard_operation
Prompt for permission to run an operation.
Can be disabled by setting the environment variable `AUTO_CONFIRM`.
**Example:**
```bash
_run_sql
```
### guard_path
Prompt for permission to perform path operations.
Can be disabled by setting the environment variable `AUTO_CONFIRM`.
**Example:**
---

```yaml
enabled_mcp_servers: null # Which MCP servers to enable by default (e.g.
```
A special note about `enabled_mcp_servers`: a user can set this to `all` to enable all configured MCP servers in the
`functions/mcp.json` configuration.
(See the [Configuration Example](../../config.example.yaml) file for an example global configuration with all options.)
---
- [Enabling/Disabling Global Tools](#enablingdisabling-global-tools)
- [Role Configuration](#role-configuration)
- [Agent Configuration](#agent-configuration)
- [Tool Error Handling](#tool-error-handling)
- [Native/Shell Tool Errors](#nativeshell-tool-errors)
- [MCP Errors](#mcp-tool-errors)
- [Why Tool Error Handling Is Important](#why-this-matters)
<!--toc:end-->
---
Information about which tools can be enabled/disabled can be found in the [Configuration](#configuration) section.

| Tool | Description | Enabled by Default |
|------|-------------|--------------------|
| [`fetch_url_via_curl.sh`](../../assets/functions/tools/fetch_url_via_curl.sh) | Extract the content from a given URL using cURL. | 🔴 |
| [`fetch_url_via_jina.sh`](../../assets/functions/tools/fetch_url_via_jina.sh) | Extract the content from a given URL using Jina. | 🔴 |
| [`fs_cat.sh`](../../assets/functions/tools/fs_cat.sh) | Read the contents of a file at the specified path. | 🟢 |
| [`fs_read.sh`](../../assets/functions/tools/fs_read.sh) | Controlled reading of the contents of a file at the specified path with line numbers, offset, and limit to read specific sections. | 🟢 |
| [`fs_glob.sh`](../../assets/functions/tools/fs_glob.sh) | Find files by glob pattern. Returns matching file paths sorted by modification time. | 🟢 |
| [`fs_grep.sh`](../../assets/functions/tools/fs_grep.sh) | Search file contents using regular expressions. Returns matching file paths and lines. | 🟢 |
| [`fs_ls.sh`](../../assets/functions/tools/fs_ls.sh) | List all files and directories at the specified path. | 🟢 |
| [`fs_mkdir.sh`](../../assets/functions/tools/fs_mkdir.sh) | Create a new directory at the specified path. | 🔴 |
| [`fs_patch.sh`](../../assets/functions/tools/fs_patch.sh) | Apply a patch to a file at the specified path. <br>This can be used to edit a file without having to rewrite the whole file. | 🔴 |
@@ -137,3 +144,47 @@ The values for `mapping_tools` are inherited from the [global configuration](#gl
For more information about agents, refer to the [Agents](../AGENTS.md) documentation.
For a full example configuration for an agent, see the [Agent Configuration Example](../../config.agent.example.yaml) file.
---
## Tool Error Handling
When tools fail, Loki captures error information and passes it back to the model so it can diagnose issues and
potentially retry or adjust its approach.
### Native/Shell Tool Errors
When a shell-based tool exits with a non-zero exit code, the model receives:
```json
{
"tool_call_error": "Tool call 'my_tool' exited with code 1",
"stderr": "Error: file not found: config.json"
}
```
The `stderr` field contains the actual error output from the tool, giving the model context about what went wrong.
If the tool produces no stderr output, only the `tool_call_error` field is included.
**Note:** Tool stdout streams to your terminal in real time so you can see progress. Only stderr is captured for
error reporting.
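The packaging described above can be sketched in a few lines of Python (a rough illustration only — Loki's actual implementation is in Rust, and here stdout is captured rather than streamed):

```python
import json
import subprocess

def run_shell_tool(name, argv):
    """Run a shell-based tool and, on failure, build the error payload
    the model receives. Field names match the JSON shown above."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    if proc.returncode == 0:
        return {"output": proc.stdout}
    error = {"tool_call_error": f"Tool call '{name}' exited with code {proc.returncode}"}
    if proc.stderr.strip():
        # Include stderr only when the tool actually wrote something to it.
        error["stderr"] = proc.stderr.strip()
    return error

# A tool that fails loudly yields both fields; a silent failure
# yields only 'tool_call_error'.
failing = ["sh", "-c", "echo 'Error: file not found: config.json' >&2; exit 1"]
print(json.dumps(run_shell_tool("my_tool", failing)))
```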
### MCP Tool Errors
When an MCP (Model Context Protocol) tool invocation fails due to connection issues, timeouts, or server errors,
the model receives:
```json
{
"tool_call_error": "MCP tool invocation failed: connection refused"
}
```
This allows the model to understand that an external service failed and take appropriate action (retry, use an
alternative approach, or inform the user).
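The mapping from a failed MCP invocation to that single-field payload can be sketched as follows (a hypothetical Python wrapper — Loki's MCP client is Rust, and the exact error strings may differ):

```python
import json

def invoke_mcp_tool(invoke, name, arguments):
    """Wrap an MCP tool call so transport/server failures become the
    payload shown above. `invoke` stands in for a real MCP client call."""
    try:
        return {"output": invoke(name, arguments)}
    except Exception as exc:  # connection refused, timeout, server error, ...
        return {"tool_call_error": f"MCP tool invocation failed: {exc}"}

def refused(name, arguments):
    # Simulates an MCP server that is down.
    raise ConnectionError("connection refused")

print(json.dumps(invoke_mcp_tool(refused, "search_docs", {"q": "loki"})))
```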
### Why This Matters
Without proper error propagation, models would only know that "something went wrong" without understanding *what*
went wrong. By including stderr output and detailed error messages, models can:
- Diagnose the root cause of failures
- Suggest fixes (e.g., "the file doesn't exist, should I create it?")
- Retry with corrected parameters
- Fall back to alternative approaches when appropriate
Binary file not shown (image, 55 KiB).
+25
@@ -0,0 +1,25 @@
# List all recipes
default:
@just --list
# Run all tests
[group: 'test']
test:
cargo test --all
# See what linter errors and warnings are unaddressed
[group: 'style']
lint:
cargo clippy --all
# Run Rustfmt against all source files
[group: 'style']
fmt:
cargo fmt --all
# Build the project for the current system architecture
# (Gets stored at ./target/[debug|release]/loki)
[group: 'build']
[arg('build_type', pattern="debug|release")]
build build_type='debug':
@cargo build {{ if build_type == "release" { "--release" } else { "" } }}
+415 -320
File diff suppressed because it is too large.
+2
@@ -0,0 +1,2 @@
requests
ruamel.yaml
+255
@@ -0,0 +1,255 @@
import requests
import sys
import re
# Provider mapping from models.yaml to OpenRouter prefixes
PROVIDER_MAPPING = {
"openai": "openai",
"claude": "anthropic",
"gemini": "google",
"mistral": "mistralai",
"cohere": "cohere",
"perplexity": "perplexity",
"xai": "x-ai",
"openrouter": "openrouter",
"ai21": "ai21",
"deepseek": "deepseek",
"moonshot": "moonshotai",
"qianwen": "qwen",
"zhipuai": "zhipuai",
"minimax": "minimax",
"vertexai": "google",
"groq": "groq",
"bedrock": "amazon",
"hunyuan": "tencent",
"ernie": "baidu",
"github": "github",
}
def fetch_openrouter_models():
print("Fetching models from OpenRouter...")
try:
response = requests.get("https://openrouter.ai/api/v1/models")
response.raise_for_status()
data = response.json()["data"]
print(f"Fetched {len(data)} models.")
return data
except Exception as e:
print(f"Error fetching models: {e}")
sys.exit(1)
def get_openrouter_model(models_data, provider_prefix, model_name, is_openrouter_provider=False):
if is_openrouter_provider:
# For openrouter provider, the model_name in yaml is usually the full ID
for model in models_data:
if model["id"] == model_name:
return model
return None
expected_id = f"{provider_prefix}/{model_name}"
# 1. Try exact match on ID
for model in models_data:
if model["id"] == expected_id:
return model
# 2. Try match by suffix
for model in models_data:
if model["id"].split("/")[-1] == model_name:
if model["id"].startswith(f"{provider_prefix}/"):
return model
return None
def format_price(price_per_token):
if price_per_token is None:
return None
try:
price_per_1m = float(price_per_token) * 1_000_000
if price_per_1m.is_integer():
return str(int(price_per_1m))
else:
return str(round(price_per_1m, 4))
except (TypeError, ValueError):
return None
def get_indentation(line):
return len(line) - len(line.lstrip())
def process_model_block(block_lines, current_provider, or_models):
if not block_lines:
return []
# 1. Identify model name and indentation
name_line = block_lines[0]
name_match = re.match(r"^(\s*)-\s*name:\s*(.+)$", name_line)
if not name_match:
return block_lines
name_indent_str = name_match.group(1)
model_name = name_match.group(2).strip()
# 2. Find OpenRouter model
or_prefix = PROVIDER_MAPPING.get(current_provider)
is_openrouter_provider = (current_provider == "openrouter")
if not or_prefix and not is_openrouter_provider:
return block_lines
or_model = get_openrouter_model(or_models, or_prefix, model_name, is_openrouter_provider)
if not or_model:
return block_lines
print(f" Updating {model_name}...")
# 3. Prepare updates
updates = {}
# Pricing
pricing = or_model.get("pricing", {})
p_in = format_price(pricing.get("prompt"))
p_out = format_price(pricing.get("completion"))
if p_in: updates["input_price"] = p_in
if p_out: updates["output_price"] = p_out
# Context
ctx = or_model.get("context_length")
if ctx: updates["max_input_tokens"] = str(ctx)
max_out = None
if "top_provider" in or_model and or_model["top_provider"]:
max_out = or_model["top_provider"].get("max_completion_tokens")
if max_out: updates["max_output_tokens"] = str(max_out)
# Capabilities
arch = or_model.get("architecture", {})
modality = arch.get("modality", "")
if "image" in modality:
updates["supports_vision"] = "true"
# 4. Detect field indentation
field_indent_str = None
existing_fields = {} # key -> line_index
for i, line in enumerate(block_lines):
if i == 0: continue # Skip name line
# Skip comments
if line.strip().startswith("#"):
continue
# Look for "key: value"
m = re.match(r"^(\s*)([\w_-]+):", line)
if m:
indent = m.group(1)
key = m.group(2)
# Must be deeper than name line
if len(indent) > len(name_indent_str):
if field_indent_str is None:
field_indent_str = indent
existing_fields[key] = i
if field_indent_str is None:
field_indent_str = name_indent_str + " "
# 5. Apply updates
new_block = list(block_lines)
# Update existing fields
for key, value in updates.items():
if key in existing_fields:
idx = existing_fields[key]
# Preserve original key indentation exactly
original_line = new_block[idx]
m = re.match(r"^(\s*)([\w_-]+):", original_line)
if m:
current_indent = m.group(1)
new_block[idx] = f"{current_indent}{key}: {value}\n"
# Insert missing fields
# Insert after the name line
insertion_idx = 1
for key, value in updates.items():
if key not in existing_fields:
new_line = f"{field_indent_str}{key}: {value}\n"
new_block.insert(insertion_idx, new_line)
insertion_idx += 1
return new_block
def main():
or_models = fetch_openrouter_models()
print("Reading models.yaml...")
with open("models.yaml", "r") as f:
lines = f.readlines()
new_lines = []
current_provider = None
i = 0
while i < len(lines):
line = lines[i]
# Check for provider
# - provider: name
p_match = re.match(r"^\s*-?\s*provider:\s*(.+)$", line)
if p_match:
current_provider = p_match.group(1).strip()
new_lines.append(line)
i += 1
continue
# Check for model start
# - name: ...
m_match = re.match(r"^(\s*)-\s*name:\s*.+$", line)
if m_match:
# Start of a model block
start_indent = len(m_match.group(1))
# Collect block lines
block_lines = [line]
j = i + 1
while j < len(lines):
next_line = lines[j]
stripped = next_line.strip()
# If empty or comment, include it
if not stripped or stripped.startswith("#"):
block_lines.append(next_line)
j += 1
continue
# Check indentation
next_indent = get_indentation(next_line)
# If indentation is greater, it's part of the block (property)
if next_indent > start_indent:
block_lines.append(next_line)
j += 1
continue
# If indentation is equal or less, it's the end of the block
break
# Process the block
processed_block = process_model_block(block_lines, current_provider, or_models)
new_lines.extend(processed_block)
# Advance i
i = j
continue
# Otherwise, just a regular line
new_lines.append(line)
i += 1
print("Saving models.yaml...")
with open("models.yaml", "w") as f:
f.writelines(new_lines)
print("Done.")
if __name__ == "__main__":
main()
+2 -2
@@ -234,7 +234,7 @@ async fn chat_completions_streaming(
}
let arguments: Value =
function_arguments.parse().with_context(|| {
format!("Tool call '{function_name}' have non-JSON arguments '{function_arguments}'")
format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
})?;
handler.tool_call(ToolCall::new(
function_name.clone(),
@@ -272,7 +272,7 @@ async fn chat_completions_streaming(
function_arguments = String::from("{}");
}
let arguments: Value = function_arguments.parse().with_context(|| {
format!("Tool call '{function_name}' have non-JSON arguments '{function_arguments}'")
format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
})?;
handler.tool_call(ToolCall::new(
function_name.clone(),
+8 -5
@@ -93,10 +93,13 @@ pub async fn claude_chat_completions_streaming(
data["content_block"]["id"].as_str(),
) {
if !function_name.is_empty() {
let arguments: Value =
let arguments: Value = if function_arguments.is_empty() {
json!({})
} else {
function_arguments.parse().with_context(|| {
format!("Tool call '{function_name}' have non-JSON arguments '{function_arguments}'")
})?;
format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
})?
};
handler.tool_call(ToolCall::new(
function_name.clone(),
arguments,
@@ -134,7 +137,7 @@ pub async fn claude_chat_completions_streaming(
json!({})
} else {
function_arguments.parse().with_context(|| {
format!("Tool call '{function_name}' have non-JSON arguments '{function_arguments}'")
format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
})?
};
handler.tool_call(ToolCall::new(
@@ -286,7 +289,7 @@ pub fn claude_build_chat_completions_body(
body["tools"] = functions
.iter()
.map(|v| {
if v.parameters.type_value.is_none() {
if v.parameters.is_empty_properties() {
json!({
"name": v.name,
"description": v.description,
+2 -2
@@ -167,7 +167,7 @@ async fn chat_completions_streaming(
"tool-call-end" => {
if !function_name.is_empty() {
let arguments: Value = function_arguments.parse().with_context(|| {
format!("Tool call '{function_name}' have non-JSON arguments '{function_arguments}'")
format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
})?;
handler.tool_call(ToolCall::new(
function_name.clone(),
@@ -230,7 +230,7 @@ fn extract_chat_completions(data: &Value) -> Result<ChatCompletionsOutput> {
call["id"].as_str(),
) {
let arguments: Value = arguments.parse().with_context(|| {
format!("Tool call '{name}' have non-JSON arguments '{arguments}'")
format!("Tool call '{name}' has non-JSON arguments '{arguments}'")
})?;
tool_calls.push(ToolCall::new(
name.to_string(),
+17 -9
@@ -411,9 +411,11 @@ pub async fn call_chat_completions(
client: &dyn Client,
abort_signal: AbortSignal,
) -> Result<(String, Vec<ToolResult>)> {
let is_child_agent = client.global_config().read().current_depth > 0;
let spinner_message = if is_child_agent { "" } else { "Generating" };
let ret = abortable_run_with_spinner(
client.chat_completions(input.clone()),
"Generating",
spinner_message,
abort_signal,
)
.await;
@@ -433,10 +435,13 @@ pub async fn call_chat_completions(
client.global_config().read().print_markdown(&text)?;
}
}
Ok((
text,
eval_tool_calls(client.global_config(), tool_calls).await?,
))
let tool_results = eval_tool_calls(client.global_config(), tool_calls).await?;
if let Some(tracker) = client.global_config().write().tool_call_tracker.as_mut() {
tool_results
.iter()
.for_each(|res| tracker.record_call(res.call.clone()));
}
Ok((text, tool_results))
}
Err(err) => Err(err),
}
@@ -467,10 +472,13 @@ pub async fn call_chat_completions_streaming(
if !text.is_empty() && !text.ends_with('\n') {
println!();
}
Ok((
text,
eval_tool_calls(client.global_config(), tool_calls).await?,
))
let tool_results = eval_tool_calls(client.global_config(), tool_calls).await?;
if let Some(tracker) = client.global_config().write().tool_call_tracker.as_mut() {
tool_results
.iter()
.for_each(|res| tracker.record_call(res.call.clone()));
}
Ok((text, tool_results))
}
Err(err) => {
if !text.is_empty() {
+1 -1
@@ -228,7 +228,7 @@ macro_rules! config_get_fn {
std::env::var(&env_name)
.ok()
.or_else(|| self.config.$field_name.clone())
.ok_or_else(|| anyhow::anyhow!("Miss '{}'", stringify!($field_name)))
.ok_or_else(|| anyhow::anyhow!("Missing '{}'", stringify!($field_name)))
}
};
}
+2 -2
@@ -164,7 +164,7 @@ pub async fn openai_chat_completions_streaming(
function_arguments = String::from("{}");
}
let arguments: Value = function_arguments.parse().with_context(|| {
format!("Tool call '{function_name}' have non-JSON arguments '{function_arguments}'")
format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
})?;
handler.tool_call(ToolCall::new(
function_name.clone(),
@@ -370,7 +370,7 @@ pub fn openai_extract_chat_completions(data: &Value) -> Result<ChatCompletionsOu
call["id"].as_str(),
) {
let arguments: Value = arguments.parse().with_context(|| {
format!("Tool call '{name}' have non-JSON arguments '{arguments}'")
format!("Tool call '{name}' has non-JSON arguments '{arguments}'")
})?;
tool_calls.push(ToolCall::new(
name.to_string(),
+153 -2
@@ -13,6 +13,9 @@ pub struct SseHandler {
abort_signal: AbortSignal,
buffer: String,
tool_calls: Vec<ToolCall>,
last_tool_calls: Vec<ToolCall>,
max_call_repeats: usize,
call_repeat_chain_len: usize,
}
impl SseHandler {
@@ -22,11 +25,13 @@ impl SseHandler {
abort_signal,
buffer: String::new(),
tool_calls: Vec::new(),
last_tool_calls: Vec::new(),
max_call_repeats: 2,
call_repeat_chain_len: 3,
}
}
pub fn text(&mut self, text: &str) -> Result<()> {
// debug!("HandleText: {}", text);
if text.is_empty() {
return Ok(());
}
@@ -45,7 +50,6 @@ impl SseHandler {
}
pub fn done(&mut self) {
// debug!("HandleDone");
let ret = self.sender.send(SseEvent::Done);
if ret.is_err() {
if self.abort_signal.aborted() {
@@ -56,14 +60,114 @@ impl SseHandler {
}
pub fn tool_call(&mut self, call: ToolCall) -> Result<()> {
if self.is_call_loop(&call) {
let loop_message = self.create_loop_detection_message(&call);
return Err(anyhow!(loop_message));
}
if self.last_tool_calls.len() == self.call_repeat_chain_len * self.max_call_repeats {
self.last_tool_calls.remove(0);
}
self.last_tool_calls.push(call.clone());
self.tool_calls.push(call);
Ok(())
}
fn is_call_loop(&self, new_call: &ToolCall) -> bool {
if self.last_tool_calls.len() < self.call_repeat_chain_len {
return false;
}
if let Some(last_call) = self.last_tool_calls.last()
&& self.calls_match(last_call, new_call)
{
let mut repeat_count = 1;
for i in (0..self.last_tool_calls.len()).rev() {
if i == 0 {
break;
}
if self.calls_match(&self.last_tool_calls[i - 1], &self.last_tool_calls[i]) {
repeat_count += 1;
if repeat_count >= self.max_call_repeats {
return true;
}
} else {
break;
}
}
}
let chain_start = self
.last_tool_calls
.len()
.saturating_sub(self.call_repeat_chain_len);
let chain = &self.last_tool_calls[chain_start..];
if chain.len() == self.call_repeat_chain_len {
let mut is_repeating = true;
for i in 0..chain.len() - 1 {
if !self.calls_match(&chain[i], &chain[i + 1]) {
is_repeating = false;
break;
}
}
if is_repeating && self.calls_match(&chain[chain.len() - 1], new_call) {
return true;
}
}
false
}
fn calls_match(&self, call1: &ToolCall, call2: &ToolCall) -> bool {
call1.name == call2.name && call1.arguments == call2.arguments
}
fn create_loop_detection_message(&self, new_call: &ToolCall) -> String {
let mut message = String::from("⚠️ Call loop detected! ⚠️\n");
message.push_str(&format!(
"The call '{}' with arguments '{}' is repeating.\n",
new_call.name, new_call.arguments
));
if self.last_tool_calls.len() >= self.call_repeat_chain_len {
let chain_start = self
.last_tool_calls
.len()
.saturating_sub(self.call_repeat_chain_len);
let chain = &self.last_tool_calls[chain_start..];
message.push_str("The following sequence of calls is repeating:\n");
for (i, call) in chain.iter().enumerate() {
message.push_str(&format!(
" {}. {} with arguments {}\n",
i + 1,
call.name,
call.arguments
));
}
}
message.push_str("\nPlease move on to the next task in your sequence using the last output you got from the call or chain you are trying to re-execute. ");
message.push_str(
"Consider using different parameters or a different approach to avoid this loop.",
);
message
}
pub fn abort(&self) -> AbortSignal {
self.abort_signal.clone()
}
#[cfg(test)]
pub fn last_tool_calls(&self) -> &[ToolCall] {
&self.last_tool_calls
}
pub fn take(self) -> (String, Vec<ToolCall>) {
let Self {
buffer, tool_calls, ..
@@ -239,6 +343,53 @@ mod tests {
use bytes::Bytes;
use futures_util::stream;
use rand::Rng;
use serde_json::json;
#[test]
fn test_last_tool_calls_ring_buffer() {
let (sender, _) = tokio::sync::mpsc::unbounded_channel();
let abort_signal = crate::utils::create_abort_signal();
let mut handler = SseHandler::new(sender, abort_signal);
for i in 0..15 {
let call = ToolCall::new(format!("test_function_{}", i), json!({"param": i}), None);
handler.tool_call(call.clone()).unwrap();
}
let lt_len = handler.call_repeat_chain_len * handler.max_call_repeats;
assert_eq!(handler.last_tool_calls().len(), lt_len);
assert_eq!(
handler.last_tool_calls()[lt_len - 1].name,
"test_function_14"
);
assert_eq!(
handler.last_tool_calls()[0].name,
format!("test_function_{}", 14 - lt_len + 1)
);
}
#[test]
fn test_call_loop_detection() {
let (sender, _) = tokio::sync::mpsc::unbounded_channel();
let abort_signal = crate::utils::create_abort_signal();
let mut handler = SseHandler::new(sender, abort_signal);
handler.max_call_repeats = 2;
handler.call_repeat_chain_len = 3;
let call = ToolCall::new("test_function_loop".to_string(), json!({"param": 1}), None);
for _ in 0..3 {
handler.tool_call(call.clone()).unwrap();
}
let result = handler.tool_call(call.clone());
assert!(result.is_err());
let error_message = result.unwrap_err().to_string();
assert!(error_message.contains("Call loop detected!"));
assert!(error_message.contains("test_function_loop"));
}
fn split_chunks(text: &str) -> Vec<Vec<u8>> {
let mut rng = rand::rng();
+22 -4
@@ -219,7 +219,14 @@ pub async fn gemini_chat_completions_streaming(
part["functionCall"]["name"].as_str(),
part["functionCall"]["args"].as_object(),
) {
handler.tool_call(ToolCall::new(name.to_string(), json!(args), None))?;
let thought_signature = part["thoughtSignature"]
.as_str()
.or_else(|| part["thought_signature"].as_str())
.map(|s| s.to_string());
handler.tool_call(
ToolCall::new(name.to_string(), json!(args), None)
.with_thought_signature(thought_signature),
)?;
}
}
} else if let Some("SAFETY") = data["promptFeedback"]["blockReason"]
@@ -280,7 +287,14 @@ fn gemini_extract_chat_completions_text(data: &Value) -> Result<ChatCompletionsO
part["functionCall"]["name"].as_str(),
part["functionCall"]["args"].as_object(),
) {
tool_calls.push(ToolCall::new(name.to_string(), json!(args), None));
let thought_signature = part["thoughtSignature"]
.as_str()
.or_else(|| part["thought_signature"].as_str())
.map(|s| s.to_string());
tool_calls.push(
ToolCall::new(name.to_string(), json!(args), None)
.with_thought_signature(thought_signature),
);
}
}
}
@@ -347,12 +361,16 @@ pub fn gemini_build_chat_completions_body(
},
MessageContent::ToolCalls(MessageContentToolCalls { tool_results, .. }) => {
let model_parts: Vec<Value> = tool_results.iter().map(|tool_result| {
json!({
let mut part = json!({
"functionCall": {
"name": tool_result.call.name,
"args": tool_result.call.arguments,
}
})
});
if let Some(sig) = &tool_result.call.thought_signature {
part["thoughtSignature"] = json!(sig);
}
part
}).collect();
let function_parts: Vec<Value> = tool_results.into_iter().map(|tool_result| {
json!({
+183 -6
@@ -1,3 +1,4 @@
use super::todo::TodoList;
use super::*;
use crate::{
@@ -5,6 +6,10 @@ use crate::{
function::{Functions, run_llm_function},
};
use crate::config::prompts::{
DEFAULT_SPAWN_INSTRUCTIONS, DEFAULT_TEAMMATE_INSTRUCTIONS, DEFAULT_TODO_INSTRUCTIONS,
DEFAULT_USER_INTERACTION_INSTRUCTIONS,
};
use crate::vault::SECRET_RE;
use anyhow::{Context, Result};
use fancy_regex::Captures;
@@ -33,6 +38,9 @@ pub struct Agent {
rag: Option<Arc<Rag>>,
model: Model,
vault: GlobalVault,
todo_list: TodoList,
continuation_count: usize,
last_continuation_response: Option<String>,
}
impl Agent {
@@ -124,7 +132,6 @@ impl Agent {
}
config.write().mcp_registry = Some(new_mcp_registry);
agent_config.replace_tools_placeholder(&functions);
agent_config.load_envs(&config.read());
@@ -188,6 +195,19 @@ impl Agent {
None
};
if agent_config.auto_continue {
functions.append_todo_functions();
}
if agent_config.can_spawn_agents {
functions.append_supervisor_functions();
}
functions.append_teammate_functions();
functions.append_user_interaction_functions();
agent_config.replace_tools_placeholder(&functions);
Ok(Self {
name: name.to_string(),
config: agent_config,
@@ -199,11 +219,15 @@ impl Agent {
rag,
model,
vault: Arc::clone(&config.read().vault),
todo_list: TodoList::default(),
continuation_count: 0,
last_continuation_response: None,
})
}
pub fn init_agent_variables(
agent_variables: &[AgentVariable],
pre_set_variables: Option<&AgentVariables>,
no_interaction: bool,
) -> Result<AgentVariables> {
let mut output = IndexMap::new();
@@ -214,6 +238,10 @@ impl Agent {
let mut unset_variables = vec![];
for agent_variable in agent_variables {
let key = agent_variable.name.clone();
if let Some(value) = pre_set_variables.and_then(|v| v.get(&key)) {
output.insert(key, value.clone());
continue;
}
if let Some(value) = agent_variable.default.clone() {
output.insert(key, value);
continue;
@@ -280,7 +308,7 @@ impl Agent {
}
pub fn banner(&self) -> String {
self.config.banner()
self.config.banner(&self.conversation_starters())
}
pub fn name(&self) -> &str {
@@ -295,8 +323,12 @@ impl Agent {
self.rag.clone()
}
pub fn conversation_starters(&self) -> &[String] {
&self.config.conversation_starters
pub fn conversation_starters(&self) -> Vec<String> {
self.config
.conversation_starters
.iter()
.map(|starter| self.interpolate_text(starter))
.collect()
}
pub fn interpolated_instructions(&self) -> String {
@@ -305,6 +337,23 @@ impl Agent {
.clone()
.or_else(|| self.shared_dynamic_instructions.clone())
.unwrap_or_else(|| self.config.instructions.clone());
if self.config.auto_continue && self.config.inject_todo_instructions {
output.push_str(DEFAULT_TODO_INSTRUCTIONS);
}
if self.config.can_spawn_agents && self.config.inject_spawn_instructions {
output.push_str(DEFAULT_SPAWN_INSTRUCTIONS);
}
output.push_str(DEFAULT_TEAMMATE_INSTRUCTIONS);
output.push_str(DEFAULT_USER_INTERACTION_INSTRUCTIONS);
self.interpolate_text(&output)
}
fn interpolate_text(&self, text: &str) -> String {
let mut output = text.to_string();
for (k, v) in self.variables() {
output = output.replace(&format!("{{{{{k}}}}}"), v)
}
@@ -362,6 +411,87 @@ impl Agent {
self.session_dynamic_instructions = None;
}
pub fn auto_continue_enabled(&self) -> bool {
self.config.auto_continue
}
pub fn max_auto_continues(&self) -> usize {
self.config.max_auto_continues
}
pub fn can_spawn_agents(&self) -> bool {
self.config.can_spawn_agents
}
pub fn max_concurrent_agents(&self) -> usize {
self.config.max_concurrent_agents
}
pub fn max_agent_depth(&self) -> usize {
self.config.max_agent_depth
}
pub fn summarization_model(&self) -> Option<&str> {
self.config.summarization_model.as_deref()
}
pub fn summarization_threshold(&self) -> usize {
self.config.summarization_threshold
}
pub fn escalation_timeout(&self) -> u64 {
self.config.escalation_timeout
}
pub fn continuation_count(&self) -> usize {
self.continuation_count
}
pub fn increment_continuation(&mut self) {
self.continuation_count += 1;
}
pub fn reset_continuation(&mut self) {
self.continuation_count = 0;
self.last_continuation_response = None;
}
pub fn set_last_continuation_response(&mut self, response: String) {
self.last_continuation_response = Some(response);
}
pub fn todo_list(&self) -> &TodoList {
&self.todo_list
}
pub fn init_todo_list(&mut self, goal: &str) {
self.todo_list = TodoList::new(goal);
}
pub fn add_todo(&mut self, task: &str) -> usize {
self.todo_list.add(task)
}
pub fn mark_todo_done(&mut self, id: usize) -> bool {
self.todo_list.mark_done(id)
}
pub fn continuation_prompt(&self) -> String {
self.config.continuation_prompt.clone().unwrap_or_else(|| {
formatdoc! {"
[SYSTEM REMINDER - TODO CONTINUATION]
You have incomplete tasks. Rules:
1. BEFORE marking a todo done: verify the work compiles/works. No premature completion.
2. If a todo is broad (e.g. \"implement X and implement Y\"): break it into specific subtasks FIRST using todo__add, then work on those.
3. Each todo should be atomic and be \"single responsibility\" - completable in one focused action.
4. Continue with the next pending item now. Call tools immediately."}
})
}
pub fn compression_threshold(&self) -> Option<usize> {
self.config.compression_threshold
}
pub fn is_dynamic_instructions(&self) -> bool {
self.config.dynamic_instructions
}
@@ -484,6 +614,22 @@ pub struct AgentConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub agent_session: Option<String>,
#[serde(default)]
pub auto_continue: bool,
#[serde(default)]
pub can_spawn_agents: bool,
#[serde(default = "default_max_concurrent_agents")]
pub max_concurrent_agents: usize,
#[serde(default = "default_max_agent_depth")]
pub max_agent_depth: usize,
#[serde(default = "default_max_auto_continues")]
pub max_auto_continues: usize,
#[serde(default = "default_true")]
pub inject_todo_instructions: bool,
#[serde(default = "default_true")]
pub inject_spawn_instructions: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub compression_threshold: Option<usize>,
#[serde(default)]
pub description: String,
#[serde(default)]
pub version: String,
@@ -491,6 +637,8 @@ pub struct AgentConfig {
pub mcp_servers: Vec<String>,
#[serde(default)]
pub global_tools: Vec<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub continuation_prompt: Option<String>,
#[serde(default)]
pub instructions: String,
#[serde(default)]
@@ -501,6 +649,36 @@ pub struct AgentConfig {
pub conversation_starters: Vec<String>,
#[serde(default)]
pub documents: Vec<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub summarization_model: Option<String>,
#[serde(default = "default_summarization_threshold")]
pub summarization_threshold: usize,
#[serde(default = "default_escalation_timeout")]
pub escalation_timeout: u64,
}
fn default_max_auto_continues() -> usize {
10
}
fn default_max_concurrent_agents() -> usize {
4
}
fn default_max_agent_depth() -> usize {
3
}
fn default_true() -> bool {
true
}
fn default_summarization_threshold() -> usize {
4000
}
fn default_escalation_timeout() -> u64 {
300
}
impl AgentConfig {
@@ -550,12 +728,11 @@ impl AgentConfig {
}
}
fn banner(&self) -> String {
fn banner(&self, conversation_starters: &[String]) -> String {
let AgentConfig {
name,
description,
version,
conversation_starters,
..
} = self;
let starters = if conversation_starters.is_empty() {
+111 -19
@@ -1,8 +1,10 @@
mod agent;
mod input;
mod macros;
mod prompts;
mod role;
mod session;
pub(crate) mod todo;
pub use self::agent::{Agent, AgentVariables, complete_agent_variables, list_agents};
pub use self::input::Input;
@@ -17,15 +19,20 @@ use crate::client::{
ClientConfig, MessageContentToolCalls, Model, ModelType, OPENAI_COMPATIBLE_PROVIDERS,
ProviderModels, create_client_config, list_client_types, list_models,
};
use crate::function::{FunctionDeclaration, Functions, ToolResult};
use crate::function::user_interaction::USER_FUNCTION_PREFIX;
use crate::function::{FunctionDeclaration, Functions, ToolCallTracker, ToolResult};
use crate::rag::Rag;
use crate::render::{MarkdownRender, RenderOptions};
use crate::utils::*;
use crate::config::macros::Macro;
use crate::mcp::{
MCP_INVOKE_META_FUNCTION_NAME_PREFIX, MCP_LIST_META_FUNCTION_NAME_PREFIX, McpRegistry,
MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX, MCP_INVOKE_META_FUNCTION_NAME_PREFIX,
MCP_SEARCH_META_FUNCTION_NAME_PREFIX, McpRegistry,
};
use crate::supervisor::Supervisor;
use crate::supervisor::escalation::EscalationQueue;
use crate::supervisor::mailbox::Inbox;
use crate::vault::{GlobalVault, Vault, create_vault_password_file, interpolate_secrets};
use anyhow::{Context, Result, anyhow, bail};
use fancy_regex::Regex;
@@ -94,6 +101,10 @@ const RAG_TEMPLATE: &str = r#"Answer the query based on the context while respec
__CONTEXT__
</context>
<sources>
__SOURCES__
</sources>
<rules>
- If you don't know, just say so.
- If you are not sure, ask for clarification.
@@ -101,6 +112,7 @@ __CONTEXT__
- If the context appears unreadable or of poor quality, tell the user then answer as best as you can.
- If the answer is not in the context but you think you know the answer, explain that to the user then answer with your own knowledge.
- Answer directly and without using xml tags.
- When using information from the context, cite the relevant source from the <sources> section.
</rules>
<user_query>
@@ -198,6 +210,20 @@ pub struct Config {
pub rag: Option<Arc<Rag>>,
#[serde(skip)]
pub agent: Option<Agent>,
#[serde(skip)]
pub(crate) tool_call_tracker: Option<ToolCallTracker>,
#[serde(skip)]
pub supervisor: Option<Arc<RwLock<Supervisor>>>,
#[serde(skip)]
pub parent_supervisor: Option<Arc<RwLock<Supervisor>>>,
#[serde(skip)]
pub self_agent_id: Option<String>,
#[serde(skip)]
pub current_depth: usize,
#[serde(skip)]
pub inbox: Option<Arc<Inbox>>,
#[serde(skip)]
pub root_escalation_queue: Option<Arc<EscalationQueue>>,
}
impl Default for Config {
@@ -270,6 +296,13 @@ impl Default for Config {
session: None,
rag: None,
agent: None,
tool_call_tracker: Some(ToolCallTracker::default()),
supervisor: None,
parent_supervisor: None,
self_agent_id: None,
current_depth: 0,
inbox: None,
root_escalation_queue: None,
}
}
}
@@ -799,7 +832,7 @@ impl Config {
|| s == "all"
}) {
bail!(
"Some of the specified MCP servers in 'enabled_mcp_servers' are configured. Please check your MCP server configuration."
"Some of the specified MCP servers in 'enabled_mcp_servers' are not fully configured. Please check your MCP server configuration."
);
}
}
@@ -1569,8 +1602,18 @@ impl Config {
.summary_context_prompt
.clone()
.unwrap_or_else(|| SUMMARY_CONTEXT_PROMPT.into());
let todo_prefix = config
.read()
.agent
.as_ref()
.map(|agent| agent.todo_list())
.filter(|todos| !todos.is_empty())
.map(|todos| format!("[ACTIVE TODO LIST]\n{}\n\n", todos.render_for_model()))
.unwrap_or_default();
if let Some(session) = config.write().session.as_mut() {
session.compress(format!("{summary_context_prompt}{summary}"));
session.compress(format!("{todo_prefix}{summary_context_prompt}{summary}"));
}
config.write().discontinuous_last_message();
Ok(())
@@ -1741,10 +1784,10 @@ impl Config {
abort_signal: AbortSignal,
) -> Result<String> {
let (reranker_model, top_k) = rag.get_config();
let (embeddings, ids) = rag
let (embeddings, sources, ids) = rag
.search(text, top_k, reranker_model.as_deref(), abort_signal)
.await?;
let text = config.read().rag_template(&embeddings, text);
let text = config.read().rag_template(&embeddings, &sources, text);
rag.set_last_sources(&ids);
Ok(text)
}
@@ -1766,7 +1809,7 @@ impl Config {
}
}
pub fn rag_template(&self, embeddings: &str, text: &str) -> String {
pub fn rag_template(&self, embeddings: &str, sources: &str, text: &str) -> String {
if embeddings.is_empty() {
return text.to_string();
}
@@ -1774,6 +1817,7 @@ impl Config {
.as_deref()
.unwrap_or(RAG_TEMPLATE)
.replace("__CONTEXT__", embeddings)
.replace("__SOURCES__", sources)
.replace("__INPUT__", text)
}
@@ -1797,8 +1841,17 @@ impl Config {
agent.agent_session().map(|v| v.to_string())
}
});
let should_init_supervisor = agent.can_spawn_agents();
let max_concurrent = agent.max_concurrent_agents();
let max_depth = agent.max_agent_depth();
config.write().rag = agent.rag();
config.write().agent = Some(agent);
if should_init_supervisor {
config.write().supervisor = Some(Arc::new(RwLock::new(Supervisor::new(
max_concurrent,
max_depth,
))));
}
if let Some(session) = session {
Config::use_session_safely(config, Some(&session), abort_signal).await?;
} else {
@@ -1850,6 +1903,10 @@ impl Config {
self.exit_session()?;
self.load_functions()?;
if self.agent.take().is_some() {
if let Some(ref supervisor) = self.supervisor {
supervisor.read().cancel_all();
}
self.supervisor.take();
self.rag.take();
self.discontinuous_last_message();
}
@@ -1972,7 +2029,8 @@ impl Config {
.iter()
.filter(|v| {
!v.name.starts_with(MCP_INVOKE_META_FUNCTION_NAME_PREFIX)
&& !v.name.starts_with(MCP_LIST_META_FUNCTION_NAME_PREFIX)
&& !v.name.starts_with(MCP_SEARCH_META_FUNCTION_NAME_PREFIX)
&& !v.name.starts_with(MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX)
})
.map(|v| v.name.to_string())
.collect();
@@ -2007,6 +2065,20 @@ impl Config {
.collect();
}
if self.agent.is_none() {
let existing: HashSet<String> = functions.iter().map(|f| f.name.clone()).collect();
let builtin_functions: Vec<FunctionDeclaration> = self
.functions
.declarations()
.iter()
.filter(|v| {
v.name.starts_with(USER_FUNCTION_PREFIX) && !existing.contains(&v.name)
})
.cloned()
.collect();
functions.extend(builtin_functions);
}
if let Some(agent) = &self.agent {
let mut agent_functions: Vec<FunctionDeclaration> = agent
.functions()
@@ -2015,7 +2087,8 @@ impl Config {
.into_iter()
.filter(|v| {
!v.name.starts_with(MCP_INVOKE_META_FUNCTION_NAME_PREFIX)
&& !v.name.starts_with(MCP_LIST_META_FUNCTION_NAME_PREFIX)
&& !v.name.starts_with(MCP_SEARCH_META_FUNCTION_NAME_PREFIX)
&& !v.name.starts_with(MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX)
})
.collect();
let tool_names: HashSet<String> = agent_functions
@@ -2051,7 +2124,8 @@ impl Config {
.iter()
.filter(|v| {
v.name.starts_with(MCP_INVOKE_META_FUNCTION_NAME_PREFIX)
|| v.name.starts_with(MCP_LIST_META_FUNCTION_NAME_PREFIX)
|| v.name.starts_with(MCP_SEARCH_META_FUNCTION_NAME_PREFIX)
|| v.name.starts_with(MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX)
})
.map(|v| v.name.to_string())
.collect();
@@ -2062,8 +2136,10 @@ impl Config {
let item = item.trim();
let item_invoke_name =
format!("{}_{item}", MCP_INVOKE_META_FUNCTION_NAME_PREFIX);
let item_list_name =
format!("{}_{item}", MCP_LIST_META_FUNCTION_NAME_PREFIX);
let item_search_name =
format!("{}_{item}", MCP_SEARCH_META_FUNCTION_NAME_PREFIX);
let item_describe_name =
format!("{}_{item}", MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX);
if let Some(values) = self.mapping_mcp_servers.get(item) {
server_names.extend(
values
@@ -2077,7 +2153,12 @@ impl Config {
),
format!(
"{}_{}",
MCP_LIST_META_FUNCTION_NAME_PREFIX,
MCP_SEARCH_META_FUNCTION_NAME_PREFIX,
v.to_string()
),
format!(
"{}_{}",
MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX,
v.to_string()
),
]
@@ -2086,7 +2167,8 @@ impl Config {
)
} else if mcp_declaration_names.contains(&item_invoke_name) {
server_names.insert(item_invoke_name);
server_names.insert(item_list_name);
server_names.insert(item_search_name);
server_names.insert(item_describe_name);
}
}
}
@@ -2112,7 +2194,8 @@ impl Config {
.into_iter()
.filter(|v| {
v.name.starts_with(MCP_INVOKE_META_FUNCTION_NAME_PREFIX)
|| v.name.starts_with(MCP_LIST_META_FUNCTION_NAME_PREFIX)
|| v.name.starts_with(MCP_SEARCH_META_FUNCTION_NAME_PREFIX)
|| v.name.starts_with(MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX)
})
.collect();
let tool_names: HashSet<String> = agent_functions
@@ -2594,8 +2677,11 @@ impl Config {
None => return Ok(()),
};
if !agent.defined_variables().is_empty() && agent.shared_variables().is_empty() {
let new_variables =
Agent::init_agent_variables(agent.defined_variables(), self.info_flag)?;
let new_variables = Agent::init_agent_variables(
agent.defined_variables(),
self.agent_variables.as_ref(),
self.info_flag,
)?;
agent.set_shared_variables(new_variables);
}
if !self.info_flag {
@@ -2613,8 +2699,11 @@ impl Config {
let shared_variables = agent.shared_variables().clone();
let session_variables =
if !agent.defined_variables().is_empty() && shared_variables.is_empty() {
let new_variables =
Agent::init_agent_variables(agent.defined_variables(), self.info_flag)?;
let new_variables = Agent::init_agent_variables(
agent.defined_variables(),
self.agent_variables.as_ref(),
self.info_flag,
)?;
agent.set_shared_variables(new_variables.clone());
new_variables
} else {
@@ -2847,6 +2936,9 @@ impl Config {
fn load_functions(&mut self) -> Result<()> {
self.functions = Functions::init(self.visible_tools.as_ref().unwrap_or(&Vec::new()))?;
if self.working_mode.is_repl() {
self.functions.append_user_interaction_functions();
}
Ok(())
}
+129
@@ -0,0 +1,129 @@
use indoc::indoc;
pub(in crate::config) const DEFAULT_TODO_INSTRUCTIONS: &str = indoc! {"
## Task Tracking
You have built-in task tracking tools. Use them to track your progress:
- `todo__init`: Initialize a todo list with a goal. Call this at the start of every multi-step task.
- `todo__add`: Add individual tasks. Add all planned steps before starting work.
- `todo__done`: Mark a task done by id. Call this immediately after completing each step.
- `todo__list`: Show the current todo list.
RULES:
- Always create a todo list before starting work.
- Mark each task done as soon as you finish it; do not batch.
- If you stop with incomplete tasks, the system will automatically prompt you to continue."
};
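A typical call sequence for a multi-step task, in the same fence style as the spawn examples elsewhere in these instructions (flag names like `--goal`, `--task`, and `--id` are illustrative assumptions, not confirmed by this diff):
```
todo__init --goal "Add retry logic to the HTTP client"
todo__add --task "Explore current error handling"
todo__add --task "Implement retry with backoff"
todo__add --task "Write tests"
# ...after finishing the first step:
todo__done --id 1
todo__list
```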
pub(in crate::config) const DEFAULT_SPAWN_INSTRUCTIONS: &str = indoc! {"
## Agent Spawning System
You have built-in tools for spawning and managing subagents. These run **in parallel** as
background tasks inside the same process; no shell overhead, true concurrency.
### Available Agent Tools
| Tool | Purpose |
|------|----------|
| `agent__spawn` | Spawn a subagent in the background. Returns an `id` immediately. |
| `agent__check` | Non-blocking check: is the agent done yet? Returns PENDING or result. |
| `agent__collect` | Blocking wait: wait for an agent to finish, return its output. |
| `agent__list` | List all spawned agents and their status. |
| `agent__cancel` | Cancel a running agent by ID. |
| `agent__task_create` | Create a task in the dependency-aware task queue. |
| `agent__task_list` | List all tasks and their status/dependencies. |
| `agent__task_complete` | Mark a task done; returns any newly unblocked tasks. Auto-dispatches agents for tasks with a designated agent. |
| `agent__task_fail` | Mark a task as failed. Dependents remain blocked. |
### Core Pattern: Spawn -> Continue -> Collect
```
# 1. Spawn agents in parallel
agent__spawn --agent explore --prompt \"Find auth middleware patterns in src/\"
agent__spawn --agent explore --prompt \"Find error handling patterns in src/\"
# Both return IDs immediately, e.g. agent_explore_a1b2c3d4, agent_explore_e5f6g7h8
# 2. Continue your own work while they run (or spawn more agents)
# 3. Check if done (non-blocking)
agent__check --id agent_explore_a1b2c3d4
# 4. Collect results when ready (blocking)
agent__collect --id agent_explore_a1b2c3d4
agent__collect --id agent_explore_e5f6g7h8
```
### Parallel Spawning (DEFAULT for multi-agent work)
When a task needs multiple agents, **spawn them all at once**, then collect:
```
# Spawn explore and oracle simultaneously
agent__spawn --agent explore --prompt \"Find all database query patterns\"
agent__spawn --agent oracle --prompt \"Evaluate pros/cons of connection pooling approaches\"
# Collect both results
agent__collect --id <explore_id>
agent__collect --id <oracle_id>
```
**NEVER spawn sequentially when tasks are independent.** Parallel is always better.
### Task Queue (for complex dependency chains)
When tasks have ordering requirements, use the task queue:
```
# Create tasks with dependencies (optional: auto-dispatch with --agent)
agent__task_create --subject \"Explore existing patterns\"
agent__task_create --subject \"Implement feature\" --blocked_by [\"task_1\"] --agent coder --prompt \"Implement based on patterns found\"
agent__task_create --subject \"Write tests\" --blocked_by [\"task_2\"]
# Check what's runnable
agent__task_list
# After completing a task, mark it done to unblock dependents
# If dependents have --agent set, they auto-dispatch
agent__task_complete --task_id task_1
```
### Escalation Handling
Child agents may need user input but cannot prompt the user directly. When this happens,
you will see `pending_escalations` in your tool results listing blocked children and their questions.
| Tool | Purpose |
|------|----------|
| `agent__reply_escalation` | Unblock a child agent by answering its escalated question. |
When you see a pending escalation:
1. Read the child's question and options.
2. If you can answer from context, call `agent__reply_escalation` with your answer.
3. If you need the user's input, call the appropriate `user__*` tool yourself, then relay the answer via `agent__reply_escalation`.
4. **Respond promptly**; the child agent is blocked and waiting (5-minute timeout).
"};
pub(in crate::config) const DEFAULT_TEAMMATE_INSTRUCTIONS: &str = indoc! {"
## Teammate Messaging
You have tools to communicate with other agents running alongside you:
- `agent__send_message --id <agent_id> --message \"...\"`: Send a message to a sibling or parent agent.
- `agent__check_inbox`: Check for messages sent to you by other agents.
If you are working alongside other agents (e.g. reviewing different files, exploring different areas):
- **Check your inbox** before finalizing your work to incorporate any cross-cutting findings from teammates.
- **Send messages** to teammates when you discover something that affects their work.
- Messages are delivered to the agent's inbox and read on their next `check_inbox` call."
};
pub(in crate::config) const DEFAULT_USER_INTERACTION_INSTRUCTIONS: &str = indoc! {"
## User Interaction
You have built-in tools to interact with the user directly:
- `user__ask --question \"...\" --options [\"A\", \"B\", \"C\"]`: Present a selection prompt. Returns the chosen option.
- `user__confirm --question \"...\"`: Ask a yes/no question. Returns \"yes\" or \"no\".
- `user__input --question \"...\"`: Request free-form text input from the user.
- `user__checkbox --question \"...\" --options [\"A\", \"B\", \"C\"]`: Multi-select prompt. Returns an array of selected options.
Use these tools when you need user decisions, preferences, or clarification.
If you are running as a subagent, these questions are automatically escalated to the root agent for resolution."
};
+3
@@ -299,6 +299,9 @@ impl Session {
self.role_prompt = agent.interpolated_instructions();
self.agent_variables = agent.variables().clone();
self.agent_instructions = self.role_prompt.clone();
if let Some(threshold) = agent.compression_threshold() {
self.set_compression_threshold(Some(threshold));
}
}
pub fn agent_variables(&self) -> &AgentVariables {
+165
@@ -0,0 +1,165 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum TodoStatus {
Pending,
Done,
}
impl TodoStatus {
fn icon(&self) -> &'static str {
match self {
TodoStatus::Pending => "○",
TodoStatus::Done => "✓",
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TodoItem {
pub id: usize,
#[serde(alias = "description")]
pub desc: String,
pub done: bool,
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct TodoList {
#[serde(default)]
pub goal: String,
#[serde(default)]
pub todos: Vec<TodoItem>,
}
impl TodoList {
pub fn new(goal: &str) -> Self {
Self {
goal: goal.to_string(),
todos: Vec::new(),
}
}
pub fn add(&mut self, task: &str) -> usize {
let id = self.todos.iter().map(|t| t.id).max().unwrap_or(0) + 1;
self.todos.push(TodoItem {
id,
desc: task.to_string(),
done: false,
});
id
}
pub fn mark_done(&mut self, id: usize) -> bool {
if let Some(item) = self.todos.iter_mut().find(|t| t.id == id) {
item.done = true;
true
} else {
false
}
}
pub fn has_incomplete(&self) -> bool {
self.todos.iter().any(|item| !item.done)
}
pub fn is_empty(&self) -> bool {
self.todos.is_empty()
}
pub fn render_for_model(&self) -> String {
let mut lines = Vec::new();
if !self.goal.is_empty() {
lines.push(format!("Goal: {}", self.goal));
}
lines.push(format!(
"Progress: {}/{} completed",
self.completed_count(),
self.todos.len()
));
for item in &self.todos {
let status = if item.done {
TodoStatus::Done
} else {
TodoStatus::Pending
};
lines.push(format!(" {} {}. {}", status.icon(), item.id, item.desc));
}
lines.join("\n")
}
pub fn incomplete_count(&self) -> usize {
self.todos.iter().filter(|item| !item.done).count()
}
pub fn completed_count(&self) -> usize {
self.todos.iter().filter(|item| item.done).count()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_new_and_add() {
let mut list = TodoList::new("Map Labs");
assert_eq!(list.add("Discover"), 1);
assert_eq!(list.add("Map columns"), 2);
assert_eq!(list.todos.len(), 2);
assert!(list.has_incomplete());
}
#[test]
fn test_mark_done() {
let mut list = TodoList::new("Test");
list.add("Task 1");
list.add("Task 2");
assert!(list.mark_done(1));
assert!(!list.mark_done(99));
assert_eq!(list.completed_count(), 1);
assert_eq!(list.incomplete_count(), 1);
}
#[test]
fn test_empty_list() {
let list = TodoList::default();
assert!(!list.has_incomplete());
assert!(list.is_empty());
}
#[test]
fn test_all_done() {
let mut list = TodoList::new("Test");
list.add("Done task");
list.mark_done(1);
assert!(!list.has_incomplete());
}
#[test]
fn test_render_for_model() {
let mut list = TodoList::new("Map Labs");
list.add("Discover");
list.add("Map");
list.mark_done(1);
let rendered = list.render_for_model();
assert!(rendered.contains("Goal: Map Labs"));
assert!(rendered.contains("Progress: 1/2 completed"));
assert!(rendered.contains("✓ 1. Discover"));
assert!(rendered.contains("○ 2. Map"));
}
#[test]
fn test_serialization_roundtrip() {
let mut list = TodoList::new("Roundtrip");
list.add("Step 1");
list.add("Step 2");
list.mark_done(1);
let json = serde_json::to_string(&list).unwrap();
let deserialized: TodoList = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.goal, "Roundtrip");
assert_eq!(deserialized.todos.len(), 2);
assert!(deserialized.todos[0].done);
assert!(!deserialized.todos[1].done);
}
}
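The id-assignment and rendering rules above can be reduced to a trimmed, std-only sketch (illustrative only; the real type also derives the serde traits and lives behind the `todo__*` tools):

```rust
// Minimal std-only sketch of the TodoList id + rendering behavior.
struct Item { id: usize, desc: String, done: bool }
struct List { goal: String, todos: Vec<Item> }

impl List {
    fn new(goal: &str) -> Self {
        Self { goal: goal.into(), todos: Vec::new() }
    }
    // Ids are max(existing) + 1, so they stay unique even if items are removed.
    fn add(&mut self, desc: &str) -> usize {
        let id = self.todos.iter().map(|t| t.id).max().unwrap_or(0) + 1;
        self.todos.push(Item { id, desc: desc.into(), done: false });
        id
    }
    fn mark_done(&mut self, id: usize) -> bool {
        match self.todos.iter_mut().find(|t| t.id == id) {
            Some(t) => { t.done = true; true }
            None => false,
        }
    }
    fn render(&self) -> String {
        let done = self.todos.iter().filter(|t| t.done).count();
        let mut lines = vec![
            format!("Goal: {}", self.goal),
            format!("Progress: {}/{} completed", done, self.todos.len()),
        ];
        for t in &self.todos {
            let icon = if t.done { "✓" } else { "○" };
            lines.push(format!("  {} {}. {}", icon, t.id, t.desc));
        }
        lines.join("\n")
    }
}

fn main() {
    let mut list = List::new("Ship feature");
    let first = list.add("Explore");
    list.add("Implement");
    list.mark_done(first);
    println!("{}", list.render());
}
```

The "Progress: N/M completed" line is what gets prefixed onto the compression summary as `[ACTIVE TODO LIST]`, so the model keeps its plan across context compression.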
+441 -58
@@ -1,10 +1,17 @@
pub(crate) mod supervisor;
pub(crate) mod todo;
pub(crate) mod user_interaction;
use crate::{
config::{Agent, Config, GlobalConfig},
utils::*,
};
use crate::config::ensure_parent_exists;
use crate::mcp::{MCP_INVOKE_META_FUNCTION_NAME_PREFIX, MCP_LIST_META_FUNCTION_NAME_PREFIX};
use crate::mcp::{
MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX, MCP_INVOKE_META_FUNCTION_NAME_PREFIX,
MCP_SEARCH_META_FUNCTION_NAME_PREFIX,
};
use crate::parsers::{bash, python};
use anyhow::{Context, Result, anyhow, bail};
use indexmap::IndexMap;
@@ -12,15 +19,20 @@ use indoc::formatdoc;
use rust_embed::Embed;
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::collections::VecDeque;
use std::ffi::OsStr;
use std::fs::File;
use std::io::Write;
use std::io::{Read, Write};
use std::{
collections::{HashMap, HashSet},
env, fs, io,
path::{Path, PathBuf},
process::{Command, Stdio},
};
use strum_macros::AsRefStr;
use supervisor::SUPERVISOR_FUNCTION_PREFIX;
use todo::TODO_FUNCTION_PREFIX;
use user_interaction::USER_FUNCTION_PREFIX;
#[derive(Embed)]
#[folder = "assets/functions/"]
@@ -87,6 +99,19 @@ pub async fn eval_tool_calls(
}
let mut is_all_null = true;
for call in calls {
if let Some(checker) = &config.read().tool_call_tracker
&& let Some(msg) = checker.check_loop(&call.clone())
{
let dup_msg = format!("{{\"tool_call_loop_alert\":{}}}", &msg.trim());
println!(
"{}",
warning_text(format!("{}: ⚠️ Tool-call loop detected! ⚠️", &call.name).as_str())
);
let val = json!(dup_msg);
output.push(ToolResult::new(call, val));
is_all_null = false;
continue;
}
let mut result = call.eval(config).await?;
if result.is_null() {
result = json!("DONE");
@@ -98,6 +123,34 @@ pub async fn eval_tool_calls(
if is_all_null {
output = vec![];
}
if !output.is_empty() {
let (has_escalations, summary) = {
let cfg = config.read();
if cfg.current_depth == 0
&& let Some(ref queue) = cfg.root_escalation_queue
&& queue.has_pending()
{
(true, queue.pending_summary())
} else {
(false, vec![])
}
};
if has_escalations {
let notification = json!({
"pending_escalations": summary,
"instruction": "Child agents are BLOCKED waiting for your reply. Call agent__reply_escalation for each pending escalation to unblock them."
});
let synthetic_call = ToolCall::new(
"__escalation_notification".to_string(),
json!({}),
Some("escalation_check".to_string()),
);
output.push(ToolResult::new(synthetic_call, notification));
}
}
Ok(output)
}
@@ -244,22 +297,37 @@ impl Functions {
self.declarations.is_empty()
}
pub fn append_todo_functions(&mut self) {
self.declarations.extend(todo::todo_function_declarations());
}
pub fn append_supervisor_functions(&mut self) {
self.declarations
.extend(supervisor::supervisor_function_declarations());
self.declarations
.extend(supervisor::escalation_function_declarations());
}
pub fn append_teammate_functions(&mut self) {
self.declarations
.extend(supervisor::teammate_function_declarations());
}
pub fn append_user_interaction_functions(&mut self) {
self.declarations
.extend(user_interaction::user_interaction_function_declarations());
}
pub fn clear_mcp_meta_functions(&mut self) {
self.declarations.retain(|d| {
!d.name.starts_with(MCP_INVOKE_META_FUNCTION_NAME_PREFIX)
&& !d.name.starts_with(MCP_LIST_META_FUNCTION_NAME_PREFIX)
&& !d.name.starts_with(MCP_SEARCH_META_FUNCTION_NAME_PREFIX)
&& !d.name.starts_with(MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX)
});
}
pub fn append_mcp_meta_functions(&mut self, mcp_servers: Vec<String>) {
let mut invoke_function_properties = IndexMap::new();
invoke_function_properties.insert(
"server".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
..Default::default()
},
);
invoke_function_properties.insert(
"tool".to_string(),
JsonSchema {
@@ -275,32 +343,86 @@ impl Functions {
},
);
let mut search_function_properties = IndexMap::new();
search_function_properties.insert(
"query".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("Generalized explanation of what you want to do".into()),
..Default::default()
},
);
search_function_properties.insert(
"top_k".to_string(),
JsonSchema {
type_value: Some("integer".to_string()),
description: Some("How many results to return, between 1 and 20".into()),
default: Some(Value::from(8usize)),
..Default::default()
},
);
let mut describe_function_properties = IndexMap::new();
describe_function_properties.insert(
"tool".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("The name of the tool; e.g., search_issues".into()),
..Default::default()
},
);
for server in mcp_servers {
let search_function_name = format!("{}_{server}", MCP_SEARCH_META_FUNCTION_NAME_PREFIX);
let describe_function_name =
format!("{}_{server}", MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX);
let invoke_function_name = format!("{}_{server}", MCP_INVOKE_META_FUNCTION_NAME_PREFIX);
let invoke_function_declaration = FunctionDeclaration {
name: invoke_function_name.clone(),
description: formatdoc!(
r#"
Invoke the specified tool on the {server} MCP server. Always call {invoke_function_name} first to find the
correct names of tools before calling '{invoke_function_name}'.
Invoke the specified tool on the {server} MCP server. Always call {describe_function_name} first to
find the correct invocation schema for the given tool.
"#
),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(invoke_function_properties.clone()),
required: Some(vec!["server".to_string(), "tool".to_string()]),
required: Some(vec!["tool".to_string()]),
..Default::default()
},
agent: false,
};
let list_functions_declaration = FunctionDeclaration {
name: format!("{}_{}", MCP_LIST_META_FUNCTION_NAME_PREFIX, server),
description: format!("List all the available tools for the {server} MCP server"),
parameters: JsonSchema::default(),
let search_functions_declaration = FunctionDeclaration {
name: search_function_name.clone(),
description: formatdoc!(
r#"
Find candidate tools by keywords for the {server} MCP server. Returns small suggestions; fetch
schemas with {describe_function_name}.
"#
),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(search_function_properties.clone()),
required: Some(vec!["query".to_string()]),
..Default::default()
},
agent: false,
};
let describe_functions_declaration = FunctionDeclaration {
name: describe_function_name.clone(),
description: "Get the full JSON schema for exactly one MCP tool.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(describe_function_properties.clone()),
required: Some(vec!["tool".to_string()]),
..Default::default()
},
agent: false,
};
self.declarations.push(invoke_function_declaration);
self.declarations.push(list_functions_declaration);
self.declarations.push(search_functions_declaration);
self.declarations.push(describe_functions_declaration);
}
}
@@ -705,6 +827,10 @@ pub struct ToolCall {
pub name: String,
pub arguments: Value,
pub id: Option<String>,
/// Gemini 3's thought signature for stateful reasoning in function calling.
/// Must be preserved and sent back when submitting function responses.
#[serde(skip_serializing_if = "Option::is_none")]
pub thought_signature: Option<String>,
}
type CallConfig = (String, String, Vec<String>, HashMap<String, String>);
@@ -734,9 +860,15 @@ impl ToolCall {
name,
arguments,
id,
thought_signature: None,
}
}
pub fn with_thought_signature(mut self, thought_signature: Option<String>) -> Self {
self.thought_signature = thought_signature;
self
}
pub async fn eval(&self, config: &GlobalConfig) -> Result<Value> {
let (call_name, cmd_name, mut cmd_args, envs) = match &config.read().agent {
Some(agent) => self.extract_call_config_from_agent(config, agent)?,
@@ -766,56 +898,151 @@ impl ToolCall {
let prompt = format!("Call {cmd_name} {}", cmd_args.join(" "));
if *IS_STDOUT_TERMINAL {
if *IS_STDOUT_TERMINAL && config.read().current_depth == 0 {
println!("{}", dimmed_text(&prompt));
}
let output = match cmd_name.as_str() {
_ if cmd_name.starts_with(MCP_LIST_META_FUNCTION_NAME_PREFIX) => {
let registry_arc = {
let cfg = config.read();
cfg.mcp_registry
.clone()
.with_context(|| "MCP is not configured")?
};
registry_arc.catalog().await?
_ if cmd_name.starts_with(MCP_SEARCH_META_FUNCTION_NAME_PREFIX) => {
Self::search_mcp_tools(config, &cmd_name, &json_data).unwrap_or_else(|e| {
let error_msg = format!("MCP search failed: {e}");
eprintln!("{}", warning_text(&format!("⚠️ {error_msg} ⚠️")));
json!({"tool_call_error": error_msg})
})
}
_ if cmd_name.starts_with(MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX) => {
Self::describe_mcp_tool(config, &cmd_name, json_data)
.await
.unwrap_or_else(|e| {
let error_msg = format!("MCP describe failed: {e}");
eprintln!("{}", warning_text(&format!("⚠️ {error_msg} ⚠️")));
json!({"tool_call_error": error_msg})
})
}
_ if cmd_name.starts_with(MCP_INVOKE_META_FUNCTION_NAME_PREFIX) => {
let server = json_data
.get("server")
.ok_or_else(|| anyhow!("Missing 'server' in arguments"))?
.as_str()
.ok_or_else(|| anyhow!("Invalid 'server' in arguments"))?;
let tool = json_data
.get("tool")
.ok_or_else(|| anyhow!("Missing 'tool' in arguments"))?
.as_str()
.ok_or_else(|| anyhow!("Invalid 'tool' in arguments"))?;
let arguments = json_data
.get("arguments")
.cloned()
.unwrap_or_else(|| json!({}));
let registry_arc = {
let cfg = config.read();
cfg.mcp_registry
.clone()
.with_context(|| "MCP is not configured")?
};
let result = registry_arc.invoke(server, tool, arguments).await?;
serde_json::to_value(result)?
Self::invoke_mcp_tool(config, &cmd_name, &json_data)
.await
.unwrap_or_else(|e| {
let error_msg = format!("MCP tool invocation failed: {e}");
eprintln!("{}", warning_text(&format!("⚠️ {error_msg} ⚠️")));
json!({"tool_call_error": error_msg})
})
}
_ => match run_llm_function(cmd_name, cmd_args, envs, agent_name)? {
Some(contents) => serde_json::from_str(&contents)
_ if cmd_name.starts_with(TODO_FUNCTION_PREFIX) => {
todo::handle_todo_tool(config, &cmd_name, &json_data).unwrap_or_else(|e| {
let error_msg = format!("Todo tool failed: {e}");
eprintln!("{}", warning_text(&format!("⚠️ {error_msg} ⚠️")));
json!({"tool_call_error": error_msg})
})
}
_ if cmd_name.starts_with(SUPERVISOR_FUNCTION_PREFIX) => {
supervisor::handle_supervisor_tool(config, &cmd_name, &json_data)
.await
.unwrap_or_else(|e| {
let error_msg = format!("Supervisor tool failed: {e}");
eprintln!("{}", warning_text(&format!("⚠️ {error_msg} ⚠️")));
json!({"tool_call_error": error_msg})
})
}
_ if cmd_name.starts_with(USER_FUNCTION_PREFIX) => {
user_interaction::handle_user_tool(config, &cmd_name, &json_data)
.await
.unwrap_or_else(|e| {
let error_msg = format!("User interaction failed: {e}");
eprintln!("{}", warning_text(&format!("⚠️ {error_msg} ⚠️")));
json!({"tool_call_error": error_msg})
})
}
_ => match run_llm_function(cmd_name, cmd_args, envs, agent_name) {
Ok(Some(contents)) => serde_json::from_str(&contents)
.ok()
.unwrap_or_else(|| json!({"output": contents})),
None => Value::Null,
Ok(None) => Value::Null,
Err(e) => serde_json::from_str(&e.to_string())
.ok()
.unwrap_or_else(|| json!({"output": e.to_string()})),
},
};
Ok(output)
}
async fn describe_mcp_tool(
config: &GlobalConfig,
cmd_name: &str,
json_data: Value,
) -> Result<Value> {
let server_id = cmd_name.replace(&format!("{MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX}_"), "");
let tool = json_data
.get("tool")
.ok_or_else(|| anyhow!("Missing 'tool' in arguments"))?
.as_str()
.ok_or_else(|| anyhow!("Invalid 'tool' in arguments"))?;
let registry_arc = {
let cfg = config.read();
cfg.mcp_registry
.clone()
.with_context(|| "MCP is not configured")?
};
let result = registry_arc.describe(&server_id, tool).await?;
Ok(serde_json::to_value(result)?)
}
fn search_mcp_tools(config: &GlobalConfig, cmd_name: &str, json_data: &Value) -> Result<Value> {
let server = cmd_name.replace(&format!("{MCP_SEARCH_META_FUNCTION_NAME_PREFIX}_"), "");
let query = json_data
.get("query")
.ok_or_else(|| anyhow!("Missing 'query' in arguments"))?
.as_str()
.ok_or_else(|| anyhow!("Invalid 'query' in arguments"))?;
let top_k = json_data
.get("top_k")
.cloned()
.unwrap_or_else(|| Value::from(8u64))
.as_u64()
.ok_or_else(|| anyhow!("Invalid 'top_k' in arguments"))? as usize;
let registry_arc = {
let cfg = config.read();
cfg.mcp_registry
.clone()
.with_context(|| "MCP is not configured")?
};
let catalog_items = registry_arc
.search_tools_server(&server, query, top_k)
.into_iter()
.map(|it| serde_json::to_value(&it).unwrap_or_default())
.collect();
Ok(Value::Array(catalog_items))
}
async fn invoke_mcp_tool(
config: &GlobalConfig,
cmd_name: &str,
json_data: &Value,
) -> Result<Value> {
let server = cmd_name.replace(&format!("{MCP_INVOKE_META_FUNCTION_NAME_PREFIX}_"), "");
let tool = json_data
.get("tool")
.ok_or_else(|| anyhow!("Missing 'tool' in arguments"))?
.as_str()
.ok_or_else(|| anyhow!("Invalid 'tool' in arguments"))?;
let arguments = json_data
.get("arguments")
.cloned()
.unwrap_or_else(|| json!({}));
let registry_arc = {
let cfg = config.read();
cfg.mcp_registry
.clone()
.with_context(|| "MCP is not configured")?
};
let result = registry_arc.invoke(&server, tool, arguments).await?;
Ok(serde_json::to_value(result)?)
}
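All three meta-function handlers share one naming convention: the server id is whatever follows the prefix in the function name. A std-only sketch of that derivation (the prefix value here is an illustrative stand-in for the mcp module's constant):

```rust
// Sketch: recover the server id from a meta-function name such as
// "mcp_invoke_github". Prefix string is an assumption for illustration.
const INVOKE_PREFIX: &str = "mcp_invoke";

fn server_from_meta_name(cmd_name: &str, prefix: &str) -> Option<String> {
    // strip_prefix only matches at the start of the string, which avoids
    // the replace() pitfall of matching the prefix mid-string.
    cmd_name
        .strip_prefix(prefix)
        .and_then(|rest| rest.strip_prefix('_'))
        .map(str::to_string)
}

fn main() {
    let server = server_from_meta_name("mcp_invoke_github", INVOKE_PREFIX);
    println!("{server:?}");
}
```

The diff uses `cmd_name.replace(...)` for the same purpose; `strip_prefix` is a slightly stricter variant that also signals a non-matching name with `None`.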
fn extract_call_config_from_agent(
&self,
config: &GlobalConfig,
@@ -837,7 +1064,7 @@ impl ToolCall {
function_name.clone(),
function_name,
vec![],
Default::default(),
agent.variable_envs(),
))
}
}
@@ -866,7 +1093,9 @@ pub fn run_llm_function(
agent_name: Option<String>,
) -> Result<Option<String>> {
let mut bin_dirs: Vec<PathBuf> = vec![];
let mut command_name = cmd_name.clone();
if let Some(agent_name) = agent_name {
command_name = cmd_args[0].clone();
let dir = Config::agent_bin_dir(&agent_name);
if dir.exists() {
bin_dirs.push(dir);
@@ -888,17 +1117,77 @@ pub fn run_llm_function(
#[cfg(windows)]
let cmd_name = polyfill_cmd_name(&cmd_name, &bin_dirs);
let exit_code = run_command(&cmd_name, &cmd_args, Some(envs))
.map_err(|err| anyhow!("Unable to run {cmd_name}, {err}"))?;
envs.insert("CLICOLOR_FORCE".into(), "1".into());
envs.insert("FORCE_COLOR".into(), "1".into());
let mut child = Command::new(&cmd_name)
.args(&cmd_args)
.envs(envs)
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.map_err(|err| anyhow!("Unable to run {command_name}, {err}"))?;
let stdout = child.stdout.take().expect("Failed to capture stdout");
let mut stderr = child.stderr.take().expect("Failed to capture stderr");
let stdout_thread = std::thread::spawn(move || {
let mut buffer = [0; 1024];
let mut reader = stdout;
let mut out = io::stdout();
while let Ok(n) = reader.read(&mut buffer) {
if n == 0 {
break;
}
let chunk = &buffer[0..n];
let mut last_pos = 0;
for (i, &byte) in chunk.iter().enumerate() {
if byte == b'\n' {
let _ = out.write_all(&chunk[last_pos..i]);
let _ = out.write_all(b"\r\n");
last_pos = i + 1;
}
}
if last_pos < n {
let _ = out.write_all(&chunk[last_pos..n]);
}
let _ = out.flush();
}
});
let stderr_thread = std::thread::spawn(move || {
let mut buf = Vec::new();
let _ = stderr.read_to_end(&mut buf);
buf
});
let status = child
.wait()
.map_err(|err| anyhow!("Unable to run {command_name}, {err}"))?;
let _ = stdout_thread.join();
let stderr_bytes = stderr_thread.join().unwrap_or_default();
let exit_code = status.code().unwrap_or_default();
if exit_code != 0 {
bail!("Tool call exited with {exit_code}");
let stderr = String::from_utf8_lossy(&stderr_bytes).trim().to_string();
if !stderr.is_empty() {
eprintln!("{stderr}");
}
let tool_error_message = format!("Tool call '{command_name}' exited with code {exit_code}");
eprintln!("{}", warning_text(&format!("⚠️ {tool_error_message} ⚠️")));
let mut error_json = json!({"tool_call_error": tool_error_message});
if !stderr.is_empty() {
error_json["stderr"] = json!(stderr);
}
debug!("Tool call error: {error_json:?}");
return Ok(Some(error_json.to_string()));
}
let mut output = None;
if temp_file.exists() {
let contents =
fs::read_to_string(temp_file).context("Failed to retrieve tool call output")?;
if !contents.is_empty() {
debug!("Tool {cmd_name} output: {}", contents);
debug!("Tool {command_name} output: {}", contents);
output = Some(contents);
}
};
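The `\n` to `\r\n` rewrite in the stdout thread above matters because the REPL typically has the terminal in raw mode, where a bare line feed moves the cursor down without returning it to column 0. The chunk conversion, as a self-contained sketch:

```rust
// Convert bare LF to CRLF inside an arbitrary byte chunk, leaving all
// other bytes untouched. Mirrors the stdout-streaming logic above.
// Note: like the original, this does not special-case pre-existing CRLF.
fn lf_to_crlf(chunk: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(chunk.len());
    for &byte in chunk {
        if byte == b'\n' {
            out.extend_from_slice(b"\r\n");
        } else {
            out.push(byte);
        }
    }
    out
}

fn main() {
    let converted = lf_to_crlf(b"line one\nline two\n");
    println!("{}", String::from_utf8(converted).unwrap());
}
```

Because the conversion is done per chunk and per byte, it works even when a read boundary falls in the middle of a line, which is why the thread above scans each chunk rather than buffering whole lines.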
@@ -920,3 +1209,97 @@ fn polyfill_cmd_name<T: AsRef<Path>>(cmd_name: &str, bin_dir: &[T]) -> String {
}
cmd_name
}
#[derive(Debug, Clone)]
pub struct ToolCallTracker {
last_calls: VecDeque<ToolCall>,
max_repeats: usize,
chain_len: usize,
}
impl ToolCallTracker {
pub fn new(max_repeats: usize, chain_len: usize) -> Self {
Self {
last_calls: VecDeque::new(),
max_repeats,
chain_len,
}
}
pub fn default() -> Self {
Self::new(2, 3)
}
pub fn check_loop(&self, new_call: &ToolCall) -> Option<String> {
if self.last_calls.len() < self.max_repeats {
return None;
}
if let Some(last) = self.last_calls.back()
&& self.calls_match(last, new_call)
{
let mut repeat_count = 1;
for i in (1..self.last_calls.len()).rev() {
if self.calls_match(&self.last_calls[i - 1], &self.last_calls[i]) {
repeat_count += 1;
if repeat_count >= self.max_repeats {
return Some(self.create_loop_message());
}
} else {
break;
}
}
}
let start = self.last_calls.len().saturating_sub(self.chain_len);
let chain: Vec<_> = self.last_calls.iter().skip(start).collect();
if chain.len() == self.chain_len {
let mut is_repeating = true;
for i in 0..chain.len() - 1 {
if !self.calls_match(chain[i], chain[i + 1]) {
is_repeating = false;
break;
}
}
if is_repeating && self.calls_match(chain[chain.len() - 1], new_call) {
return Some(self.create_loop_message());
}
}
None
}
fn calls_match(&self, a: &ToolCall, b: &ToolCall) -> bool {
a.name == b.name && a.arguments == b.arguments
}
fn create_loop_message(&self) -> String {
let message = r#"{"error":{"message":"⚠️ Tool-call loop detected! ⚠️","code":400,"param":"Use the output of the last call to this function and parameter set, then move on to the next step of the workflow, change the tools/parameters called, or request assistance in the conversation stream"}}"#;
if self.last_calls.len() >= self.chain_len {
let start = self.last_calls.len().saturating_sub(self.chain_len);
let chain: Vec<_> = self.last_calls.iter().skip(start).collect();
let mut loopset = "[".to_string();
for c in chain {
loopset +=
format!("{{\"name\":\"{}\",\"parameters\":{}}},", c.name, c.arguments).as_str();
}
let _ = loopset.pop();
loopset.push(']');
format!(
"{},\"call_history\":{}}}}}",
&message[..message.len() - 2],
loopset
)
} else {
message.to_string()
}
}
pub fn record_call(&mut self, call: ToolCall) {
if self.last_calls.len() >= self.chain_len * self.max_repeats {
self.last_calls.pop_front();
}
self.last_calls.push_back(call);
}
}
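The consecutive-repeat half of `check_loop` can be exercised with a reduced, std-only sketch. `SimpleTracker` and the two-field `Call` are illustrative stand-ins, not the patch's types:

```rust
use std::collections::VecDeque;

// A call "loops" once it would extend a run of identical trailing calls
// to max_repeats, mirroring the repeat check in check_loop above.
#[derive(Clone, PartialEq)]
struct Call {
    name: String,
    arguments: String,
}

struct SimpleTracker {
    last_calls: VecDeque<Call>,
    max_repeats: usize,
}

impl SimpleTracker {
    fn new(max_repeats: usize) -> Self {
        Self { last_calls: VecDeque::new(), max_repeats }
    }

    fn is_loop(&self, call: &Call) -> bool {
        // Count how many trailing recorded calls are identical to `call`.
        let run = self.last_calls.iter().rev().take_while(|c| *c == call).count();
        run + 1 >= self.max_repeats
    }

    fn record(&mut self, call: Call) {
        self.last_calls.push_back(call);
    }
}
```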
File diff suppressed because it is too large
@@ -0,0 +1,161 @@
use super::{FunctionDeclaration, JsonSchema};
use crate::config::GlobalConfig;
use anyhow::{Result, bail};
use indexmap::IndexMap;
use serde_json::{Value, json};
pub const TODO_FUNCTION_PREFIX: &str = "todo__";
pub fn todo_function_declarations() -> Vec<FunctionDeclaration> {
vec![
FunctionDeclaration {
name: format!("{TODO_FUNCTION_PREFIX}init"),
description: "Initialize a new todo list with a goal. Clears any existing todos."
.to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([(
"goal".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some(
"The overall goal to achieve when all todos are completed".into(),
),
..Default::default()
},
)])),
required: Some(vec!["goal".to_string()]),
..Default::default()
},
agent: false,
},
FunctionDeclaration {
name: format!("{TODO_FUNCTION_PREFIX}add"),
description: "Add a new todo item to the list.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([(
"task".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("Description of the todo task".into()),
..Default::default()
},
)])),
required: Some(vec!["task".to_string()]),
..Default::default()
},
agent: false,
},
FunctionDeclaration {
name: format!("{TODO_FUNCTION_PREFIX}done"),
description: "Mark a todo item as done by its id.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([(
"id".to_string(),
JsonSchema {
type_value: Some("integer".to_string()),
description: Some("The id of the todo item to mark as done".into()),
..Default::default()
},
)])),
required: Some(vec!["id".to_string()]),
..Default::default()
},
agent: false,
},
FunctionDeclaration {
name: format!("{TODO_FUNCTION_PREFIX}list"),
description: "Display the current todo list with status of each item.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::new()),
..Default::default()
},
agent: false,
},
]
}
pub fn handle_todo_tool(config: &GlobalConfig, cmd_name: &str, args: &Value) -> Result<Value> {
let action = cmd_name
.strip_prefix(TODO_FUNCTION_PREFIX)
.unwrap_or(cmd_name);
match action {
"init" => {
let goal = args.get("goal").and_then(Value::as_str).unwrap_or_default();
let mut cfg = config.write();
let agent = cfg.agent.as_mut();
match agent {
Some(agent) => {
agent.init_todo_list(goal);
Ok(json!({"status": "ok", "message": "Initialized new todo list"}))
}
None => bail!("No active agent"),
}
}
"add" => {
let task = args.get("task").and_then(Value::as_str).unwrap_or_default();
if task.is_empty() {
return Ok(json!({"error": "task description is required"}));
}
let mut cfg = config.write();
let agent = cfg.agent.as_mut();
match agent {
Some(agent) => {
let id = agent.add_todo(task);
Ok(json!({"status": "ok", "id": id}))
}
None => bail!("No active agent"),
}
}
"done" => {
let id = args
.get("id")
.and_then(|v| {
v.as_u64()
.or_else(|| v.as_str().and_then(|s| s.parse().ok()))
})
.map(|v| v as usize);
match id {
Some(id) => {
let mut cfg = config.write();
let agent = cfg.agent.as_mut();
match agent {
Some(agent) => {
if agent.mark_todo_done(id) {
Ok(
json!({"status": "ok", "message": format!("Marked todo {id} as done")}),
)
} else {
Ok(json!({"error": format!("Todo {id} not found")}))
}
}
None => bail!("No active agent"),
}
}
None => Ok(json!({"error": "id is required and must be a number"})),
}
}
"list" => {
let cfg = config.read();
let agent = cfg.agent.as_ref();
match agent {
Some(agent) => {
let list = agent.todo_list();
if list.is_empty() {
Ok(json!({"goal": "", "todos": []}))
} else {
Ok(serde_json::to_value(list)
.unwrap_or(json!({"error": "serialization failed"})))
}
}
None => bail!("No active agent"),
}
}
_ => bail!("Unknown todo action: {action}"),
}
}
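The `todo__` dispatch above relies on `strip_prefix` falling back to the bare name. Reduced to its essence (function name illustrative):

```rust
// Map a prefixed tool name like `todo__add` to its action; names
// without the prefix pass through unchanged, as in handle_todo_tool.
fn todo_action(cmd_name: &str) -> &str {
    cmd_name.strip_prefix("todo__").unwrap_or(cmd_name)
}
```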
@@ -0,0 +1,272 @@
use super::{FunctionDeclaration, JsonSchema};
use crate::config::GlobalConfig;
use crate::supervisor::escalation::{EscalationRequest, new_escalation_id};
use anyhow::{Result, anyhow};
use indexmap::IndexMap;
use inquire::{Confirm, MultiSelect, Select, Text};
use serde_json::{Value, json};
use std::time::Duration;
use tokio::sync::oneshot;
pub const USER_FUNCTION_PREFIX: &str = "user__";
const DEFAULT_ESCALATION_TIMEOUT_SECS: u64 = 300;
pub fn user_interaction_function_declarations() -> Vec<FunctionDeclaration> {
vec![
FunctionDeclaration {
name: format!("{USER_FUNCTION_PREFIX}ask"),
description: "Ask the user to select one option from a list. Returns the selected option. Indicate the recommended choice if there is one.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([
(
"question".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("The question to present to the user".into()),
..Default::default()
},
),
(
"options".to_string(),
JsonSchema {
type_value: Some("array".to_string()),
description: Some("List of options for the user to choose from".into()),
items: Some(Box::new(JsonSchema {
type_value: Some("string".to_string()),
..Default::default()
})),
..Default::default()
},
),
])),
required: Some(vec!["question".to_string(), "options".to_string()]),
..Default::default()
},
agent: false,
},
FunctionDeclaration {
name: format!("{USER_FUNCTION_PREFIX}confirm"),
description: "Ask the user a yes/no question. Returns \"yes\" or \"no\".".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([(
"question".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("The yes/no question to ask the user".into()),
..Default::default()
},
)])),
required: Some(vec!["question".to_string()]),
..Default::default()
},
agent: false,
},
FunctionDeclaration {
name: format!("{USER_FUNCTION_PREFIX}input"),
description: "Ask the user for free-form text input. Returns the text entered.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([(
"question".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("The prompt/question to display".into()),
..Default::default()
},
)])),
required: Some(vec!["question".to_string()]),
..Default::default()
},
agent: false,
},
FunctionDeclaration {
name: format!("{USER_FUNCTION_PREFIX}checkbox"),
description: "Ask the user to select one or more options from a list. Returns an array of selected options.".to_string(),
parameters: JsonSchema {
type_value: Some("object".to_string()),
properties: Some(IndexMap::from([
(
"question".to_string(),
JsonSchema {
type_value: Some("string".to_string()),
description: Some("The question to present to the user".into()),
..Default::default()
},
),
(
"options".to_string(),
JsonSchema {
type_value: Some("array".to_string()),
description: Some("List of options the user can select from (multiple selections allowed)".into()),
items: Some(Box::new(JsonSchema {
type_value: Some("string".to_string()),
..Default::default()
})),
..Default::default()
},
),
])),
required: Some(vec!["question".to_string(), "options".to_string()]),
..Default::default()
},
agent: false,
},
]
}
pub async fn handle_user_tool(
config: &GlobalConfig,
cmd_name: &str,
args: &Value,
) -> Result<Value> {
let action = cmd_name
.strip_prefix(USER_FUNCTION_PREFIX)
.unwrap_or(cmd_name);
let depth = config.read().current_depth;
if depth == 0 {
handle_direct(action, args)
} else {
handle_escalated(config, action, args).await
}
}
fn handle_direct(action: &str, args: &Value) -> Result<Value> {
match action {
"ask" => handle_direct_ask(args),
"confirm" => handle_direct_confirm(args),
"input" => handle_direct_input(args),
"checkbox" => handle_direct_checkbox(args),
_ => Err(anyhow!("Unknown user interaction: {action}")),
}
}
fn handle_direct_ask(args: &Value) -> Result<Value> {
let question = args
.get("question")
.and_then(Value::as_str)
.ok_or_else(|| anyhow!("'question' is required"))?;
let options = parse_options(args)?;
let answer = Select::new(question, options).prompt()?;
Ok(json!({ "answer": answer }))
}
fn handle_direct_confirm(args: &Value) -> Result<Value> {
let question = args
.get("question")
.and_then(Value::as_str)
.ok_or_else(|| anyhow!("'question' is required"))?;
let answer = Confirm::new(question).with_default(true).prompt()?;
Ok(json!({ "answer": if answer { "yes" } else { "no" } }))
}
fn handle_direct_input(args: &Value) -> Result<Value> {
let question = args
.get("question")
.and_then(Value::as_str)
.ok_or_else(|| anyhow!("'question' is required"))?;
let answer = Text::new(question).prompt()?;
Ok(json!({ "answer": answer }))
}
fn handle_direct_checkbox(args: &Value) -> Result<Value> {
let question = args
.get("question")
.and_then(Value::as_str)
.ok_or_else(|| anyhow!("'question' is required"))?;
let options = parse_options(args)?;
let answers = MultiSelect::new(question, options).prompt()?;
Ok(json!({ "answers": answers }))
}
async fn handle_escalated(config: &GlobalConfig, action: &str, args: &Value) -> Result<Value> {
let question = args
.get("question")
.and_then(Value::as_str)
.ok_or_else(|| anyhow!("'question' is required"))?
.to_string();
let options: Option<Vec<String>> = args.get("options").and_then(Value::as_array).map(|arr| {
arr.iter()
.filter_map(Value::as_str)
.map(String::from)
.collect()
});
let (from_agent_id, from_agent_name, root_queue, timeout_secs) = {
let cfg = config.read();
let agent_id = cfg
.self_agent_id
.clone()
.unwrap_or_else(|| "unknown".to_string());
let agent_name = cfg
.agent
.as_ref()
.map(|a| a.name().to_string())
.unwrap_or_else(|| "unknown".to_string());
let queue = cfg
.root_escalation_queue
.clone()
.ok_or_else(|| anyhow!("No escalation queue available; cannot reach parent agent"))?;
let timeout = cfg
.agent
.as_ref()
.map(|a| a.escalation_timeout())
.unwrap_or(DEFAULT_ESCALATION_TIMEOUT_SECS);
(agent_id, agent_name, queue, timeout)
};
let escalation_id = new_escalation_id();
let (tx, rx) = oneshot::channel();
let request = EscalationRequest {
id: escalation_id.clone(),
from_agent_id,
from_agent_name: from_agent_name.clone(),
question: format!("[{action}] {question}"),
options,
reply_tx: tx,
};
root_queue.submit(request);
let timeout = Duration::from_secs(timeout_secs);
match tokio::time::timeout(timeout, rx).await {
Ok(Ok(reply)) => Ok(json!({ "answer": reply })),
Ok(Err(_)) => Ok(json!({
"error": "Escalation was cancelled. The parent agent dropped the request",
"fallback": "Make your best judgment and proceed",
})),
Err(_) => Ok(json!({
"error": format!(
"Escalation timed out after {timeout_secs} seconds waiting for user response"
),
"fallback": "Make your best judgment and proceed",
})),
}
}
fn parse_options(args: &Value) -> Result<Vec<String>> {
args.get("options")
.and_then(Value::as_array)
.map(|arr| {
arr.iter()
.filter_map(Value::as_str)
.map(String::from)
.collect()
})
.ok_or_else(|| anyhow!("'options' is required and must be an array of strings"))
}
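The escalated path waits on a oneshot reply and falls back to a canned instruction on timeout or a dropped sender. A std-only analogue of that wait-or-fallback shape (channel type and timeout are illustrative; the real code uses tokio):

```rust
use std::sync::mpsc;
use std::time::Duration;

// Block for a reply up to `timeout`; on timeout or a dropped sender,
// return the best-judgment fallback instead of failing the tool call.
fn wait_for_reply(rx: mpsc::Receiver<String>, timeout: Duration) -> String {
    match rx.recv_timeout(timeout) {
        Ok(reply) => reply,
        Err(_) => "Make your best judgment and proceed".to_string(),
    }
}
```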
@@ -9,6 +9,7 @@ mod repl;
mod utils;
mod mcp;
mod parsers;
mod supervisor;
mod vault;
#[macro_use]
@@ -2,14 +2,15 @@ use crate::config::Config;
use crate::utils::{AbortSignal, abortable_run_with_spinner};
use crate::vault::interpolate_secrets;
use anyhow::{Context, Result, anyhow};
use bm25::{Document, Language, SearchEngine, SearchEngineBuilder};
use futures_util::future::BoxFuture;
use futures_util::{StreamExt, TryStreamExt, stream};
use indoc::formatdoc;
use rmcp::model::{CallToolRequestParams, CallToolResult};
use rmcp::service::RunningService;
use rmcp::transport::TokioChildProcess;
use rmcp::{RoleClient, ServiceExt};
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::borrow::Cow;
use std::collections::{HashMap, HashSet};
@@ -20,10 +21,46 @@ use std::sync::Arc;
use tokio::process::Command;
pub const MCP_INVOKE_META_FUNCTION_NAME_PREFIX: &str = "mcp_invoke";
pub const MCP_LIST_META_FUNCTION_NAME_PREFIX: &str = "mcp_list";
pub const MCP_SEARCH_META_FUNCTION_NAME_PREFIX: &str = "mcp_search";
pub const MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX: &str = "mcp_describe";
type ConnectedServer = RunningService<RoleClient, ()>;
#[derive(Clone, Debug, Default, Serialize)]
pub struct CatalogItem {
pub name: String,
pub server: String,
pub description: String,
}
#[derive(Debug)]
struct ServerCatalog {
engine: SearchEngine<String>,
items: HashMap<String, CatalogItem>,
}
impl ServerCatalog {
pub fn build_bm25(items: &HashMap<String, CatalogItem>) -> SearchEngine<String> {
let docs = items.values().map(|it| {
let contents = format!("{}\n{}\nserver:{}", it.name, it.description, it.server);
Document {
id: it.name.clone(),
contents,
}
});
SearchEngineBuilder::<String>::with_documents(Language::English, docs).build()
}
}
impl Clone for ServerCatalog {
fn clone(&self) -> Self {
Self {
engine: Self::build_bm25(&self.items),
items: self.items.clone(),
}
}
}
#[derive(Debug, Clone, Deserialize)]
struct McpServersConfig {
#[serde(rename = "mcpServers")]
@@ -50,7 +87,8 @@ enum JsonField {
pub struct McpRegistry {
log_path: Option<PathBuf>,
config: Option<McpServersConfig>,
servers: HashMap<String, Arc<ConnectedServer>>,
catalogs: HashMap<String, ServerCatalog>,
}
impl McpRegistry {
@@ -120,27 +158,31 @@ impl McpRegistry {
}
pub async fn reinit(
mut registry: McpRegistry,
enabled_mcp_servers: Option<String>,
abort_signal: AbortSignal,
) -> Result<Self> {
debug!("Reinitializing MCP registry");
let desired_ids = registry.resolve_server_ids(enabled_mcp_servers.clone());
let desired_set: HashSet<String> = desired_ids.iter().cloned().collect();
debug!("Stopping unused MCP servers");
abortable_run_with_spinner(
registry.stop_unused_servers(&desired_set),
"Stopping unused MCP servers",
abort_signal.clone(),
)
.await?;
abortable_run_with_spinner(
registry.start_select_mcp_servers(enabled_mcp_servers),
"Loading MCP servers",
abort_signal,
)
.await?;
Ok(registry)
}
async fn start_select_mcp_servers(
@@ -154,41 +196,39 @@ impl McpRegistry {
return Ok(());
}
let desired_ids = self.resolve_server_ids(enabled_mcp_servers);
let ids_to_start: Vec<String> = desired_ids
.into_iter()
.filter(|id| !self.servers.contains_key(id))
.collect();
if ids_to_start.is_empty() {
return Ok(());
}
debug!("Starting selected MCP servers: {:?}", ids_to_start);
let results: Vec<(String, Arc<_>, ServerCatalog)> = stream::iter(
ids_to_start
.into_iter()
.map(|id| async { self.start_server(id).await }),
)
.buffer_unordered(num_cpus::get())
.try_collect()
.await?;
for (id, server, catalog) in results {
self.servers.insert(id.clone(), server);
self.catalogs.insert(id, catalog);
}
Ok(())
}
async fn start_server(
&self,
id: String,
) -> Result<(String, Arc<ConnectedServer>, ServerCatalog)> {
let server = self
.config
.as_ref()
@@ -231,29 +271,82 @@ impl McpRegistry {
.await
.with_context(|| format!("Failed to start MCP server: {}", &server.command))?,
);
let tools = service.list_tools(None).await?;
debug!("Available tools for MCP server {id}: {tools:?}");
let mut items_vec = Vec::new();
for t in tools.tools {
let name = t.name.to_string();
let description = t.description.unwrap_or_default().to_string();
items_vec.push(CatalogItem {
name,
server: id.clone(),
description,
});
}
let items_map: HashMap<String, CatalogItem> = items_vec
.into_iter()
.map(|it| (it.name.clone(), it))
.collect();
let catalog = ServerCatalog {
engine: ServerCatalog::build_bm25(&items_map),
items: items_map,
};
info!("Started MCP server: {id}");
Ok((id.to_string(), service, catalog))
}
fn resolve_server_ids(&self, enabled_mcp_servers: Option<String>) -> Vec<String> {
if let Some(config) = &self.config
&& let Some(servers) = enabled_mcp_servers
{
if servers == "all" {
config.mcp_servers.keys().cloned().collect()
} else {
let enabled_servers: HashSet<String> =
servers.split(',').map(|s| s.trim().to_string()).collect();
config
.mcp_servers
.keys()
.filter(|id| enabled_servers.contains(*id))
.cloned()
.collect()
}
} else {
vec![]
}
}
pub async fn stop_unused_servers(&mut self, keep_ids: &HashSet<String>) -> Result<()> {
let mut ids_to_remove = Vec::new();
for (id, _) in self.servers.iter() {
if !keep_ids.contains(id) {
ids_to_remove.push(id.clone());
}
}
for id in ids_to_remove {
if let Some(server) = self.servers.remove(&id) {
match Arc::try_unwrap(server) {
Ok(server_inner) => {
server_inner
.cancel()
.await
.with_context(|| format!("Failed to stop MCP server: {id}"))?;
info!("Stopped MCP server: {id}");
}
Err(_) => {
info!("Detaching from MCP server: {id} (still in use)");
}
}
self.catalogs.remove(&id);
}
}
Ok(())
}
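`stop_unused_servers` only cancels a server when it holds the last `Arc`; otherwise it detaches. The `try_unwrap` shape, reduced to a pure helper (name illustrative):

```rust
use std::sync::Arc;

// Consume the handle only when this is the last Arc, mirroring how
// stop_unused_servers cancels a server it exclusively owns and merely
// detaches when another holder keeps it alive.
fn take_if_last<T>(handle: Arc<T>) -> Option<T> {
    Arc::try_unwrap(handle).ok()
}
```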
pub fn list_started_servers(&self) -> Vec<String> {
@@ -268,26 +361,48 @@ impl McpRegistry {
}
}
pub fn search_tools_server(&self, server: &str, query: &str, top_k: usize) -> Vec<CatalogItem> {
let Some(catalog) = self.catalogs.get(server) else {
return vec![];
};
let engine = &catalog.engine;
let raw = engine.search(query, top_k.min(20));
raw.into_iter()
.filter_map(|r| catalog.items.get(&r.document.id))
.take(top_k)
.cloned()
.collect()
}
pub async fn describe(&self, server_id: &str, tool: &str) -> Result<Value> {
let server = self
.servers
.get(server_id)
.cloned()
.ok_or_else(|| anyhow!("{server_id} MCP server not found in config"))?;
let tool_schema = server
.list_tools(None)
.await?
.tools
.into_iter()
.find(|it| it.name == tool)
.ok_or(anyhow!(
"{tool} not found in {server_id} MCP server catalog"
))?
.input_schema;
Ok(json!({
"type": "object",
"properties": {
"tool": {
"type": "string",
},
"arguments": tool_schema
}
}))
}
pub fn invoke(
@@ -305,9 +420,11 @@ impl McpRegistry {
let tool = tool.to_owned();
Box::pin(async move {
let server = server?;
let call_tool_request = CallToolRequestParams {
name: Cow::Owned(tool.to_owned()),
arguments: arguments.as_object().cloned(),
meta: None,
task: None,
};
let result = server.call_tool(call_tool_request).await?;
@@ -298,16 +298,48 @@ impl Rag {
top_k: usize,
rerank_model: Option<&str>,
abort_signal: AbortSignal,
) -> Result<(String, String, Vec<DocumentId>)> {
let ret = abortable_run_with_spinner(
self.hybird_search(text, top_k, rerank_model),
"Searching",
abort_signal,
)
.await;
let results = ret?;
let ids: Vec<_> = results.iter().map(|(id, _)| *id).collect();
let embeddings = results
.iter()
.map(|(id, content)| {
let source = self.resolve_source(id);
format!("[Source: {source}]\n{content}")
})
.collect::<Vec<_>>()
.join("\n\n");
let sources = self.format_sources(&ids);
Ok((embeddings, sources, ids))
}
fn resolve_source(&self, id: &DocumentId) -> String {
let (file_index, _) = id.split();
self.data
.files
.get(&file_index)
.map(|f| f.path.clone())
.unwrap_or_else(|| "unknown".to_string())
}
fn format_sources(&self, ids: &[DocumentId]) -> String {
let mut seen = IndexSet::new();
for id in ids {
let (file_index, _) = id.split();
if let Some(file) = self.data.files.get(&file_index) {
seen.insert(file.path.clone());
}
}
seen.into_iter()
.map(|path| format!("- {path}"))
.collect::<Vec<_>>()
.join("\n")
}
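`format_sources` deduplicates file paths while preserving first-seen order via `IndexSet`. With std only, a `HashSet` guard over an ordered pass gives the same effect (function signature simplified for illustration):

```rust
use std::collections::HashSet;

// Keep the first occurrence of each source path, in encounter order,
// and render a bulleted source list, as format_sources does above.
fn format_sources(paths: &[&str]) -> String {
    let mut seen = HashSet::new();
    paths
        .iter()
        .filter(|p| seen.insert(**p))
        .map(|p| format!("- {p}"))
        .collect::<Vec<_>>()
        .join("\n")
}
```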
pub async fn sync_documents(
@@ -826,6 +826,14 @@ pub async fn run_repl_command(
_ => unknown_command()?,
},
None => {
if config
.read()
.agent
.as_ref()
.is_some_and(|a| a.continuation_count() > 0)
{
config.write().agent.as_mut().unwrap().reset_continuation();
}
let input = Input::from_str(config, line, None);
ask(config, abort_signal.clone(), input, true).await?;
}
@@ -874,9 +882,131 @@ async fn ask(
)
.await
} else {
let should_continue = {
let cfg = config.read();
if let Some(agent) = &cfg.agent {
agent.auto_continue_enabled()
&& agent.continuation_count() < agent.max_auto_continues()
&& agent.todo_list().has_incomplete()
} else {
false
}
};
if should_continue {
let full_prompt = {
let mut cfg = config.write();
let agent = cfg.agent.as_mut().expect("agent checked above");
agent.set_last_continuation_response(output.clone());
agent.increment_continuation();
let count = agent.continuation_count();
let max = agent.max_auto_continues();
let todo_state = agent.todo_list().render_for_model();
let remaining = agent.todo_list().incomplete_count();
let prompt = agent.continuation_prompt();
let color = if cfg.light_theme() {
nu_ansi_term::Color::LightGray
} else {
nu_ansi_term::Color::DarkGray
};
eprintln!(
"\n📋 {}",
color.italic().paint(format!(
"Auto-continuing ({count}/{max}): {remaining} incomplete todo(s) remain"
))
);
format!("{prompt}\n\n{todo_state}")
};
let continuation_input = Input::from_str(config, &full_prompt, None);
ask(config, abort_signal, continuation_input, false).await
} else {
if config
.read()
.agent
.as_ref()
.is_some_and(|a| a.continuation_count() > 0)
{
config.write().agent.as_mut().unwrap().reset_continuation();
}
Config::maybe_autoname_session(config.clone());
let needs_compression = {
let cfg = config.read();
let compression_threshold = cfg.compression_threshold;
cfg.session
.as_ref()
.is_some_and(|s| s.needs_compression(compression_threshold))
};
if needs_compression {
let agent_can_continue_after_compress = {
let cfg = config.read();
cfg.agent.as_ref().is_some_and(|agent| {
agent.auto_continue_enabled()
&& agent.continuation_count() < agent.max_auto_continues()
&& agent.todo_list().has_incomplete()
})
};
{
let mut cfg = config.write();
if let Some(session) = cfg.session.as_mut() {
session.set_compressing(true);
}
}
let color = if config.read().light_theme() {
nu_ansi_term::Color::LightGray
} else {
nu_ansi_term::Color::DarkGray
};
eprintln!("\n📢 {}", color.italic().paint("Compressing the session."),);
if let Err(err) = Config::compress_session(config).await {
log::warn!("Failed to compress the session: {err}");
}
if let Some(session) = config.write().session.as_mut() {
session.set_compressing(false);
}
if agent_can_continue_after_compress {
let full_prompt = {
let mut cfg = config.write();
let agent = cfg.agent.as_mut().expect("agent checked above");
agent.increment_continuation();
let count = agent.continuation_count();
let max = agent.max_auto_continues();
let todo_state = agent.todo_list().render_for_model();
let remaining = agent.todo_list().incomplete_count();
let prompt = agent.continuation_prompt();
let color = if cfg.light_theme() {
nu_ansi_term::Color::LightGray
} else {
nu_ansi_term::Color::DarkGray
};
eprintln!(
"\n📋 {}",
color.italic().paint(format!(
"Auto-continuing after compression ({count}/{max}): {remaining} incomplete todo(s) remain"
))
);
format!("{prompt}\n\n{todo_state}")
};
let continuation_input = Input::from_str(config, &full_prompt, None);
return ask(config, abort_signal, continuation_input, false).await;
}
} else {
Config::maybe_compress_session(config.clone());
}
Ok(())
}
}
}
@@ -0,0 +1,80 @@
use fmt::{Debug, Formatter};
use serde_json::{Value, json};
use std::collections::HashMap;
use std::fmt;
use tokio::sync::oneshot;
use uuid::Uuid;
pub struct EscalationRequest {
pub id: String,
pub from_agent_id: String,
pub from_agent_name: String,
pub question: String,
pub options: Option<Vec<String>>,
pub reply_tx: oneshot::Sender<String>,
}
pub struct EscalationQueue {
pending: parking_lot::Mutex<HashMap<String, EscalationRequest>>,
}
impl EscalationQueue {
pub fn new() -> Self {
Self {
pending: parking_lot::Mutex::new(HashMap::new()),
}
}
pub fn submit(&self, request: EscalationRequest) -> String {
let id = request.id.clone();
self.pending.lock().insert(id.clone(), request);
id
}
pub fn take(&self, escalation_id: &str) -> Option<EscalationRequest> {
self.pending.lock().remove(escalation_id)
}
pub fn pending_summary(&self) -> Vec<Value> {
self.pending
.lock()
.values()
.map(|r| {
let mut entry = json!({
"escalation_id": r.id,
"from_agent_id": r.from_agent_id,
"from_agent_name": r.from_agent_name,
"question": r.question,
});
if let Some(ref options) = r.options {
entry["options"] = json!(options);
}
entry
})
.collect()
}
pub fn has_pending(&self) -> bool {
!self.pending.lock().is_empty()
}
}
impl Default for EscalationQueue {
fn default() -> Self {
Self::new()
}
}
impl Debug for EscalationQueue {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
let count = self.pending.lock().len();
f.debug_struct("EscalationQueue")
.field("pending_count", &count)
.finish()
}
}
pub fn new_escalation_id() -> String {
let short = &Uuid::new_v4().to_string()[..8];
format!("esc_{short}")
}
@@ -0,0 +1,60 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Envelope {
pub from: String,
pub to: String,
pub payload: EnvelopePayload,
pub timestamp: chrono::DateTime<chrono::Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum EnvelopePayload {
Text { content: String },
TaskCompleted { task_id: String, summary: String },
ShutdownRequest { reason: String },
ShutdownApproved,
}
#[derive(Debug, Default)]
pub struct Inbox {
messages: parking_lot::Mutex<Vec<Envelope>>,
}
impl Inbox {
pub fn new() -> Self {
Self {
messages: parking_lot::Mutex::new(Vec::new()),
}
}
pub fn deliver(&self, envelope: Envelope) {
self.messages.lock().push(envelope);
}
pub fn drain(&self) -> Vec<Envelope> {
let mut msgs = {
let mut guard = self.messages.lock();
std::mem::take(&mut *guard)
};
msgs.sort_by_key(|e| match &e.payload {
EnvelopePayload::ShutdownRequest { .. } => 0,
EnvelopePayload::ShutdownApproved => 0,
EnvelopePayload::TaskCompleted { .. } => 1,
EnvelopePayload::Text { .. } => 2,
});
msgs
}
}
impl Clone for Inbox {
fn clone(&self) -> Self {
let messages = self.messages.lock().clone();
Self {
messages: parking_lot::Mutex::new(messages),
}
}
}
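`Inbox::drain` sorts shutdown traffic ahead of task completions and plain text. The same priority can be expressed as a small key function usable with `sort_by_key` (the `Payload` enum here simplifies `EnvelopePayload`'s variants for illustration):

```rust
// Drain priority mirrored from the mailbox: shutdown control messages
// first, then task completions, then ordinary text.
#[derive(Debug, Clone, PartialEq)]
enum Payload {
    Text(String),
    TaskCompleted(String),
    ShutdownRequest(String),
}

fn priority(p: &Payload) -> u8 {
    match p {
        Payload::ShutdownRequest(_) => 0,
        Payload::TaskCompleted(_) => 1,
        Payload::Text(_) => 2,
    }
}
```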
@@ -0,0 +1,128 @@
pub mod escalation;
pub mod mailbox;
pub mod taskqueue;
use crate::utils::AbortSignal;
use fmt::{Debug, Formatter};
use mailbox::Inbox;
use taskqueue::TaskQueue;
use anyhow::{Result, bail};
use std::collections::HashMap;
use std::fmt;
use std::sync::Arc;
use tokio::task::JoinHandle;
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AgentExitStatus {
Completed,
Failed(String),
}
pub struct AgentResult {
pub id: String,
pub agent_name: String,
pub output: String,
pub exit_status: AgentExitStatus,
}
pub struct AgentHandle {
pub id: String,
pub agent_name: String,
pub depth: usize,
pub inbox: Arc<Inbox>,
pub abort_signal: AbortSignal,
pub join_handle: JoinHandle<Result<AgentResult>>,
}
pub struct Supervisor {
handles: HashMap<String, AgentHandle>,
task_queue: TaskQueue,
max_concurrent: usize,
max_depth: usize,
}
impl Supervisor {
pub fn new(max_concurrent: usize, max_depth: usize) -> Self {
Self {
handles: HashMap::new(),
task_queue: TaskQueue::new(),
max_concurrent,
max_depth,
}
}
pub fn active_count(&self) -> usize {
self.handles.len()
}
pub fn max_concurrent(&self) -> usize {
self.max_concurrent
}
pub fn max_depth(&self) -> usize {
self.max_depth
}
pub fn task_queue(&self) -> &TaskQueue {
&self.task_queue
}
pub fn task_queue_mut(&mut self) -> &mut TaskQueue {
&mut self.task_queue
}
pub fn register(&mut self, handle: AgentHandle) -> Result<()> {
if self.handles.len() >= self.max_concurrent {
bail!(
"Cannot spawn agent: at capacity ({}/{})",
self.handles.len(),
self.max_concurrent
);
}
if handle.depth > self.max_depth {
bail!(
"Cannot spawn agent: max depth exceeded ({}/{})",
handle.depth,
self.max_depth
);
}
self.handles.insert(handle.id.clone(), handle);
Ok(())
}
pub fn is_finished(&self, id: &str) -> Option<bool> {
self.handles.get(id).map(|h| h.join_handle.is_finished())
}
pub fn take(&mut self, id: &str) -> Option<AgentHandle> {
self.handles.remove(id)
}
pub fn inbox(&self, id: &str) -> Option<&Arc<Inbox>> {
self.handles.get(id).map(|h| &h.inbox)
}
pub fn list_agents(&self) -> Vec<(&str, &str)> {
self.handles
.values()
.map(|h| (h.id.as_str(), h.agent_name.as_str()))
.collect()
}
pub fn cancel_all(&self) {
for handle in self.handles.values() {
handle.abort_signal.set_ctrlc();
}
}
}
impl Debug for Supervisor {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("Supervisor")
.field("active_agents", &self.handles.len())
.field("max_concurrent", &self.max_concurrent)
.field("max_depth", &self.max_depth)
.finish()
}
}
@@ -0,0 +1,271 @@
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum TaskStatus {
Pending,
Blocked,
InProgress,
Completed,
Failed,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskNode {
pub id: String,
pub subject: String,
pub description: String,
pub status: TaskStatus,
pub owner: Option<String>,
pub blocked_by: HashSet<String>,
pub blocks: HashSet<String>,
pub dispatch_agent: Option<String>,
pub prompt: Option<String>,
}
impl TaskNode {
pub fn new(
id: String,
subject: String,
description: String,
dispatch_agent: Option<String>,
prompt: Option<String>,
) -> Self {
Self {
id,
subject,
description,
status: TaskStatus::Pending,
owner: None,
blocked_by: HashSet::new(),
blocks: HashSet::new(),
dispatch_agent,
prompt,
}
}
pub fn is_runnable(&self) -> bool {
self.status == TaskStatus::Pending && self.blocked_by.is_empty()
}
}
#[derive(Debug, Clone)]
pub struct TaskQueue {
tasks: HashMap<String, TaskNode>,
next_id: usize,
}
impl Default for TaskQueue {
fn default() -> Self {
// Route through new() so Default also starts IDs at 1;
// a derived Default would start next_id at 0.
Self::new()
}
}
impl TaskQueue {
pub fn new() -> Self {
Self {
tasks: HashMap::new(),
next_id: 1,
}
}
pub fn create(
&mut self,
subject: String,
description: String,
dispatch_agent: Option<String>,
prompt: Option<String>,
) -> String {
let id = self.next_id.to_string();
self.next_id += 1;
let task = TaskNode::new(id.clone(), subject, description, dispatch_agent, prompt);
self.tasks.insert(id.clone(), task);
id
}
pub fn add_dependency(&mut self, task_id: &str, blocked_by: &str) -> Result<(), String> {
if task_id == blocked_by {
return Err("A task cannot depend on itself".into());
}
if !self.tasks.contains_key(blocked_by) {
return Err(format!("Dependency task '{blocked_by}' does not exist"));
}
if !self.tasks.contains_key(task_id) {
return Err(format!("Task '{task_id}' does not exist"));
}
if self.would_create_cycle(task_id, blocked_by) {
return Err(format!(
"Adding dependency {task_id} -> {blocked_by} would create a cycle"
));
}
if let Some(task) = self.tasks.get_mut(task_id) {
task.blocked_by.insert(blocked_by.to_string());
task.status = TaskStatus::Blocked;
}
if let Some(blocker) = self.tasks.get_mut(blocked_by) {
blocker.blocks.insert(task_id.to_string());
}
Ok(())
}
/// Mark `task_id` completed and return the IDs of dependents that just became runnable.
pub fn complete(&mut self, task_id: &str) -> Vec<String> {
let mut newly_runnable = Vec::new();
let dependents: Vec<String> = self
.tasks
.get(task_id)
.map(|t| t.blocks.iter().cloned().collect())
.unwrap_or_default();
if let Some(task) = self.tasks.get_mut(task_id) {
task.status = TaskStatus::Completed;
}
for dep_id in &dependents {
if let Some(dep) = self.tasks.get_mut(dep_id) {
dep.blocked_by.remove(task_id);
if dep.blocked_by.is_empty() && dep.status == TaskStatus::Blocked {
dep.status = TaskStatus::Pending;
newly_runnable.push(dep_id.clone());
}
}
}
newly_runnable
}
pub fn fail(&mut self, task_id: &str) {
if let Some(task) = self.tasks.get_mut(task_id) {
task.status = TaskStatus::Failed;
}
}
/// Assign a runnable, unowned task to `owner`; returns false otherwise.
pub fn claim(&mut self, task_id: &str, owner: &str) -> bool {
if let Some(task) = self.tasks.get_mut(task_id)
&& task.is_runnable()
&& task.owner.is_none()
{
task.owner = Some(owner.to_string());
task.status = TaskStatus::InProgress;
return true;
}
false
}
pub fn get(&self, task_id: &str) -> Option<&TaskNode> {
self.tasks.get(task_id)
}
pub fn list(&self) -> Vec<&TaskNode> {
let mut tasks: Vec<&TaskNode> = self.tasks.values().collect();
tasks.sort_by_key(|t| t.id.parse::<usize>().unwrap_or(0));
tasks
}
/// DFS from the proposed blocker along existing `blocked_by` edges;
/// reaching `task_id` means the new edge would close a loop.
fn would_create_cycle(&self, task_id: &str, blocked_by: &str) -> bool {
let mut visited = HashSet::new();
let mut stack = vec![blocked_by.to_string()];
while let Some(current) = stack.pop() {
if current == task_id {
return true;
}
if visited.insert(current.clone())
&& let Some(task) = self.tasks.get(&current)
{
for dep in &task.blocked_by {
stack.push(dep.clone());
}
}
}
false
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_create_and_list() {
let mut queue = TaskQueue::new();
let id1 = queue.create(
"Research".into(),
"Research auth patterns".into(),
None,
None,
);
let id2 = queue.create("Implement".into(), "Write the code".into(), None, None);
assert_eq!(id1, "1");
assert_eq!(id2, "2");
assert_eq!(queue.list().len(), 2);
}
#[test]
fn test_dependency_and_completion() {
let mut queue = TaskQueue::new();
let id1 = queue.create("Step 1".into(), "".into(), None, None);
let id2 = queue.create("Step 2".into(), "".into(), None, None);
queue.add_dependency(&id2, &id1).unwrap();
assert!(queue.get(&id1).unwrap().is_runnable());
assert!(!queue.get(&id2).unwrap().is_runnable());
assert_eq!(queue.get(&id2).unwrap().status, TaskStatus::Blocked);
let unblocked = queue.complete(&id1);
assert_eq!(unblocked, vec![id2.clone()]);
assert!(queue.get(&id2).unwrap().is_runnable());
}
#[test]
fn test_fan_in_dependency() {
let mut queue = TaskQueue::new();
let id1 = queue.create("A".into(), "".into(), None, None);
let id2 = queue.create("B".into(), "".into(), None, None);
let id3 = queue.create("C (needs A and B)".into(), "".into(), None, None);
queue.add_dependency(&id3, &id1).unwrap();
queue.add_dependency(&id3, &id2).unwrap();
assert!(!queue.get(&id3).unwrap().is_runnable());
let unblocked = queue.complete(&id1);
assert!(unblocked.is_empty());
assert!(!queue.get(&id3).unwrap().is_runnable());
let unblocked = queue.complete(&id2);
assert_eq!(unblocked, vec![id3.clone()]);
assert!(queue.get(&id3).unwrap().is_runnable());
}
#[test]
fn test_cycle_detection() {
let mut queue = TaskQueue::new();
let id1 = queue.create("A".into(), "".into(), None, None);
let id2 = queue.create("B".into(), "".into(), None, None);
queue.add_dependency(&id2, &id1).unwrap();
let result = queue.add_dependency(&id1, &id2);
assert!(result.is_err());
assert!(result.unwrap_err().contains("cycle"));
}
#[test]
fn test_self_dependency_rejected() {
let mut queue = TaskQueue::new();
let id1 = queue.create("A".into(), "".into(), None, None);
let result = queue.add_dependency(&id1, &id1);
assert!(result.is_err());
}
#[test]
fn test_claim() {
let mut queue = TaskQueue::new();
let id1 = queue.create("Task".into(), "".into(), None, None);
assert!(queue.claim(&id1, "worker-1"));
assert!(!queue.claim(&id1, "worker-2"));
assert_eq!(queue.get(&id1).unwrap().status, TaskStatus::InProgress);
}
}