Compare commits

66 Commits

| Author | SHA1 | Date |
|---|---|---|
| | a5899da4fb | |
| | dedcef8ac5 | |
| | d658f1d2fe | |
| | 6b4a45874f | |
| | 7839e1dbd9 | |
| | 78c3932f36 | |
| | 11334149b0 | |
| | 4caa035528 | |
| | f30e81af08 | |
| | 4c75655f58 | |
| | f865892c28 | |
| | ebeb9c9b7d | |
| | ab2b927fcb | |
| | 7e5ff2ba1f | |
| | ed59051f3d | |
| | e98bf56a2b | |
| | fb510b1a4f | |
| | 6c17462040 | |
| | 1536cf384c | |
| | d6842d7e29 | |
| | fbc0acda2a | |
| | 0327d041b6 | |
| | 6a01fd4fbd | |
| | d822180205 | |
| | 89d0fdce26 | |
| | b3ecdce979 | |
| | 3873821a31 | |
| | 9c2801b643 | |
| | d78820dcd4 | |
| | d43c4232a2 | |
| | f41c85b703 | |
| | 9e056bdcf0 | |
| | d6022b9f98 | |
| | 6fc1abf94a | |
| | 92ea0f624e | |
| | c3fd8fbc1c | |
| | 7fd3f7761c | |
| | 05e19098b2 | |
| | 60067ae757 | |
| | c72003b0b6 | |
| | 7c9d500116 | |
| | 6b2c87b562 | |
| | b2dbdfb4b1 | |
| | 063e198f96 | |
| | 73cbe16ec1 | |
| | bdea854a9f | |
| | 9b4c800597 | |
| | eb4d1c02f4 | |
| | c428990900 | |
| | 03b9cc70b9 | |
| | 3fa0eb832c | |
| | 83f66e1061 | |
| | 741b9c364c | |
| | b6f6f456db | |
| | 00a6cf74d7 | |
| | d35ca352ca | |
| | 57dc1cb252 | |
| | 101a9cdd6e | |
| | c5f52e1efb | |
| | 470149b606 | |
| | 02062c5a50 | |
| | e6e99b6926 | |
| | 15a293204f | |
| | ecf3780aed | |
| | e798747135 | |
| | 60493728a0 | |
@@ -0,0 +1,11 @@
### AI assistance (if any):

- List tools here and files touched by them

### Authorship & Understanding

- [ ] I wrote or heavily modified this code myself
- [ ] I understand how it works end-to-end
- [ ] I can maintain this code in the future
- [ ] No undisclosed AI-generated code was used
- [ ] If AI assistance was used, it is documented below
@@ -1,3 +1,74 @@
## v0.3.0 (2026-04-02)

### Feat

- Added a `todo__clear` function to the todo system and a matching `.clear todo` REPL command for significant changes in agent direction
- Added available tools to the prompts for the sisyphus and code-reviewer agent families
- Added available tools to the coder prompt
- Improved token efficiency when delegating from sisyphus to coder
- Modified sisyphus agents to use the new ddg-search MCP server for web searches instead of built-in model searches
- Added support for specifying a custom response to multiple-choice prompts when no option suits the user's needs
- Supported theming in the inquire prompts in the REPL
- Added the duckduckgo-search MCP server for searching the web (in addition to the built-in web search tools)
- Support for Gemini OAuth
- Support authenticating or refreshing OAuth for supported clients from within the REPL
- Allow first runs to select OAuth for supported providers
- Support OAuth authentication flows for Claude
- Improved MCP server spin-up and spin-down when switching contexts or settings in the REPL: modify the existing config rather than always stopping all servers, and re-initialize only when necessary
- Allow the explore agent to run search queries for understanding docs or API specs
- Allow the oracle to perform web searches for deeper research
- Added web search support to the main sisyphus agent to answer user queries
- Created a CodeRabbit-style code-reviewer agent
- Added an agent configuration option indicating the timeout for user input before proceeding (defaults to 5 minutes)
- Added support for sub-agents to escalate user interaction requests from any depth to the parent agents
- Added built-in user interaction tools, removing the need for the list/confirm/etc. prompts in prompt tools and enhancing user interactions in Loki
- Experimental update to sisyphus to use the new parallel agent spawning system
- Added an agent configuration property that auto-injects sub-agent spawning instructions (when using the built-in sub-agent spawning system)
- Auto-dispatch support for sub-agents and support for the teammate pattern between sub-agents
- Full passive task queue integration for parallelization of sub-agents
- Implemented initial scaffolding for built-in sub-agent spawning tool call operations
- Initial models for agent parallelization
- Added interactive prompting between the LLM and the user in Sisyphus using the built-in Bash utility scripts

### Fix

- Clarified user text input interaction
- Fixed a recursion bug with similarly named Bash search functions in the explore agent
- Updated the error for unauthenticated OAuth to mention the REPL `.authenticate` command
- Corrected a bug where the coder agent did not output a summary of its changes, leaving the parent Sisyphus agent unable to tell whether the work succeeded
- Injected the Claude Code system prompt into Claude requests to make them valid once again
- Do not inject tools when models don't support them; detect this conflict before API calls happen
- The REPL `.authenticate` command now works from within sessions, agents, and roles with pre-configured models
- Implemented the path normalization fix for the oracle and explore agents
- Updated the Atlassian MCP server endpoint to account for future deprecation
- Fixed a bug in the coder agent that caused it to create absolute paths from the current directory
- Fixed MCP server secrets interpolation: the updated secrets-injection regex matched greedily across newlines and replaced too much content; the fix skips commented-out lines in YAML files
- Don't try to inject secrets into commented-out lines in the config
- Removed the top_p parameter from some agents so they work across model providers
- Improved sub-agent stdout and stderr output for users to follow
- Inject agent variables into environment variables for global tool calls invoked from agents, so they can modify global tool behavior
- Removed the unnecessary execute_commands tool from the oracle agent
- Added auto_confirm to the coder agent so sub-agent spawning doesn't freeze
- Fixed a bug in the new supervisor and todo built-ins that caused errors with OpenAI models
- Added a condition to sisyphus to always output a summary that clearly indicates completion
- Updated the sisyphus prompt to explicitly delegate to the coder agent for any code changes except trivial ones
- Added the auto_confirm variable back into sisyphus
- Removed the now-unnecessary is_stale_response check that was breaking auto-continuing with parallel agents
- Bypassed enabled_tools for user interaction tools, so if function calling is enabled at all, the LLM has access to the user interaction tools in REPL mode
- When parallel agents run, only write to stdout from the parent and only display the parent's throbber
- Implemented the previously missing support for failing a task and keeping all its dependents blocked
- Clean up orphaned sub-agents when the parent agent terminates
- Fixed the Bash prompt utils so they correctly show output when run by a tool invocation
- Restored the automatic bidirectional communication from sub-agents back up to parent agents (i.e., checking the inbox and sending messages)
- Agent delegation tools were not being passed into the `{{__tools__}}` placeholder, so agents weren't delegating to sub-agents

### Refactor

- Made the OAuth module more generic so it can support loopback OAuth (not just manual)
- Changed the default session name for Sisyphus to temp (requiring users to explicitly name sessions they wish to save)
- Updated the sisyphus agent to use the built-in user interaction tools instead of custom Bash-based tools
- Cleaned up some left-over implementation stubs

## v0.2.0 (2026-02-14)

### Feat
@@ -76,6 +76,13 @@ Then, you can run workflows locally without having to commit and see if the GitH
act -W .github/workflows/release.yml --input_type bump=minor
```

## Authorship Policy

All code in this repository is written and reviewed by humans. AI-generated code (e.g., Copilot, ChatGPT,
Claude, etc.) is not permitted unless explicitly disclosed and approved.

Submissions must certify that the contributor understands and can maintain the code they submit.

## Questions? Reach out to me!

If you have any questions while developing Loki, please don't hesitate to reach out to me at
alex.j.tusa@gmail.com. I'm happy to help contributors in any way I can, whether they're new or experienced!
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
 [package]
 name = "loki-ai"
-version = "0.2.0"
+version = "0.3.0"
 edition = "2024"
 authors = ["Alex Clarke <alex.j.tusa@gmail.com>"]
 description = "An all-in-one, batteries included LLM CLI Tool"
@@ -18,10 +18,11 @@ anyhow = "1.0.69"
 bytes = "1.4.0"
 clap = { version = "4.5.40", features = ["cargo", "derive", "wrap_help"] }
 dirs = "6.0.0"
 dunce = "1.0.5"
 futures-util = "0.3.29"
-inquire = "0.7.0"
+inquire = "0.9.4"
 is-terminal = "0.4.9"
-reedline = "0.40.0"
+reedline = "0.46.0"
 serde = { version = "1.0.152", features = ["derive"] }
 serde_json = { version = "1.0.93", features = ["preserve_order"] }
 serde_yaml = "0.9.17"
@@ -37,7 +38,7 @@ tokio-graceful = "0.2.2"
 tokio-stream = { version = "0.1.15", default-features = false, features = [
     "sync",
 ] }
-crossterm = "0.28.1"
+crossterm = "0.29.0"
 chrono = "0.4.23"
 bincode = { version = "2.0.0", features = [
     "serde",
@@ -90,12 +91,17 @@ strum_macros = "0.27.2"
 indoc = "2.0.6"
 rmcp = { version = "0.16.0", features = ["client", "transport-child-process"] }
 num_cpus = "1.17.0"
 rustpython-parser = "0.4.0"
 rustpython-ast = "0.4.0"
 tree-sitter = "0.26.8"
 tree-sitter-language = "0.1"
 tree-sitter-python = "0.25.0"
 tree-sitter-typescript = "0.23"
 colored = "3.0.0"
 clap_complete = { version = "4.5.58", features = ["unstable-dynamic"] }
-gman = "0.3.0"
+gman = "0.4.1"
 clap_complete_nushell = "4.5.9"
 open = "5"
 rand = { version = "0.10.0", features = ["default"] }
 url = "2.5.8"

 [dependencies.reqwest]
 version = "0.12.0"
@@ -126,7 +132,6 @@ arboard = { version = "3.3.0", default-features = false }

 [dev-dependencies]
 pretty_assertions = "1.4.0"
-rand = "0.9.0"

 [[bin]]
 name = "loki"
@@ -28,6 +28,7 @@ Coming from [AIChat](https://github.com/sigoden/aichat)? Follow the [migration g
* [Function Calling](./docs/function-calling/TOOLS.md#Tools): Leverage function calling capabilities to extend Loki's functionality with custom tools
* [Creating Custom Tools](./docs/function-calling/CUSTOM-TOOLS.md): You can create your own custom tools to enhance Loki's capabilities.
  * [Create Custom Python Tools](./docs/function-calling/CUSTOM-TOOLS.md#custom-python-based-tools)
  * [Create Custom TypeScript Tools](./docs/function-calling/CUSTOM-TOOLS.md#custom-typescript-based-tools)
  * [Create Custom Bash Tools](./docs/function-calling/CUSTOM-BASH-TOOLS.md)
  * [Bash Prompt Utilities](./docs/function-calling/BASH-PROMPT-HELPERS.md)
* [First-Class MCP Server Support](./docs/function-calling/MCP-SERVERS.md): Easily connect and interact with MCP servers for advanced functionality.
@@ -39,6 +40,7 @@ Coming from [AIChat](https://github.com/sigoden/aichat)? Follow the [migration g
* [Todo System](./docs/TODO-SYSTEM.md): Built-in task tracking for improved agent reliability with smaller models.
* [Environment Variables](./docs/ENVIRONMENT-VARIABLES.md): Override and customize your Loki configuration at runtime with environment variables.
* [Client Configurations](./docs/clients/CLIENTS.md): Configuration instructions for various LLM providers.
  * [Authentication (API Key & OAuth)](./docs/clients/CLIENTS.md#authentication): Authenticate with API keys or OAuth for subscription-based access.
* [Patching API Requests](./docs/clients/PATCHES.md): Learn how to patch API requests for advanced customization.
* [Custom Themes](./docs/THEMES.md): Change the look and feel of Loki to your preferences with custom themes.
* [History](#history): A history of how Loki came to be.
@@ -150,6 +152,26 @@ guide you through the process when you first attempt to access the vault. So, to
```sh
loki --list-secrets
```

### Authentication

Each client in your configuration needs authentication (with a few exceptions, e.g. ollama). Most clients use an API key
(set via `api_key` in the config or through the [vault](./docs/VAULT.md)). For providers that support OAuth (e.g. Claude Pro/Max
subscribers, Google Gemini), you can authenticate with your existing subscription instead:

```yaml
# In your config.yaml
clients:
  - type: claude
    name: my-claude-oauth
    auth: oauth # Indicate you want to authenticate with OAuth instead of an API key
```

```sh
loki --authenticate my-claude-oauth
# Or via the REPL: .authenticate
```

For full details, see the [authentication documentation](./docs/clients/CLIENTS.md#authentication).

### Tab-Completions

You can also enable tab completions to make using Loki easier. To do so, add the following to your shell profile:
@@ -2,68 +2,6 @@
# Shared Agent Utilities - Minimal, focused helper functions
set -euo pipefail

#############################
## CONTEXT FILE MANAGEMENT ##
#############################

get_context_file() {
    local project_dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
    echo "${project_dir}/.loki-context"
}

# Initialize context file for a new task
# Usage: init_context "Task description"
init_context() {
    local task="$1"
    local project_dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
    local context_file
    context_file=$(get_context_file)

    cat > "${context_file}" <<EOF
## Project: ${project_dir}
## Task: ${task}
## Started: $(date -Iseconds)

### Prior Findings

EOF
}

# Append findings to the context file
# Usage: append_context "agent_name" "finding summary"
append_context() {
    local agent="$1"
    local finding="$2"
    local context_file
    context_file=$(get_context_file)

    if [[ -f "${context_file}" ]]; then
        {
            echo ""
            echo "[${agent}]:"
            echo "${finding}"
        } >> "${context_file}"
    fi
}

# Read the current context (returns empty string if no context)
# Usage: context=$(read_context)
read_context() {
    local context_file
    context_file=$(get_context_file)

    if [[ -f "${context_file}" ]]; then
        cat "${context_file}"
    fi
}

# Clear the context file
clear_context() {
    local context_file
    context_file=$(get_context_file)
    rm -f "${context_file}"
}
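The round trip through these context helpers can be sketched as a standalone script. The task string, the finding, and the temp directory below are invented for illustration; the helper bodies are simplified from the functions above.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Point the project dir at a throwaway location for the demo
export LLM_AGENT_VAR_PROJECT_DIR="$(mktemp -d)"

get_context_file() {
    local project_dir="${LLM_AGENT_VAR_PROJECT_DIR:-.}"
    echo "${project_dir}/.loki-context"
}

# Simplified init: write the task header and findings section
init_context() {
    local task="$1"
    cat > "$(get_context_file)" <<EOF
## Task: ${task}

### Prior Findings

EOF
}

# Append one agent's finding under its own label
append_context() {
    local agent="$1" finding="$2"
    local context_file
    context_file=$(get_context_file)
    if [[ -f "${context_file}" ]]; then
        {
            echo ""
            echo "[${agent}]:"
            echo "${finding}"
        } >> "${context_file}"
    fi
}

init_context "add OAuth support"
append_context "explore" "auth lives in src/auth/"
cat "$(get_context_file)"
```

The printed file shows the task header followed by the `[explore]:` finding, which is exactly what a later `read_context` call would hand to the next sub-agent.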
#######################
## PROJECT DETECTION ##
#######################

@@ -279,9 +217,9 @@ _detect_with_llm() {
    evidence=$(_gather_project_evidence "${dir}")
    local prompt
    prompt=$(cat <<-EOF

Analyze this project directory and determine the project type, primary language, and the correct shell commands to build, test, and check (lint/typecheck) it.

EOF
    )
    prompt+=$'\n'"${evidence}"$'\n'
@@ -348,77 +286,11 @@ detect_project() {
    echo '{"type":"unknown","build":"","test":"","check":""}'
}
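The prompt assembly in `_detect_with_llm` above is a heredoc followed by an explicit append of the gathered evidence. A minimal standalone sketch, with an invented evidence string standing in for `_gather_project_evidence` output:

```shell
# Invented evidence string for illustration
evidence='Cargo.toml present; src/main.rs present'

# Heredoc body becomes the base prompt
prompt=$(cat <<EOF
Analyze this project directory and determine the project type.
EOF
)

# $'\n' appends literal newlines so the evidence lands on its own line
prompt+=$'\n'"${evidence}"$'\n'

printf '%s' "${prompt}"
```

The `$'\n'` quoting is what keeps the evidence visually separated from the instruction when the prompt reaches the model.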
######################
## AGENT INVOCATION ##
######################

# Invoke a subagent with optional context injection
# Usage: invoke_agent <agent_name> <prompt> [extra_args...]
invoke_agent() {
    local agent="$1"
    local prompt="$2"
    shift 2

    local context
    context=$(read_context)

    local full_prompt
    if [[ -n "${context}" ]]; then
        full_prompt="## Orchestrator Context

The orchestrator (sisyphus) has gathered this context from prior work:

<context>
${context}
</context>

## Your Task

${prompt}"
    else
        full_prompt="${prompt}"
    fi

    env AUTO_CONFIRM=true loki --agent "${agent}" "$@" "${full_prompt}" 2>&1
}

# Invoke a subagent and capture a summary of its findings
# Usage: result=$(invoke_agent_with_summary "explore" "find auth patterns")
invoke_agent_with_summary() {
    local agent="$1"
    local prompt="$2"
    shift 2

    local output
    output=$(invoke_agent "${agent}" "${prompt}" "$@")

    local summary=""

    if echo "${output}" | grep -q "FINDINGS:"; then
        summary=$(echo "${output}" | sed -n '/FINDINGS:/,/^[A-Z_]*COMPLETE/p' | grep "^- " | sed 's/^- / - /')
    elif echo "${output}" | grep -q "CODER_COMPLETE:"; then
        summary=$(echo "${output}" | grep "CODER_COMPLETE:" | sed 's/CODER_COMPLETE: *//')
    elif echo "${output}" | grep -q "ORACLE_COMPLETE"; then
        summary=$(echo "${output}" | sed -n '/^## Recommendation/,/^## /{/^## Recommendation/d;/^## /d;p}' | sed '/^$/d' | head -10)
    fi

    # Failsafe: extract up to 5 meaningful lines if no markers found
    if [[ -z "${summary}" ]]; then
        summary=$(echo "${output}" | grep -v "^$" | grep -v "^#" | grep -v "^\-\-\-" | tail -10 | head -5)
    fi

    if [[ -n "${summary}" ]]; then
        append_context "${agent}" "${summary}"
    fi

    echo "${output}"
}
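The marker-based summary extraction in `invoke_agent_with_summary` can be checked in isolation. The sample agent output below is invented; the grep/sed pipeline is the same one used for the `CODER_COMPLETE:` branch.

```shell
# Invented sample of a coder sub-agent's output
output='Some build logs...
CODER_COMPLETE: implemented the login handler in src/auth.rs'

summary=""
if echo "${output}" | grep -q "CODER_COMPLETE:"; then
    # Keep the marker line, then strip the marker prefix
    summary=$(echo "${output}" | grep "CODER_COMPLETE:" | sed 's/CODER_COMPLETE: *//')
fi
echo "${summary}"   # → implemented the login handler in src/auth.rs
```

Because the marker line is the only thing extracted, noisy build logs above it never leak into the context file.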
###########################
## FILE SEARCH UTILITIES ##
###########################

-search_files() {
+_search_files() {
    local pattern="$1"
    local dir="${2:-.}"
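The rename above (`search_files` to `_search_files`) is the fix for the recursion bug noted in the changelog: a wrapper command and its shared helper must have distinct names, or the wrapper calls itself forever. A minimal sketch of the fixed shape, with an invented helper body:

```shell
# Helper: stands in for the real find-based search logic
_search_files() {
    echo "searching for $1 in ${2:-.}"
}

# Public command: same name the agent sees, delegates to the helper
search_files() {
    local pattern="$1"
    local dir="${2:-.}"
    _search_files "${pattern}" "${dir}"   # calls the helper, not itself
}

search_files "*.rs" "src"   # → searching for *.rs in src
```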
@@ -0,0 +1,36 @@
# Code Reviewer

A CodeRabbit-style code review orchestrator that coordinates per-file reviews and synthesizes findings into a unified
report.

This agent acts as the manager for the review process, delegating actual file analysis to **[File Reviewer](../file-reviewer/README.md)**
agents while handling coordination and final reporting.

## Features

- 🤖 **Orchestration**: Spawns parallel reviewers for each changed file.
- 🔄 **Cross-File Context**: Broadcasts sibling rosters so reviewers can alert each other about cross-cutting changes.
- 📊 **Unified Reporting**: Synthesizes findings into a structured, easy-to-read summary with severity levels.
- ⚡ **Parallel Execution**: Runs reviews concurrently for maximum speed.

## Pro-Tip: Use an IDE MCP Server for Improved Performance

Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for how to configure
them), and modify the agent definition to look like this:

```yaml
# ...

mcp_servers:
  - jetbrains # The name of your configured IDE MCP server

global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  # - execute_command.sh

# ...
```
@@ -2,7 +2,6 @@ name: code-reviewer
 description: CodeRabbit-style code reviewer - spawns per-file reviewers, synthesizes findings
 version: 1.0.0
 temperature: 0.1
-top_p: 0.95

 auto_continue: true
 max_auto_continues: 20
@@ -123,3 +122,6 @@ instructions: |
 - Project: {{project_dir}}
 - CWD: {{__cwd__}}
 - Shell: {{__shell__}}

 ## Available Tools:
 {{__tools__}}
@@ -2,7 +2,7 @@

An AI agent that assists you with your coding tasks.

This agent is designed to be delegated to by the **[Sisyphus](../sisyphus/README.md)** agent to implement code specifications. Sisyphus
acts as the coordinator/architect, while Coder handles the implementation details.

## Features
@@ -13,4 +13,28 @@ acts as the coordinator/architect, while Coder handles the implementation detail
- 🧐 Advanced code analysis and improvement suggestions
- 📊 Precise diff-based file editing for controlled code modifications

It can also be used as a standalone tool for direct coding assistance.

## Pro-Tip: Use an IDE MCP Server for Improved Performance

Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for how to configure
them), and modify the agent definition to look like this:

```yaml
# ...

mcp_servers:
  - jetbrains # The name of your configured IDE MCP server

global_tools:
  # Keep useful read-only tools for reading files in other non-project directories
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  # - fs_write.sh
  # - fs_patch.sh
  - execute_command.sh

# ...
```
@@ -2,7 +2,6 @@ name: coder
 description: Implementation agent - writes code, follows patterns, verifies with builds
 version: 1.0.0
 temperature: 0.1
-top_p: 0.95

 auto_continue: true
 max_auto_continues: 15
@@ -30,11 +29,30 @@ instructions: |
 ## Your Mission

 Given an implementation task:
-1. Understand what to build (from context provided)
-2. Study existing patterns (read 1-2 similar files)
+1. Check for orchestrator context first (see below)
+2. Fill gaps only. Read files NOT already covered in context
 3. Write the code (using tools, NOT chat output)
 4. Verify it compiles/builds
-5. Signal completion
+5. Signal completion with a summary

 ## Using Orchestrator Context (IMPORTANT)

 When spawned by sisyphus, your prompt will often contain a `<context>` block
 with prior findings: file paths, code patterns, and conventions discovered by
 explore agents.

 **If context is provided:**
 1. Use it as your primary reference. Don't re-read files already summarized
 2. Follow the code patterns shown. Snippets in context ARE the style guide
 3. Read the referenced files ONLY IF you need more detail (e.g. full function
    signature, import list, or adjacent code not included in the snippet)
 4. If context includes a "Conventions" section, follow it exactly

 **If context is NOT provided or is too vague to act on:**
 Fall back to self-exploration: grep for similar files, read 1-2 examples,
 match their style.

 **Never ignore provided context.** It represents work already done upstream.

 ## Todo System

@@ -83,12 +101,13 @@ instructions: |

 ## Completion Signal

-End with:
+When done, end your response with a summary so the parent agent knows what happened:

 ```
-CODER_COMPLETE: [summary of what was implemented]
+CODER_COMPLETE: [summary of what was implemented, which files were created/modified, and build status]
 ```

-Or if failed:
+Or if something went wrong:
 ```
 CODER_FAILED: [what went wrong]
 ```
@@ -105,4 +124,6 @@ instructions: |
 - Project: {{project_dir}}
 - CWD: {{__cwd__}}
 - Shell: {{__shell__}}

 ## Available tools:
 {{__tools__}}
@@ -14,11 +14,28 @@ _project_dir() {
    (cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}

# Normalize a path to be relative to project root.
# Strips the project_dir prefix if the LLM passes an absolute path.
# Usage: local rel_path; rel_path=$(_normalize_path "/abs/or/rel/path")
_normalize_path() {
    local input_path="$1"
    local project_dir
    project_dir=$(_project_dir)

    if [[ "${input_path}" == /* ]]; then
        input_path="${input_path#"${project_dir}"/}"
    fi

    input_path="${input_path#./}"
    echo "${input_path}"
}
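A standalone check of the normalization logic above. The project directory is invented for the demo; `_project_dir` is stubbed out so `_normalize_path` can run on its own.

```shell
# Stub: stands in for the real project-dir lookup
_project_dir() { echo "/home/user/project"; }

_normalize_path() {
    local input_path="$1"
    local project_dir
    project_dir=$(_project_dir)

    # Absolute path: strip the project-dir prefix
    if [[ "${input_path}" == /* ]]; then
        input_path="${input_path#"${project_dir}"/}"
    fi

    # Drop a leading ./ either way
    input_path="${input_path#./}"
    echo "${input_path}"
}

_normalize_path "/home/user/project/src/main.rs"   # → src/main.rs
_normalize_path "./src/lib.rs"                     # → src/lib.rs
_normalize_path "src/util.rs"                      # → src/util.rs
```

All three spellings collapse to the same project-relative form, which is why the tools below can safely join the result onto `${project_dir}/`.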
# @cmd Read a file's contents before modifying
# @option --path! Path to the file (relative to project root)
read_file() {
-    # shellcheck disable=SC2154
-    local file_path="${argc_path}"
+    local file_path
+    # shellcheck disable=SC2154
+    file_path=$(_normalize_path "${argc_path}")
    local project_dir
    project_dir=$(_project_dir)
    local full_path="${project_dir}/${file_path}"
@@ -39,7 +56,8 @@ read_file() {
# @option --path! Path for the file (relative to project root)
# @option --content! Complete file contents to write
write_file() {
-    local file_path="${argc_path}"
+    local file_path
+    file_path=$(_normalize_path "${argc_path}")
    # shellcheck disable=SC2154
    local content="${argc_content}"
    local project_dir
@@ -47,7 +65,7 @@ write_file() {
    local full_path="${project_dir}/${file_path}"

    mkdir -p "$(dirname "${full_path}")"
-    echo "${content}" > "${full_path}"
+    printf '%s' "${content}" > "${full_path}"

    green "Wrote: ${file_path}" >> "$LLM_OUTPUT"
}
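The `echo` to `printf '%s'` change in `write_file` matters because `echo` appends a trailing newline (and some shells also interpret backslash escapes), so content does not round-trip byte-for-byte. A quick demonstration with an invented content string and temp files:

```shell
content='fn main() {}'

echo "${content}"        > /tmp/via_echo.txt    # content plus a trailing newline
printf '%s' "${content}" > /tmp/via_printf.txt  # content, byte-for-byte

cmp -s /tmp/via_echo.txt /tmp/via_printf.txt || echo "files differ"
# → files differ
```

For a tool whose contract is "write exactly what the LLM sent", `printf '%s'` is the faithful choice; any trailing newline should come from the content itself.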
# @cmd Find files similar to a given path (for pattern matching)
# @option --path! Path to find similar files for
find_similar_files() {
-    local file_path="${argc_path}"
+    local file_path
+    file_path=$(_normalize_path "${argc_path}")
    local project_dir
    project_dir=$(_project_dir)

@@ -71,14 +90,14 @@ find_similar_files() {
        ! -name "$(basename "${file_path}")" \
        ! -name "*test*" \
        ! -name "*spec*" \
-        2>/dev/null | head -3)
+        2>/dev/null | sed "s|^${project_dir}/||" | head -3)

    if [[ -z "${results}" ]]; then
        results=$(find "${project_dir}/src" -type f -name "*.${ext}" \
            ! -name "*test*" \
            ! -name "*spec*" \
            -not -path '*/target/*' \
-            2>/dev/null | head -3)
+            2>/dev/null | sed "s|^${project_dir}/||" | head -3)
    fi
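The `sed "s|^${project_dir}/||"` stage added above rewrites the absolute paths that `find` emits into project-relative ones before they reach the LLM. A small isolated check, with invented paths:

```shell
project_dir="/home/user/project"

# Two absolute paths, as find would print them
relative=$(printf '%s\n' \
    "${project_dir}/src/main.rs" \
    "${project_dir}/src/lib.rs" |
    sed "s|^${project_dir}/||")

printf '%s\n' "${relative}"
# → src/main.rs
# → src/lib.rs
```

Using `|` as the sed delimiter avoids escaping the slashes inside `${project_dir}`.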
    if [[ -n "${results}" ]]; then
@@ -186,6 +205,7 @@ search_code() {
        grep -v '/target/' | \
        grep -v '/node_modules/' | \
        grep -v '/.git/' | \
+        sed "s|^${project_dir}/||" | \
        head -20) || true

    if [[ -n "${results}" ]]; then
@@ -2,7 +2,7 @@

An AI agent specialized in exploring codebases, finding patterns, and understanding project structures.

This agent is designed to be delegated to by the **[Sisyphus](../sisyphus/README.md)** agent to gather information and context. Sisyphus
acts as the coordinator/architect, while Explore handles the research and discovery phase.

It can also be used as a standalone tool for understanding codebases and finding specific information.
@@ -13,3 +13,25 @@ It can also be used as a standalone tool for understanding codebases and finding
- 📂 File system navigation and content analysis
- 🧠 Context gathering for complex tasks
- 🛡️ Read-only operations for safe investigation

## Pro-Tip: Use an IDE MCP Server for Improved Performance

Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for how to configure
them), and modify the agent definition to look like this:

```yaml
# ...

mcp_servers:
  - jetbrains # The name of your configured IDE MCP server

global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  - fs_ls.sh
  - web_search_loki.sh

# ...
```
@@ -2,19 +2,19 @@ name: explore
description: Fast codebase exploration agent - finds patterns, structures, and relevant files
version: 1.0.0
temperature: 0.1
top_p: 0.95

variables:
  - name: project_dir
    description: Project directory to explore
    default: '.'

mcp_servers:
  - ddg-search

global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  - fs_ls.sh
  - web_search_loki.sh

instructions: |
  You are a codebase explorer. Your job: Search, find, report. Nothing else.
@@ -68,6 +68,9 @@ instructions: |
  ## Context
  - Project: {{project_dir}}
  - CWD: {{__cwd__}}

  ## Available Tools:
  {{__tools__}}

conversation_starters:
  - 'Find how authentication is implemented'
@@ -14,6 +14,21 @@ _project_dir() {
  (cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}

# Normalize a path to be relative to project root.
# Strips the project_dir prefix if the LLM passes an absolute path.
_normalize_path() {
  local input_path="$1"
  local project_dir
  project_dir=$(_project_dir)

  if [[ "${input_path}" == /* ]]; then
    input_path="${input_path#"${project_dir}"/}"
  fi

  input_path="${input_path#./}"
  echo "${input_path}"
}

# @cmd Get project structure and layout
get_structure() {
  local project_dir
@@ -45,7 +60,7 @@ search_files() {
  echo "" >> "$LLM_OUTPUT"

  local results
  results=$(search_files "${pattern}" "${project_dir}")
  results=$(_search_files "${pattern}" "${project_dir}")

  if [[ -n "${results}" ]]; then
    echo "${results}" >> "$LLM_OUTPUT"
@@ -78,6 +93,7 @@ search_content() {
    grep -v '/node_modules/' | \
    grep -v '/.git/' | \
    grep -v '/dist/' | \
    sed "s|^${project_dir}/||" | \
    head -30) || true

  if [[ -n "${results}" ]]; then
@@ -91,8 +107,9 @@ search_content() {
# @option --path! Path to the file (relative to project root)
# @option --lines Maximum lines to read (default: 200)
read_file() {
  local file_path
  # shellcheck disable=SC2154
  local file_path="${argc_path}"
  file_path=$(_normalize_path "${argc_path}")
  local max_lines="${argc_lines:-200}"
  local project_dir
  project_dir=$(_project_dir)
@@ -122,7 +139,8 @@ read_file() {
# @cmd Find similar files to a given file (for pattern matching)
# @option --path! Path to the reference file
find_similar() {
  local file_path="${argc_path}"
  local file_path
  file_path=$(_normalize_path "${argc_path}")
  local project_dir
  project_dir=$(_project_dir)

@@ -138,7 +156,7 @@ find_similar() {
    ! -name "$(basename "${file_path}")" \
    ! -name "*test*" \
    ! -name "*spec*" \
    2>/dev/null | head -5)
    2>/dev/null | sed "s|^${project_dir}/||" | head -5)

  if [[ -n "${results}" ]]; then
    echo "${results}" >> "$LLM_OUTPUT"
@@ -147,7 +165,7 @@ find_similar() {
    ! -name "$(basename "${file_path}")" \
    ! -name "*test*" \
    -not -path '*/target/*' \
    2>/dev/null | head -5)
    2>/dev/null | sed "s|^${project_dir}/||" | head -5)
  if [[ -n "${results}" ]]; then
    echo "${results}" >> "$LLM_OUTPUT"
  else

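The `_normalize_path` helper introduced in this hunk can be sketched standalone; `PROJECT_DIR` here is a hypothetical stand-in for the agent's `_project_dir` helper:

```shell
#!/usr/bin/env bash
# Minimal sketch of the _normalize_path behavior added above.
# PROJECT_DIR is a hypothetical stand-in for _project_dir.
PROJECT_DIR="/home/user/project"

normalize_path() {
  local input_path="$1"
  # Strip the project prefix when the LLM passes an absolute path.
  if [[ "${input_path}" == /* ]]; then
    input_path="${input_path#"${PROJECT_DIR}"/}"
  fi
  # Drop a leading ./ so relative paths are uniform.
  input_path="${input_path#./}"
  echo "${input_path}"
}

normalize_path "/home/user/project/src/main.rs"  # -> src/main.rs
normalize_path "./src/main.rs"                   # -> src/main.rs
```

Note that an absolute path outside the project is unchanged: the prefix strip is simply a no-op in that case.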
@@ -0,0 +1,35 @@
# File Reviewer

A specialized worker agent that reviews a single file's diff for bugs, style issues, and cross-cutting concerns.

This agent is designed to be spawned by the **[Code Reviewer](../code-reviewer/README.md)** agent. It focuses deeply on
one file while communicating with sibling agents to catch issues that span multiple files.

## Features

- 🔍 **Deep Analysis**: Focuses on bugs, logic errors, security issues, and style problems in a single file.
- 🗣️ **Teammate Communication**: Sends and receives alerts to/from sibling reviewers about interface or dependency
  changes.
- 🎯 **Targeted Reading**: Reads only relevant context around changed lines to stay efficient.
- 🏷️ **Structured Findings**: Categorizes issues by severity (🔴 Critical, 🟡 Warning, 🟢 Suggestion, 💡 Nitpick).

## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for how to configure
them), and modify the agent definition to look like this:

```yaml
# ...

mcp_servers:
  - jetbrains # The name of your configured IDE MCP server

global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh

# ...
```

@@ -2,7 +2,6 @@ name: file-reviewer
description: Reviews a single file's diff for bugs, style issues, and cross-cutting concerns
version: 1.0.0
temperature: 0.1
top_p: 0.95

variables:
  - name: project_dir
@@ -109,3 +108,6 @@ instructions: |
  ## Context
  - Project: {{project_dir}}
  - CWD: {{__cwd__}}

  ## Available Tools:
  {{__tools__}}

@@ -2,7 +2,7 @@

An AI agent specialized in high-level architecture, complex debugging, and design decisions.

This agent is designed to be delegated to by the **[Sisyphus](../sisyphus/README.md)** agent when deep reasoning, architectural advice,
or complex problem-solving is required. Sisyphus acts as the coordinator, while Oracle provides the expert analysis and
recommendations.

@@ -15,3 +15,25 @@ It can also be used as a standalone tool for design reviews and solving difficul
- ⚖️ Tradeoff analysis and technology selection
- 📝 Code review and best practices advice
- 🧠 Deep reasoning for ambiguous problems

## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for how to configure
them), and modify the agent definition to look like this:

```yaml
# ...

mcp_servers:
  - jetbrains # The name of your configured IDE MCP server

global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  - fs_ls.sh
  - web_search_loki.sh

# ...
```

@@ -2,19 +2,19 @@ name: oracle
description: High-IQ advisor for architecture, debugging, and complex decisions
version: 1.0.0
temperature: 0.2
top_p: 0.95

variables:
  - name: project_dir
    description: Project directory for context
    default: '.'

mcp_servers:
  - ddg-search
global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  - fs_ls.sh
  - web_search_loki.sh

instructions: |
  You are Oracle - a senior architect and debugger consulted for complex decisions.
@@ -75,6 +75,9 @@ instructions: |
  ## Context
  - Project: {{project_dir}}
  - CWD: {{__cwd__}}

  ## Available Tools:
  {{__tools__}}

conversation_starters:
  - 'Review this architecture design'

@@ -14,21 +14,38 @@ _project_dir() {
  (cd "${dir}" 2>/dev/null && pwd) || echo "${dir}"
}

# Normalize a path to be relative to project root.
# Strips the project_dir prefix if the LLM passes an absolute path.
_normalize_path() {
  local input_path="$1"
  local project_dir
  project_dir=$(_project_dir)

  if [[ "${input_path}" == /* ]]; then
    input_path="${input_path#"${project_dir}"/}"
  fi

  input_path="${input_path#./}"
  echo "${input_path}"
}

# @cmd Read a file for analysis
# @option --path! Path to the file (relative to project root)
read_file() {
  local project_dir
  project_dir=$(_project_dir)
  local file_path
  # shellcheck disable=SC2154
  local full_path="${project_dir}/${argc_path}"
  file_path=$(_normalize_path "${argc_path}")
  local full_path="${project_dir}/${file_path}"

  if [[ ! -f "${full_path}" ]]; then
    error "File not found: ${argc_path}" >> "$LLM_OUTPUT"
    error "File not found: ${file_path}" >> "$LLM_OUTPUT"
    return 1
  fi

  {
    info "Reading: ${argc_path}"
    info "Reading: ${file_path}"
    echo ""
    cat "${full_path}"
  } >> "$LLM_OUTPUT"
@@ -80,6 +97,7 @@ search_code() {
    grep -v '/target/' | \
    grep -v '/node_modules/' | \
    grep -v '/.git/' | \
    sed "s|^${project_dir}/||" | \
    head -30) || true

  if [[ -n "${results}" ]]; then
@@ -113,7 +131,8 @@ analyze_with_command() {
# @cmd List directory contents
# @option --path Path to list (default: project root)
list_directory() {
  local dir_path="${argc_path:-.}"
  local dir_path
  dir_path=$(_normalize_path "${argc_path:-.}")
  local project_dir
  project_dir=$(_project_dir)
  local full_path="${project_dir}/${dir_path}"

@@ -1,6 +1,6 @@
# Sisyphus

The main coordinator agent for the Loki coding ecosystem, providing a powerful CLI interface for code generation and
project management similar to OpenCode, ClaudeCode, Codex, or Gemini CLI.

_Inspired by the Sisyphus and Oracle agents of OpenCode._

@@ -16,3 +16,26 @@ Sisyphus acts as the primary entry point, capable of handling complex tasks by c
- 💻 **CLI Coding**: Provides a natural language interface for writing and editing code.
- 🔄 **Task Management**: Tracks progress and context across complex operations.
- 🛠️ **Tool Integration**: Seamlessly uses system tools for building, testing, and file manipulation.

## Pro-Tip: Use an IDE MCP Server for Improved Performance
Many modern IDEs now include MCP servers that let LLMs perform operations within the IDE itself and use IDE tools. Using
an IDE's MCP server dramatically improves the performance of coding agents. So if you have an IDE, try adding that MCP
server to your config (see the [MCP Server docs](../../../docs/function-calling/MCP-SERVERS.md) for how to configure
them), and modify the agent definition to look like this:

```yaml
# ...

mcp_servers:
  - jetbrains

global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  - fs_ls.sh
  - web_search_loki.sh
  - execute_command.sh

# ...
```

@@ -2,7 +2,6 @@ name: sisyphus
description: OpenCode-style orchestrator - classifies intent, delegates to specialists, tracks progress with todos
version: 2.0.0
temperature: 0.1
top_p: 0.95

agent_session: temp
auto_continue: true
@@ -13,7 +12,7 @@ can_spawn_agents: true
max_concurrent_agents: 4
max_agent_depth: 3
inject_spawn_instructions: true
summarization_threshold: 4000
summarization_threshold: 8000

variables:
  - name: project_dir
@@ -23,12 +22,13 @@ variables:
    description: Auto-confirm command execution
    default: '1'

mcp_servers:
  - ddg-search
global_tools:
  - fs_read.sh
  - fs_grep.sh
  - fs_glob.sh
  - fs_ls.sh
  - web_search_loki.sh
  - execute_command.sh

instructions: |
@@ -70,6 +70,45 @@ instructions: |
  | coder | Write/edit files, implement features | Creates/modifies files, runs builds |
  | oracle | Architecture decisions, complex debugging | Advisory, high-quality reasoning |

  ## Coder Delegation Format (MANDATORY)

  When spawning the `coder` agent, your prompt MUST include these sections.
  The coder has NOT seen the codebase. Your prompt IS its entire context.

  ### Template:

  ```
  ## Goal
  [1-2 sentences: what to build/modify and where]

  ## Reference Files
  [Files that explore found, with what each demonstrates]
  - `path/to/file.ext` - what pattern this file shows
  - `path/to/other.ext` - what convention this file shows

  ## Code Patterns to Follow
  [Paste ACTUAL code snippets from explore results, not descriptions]
  <code>
  // From path/to/file.ext - this is the pattern to follow:
  [actual code explore found, 5-20 lines]
  </code>

  ## Conventions
  [Naming, imports, error handling, file organization]
  - Convention 1
  - Convention 2

  ## Constraints
  [What NOT to do, scope boundaries]
  - Do NOT modify X
  - Only touch files in Y/
  ```

  **CRITICAL**: Include actual code snippets, not just file paths.
  If explore returned code patterns, paste them into the coder prompt.
  Vague prompts like "follow existing patterns" waste coder's tokens on
  re-exploration that you already did.

  ## Workflow Examples

  ### Example 1: Implementation task (explore -> coder, parallel exploration)
@@ -81,12 +120,12 @@ instructions: |
  2. todo__add --task "Explore existing API patterns"
  3. todo__add --task "Implement profile endpoint"
  4. todo__add --task "Verify with build/test"
  5. agent__spawn --agent explore --prompt "Find existing API endpoint patterns, route structures, and controller conventions"
  6. agent__spawn --agent explore --prompt "Find existing data models and database query patterns"
  5. agent__spawn --agent explore --prompt "Find existing API endpoint patterns, route structures, and controller conventions. Include code snippets."
  6. agent__spawn --agent explore --prompt "Find existing data models and database query patterns. Include code snippets."
  7. agent__collect --id <id1>
  8. agent__collect --id <id2>
  9. todo__done --id 1
  10. agent__spawn --agent coder --prompt "Create user profiles endpoint following existing patterns. [Include context from explore results]"
  10. agent__spawn --agent coder --prompt "<structured prompt using Coder Delegation Format above, including code snippets from explore results>"
  11. agent__collect --id <coder_id>
  12. todo__done --id 2
  13. run_build
@@ -135,7 +174,6 @@ instructions: |

  ## When to Do It Yourself

  - Single-file reads/writes
  - Simple command execution
  - Trivial changes (typos, renames)
  - Quick file searches

@@ -16,11 +16,15 @@
    },
    "atlassian": {
      "command": "npx",
      "args": ["-y", "mcp-remote@0.1.13", "https://mcp.atlassian.com/v1/sse"]
      "args": ["-y", "mcp-remote@0.1.13", "https://mcp.atlassian.com/v1/mcp"]
    },
    "docker": {
      "command": "uvx",
      "args": ["mcp-server-docker"]
    },
    "ddg-search": {
      "command": "uvx",
      "args": ["duckduckgo-mcp-server"]
    }
  }
}

@@ -50,7 +50,13 @@ def parse_raw_data(data):

def parse_argv():
    agent_func = sys.argv[1]
    agent_data = sys.argv[2]

    tool_data_file = os.environ.get("LLM_TOOL_DATA_FILE")
    if tool_data_file and os.path.isfile(tool_data_file):
        with open(tool_data_file, "r", encoding="utf-8") as f:
            agent_data = f.read()
    else:
        agent_data = sys.argv[2]

    if (not agent_data) or (not agent_func):
        print("Usage: ./{agent_name}.py <agent-func> <agent-data>", file=sys.stderr)

@@ -14,7 +14,11 @@ main() {

parse_argv() {
  agent_func="$1"
  agent_data="$2"
  if [[ -n "$LLM_TOOL_DATA_FILE" ]] && [[ -f "$LLM_TOOL_DATA_FILE" ]]; then
    agent_data="$(cat "$LLM_TOOL_DATA_FILE")"
  else
    agent_data="$2"
  fi
  if [[ -z "$agent_data" ]] || [[ -z "$agent_func" ]]; then
    die "usage: ./{agent_name}.sh <agent-func> <agent-data>"
  fi
@@ -57,7 +61,6 @@ run() {
  if [[ "$OS" == "Windows_NT" ]]; then
    set -o igncr
    tools_path="$(cygpath -w "$tools_path")"
    tool_data="$(echo "$tool_data" | sed 's/\\/\\\\/g')"
  fi

  jq_script="$(cat <<-'EOF'

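The same argv-vs-file precedence appears in the Python, shell, and TypeScript scaffolds in this changeset; the core rule can be sketched as a small shell function (`resolve_tool_data` is an illustrative name, not part of the scaffolds):

```shell
#!/usr/bin/env bash
# Sketch of the tool-data resolution order these scaffolds adopt:
# prefer LLM_TOOL_DATA_FILE (sidesteps argv length/quoting limits),
# fall back to the positional argument.
resolve_tool_data() {
  local arg_data="$1"
  if [[ -n "${LLM_TOOL_DATA_FILE:-}" ]] && [[ -f "${LLM_TOOL_DATA_FILE}" ]]; then
    cat "${LLM_TOOL_DATA_FILE}"
  else
    printf '%s' "${arg_data}"
  fi
}
```

The file takes priority even when a positional argument is supplied, which matches the `if`/`else` ordering in each scaffold above.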
@@ -0,0 +1,189 @@
#!/usr/bin/env tsx

// Usage: ./{agent_name}.ts <agent-func> <agent-data>

import { readFileSync, writeFileSync, existsSync } from "fs";
import { join } from "path";
import { pathToFileURL } from "url";

async function main(): Promise<void> {
  const { agentFunc, rawData } = parseArgv();
  const agentData = parseRawData(rawData);

  const configDir = "{config_dir}";
  setupEnv(configDir, agentFunc);

  const agentToolsPath = join(configDir, "agents", "{agent_name}", "tools.ts");
  await run(agentToolsPath, agentFunc, agentData);
}

function parseRawData(data: string): Record<string, unknown> {
  if (!data) {
    throw new Error("No JSON data");
  }

  try {
    return JSON.parse(data);
  } catch {
    throw new Error("Invalid JSON data");
  }
}

function parseArgv(): { agentFunc: string; rawData: string } {
  const agentFunc = process.argv[2];

  const toolDataFile = process.env["LLM_TOOL_DATA_FILE"];
  let agentData: string;
  if (toolDataFile && existsSync(toolDataFile)) {
    agentData = readFileSync(toolDataFile, "utf-8");
  } else {
    agentData = process.argv[3];
  }

  if (!agentFunc || !agentData) {
    process.stderr.write("Usage: ./{agent_name}.ts <agent-func> <agent-data>\n");
    process.exit(1);
  }

  return { agentFunc, rawData: agentData };
}

function setupEnv(configDir: string, agentFunc: string): void {
  loadEnv(join(configDir, ".env"));
  process.env["LLM_ROOT_DIR"] = configDir;
  process.env["LLM_AGENT_NAME"] = "{agent_name}";
  process.env["LLM_AGENT_FUNC"] = agentFunc;
  process.env["LLM_AGENT_ROOT_DIR"] = join(configDir, "agents", "{agent_name}");
  process.env["LLM_AGENT_CACHE_DIR"] = join(configDir, "cache", "{agent_name}");
}

function loadEnv(filePath: string): void {
  let lines: string[];
  try {
    lines = readFileSync(filePath, "utf-8").split("\n");
  } catch {
    return;
  }

  for (const raw of lines) {
    const line = raw.trim();
    if (line.startsWith("#") || !line) {
      continue;
    }

    const eqIdx = line.indexOf("=");
    if (eqIdx === -1) {
      continue;
    }

    const key = line.slice(0, eqIdx).trim();
    if (key in process.env) {
      continue;
    }

    let value = line.slice(eqIdx + 1).trim();
    if (
      (value.startsWith('"') && value.endsWith('"')) ||
      (value.startsWith("'") && value.endsWith("'"))
    ) {
      value = value.slice(1, -1);
    }
    process.env[key] = value;
  }
}

function extractParamNames(fn: Function): string[] {
  const src = fn.toString();
  const match = src.match(/^(?:async\s+)?function\s*\w*\s*\(([^)]*)\)/);
  if (!match) {
    return [];
  }
  return match[1]
    .split(",")
    .map((p) => p.trim().replace(/[:=?].*/s, "").trim())
    .filter(Boolean);
}

function spreadArgs(
  fn: Function,
  data: Record<string, unknown>,
): unknown[] {
  const names = extractParamNames(fn);
  if (names.length === 0) {
    return [];
  }
  return names.map((name) => data[name]);
}

async function run(
  agentPath: string,
  agentFunc: string,
  agentData: Record<string, unknown>,
): Promise<void> {
  const mod = await import(pathToFileURL(agentPath).href);

  if (typeof mod[agentFunc] !== "function") {
    throw new Error(`No module function '${agentFunc}' at '${agentPath}'`);
  }

  const fn = mod[agentFunc] as Function;
  const args = spreadArgs(fn, agentData);
  const value = await fn(...args);
  returnToLlm(value);
  dumpResult(`{agent_name}:${agentFunc}`);
}

function returnToLlm(value: unknown): void {
  if (value === null || value === undefined) {
    return;
  }

  const output = process.env["LLM_OUTPUT"];
  const write = (s: string) => {
    if (output) {
      writeFileSync(output, s, "utf-8");
    } else {
      process.stdout.write(s);
    }
  };

  if (typeof value === "string" || typeof value === "number" || typeof value === "boolean") {
    write(String(value));
  } else if (typeof value === "object") {
    write(JSON.stringify(value, null, 2));
  }
}

function dumpResult(name: string): void {
  const dumpResults = process.env["LLM_DUMP_RESULTS"];
  const llmOutput = process.env["LLM_OUTPUT"];

  if (!dumpResults || !llmOutput || !process.stdout.isTTY) {
    return;
  }

  try {
    const pattern = new RegExp(`\\b(${dumpResults})\\b`);
    if (!pattern.test(name)) {
      return;
    }
  } catch {
    return;
  }

  let data: string;
  try {
    data = readFileSync(llmOutput, "utf-8");
  } catch {
    return;
  }

  process.stdout.write(
    `\x1b[2m----------------------\n${data}\n----------------------\x1b[0m\n`,
  );
}

main().catch((err) => {
  process.stderr.write(`${err}\n`);
  process.exit(1);
});

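The scaffold's `.env` loader above (comments skipped, existing variables win, one layer of quotes stripped) can be approximated in shell; this sketch assumes plain `KEY=VALUE` lines without whitespace around the `=`:

```shell
#!/usr/bin/env bash
# Approximation of the loadEnv semantics from the TypeScript scaffold.
# Assumes KEY=VALUE with no whitespace around '='.
load_env() {
  local line key value
  while IFS= read -r line || [[ -n "$line" ]]; do
    [[ -z "$line" || "$line" == \#* ]] && continue  # skip blanks/comments
    [[ "$line" != *=* ]] && continue                # need KEY=VALUE
    key="${line%%=*}"
    [[ -n "${!key+x}" ]] && continue                # existing variable wins
    value="${line#*=}"
    # strip one layer of matching quotes
    if [[ "$value" == \"*\" ]] || [[ "$value" == \'*\' ]]; then
      value="${value:1:${#value}-2}"
    fi
    export "$key=$value"
  done < "$1"
}
```

As in the TypeScript version, a variable that is already set is never overridden, so the process environment always takes precedence over the `.env` file.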
@@ -49,6 +49,11 @@ def parse_raw_data(data):


def parse_argv():
    tool_data_file = os.environ.get("LLM_TOOL_DATA_FILE")
    if tool_data_file and os.path.isfile(tool_data_file):
        with open(tool_data_file, "r", encoding="utf-8") as f:
            return f.read()

    argv = sys.argv[:] + [None] * max(0, 2 - len(sys.argv))

    tool_data = argv[1]

@@ -13,7 +13,11 @@ main() {
}

parse_argv() {
  tool_data="$1"
  if [[ -n "$LLM_TOOL_DATA_FILE" ]] && [[ -f "$LLM_TOOL_DATA_FILE" ]]; then
    tool_data="$(cat "$LLM_TOOL_DATA_FILE")"
  else
    tool_data="$1"
  fi
  if [[ -z "$tool_data" ]]; then
    die "usage: ./{function_name}.sh <tool-data>"
  fi
@@ -54,7 +58,6 @@ run() {
  if [[ "$OS" == "Windows_NT" ]]; then
    set -o igncr
    tool_path="$(cygpath -w "$tool_path")"
    tool_data="$(echo "$tool_data" | sed 's/\\/\\\\/g')"
  fi

  jq_script="$(cat <<-'EOF'

@@ -0,0 +1,184 @@
#!/usr/bin/env tsx

// Usage: ./{function_name}.ts <tool-data>

import { readFileSync, writeFileSync, existsSync } from "fs";
import { join } from "path";
import { pathToFileURL } from "url";

async function main(): Promise<void> {
  const rawData = parseArgv();
  const toolData = parseRawData(rawData);

  const rootDir = "{root_dir}";
  setupEnv(rootDir);

  const toolPath = "{tool_path}.ts";
  await run(toolPath, "run", toolData);
}

function parseRawData(data: string): Record<string, unknown> {
  if (!data) {
    throw new Error("No JSON data");
  }

  try {
    return JSON.parse(data);
  } catch {
    throw new Error("Invalid JSON data");
  }
}

function parseArgv(): string {
  const toolDataFile = process.env["LLM_TOOL_DATA_FILE"];
  if (toolDataFile && existsSync(toolDataFile)) {
    return readFileSync(toolDataFile, "utf-8");
  }

  const toolData = process.argv[2];

  if (!toolData) {
    process.stderr.write("Usage: ./{function_name}.ts <tool-data>\n");
    process.exit(1);
  }

  return toolData;
}

function setupEnv(rootDir: string): void {
  loadEnv(join(rootDir, ".env"));
  process.env["LLM_ROOT_DIR"] = rootDir;
  process.env["LLM_TOOL_NAME"] = "{function_name}";
  process.env["LLM_TOOL_CACHE_DIR"] = join(rootDir, "cache", "{function_name}");
}

function loadEnv(filePath: string): void {
  let lines: string[];
  try {
    lines = readFileSync(filePath, "utf-8").split("\n");
  } catch {
    return;
  }

  for (const raw of lines) {
    const line = raw.trim();
    if (line.startsWith("#") || !line) {
      continue;
    }

    const eqIdx = line.indexOf("=");
    if (eqIdx === -1) {
      continue;
    }

    const key = line.slice(0, eqIdx).trim();
    if (key in process.env) {
      continue;
    }

    let value = line.slice(eqIdx + 1).trim();
    if (
      (value.startsWith('"') && value.endsWith('"')) ||
      (value.startsWith("'") && value.endsWith("'"))
    ) {
      value = value.slice(1, -1);
    }
    process.env[key] = value;
  }
}

function extractParamNames(fn: Function): string[] {
  const src = fn.toString();
  const match = src.match(/^(?:async\s+)?function\s*\w*\s*\(([^)]*)\)/);
  if (!match) {
    return [];
  }
  return match[1]
    .split(",")
    .map((p) => p.trim().replace(/[:=?].*/s, "").trim())
    .filter(Boolean);
}

function spreadArgs(
  fn: Function,
  data: Record<string, unknown>,
): unknown[] {
  const names = extractParamNames(fn);
  if (names.length === 0) {
    return [];
  }
  return names.map((name) => data[name]);
}

async function run(
  toolPath: string,
  toolFunc: string,
  toolData: Record<string, unknown>,
): Promise<void> {
  const mod = await import(pathToFileURL(toolPath).href);

  if (typeof mod[toolFunc] !== "function") {
    throw new Error(`No module function '${toolFunc}' at '${toolPath}'`);
  }

  const fn = mod[toolFunc] as Function;
  const args = spreadArgs(fn, toolData);
  const value = await fn(...args);
  returnToLlm(value);
  dumpResult("{function_name}");
}

function returnToLlm(value: unknown): void {
  if (value === null || value === undefined) {
    return;
  }

  const output = process.env["LLM_OUTPUT"];
  const write = (s: string) => {
    if (output) {
      writeFileSync(output, s, "utf-8");
    } else {
      process.stdout.write(s);
    }
  };

  if (typeof value === "string" || typeof value === "number" || typeof value === "boolean") {
    write(String(value));
  } else if (typeof value === "object") {
    write(JSON.stringify(value, null, 2));
  }
}

function dumpResult(name: string): void {
  const dumpResults = process.env["LLM_DUMP_RESULTS"];
  const llmOutput = process.env["LLM_OUTPUT"];

  if (!dumpResults || !llmOutput || !process.stdout.isTTY) {
    return;
  }

  try {
    const pattern = new RegExp(`\\b(${dumpResults})\\b`);
    if (!pattern.test(name)) {
      return;
    }
  } catch {
    return;
  }

  let data: string;
  try {
    data = readFileSync(llmOutput, "utf-8");
  } catch {
    return;
  }

  process.stdout.write(
    `\x1b[2m----------------------\n${data}\n----------------------\x1b[0m\n`,
  );
}

main().catch((err) => {
  process.stderr.write(`${err}\n`);
  process.exit(1);
});

@@ -1,6 +1,7 @@
import os
from typing import List, Literal, Optional


def run(
    string: str,
    string_enum: Literal["foo", "bar"],
@@ -9,26 +10,38 @@ def run(
    number: float,
    array: List[str],
    string_optional: Optional[str] = None,
    integer_with_default: int = 42,
    boolean_with_default: bool = True,
    number_with_default: float = 3.14,
    string_with_default: str = "hello",
    array_optional: Optional[List[str]] = None,
):
    """Demonstrates how to create a tool using Python and how to use comments.
    """Demonstrates all supported Python parameter types and variations.
    Args:
        string: Define a required string property
        string_enum: Define a required string property with enum
        boolean: Define a required boolean property
        integer: Define a required integer property
        number: Define a required number property
        array: Define a required string array property
        string_optional: Define an optional string property
        array_optional: Define an optional string array property
        string: A required string property
        string_enum: A required string property constrained to specific values
        boolean: A required boolean property
        integer: A required integer property
        number: A required number (float) property
        array: A required string array property
        string_optional: An optional string property (Optional[str] with None default)
        integer_with_default: An optional integer with a non-None default value
        boolean_with_default: An optional boolean with a default value
        number_with_default: An optional number with a default value
        string_with_default: An optional string with a default value
        array_optional: An optional string array property
    """
    output = f"""string: {string}
string_enum: {string_enum}
string_optional: {string_optional}
boolean: {boolean}
integer: {integer}
number: {number}
array: {array}
string_optional: {string_optional}
integer_with_default: {integer_with_default}
boolean_with_default: {boolean_with_default}
number_with_default: {number_with_default}
string_with_default: {string_with_default}
array_optional: {array_optional}"""

    for key, value in os.environ.items():

@@ -0,0 +1,53 @@
/**
 * Demonstrates all supported TypeScript parameter types and variations.
 *
 * @param string - A required string property
 * @param string_enum - A required string property constrained to specific values
 * @param boolean - A required boolean property
 * @param number - A required number property
 * @param array_bracket - A required string array using bracket syntax
 * @param array_generic - A required string array using generic syntax
 * @param string_optional - An optional string using the question mark syntax
 * @param string_nullable - An optional string using the union-with-null syntax
 * @param number_with_default - An optional number with a default value
 * @param boolean_with_default - An optional boolean with a default value
 * @param string_with_default - An optional string with a default value
 * @param array_optional - An optional string array using the question mark syntax
 */
export function run(
  string: string,
  string_enum: "foo" | "bar",
  boolean: boolean,
  number: number,
  array_bracket: string[],
  array_generic: Array<string>,
  string_optional?: string,
  string_nullable: string | null = null,
  number_with_default: number = 42,
  boolean_with_default: boolean = true,
  string_with_default: string = "hello",
  array_optional?: string[],
): string {
  const parts = [
    `string: ${string}`,
    `string_enum: ${string_enum}`,
    `boolean: ${boolean}`,
    `number: ${number}`,
    `array_bracket: ${JSON.stringify(array_bracket)}`,
    `array_generic: ${JSON.stringify(array_generic)}`,
    `string_optional: ${string_optional}`,
    `string_nullable: ${string_nullable}`,
    `number_with_default: ${number_with_default}`,
    `boolean_with_default: ${boolean_with_default}`,
    `string_with_default: ${string_with_default}`,
    `array_optional: ${JSON.stringify(array_optional)}`,
  ];

  for (const [key, value] of Object.entries(process.env)) {
    if (key.startsWith("LLM_")) {
      parts.push(`${key}: ${value}`);
    }
  }

  return parts.join("\n");
}

@@ -0,0 +1,24 @@
#!/usr/bin/env tsx

import { appendFileSync, mkdirSync } from "fs";
import { dirname } from "path";

/**
 * Get the current weather in a given location
 * @param location - The city and optionally the state or country (e.g., "London", "San Francisco, CA").
 */
export async function run(location: string): Promise<string> {
  const encoded = encodeURIComponent(location);
  const url = `https://wttr.in/${encoded}?format=4`;

  const resp = await fetch(url);
  const data = await resp.text();

  const dest = process.env["LLM_OUTPUT"] ?? "/dev/stdout";
  if (dest !== "-" && dest !== "/dev/stdout") {
    mkdirSync(dirname(dest), { recursive: true });
    appendFileSync(dest, data, "utf-8");
  }

  return data;
}
@@ -32,7 +32,7 @@ max_concurrent_agents: 4 # Maximum number of agents that can run simulta
max_agent_depth: 3 # Maximum nesting depth for sub-agents (prevents runaway spawning)
inject_spawn_instructions: true # Inject the default agent spawning instructions into the agent's system prompt
summarization_model: null # Model to use for summarizing sub-agent output (e.g. 'openai:gpt-4o-mini'); defaults to current model
summarization_threshold: 4000 # Character threshold above which sub-agent output is summarized before returning to parent
escalation_timeout: 300 # Seconds a sub-agent waits for a user interaction response before timing out (default: 5 minutes)
mcp_servers: # Optional list of MCP servers that the agent utilizes
  - github # Corresponds to the name of an MCP server in the `<loki-config-dir>/functions/mcp.json` file

@@ -46,6 +46,7 @@ enabled_tools: null # Which tools to enable by default. (e.g. 'fs,w
visible_tools: # Which tools are visible to be compiled (and are thus able to be defined in 'enabled_tools')
  # - demo_py.py
  # - demo_sh.sh
  # - demo_ts.ts
  - execute_command.sh
  # - execute_py_code.py
  # - execute_sql_code.sh
@@ -61,6 +62,7 @@ visible_tools: # Which tools are visible to be compiled (and a
  # - fs_write.sh
  - get_current_time.sh
  # - get_current_weather.py
  # - get_current_weather.ts
  - get_current_weather.sh
  - query_jira_issues.sh
  # - search_arxiv.sh
@@ -77,7 +79,7 @@ visible_tools: # Which tools are visible to be compiled (and a
mcp_server_support: true # Enables or disables MCP servers (globally).
mapping_mcp_servers: # Alias for an MCP server or set of servers
  git: github,gitmcp
enabled_mcp_servers: null # Which MCP servers to enable by default (e.g. 'github,slack')
enabled_mcp_servers: null # Which MCP servers to enable by default (e.g. 'github,slack,ddg-search')

# ---- Session ----
# See the [Session documentation](./docs/SESSIONS.md) for more information
@@ -192,6 +194,8 @@ clients:
  - type: gemini
    api_base: https://generativelanguage.googleapis.com/v1beta
    api_key: '{{GEMINI_API_KEY}}' # You can either hard-code or inject secrets from the Loki vault
    auth: null # When set to 'oauth', Loki will use OAuth instead of an API key
               # Authenticate with `loki --authenticate` or `.authenticate` in the REPL
    patch:
      chat_completions:
        '.*':
@@ -210,6 +214,8 @@ clients:
  - type: claude
    api_base: https://api.anthropic.com/v1 # Optional
    api_key: '{{ANTHROPIC_API_KEY}}' # You can either hard-code or inject secrets from the Loki vault
    auth: null # When set to 'oauth', Loki will use OAuth instead of an API key
               # Authenticate with `loki --authenticate` or `.authenticate` in the REPL

  # See https://docs.mistral.ai/
  - type: openai-compatible
@@ -33,6 +33,7 @@ If you're looking for more example agents, refer to the [built-in agents](../ass
  - [.env File Support](#env-file-support)
  - [Python-Based Agent Tools](#python-based-agent-tools)
  - [Bash-Based Agent Tools](#bash-based-agent-tools)
  - [TypeScript-Based Agent Tools](#typescript-based-agent-tools)
- [5. Conversation Starters](#5-conversation-starters)
- [6. Todo System & Auto-Continuation](#6-todo-system--auto-continuation)
- [7. Sub-Agent Spawning System](#7-sub-agent-spawning-system)
@@ -62,10 +63,12 @@ Agent configurations often have the following directory structure:
├── tools.sh
or
├── tools.py
or
├── tools.ts
```

This means that agent configurations are often only two files: the agent configuration file (`config.yaml`), and the
tool definitions (`agents/my-agent/tools.sh` or `tools.py`).
tool definitions (`agents/my-agent/tools.sh`, `tools.py`, or `tools.ts`).

To see a full example configuration file, refer to the [example agent config file](../config.agent.example.yaml).

@@ -114,10 +117,10 @@ isolated environment, so in order for an agent to use a tool or MCP server that
explicitly state which tools and/or MCP servers the agent uses. Otherwise, it is assumed that the agent doesn't use any
tools outside its own custom-defined tools.

And if you don't define a `agents/my-agent/tools.sh` or `agents/my-agent/tools.py`, then the agent is really just a
And if you don't define an `agents/my-agent/tools.sh`, `agents/my-agent/tools.py`, or `agents/my-agent/tools.ts`, then the agent is really just a
`role`.

You'll notice there's no settings for agent-specific tooling. This is because they are handled separately and
You'll notice there are no settings for agent-specific tooling. This is because they are handled separately and
automatically. See the [Building Tools for Agents](#4-building-tools-for-agents) section below for more information.

To see a full example configuration file, refer to the [example agent config file](../config.agent.example.yaml).
@@ -205,7 +208,7 @@ variables:
### Dynamic Instructions
Sometimes you may find it useful to dynamically generate instructions on startup, whether via a call to Loki
itself to generate them or by some other means. Loki supports this type of behavior using a special function defined
in your `agents/my-agent/tools.py` or `agents/my-agent/tools.sh`.
in your `agents/my-agent/tools.py`, `agents/my-agent/tools.sh`, or `agents/my-agent/tools.ts`.

**Example: Instructions for a JSON-reader agent that specializes on each JSON input it receives**
`agents/json-reader/tools.py`:
@@ -306,8 +309,8 @@ EOF
}
```

For more information on how to create custom tools for your agent and the structure of the `agent/my-agent/tools.sh` or
`agent/my-agent/tools.py` files, refer to the [Building Tools for Agents](#4-building-tools-for-agents) section below.
For more information on how to create custom tools for your agent and the structure of the `agent/my-agent/tools.sh`,
`agent/my-agent/tools.py`, or `agent/my-agent/tools.ts` files, refer to the [Building Tools for Agents](#4-building-tools-for-agents) section below.

#### Variables
All the same variable interpolations supported by static instructions are also supported by dynamic instructions. For
@@ -337,10 +340,11 @@ defining a single function that gets executed at runtime (e.g. `main` for bash t
tools define a number of *subcommands*.

### Limitations
You can only utilize either a bash-based `<loki-config-dir>/agents/my-agent/tools.sh` or a Python-based
`<loki-config-dir>/agents/my-agent/tools.py`. However, if it's easier to achieve a task in one language vs the other,
You can only utilize one of: a bash-based `<loki-config-dir>/agents/my-agent/tools.sh`, a Python-based
`<loki-config-dir>/agents/my-agent/tools.py`, or a TypeScript-based `<loki-config-dir>/agents/my-agent/tools.ts`.
However, if it's easier to achieve a task in one language than in another,
you're free to define other scripts in your agent's configuration directory and reference them from the main
`tools.py/sh` file. **Any scripts *not* named `tools.{py,sh}` will not be picked up by Loki's compiler**, meaning they
tools file. **Any scripts *not* named `tools.{py,sh,ts}` will not be picked up by Loki's compiler**, meaning they
can be used like any other set of scripts.

It's important to keep in mind the following:
@@ -428,6 +432,55 @@ the same syntax and formatting as is used to create custom bash tools globally.
For more information on how to write, [build and test](function-calling/CUSTOM-BASH-TOOLS.md#execute-and-test-your-bash-tools) tools in bash, refer to the
[custom bash tools documentation](function-calling/CUSTOM-BASH-TOOLS.md).

### TypeScript-Based Agent Tools
TypeScript-based agent tools work exactly the same as global TypeScript tools, except that instead of a single `run`
function, you define as many exported functions as you like. Non-exported functions are private helpers and are
invisible to the LLM.

**Example:**
`agents/my-agent/tools.ts`
```typescript
/**
 * Get your IP information
 */
export async function get_ip_info(): Promise<string> {
  const resp = await fetch("https://httpbin.org/ip");
  return await resp.text();
}

/**
 * Find your public IP address using AWS
 */
export async function get_ip_address_from_aws(): Promise<string> {
  const resp = await fetch("https://checkip.amazonaws.com");
  return await resp.text();
}

// Non-exported helper — invisible to the LLM
function formatResponse(data: string): string {
  return data.trim();
}
```

Loki automatically compiles each exported function as a separate tool for the LLM to call. Just make sure you
follow the same JSDoc and parameter conventions as you would when creating custom TypeScript tools.

TypeScript agent tools also support dynamic instructions via an exported `_instructions()` function:

```typescript
import { readFileSync } from "fs";

/**
 * Generates instructions for the agent dynamically
 */
export function _instructions(): string {
  const schema = readFileSync("schema.json", "utf-8");
  return `You are an AI agent that works with the following schema:\n${schema}`;
}
```

For more information on how to build tools in TypeScript, refer to the [custom TypeScript tools documentation](function-calling/CUSTOM-TOOLS.md#custom-typescript-based-tools).

## 5. Conversation Starters
It's often helpful to also have some conversation starters so users know what kinds of things the agent is capable of
doing. These are available in the REPL via the `.starter` command and are selectable.
@@ -467,11 +520,12 @@ inject_todo_instructions: true # Include the default todo instructions into pr

### How It Works

1. When `inject_todo_instructions` is enabled, agents receive instructions on using four built-in tools:
1. When `inject_todo_instructions` is enabled, agents receive instructions on using five built-in tools:
   - `todo__init`: Initialize a todo list with a goal
   - `todo__add`: Add a task to the list
   - `todo__done`: Mark a task complete
   - `todo__list`: View current todo state
   - `todo__clear`: Clear the entire todo list and reset the goal

These instructions are a reasonable default that detail how to use Loki's To-Do System. If you wish,
you can disable the injection of the default instructions and specify your own instructions for how
@@ -714,6 +768,7 @@ Loki comes packaged with some useful built-in agents:
* `code-reviewer`: A [CodeRabbit](https://coderabbit.ai)-style code reviewer that spawns per-file reviewers using the teammate messaging pattern
* `demo`: An example agent to use for reference when learning to create your own agents
* `explore`: An agent designed to help you explore and understand your codebase
* `file-reviewer`: An agent designed to perform code-review on a single file (used by the `code-reviewer` agent)
* `jira-helper`: An agent that assists you with all your Jira-related tasks
* `oracle`: An agent for high-level architecture, design decisions, and complex debugging
* `sisyphus`: A powerhouse orchestrator agent for writing complex code and acting as a natural language interface for your codebase (similar to ClaudeCode, Gemini CLI, Codex, or OpenCode). Uses sub-agent spawning to delegate to `explore`, `coder`, and `oracle`.

@@ -107,6 +107,7 @@ The following variables can be used to change the log level of Loki or the locat
can also pass the `--disable-log-colors` flag.

## Miscellaneous Variables
| Environment Variable | Description                                                                                      | Default Value |
|----------------------|--------------------------------------------------------------------------------------------------|---------------|
| `AUTO_CONFIRM`       | Bypass all `guard_*` checks in the bash prompt helpers; useful for agent composition and routing | |
| Environment Variable | Description                                                                                      | Default Value |
|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
| `AUTO_CONFIRM`       | Bypass all `guard_*` checks in the bash prompt helpers; useful for agent composition and routing | |
| `LLM_TOOL_DATA_FILE` | Set automatically by Loki on Windows. Points to a temporary file containing the JSON tool call data. <br>Tool scripts (`run-tool.sh`, `run-agent.sh`, etc.) read from this file instead of command-line args <br>to avoid JSON escaping issues when data passes through `cmd.exe` → bash. **Not intended to be set by users.** | |
@@ -23,6 +23,7 @@ You can enter the REPL by simply typing `loki` without any follow-up flags or ar
- [`.edit` - Modify configuration files](#edit---modify-configuration-files)
- [`.delete` - Delete configurations from Loki](#delete---delete-configurations-from-loki)
- [`.info` - Display information about the current mode](#info---display-information-about-the-current-mode)
- [`.authenticate` - Authenticate the current model client via OAuth](#authenticate---authenticate-the-current-model-client-via-oauth)
- [`.exit` - Exit an agent/role/session/rag or the Loki REPL itself](#exit---exit-an-agentrolesessionrag-or-the-loki-repl-itself)
- [`.help` - Show the help guide](#help---show-the-help-guide)
<!--toc:end-->
@@ -119,13 +120,14 @@ For more information on sessions and how to use them in Loki, refer to the [sess
Loki lets you build OpenAI GPT-style agents. The following commands let you interact with and manage your agents in
Loki:

| Command              | Description                                                |
|----------------------|------------------------------------------------------------|
| `.agent`             | Use an agent                                               |
| `.starter`           | Display and use conversation starters for the active agent |
| `.edit agent-config` | Open the agent configuration in your preferred text editor |
| `.info agent`        | Display information about the active agent                 |
| `.exit agent`        | Leave the active agent                                     |
| Command              | Description                                                                                    |
|----------------------|-----------------------------------------------------------------------------------------------|
| `.agent`             | Use an agent                                                                                  |
| `.starter`           | Display and use conversation starters for the active agent                                    |
| `.clear todo`        | Clear the todo list and stop auto-continuation (requires `auto_continue: true` on the agent)  |
| `.edit agent-config` | Open the agent configuration in your preferred text editor                                    |
| `.info agent`        | Display information about the active agent                                                    |
| `.exit agent`        | Leave the active agent                                                                        |

![agent-demo](../assets/loki-agent-demo.gif)

@@ -237,6 +239,11 @@ The following entities are supported:
| `.info agent` | Display information about the active agent |
| `.info rag`   | Display information about the active RAG   |

### `.authenticate` - Authenticate the current model client via OAuth
The `.authenticate` command will start the OAuth flow for the current model client if:
* The client supports OAuth (see the [clients documentation](./clients/CLIENTS.md#providers-that-support-oauth) for supported clients)
* The client is configured in your Loki configuration to use OAuth via the `auth: oauth` property

### `.exit` - Exit an agent/role/session/rag or the Loki REPL itself
The `.exit` command is used to move between modes in the Loki REPL.

@@ -117,6 +117,22 @@ Display the current todo list with status of each item.

**Returns:** The full todo list with goal, progress, and item statuses

### `todo__clear`
Clear the entire todo list and reset the goal. Use when the current task has been canceled or invalidated.

**Parameters:** None

**Returns:** Confirmation that the todo list was cleared

### REPL Command: `.clear todo`
You can also clear the todo list manually from the REPL by typing `.clear todo`. This is useful when:
- You gave a custom response that changes or cancels the current task
- The agent is stuck in auto-continuation with stale todos
- You want to start fresh without leaving and re-entering the agent

**Note:** This command is only available when an agent with `auto_continue: true` is active. If the todo
system isn't enabled for the current agent, the command will display an error message.

## Auto-Continuation
When `auto_continue` is enabled, Loki automatically sends a continuation prompt if:

@@ -14,6 +14,7 @@ loki --info | grep 'config_file' | awk '{print $2}'
<!--toc:start-->
- [Supported Clients](#supported-clients)
- [Client Configuration](#client-configuration)
- [Authentication](#authentication)
- [Extra Settings](#extra-settings)
<!--toc:end-->

@@ -51,12 +52,13 @@ clients:
The client metadata uniquely identifies the client in Loki so you can reference it across your configurations. The
available settings are listed below:

| Setting  | Description                                                                                   |
|----------|-----------------------------------------------------------------------------------------------|
| `name`   | The name of the client (e.g. `openai`, `gemini`, etc.)                                        |
| `models` | See the [model settings](#model-settings) documentation below                                 |
| `patch`  | See the [client patch configuration](./PATCHES.md#client-configuration-patches) documentation |
| `extra`  | See the [extra settings](#extra-settings) documentation below                                 |
| Setting  | Description                                                                                                |
|----------|------------------------------------------------------------------------------------------------------------|
| `name`   | The name of the client (e.g. `openai`, `gemini`, etc.)                                                     |
| `auth`   | Authentication method: `oauth` for OAuth, or omit to use `api_key` (see [Authentication](#authentication)) |
| `models` | See the [model settings](#model-settings) documentation below                                              |
| `patch`  | See the [client patch configuration](./PATCHES.md#client-configuration-patches) documentation              |
| `extra`  | See the [extra settings](#extra-settings) documentation below                                              |

Be sure to also check provider-specific configurations for any extra fields that are added for authentication purposes.

@@ -83,6 +85,97 @@ The `models` array lists the available models from the model client. Each one ha
| `default_chunk_size` | | `embedding` | The default chunk size to use with the given model |
| `max_batch_size` | | `embedding` | The maximum batch size that the given embedding model supports |

## Authentication

Loki clients support two authentication methods: **API keys** and **OAuth**. Each client entry in your configuration
must use one or the other.

### API Key Authentication

Most clients authenticate using an API key. Simply set the `api_key` field directly or inject it from the
[Loki vault](../VAULT.md):

```yaml
clients:
  - type: claude
    api_key: '{{ANTHROPIC_API_KEY}}'
```

API keys can also be provided via environment variables named `{CLIENT_NAME}_API_KEY` (e.g. `OPENAI_API_KEY`,
`GEMINI_API_KEY`). See the [environment variables documentation](../ENVIRONMENT-VARIABLES.md#client-related-variables)
for details.

### OAuth Authentication

For [providers that support OAuth](#providers-that-support-oauth), you can authenticate using your existing
subscription instead of an API key. This uses the OAuth 2.0 PKCE flow.

**Step 1: Configure the client**

Add a client entry with `auth: oauth` and no `api_key`:

```yaml
clients:
  - type: claude
    name: my-claude-oauth
    auth: oauth
```

**Step 2: Authenticate**

Run Loki with the `--authenticate` flag and the client name:

```sh
loki --authenticate my-claude-oauth
```

Or if you have only one OAuth-configured client, you can omit the name:

```sh
loki --authenticate
```

Alternatively, you can use the REPL command `.authenticate`.

This opens your browser for the OAuth authorization flow. Depending on the provider, Loki will either start a
temporary localhost server to capture the callback automatically (e.g. Gemini) or ask you to paste the authorization
code back into the terminal (e.g. Claude). Loki stores the tokens in `~/.cache/loki/oauth` and automatically refreshes
them when they expire.

#### Gemini OAuth Note
Loki uses the following scopes for OAuth with Gemini:
* https://www.googleapis.com/auth/generative-language.peruserquota
* https://www.googleapis.com/auth/userinfo.email
* https://www.googleapis.com/auth/generative-language.retriever (Sensitive)

Since the `generative-language.retriever` scope is a sensitive scope, Google needs to verify Loki, which requires full
branding (logo, official website, privacy policy, terms of service, etc.). The Loki app is open-source and is designed
to be used as a simple CLI. As such, it has no terms of service or privacy policy associated with it, and thus Google
cannot verify Loki.

So, when you kick off OAuth with Gemini, you may see a page similar to the following:
![gemini-oauth-unverified](../../assets/gemini-oauth-unverified.png)

Simply click the `Advanced` link and click `Go to Loki (unsafe)` to continue the OAuth flow.

![gemini-oauth-advanced](../../assets/gemini-oauth-advanced.png)
![gemini-oauth-continue](../../assets/gemini-oauth-continue.png)

**Step 3: Use normally**

Once authenticated, the client works like any other. Loki uses the stored OAuth tokens automatically:

```sh
loki -m my-claude-oauth:claude-sonnet-4-20250514 "Hello!"
```

> **Note:** You can have multiple clients for the same provider. For example, you can have one with an API key and
> another with OAuth. Use the `name` field to distinguish them.

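As a sketch of that setup, two hypothetical entries for the same provider (the `name` values are illustrative) might look like:

```yaml
clients:
  - type: claude
    name: claude-api-key           # authenticates with an API key from the vault
    api_key: '{{ANTHROPIC_API_KEY}}'
  - type: claude
    name: claude-oauth             # authenticates via the OAuth flow instead
    auth: oauth
```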
### Providers That Support OAuth
* Claude
* Gemini

## Extra Settings
Loki also lets you customize some extra settings for interacting with APIs:

@@ -10,6 +10,8 @@ into your Loki setup. This document provides a guide on how to create and use cu
- [Environment Variables](#environment-variables)
- [Custom Bash-Based Tools](#custom-bash-based-tools)
- [Custom Python-Based Tools](#custom-python-based-tools)
- [Custom TypeScript-Based Tools](#custom-typescript-based-tools)
- [Custom Runtime](#custom-runtime)
<!--toc:end-->

---
@@ -19,9 +21,10 @@ Loki supports custom tools written in the following programming languages:

* Python
* Bash
* TypeScript

## Creating a Custom Tool
All tools are created as scripts in either Python or Bash. They should be placed in the `functions/tools` directory.
All tools are created as scripts in Python, Bash, or TypeScript. They should be placed in the `functions/tools` directory.
The location of the `functions` directory varies between systems, so you can use the following command to locate
your `functions` directory:

@@ -81,6 +84,7 @@ Loki and demonstrates how to create a Python-based tool:
import os
from typing import List, Literal, Optional


def run(
    string: str,
    string_enum: Literal["foo", "bar"],
@@ -89,26 +93,38 @@ def run(
    number: float,
    array: List[str],
    string_optional: Optional[str] = None,
    integer_with_default: int = 42,
    boolean_with_default: bool = True,
    number_with_default: float = 3.14,
    string_with_default: str = "hello",
    array_optional: Optional[List[str]] = None,
):
"""Demonstrates how to create a tool using Python and how to use comments.
|
||||
"""Demonstrates all supported Python parameter types and variations.
|
||||
Args:
|
||||
string: Define a required string property
|
||||
string_enum: Define a required string property with enum
|
||||
boolean: Define a required boolean property
|
||||
integer: Define a required integer property
|
||||
number: Define a required number property
|
||||
array: Define a required string array property
|
||||
string_optional: Define an optional string property
|
||||
array_optional: Define an optional string array property
|
||||
string: A required string property
|
||||
string_enum: A required string property constrained to specific values
|
||||
boolean: A required boolean property
|
||||
integer: A required integer property
|
||||
number: A required number (float) property
|
||||
array: A required string array property
|
||||
string_optional: An optional string property (Optional[str] with None default)
|
||||
integer_with_default: An optional integer with a non-None default value
|
||||
boolean_with_default: An optional boolean with a default value
|
||||
number_with_default: An optional number with a default value
|
||||
string_with_default: An optional string with a default value
|
||||
array_optional: An optional string array property
|
||||
"""
|
||||
output = f"""string: {string}
|
||||
string_enum: {string_enum}
|
||||
string_optional: {string_optional}
|
||||
boolean: {boolean}
|
||||
integer: {integer}
|
||||
number: {number}
|
||||
array: {array}
|
||||
string_optional: {string_optional}
|
||||
integer_with_default: {integer_with_default}
|
||||
boolean_with_default: {boolean_with_default}
|
||||
number_with_default: {number_with_default}
|
||||
string_with_default: {string_with_default}
|
||||
array_optional: {array_optional}"""
|
||||
|
||||
for key, value in os.environ.items():
|
||||
@@ -117,3 +133,150 @@ array_optional: {array_optional}"""
|
||||
|
||||
return output
|
||||
```
|
||||
|
||||
### Custom TypeScript-Based Tools
Loki supports tools written in TypeScript. TypeScript tools require [Node.js](https://nodejs.org/) and
[tsx](https://tsx.is/) (`npx tsx` is used as the default runtime).

Each TypeScript-based tool must follow a specific structure in order for Loki to properly compile and execute it:

* The tool must be a TypeScript file with a `.ts` file extension.
* The tool must have an `export function run(...)` that serves as the entry point for the tool.
  * Non-exported functions are ignored by the compiler and can be used as private helpers.
* The `run` function must accept flat parameters that define the inputs for the tool.
  * Always use type annotations to specify the data type of each parameter.
  * Use `param?: type` or `type | null` to indicate optional parameters.
  * Use `param: type = value` for parameters with default values.
* The `run` function must return a `string` (or `Promise<string>` for async functions).
  * For TypeScript, the return value is automatically written to the `LLM_OUTPUT` environment variable, so there's
    no need to explicitly write to the environment variable within the function.
* The function must have a JSDoc comment that describes the tool and its parameters.
  * Each parameter should be documented using `@param name - description` tags.
  * These descriptions are passed to the LLM as the tool description, letting the LLM know what the tool does and
    how to use it.
* Async functions (`export async function run(...)`) are fully supported and handled transparently.

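A minimal sketch of a tool that follows the rules above (the behavior here is illustrative, not part of Loki):

```typescript
/**
 * Reverse the characters of a string.
 *
 * @param text - The string to reverse
 */
export function run(text: string): string {
  // Split into characters, reverse, and re-join
  return text.split("").reverse().join("");
}
```

Because the function is exported, returns a `string`, and carries a JSDoc description, it has everything the compiler looks for.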
**Supported Parameter Types:**

| TypeScript Type   | JSON Schema                                      | Notes                       |
|-------------------|--------------------------------------------------|-----------------------------|
| `string`          | `{"type": "string"}`                             | Required string             |
| `number`          | `{"type": "number"}`                             | Required number             |
| `boolean`         | `{"type": "boolean"}`                            | Required boolean            |
| `string[]`        | `{"type": "array", "items": {"type": "string"}}` | Array (bracket syntax)      |
| `Array<string>`   | `{"type": "array", "items": {"type": "string"}}` | Array (generic syntax)      |
| `"foo" \| "bar"`  | `{"type": "string", "enum": ["foo", "bar"]}`     | String enum (literal union) |
| `param?: string`  | `{"type": "string"}` (not required)              | Optional via question mark  |
| `string \| null`  | `{"type": "string"}` (not required)              | Optional via null union     |
| `param = "value"` | `{"type": "string"}` (not required)              | Optional via default value  |

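The three optional-parameter styles from the table can be combined in a single signature; a small sketch (the parameter names are made up for illustration):

```typescript
/**
 * Build a greeting, demonstrating the optional-parameter styles.
 *
 * @param name - A required string
 * @param greeting - Optional via the question mark syntax
 * @param suffix - Optional via the union-with-null syntax
 * @param punctuation - Optional via a default value
 */
export function run(
  name: string,
  greeting?: string,
  suffix: string | null = null,
  punctuation: string = "!",
): string {
  // Every parameter except `name` may be omitted by the caller
  const base = `${greeting ?? "Hello"}, ${name}${punctuation}`;
  return suffix === null ? base : `${base} ${suffix}`;
}
```

All three styles compile to a non-required property in the JSON schema; the difference is purely which TypeScript idiom you prefer.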
**Unsupported Patterns (will produce a compile error):**

* Rest parameters (`...args: string[]`)
* Destructured object parameters (`{ a, b }: { a: string, b: string }`)
* Arrow functions (`const run = (x: string) => ...`)
* Function expressions (`const run = function(x: string) { ... }`)

Only `export function` declarations are recognized. Non-exported functions are invisible to the compiler.

Below is the [`demo_ts.ts`](../../assets/functions/tools/demo_ts.ts) tool definition that comes pre-packaged with
Loki and demonstrates how to create a TypeScript-based tool:

```typescript
/**
 * Demonstrates all supported TypeScript parameter types and variations.
 *
 * @param string - A required string property
 * @param string_enum - A required string property constrained to specific values
 * @param boolean - A required boolean property
 * @param number - A required number property
 * @param array_bracket - A required string array using bracket syntax
 * @param array_generic - A required string array using generic syntax
 * @param string_optional - An optional string using the question mark syntax
 * @param string_nullable - An optional string using the union-with-null syntax
 * @param number_with_default - An optional number with a default value
 * @param boolean_with_default - An optional boolean with a default value
 * @param string_with_default - An optional string with a default value
 * @param array_optional - An optional string array using the question mark syntax
 */
export function run(
  string: string,
  string_enum: "foo" | "bar",
  boolean: boolean,
  number: number,
  array_bracket: string[],
  array_generic: Array<string>,
  string_optional?: string,
  string_nullable: string | null = null,
  number_with_default: number = 42,
  boolean_with_default: boolean = true,
  string_with_default: string = "hello",
  array_optional?: string[],
): string {
  const parts = [
    `string: ${string}`,
    `string_enum: ${string_enum}`,
    `boolean: ${boolean}`,
    `number: ${number}`,
    `array_bracket: ${JSON.stringify(array_bracket)}`,
    `array_generic: ${JSON.stringify(array_generic)}`,
    `string_optional: ${string_optional}`,
    `string_nullable: ${string_nullable}`,
    `number_with_default: ${number_with_default}`,
    `boolean_with_default: ${boolean_with_default}`,
    `string_with_default: ${string_with_default}`,
    `array_optional: ${JSON.stringify(array_optional)}`,
  ];

  for (const [key, value] of Object.entries(process.env)) {
    if (key.startsWith("LLM_")) {
      parts.push(`${key}: ${value}`);
    }
  }

  return parts.join("\n");
}
```

## Custom Runtime
By default, Loki uses the following runtimes to execute tools:

| Language   | Default Runtime | Requirement                    |
|------------|-----------------|--------------------------------|
| Python     | `python`        | Python 3 on `$PATH`            |
| TypeScript | `npx tsx`       | Node.js + tsx (`npm i -g tsx`) |
| Bash       | `bash`          | Bash on `$PATH`                |

You can override the runtime for Python and TypeScript tools using a **shebang line** (`#!`) at the top of your
script. Loki reads the first line of each tool file; if it starts with `#!`, the specified interpreter is used instead
of the default.

**Examples:**

```python
#!/usr/bin/env python3.11
# This Python tool will be executed with python3.11 instead of the default `python`


def run(name: str):
    """Greet someone.

    Args:
        name: The name to greet
    """
    return f"Hello, {name}!"
```

```typescript
#!/usr/bin/env bun
// This TypeScript tool will be executed with Bun instead of the default `npx tsx`

/**
 * Greet someone.
 * @param name - The name to greet
 */
export function run(name: string): string {
  return `Hello, ${name}!`;
}
```

This is useful for pinning a specific Python version, using an alternative TypeScript runtime like
[Bun](https://bun.sh/) or [Deno](https://deno.com/), or working with virtual environments.

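The shebang lookup described above can be sketched as follows (a minimal illustration under our own naming, not Loki's actual implementation):

```typescript
// Default runtimes per file extension, as listed in the runtime table.
const DEFAULT_RUNTIMES: Record<string, string[]> = {
  ".py": ["python"],
  ".ts": ["npx", "tsx"],
  ".sh": ["bash"],
};

/** Pick the command used to execute a tool file from its first line. */
export function resolveRuntime(path: string, source: string): string[] {
  const firstLine = source.split("\n", 1)[0];
  if (firstLine.startsWith("#!")) {
    // "#!/usr/bin/env bun" -> ["/usr/bin/env", "bun"]
    return firstLine.slice(2).trim().split(/\s+/);
  }
  const ext = path.slice(path.lastIndexOf("."));
  return DEFAULT_RUNTIMES[ext] ?? ["bash"];
}
```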
@@ -55,6 +55,7 @@ Loki ships with a `functions/mcp.json` file that includes some useful MCP server
* [github](https://github.com/github/github-mcp-server) - Interact with GitHub repositories, issues, pull requests, and more.
* [docker](https://github.com/ckreiling/mcp-server-docker) - Manage your local Docker containers with natural language.
* [slack](https://github.com/korotovsky/slack-mcp-server) - Interact with Slack.
* [ddg-search](https://github.com/nickclyde/duckduckgo-mcp-server) - Perform web searches with the DuckDuckGo search engine.

## Loki Configuration
MCP servers, like tools, can be used in a handful of contexts:

@@ -32,6 +32,7 @@ be enabled/disabled can be found in the [Configuration](#configuration) section
|---|---|---|
| [`demo_py.py`](../../assets/functions/tools/demo_py.py) | Demonstrates how to create a tool using Python and how to use comments. | 🔴 |
| [`demo_sh.sh`](../../assets/functions/tools/demo_sh.sh) | Demonstrates how to create a tool using Bash and how to use comment tags. | 🔴 |
| [`demo_ts.ts`](../../assets/functions/tools/demo_ts.ts) | Demonstrates how to create a tool using TypeScript and how to use JSDoc comments. | 🔴 |
| [`execute_command.sh`](../../assets/functions/tools/execute_command.sh) | Execute the shell command. | 🟢 |
| [`execute_py_code.py`](../../assets/functions/tools/execute_py_code.py) | Execute the given Python code. | 🔴 |
| [`execute_sql_code.sh`](../../assets/functions/tools/execute_sql_code.sh) | Execute SQL code. | 🔴 |
@@ -49,6 +50,7 @@ be enabled/disabled can be found in the [Configuration](#configuration) section
| [`get_current_time.sh`](../../assets/functions/tools/get_current_time.sh) | Get the current time. | 🟢 |
| [`get_current_weather.py`](../../assets/functions/tools/get_current_weather.py) | Get the current weather in a given location (Python implementation). | 🔴 |
| [`get_current_weather.sh`](../../assets/functions/tools/get_current_weather.sh) | Get the current weather in a given location. | 🟢 |
| [`get_current_weather.ts`](../../assets/functions/tools/get_current_weather.ts) | Get the current weather in a given location (TypeScript implementation). | 🔴 |
| [`query_jira_issues.sh`](../../assets/functions/tools/query_jira_issues.sh) | Query for Jira issues using a Jira Query Language (JQL) query. | 🟢 |
| [`search_arxiv.sh`](../../assets/functions/tools/search_arxiv.sh) | Search arXiv using the given search query and return the top papers. | 🔴 |
| [`search_wikipedia.sh`](../../assets/functions/tools/search_wikipedia.sh) | Search Wikipedia using the given search query. Use it to get detailed information about a public figure, interpretation of a complex scientific concept, or in-depth connectivity of a significant historical event, etc. | 🔴 |

+260 -137
@@ -3,6 +3,13 @@
# - https://platform.openai.com/docs/api-reference/chat
- provider: openai
  models:
    - name: gpt-5.2
      max_input_tokens: 400000
      max_output_tokens: 128000
      input_price: 1.75
      output_price: 14
      supports_vision: true
      supports_function_calling: true
    - name: gpt-5.1
      max_input_tokens: 400000
      max_output_tokens: 128000
@@ -81,6 +88,7 @@
      supports_vision: true
      supports_function_calling: true
    - name: o4-mini
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 1.1
      output_price: 4.4
@@ -93,6 +101,7 @@
          temperature: null
          top_p: null
    - name: o4-mini-high
      max_output_tokens: 100000
      real_name: o4-mini
      max_input_tokens: 200000
      input_price: 1.1
@@ -107,6 +116,7 @@
          temperature: null
          top_p: null
    - name: o3
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 2
      output_price: 8
@@ -133,6 +143,7 @@
          temperature: null
          top_p: null
    - name: o3-mini
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 1.1
      output_price: 4.4
@@ -145,6 +156,7 @@
          temperature: null
          top_p: null
    - name: o3-mini-high
      max_output_tokens: 100000
      real_name: o3-mini
      max_input_tokens: 200000
      input_price: 1.1
@@ -190,25 +202,32 @@
# - https://ai.google.dev/api/rest/v1beta/models/streamGenerateContent
- provider: gemini
  models:
    - name: gemini-3.1-pro-preview
      max_input_tokens: 1048576
      max_output_tokens: 65535
      input_price: 0.3
      output_price: 2.5
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.5-flash
      max_input_tokens: 1048576
      max_output_tokens: 65536
      input_price: 0
      output_price: 0
      max_output_tokens: 65535
      input_price: 0.3
      output_price: 2.5
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.5-pro
      max_input_tokens: 1048576
      max_output_tokens: 65536
      input_price: 0
      output_price: 0
      input_price: 1.25
      output_price: 10
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.5-flash-lite
      max_input_tokens: 1000000
      max_output_tokens: 64000
      input_price: 0
      output_price: 0
      max_input_tokens: 1048576
      max_output_tokens: 65535
      input_price: 0.1
      output_price: 0.4
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.0-flash
@@ -226,10 +245,11 @@
      supports_vision: true
      supports_function_calling: true
    - name: gemma-3-27b-it
      max_input_tokens: 131072
      max_output_tokens: 8192
      input_price: 0
      output_price: 0
      supports_vision: true
      max_input_tokens: 128000
      max_output_tokens: 65536
      input_price: 0.04
      output_price: 0.15
    - name: text-embedding-004
      type: embedding
      input_price: 0
@@ -242,6 +262,54 @@
# - https://docs.anthropic.com/en/api/messages
- provider: claude
  models:
    - name: claude-opus-4-6
      max_input_tokens: 200000
      max_output_tokens: 8192
      require_max_tokens: true
      input_price: 5
      output_price: 25
      supports_vision: true
      supports_function_calling: true
    - name: claude-opus-4-6:thinking
      real_name: claude-opus-4-6
      max_input_tokens: 200000
      max_output_tokens: 24000
      require_max_tokens: true
      input_price: 5
      output_price: 25
      supports_vision: true
      supports_function_calling: true
      patch:
        body:
          temperature: null
          top_p: null
          thinking:
            type: enabled
            budget_tokens: 16000
    - name: claude-sonnet-4-6
      max_input_tokens: 200000
      max_output_tokens: 8192
      require_max_tokens: true
      input_price: 3
      output_price: 15
      supports_vision: true
      supports_function_calling: true
    - name: claude-sonnet-4-6:thinking
      real_name: claude-sonnet-4-6
      max_input_tokens: 200000
      max_output_tokens: 24000
      require_max_tokens: true
      input_price: 3
      output_price: 15
      supports_vision: true
      supports_function_calling: true
      patch:
        body:
          temperature: null
          top_p: null
          thinking:
            type: enabled
            budget_tokens: 16000
    - name: claude-sonnet-4-5-20250929
      max_input_tokens: 200000
      max_output_tokens: 8192
@@ -509,8 +577,8 @@
      output_price: 10
      supports_vision: true
    - name: command-r7b-12-2024
      max_input_tokens: 131072
      max_output_tokens: 4096
      max_input_tokens: 128000
      max_output_tokens: 4000
      input_price: 0.0375
      output_price: 0.15
    - name: embed-v4.0
@@ -547,6 +615,7 @@
- provider: xai
  models:
    - name: grok-4
      supports_vision: true
      max_input_tokens: 256000
      input_price: 3
      output_price: 15
@@ -583,14 +652,18 @@
- provider: perplexity
  models:
    - name: sonar-pro
      max_output_tokens: 8000
      supports_vision: true
      max_input_tokens: 200000
      input_price: 3
      output_price: 15
    - name: sonar
      max_input_tokens: 128000
      supports_vision: true
      max_input_tokens: 127072
      input_price: 1
      output_price: 1
    - name: sonar-reasoning-pro
      supports_vision: true
      max_input_tokens: 128000
      input_price: 2
      output_price: 8
@@ -659,17 +732,16 @@
# - https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini
- provider: vertexai
  models:
    - name: gemini-3-pro-preview
      hipaa_safe: true
    - name: gemini-3.1-pro-preview
      max_input_tokens: 1048576
      max_output_tokens: 65536
      input_price: 0
      output_price: 0
      input_price: 2
      output_price: 12
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.5-flash
      max_input_tokens: 1048576
      max_output_tokens: 65536
      max_output_tokens: 65535
      input_price: 0.3
      output_price: 2.5
      supports_vision: true
@@ -683,16 +755,16 @@
      supports_function_calling: true
    - name: gemini-2.5-flash-lite
      max_input_tokens: 1048576
      max_output_tokens: 65536
      input_price: 0.3
      max_output_tokens: 65535
      input_price: 0.1
      output_price: 0.4
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.0-flash-001
      max_input_tokens: 1048576
      max_output_tokens: 8192
      input_price: 0.15
      output_price: 0.6
      input_price: 0.1
      output_price: 0.4
      supports_vision: true
      supports_function_calling: true
    - name: gemini-2.0-flash-lite-001
@@ -1187,17 +1259,22 @@
      max_input_tokens: 1024
      input_price: 0.07

# Links:
# - https://help.aliyun.com/zh/model-studio/getting-started/models
# - https://help.aliyun.com/zh/model-studio/developer-reference/use-qwen-by-calling-api
- provider: qianwen
  models:
    - name: qwen3-max
      input_price: 1.2
      output_price: 6
      max_output_tokens: 32768
      max_input_tokens: 262144
      supports_function_calling: true
    - name: qwen-plus
      max_input_tokens: 131072
      input_price: 0.4
      output_price: 1.2
      max_output_tokens: 32768
      max_input_tokens: 1000000
      supports_function_calling: true
    - name: qwen-flash
      max_input_tokens: 1000000
@@ -1213,14 +1290,14 @@
    - name: qwen-coder-flash
      max_input_tokens: 1000000
    - name: qwen3-next-80b-a3b-instruct
      max_input_tokens: 131072
      input_price: 0.14
      output_price: 0.56
      max_input_tokens: 262144
      input_price: 0.09
      output_price: 1.1
      supports_function_calling: true
    - name: qwen3-next-80b-a3b-thinking
      max_input_tokens: 131072
      input_price: 0.14
      output_price: 1.4
      max_input_tokens: 128000
      input_price: 0.15
      output_price: 1.2
    - name: qwen3-235b-a22b-instruct-2507
      max_input_tokens: 131072
      input_price: 0.28
@@ -1228,35 +1305,39 @@
      supports_function_calling: true
    - name: qwen3-235b-a22b-thinking-2507
      max_input_tokens: 131072
      input_price: 0.28
      output_price: 2.8
      input_price: 0
      output_price: 0
    - name: qwen3-30b-a3b-instruct-2507
      max_input_tokens: 131072
      input_price: 0.105
      output_price: 0.42
      max_output_tokens: 262144
      max_input_tokens: 262144
      input_price: 0.09
      output_price: 0.3
      supports_function_calling: true
    - name: qwen3-30b-a3b-thinking-2507
      max_input_tokens: 131072
      input_price: 0.105
      output_price: 1.05
      max_input_tokens: 32768
      input_price: 0.051
      output_price: 0.34
    - name: qwen3-vl-32b-instruct
      max_output_tokens: 32768
      max_input_tokens: 131072
      input_price: 0.28
      output_price: 1.12
      input_price: 0.104
      output_price: 0.416
      supports_vision: true
    - name: qwen3-vl-8b-instruct
      max_output_tokens: 32768
      max_input_tokens: 131072
      input_price: 0.07
      output_price: 0.28
      input_price: 0.08
      output_price: 0.5
      supports_vision: true
    - name: qwen3-coder-480b-a35b-instruct
      max_input_tokens: 262144
      input_price: 1.26
      output_price: 5.04
    - name: qwen3-coder-30b-a3b-instruct
      max_input_tokens: 262144
      input_price: 0.315
      output_price: 1.26
      max_output_tokens: 32768
      max_input_tokens: 160000
      input_price: 0.07
      output_price: 0.27
    - name: deepseek-v3.2-exp
      max_input_tokens: 131072
      input_price: 0.28
@@ -1332,9 +1413,9 @@
      output_price: 8.12
      supports_vision: true
    - name: kimi-k2-thinking
      max_input_tokens: 262144
      input_price: 0.56
      output_price: 2.24
      max_input_tokens: 131072
      input_price: 0.47
      output_price: 2
      supports_vision: true

# Links:
@@ -1343,10 +1424,10 @@
- provider: deepseek
  models:
    - name: deepseek-chat
      max_input_tokens: 64000
      max_output_tokens: 8192
      input_price: 0.56
      output_price: 1.68
      max_input_tokens: 163840
      max_output_tokens: 163840
      input_price: 0.32
      output_price: 0.89
      supports_function_calling: true
    - name: deepseek-reasoner
      max_input_tokens: 64000
@@ -1424,9 +1505,10 @@
- provider: minimax
  models:
    - name: minimax-m2
      max_input_tokens: 204800
      input_price: 0.294
      output_price: 1.176
      max_output_tokens: 65536
      max_input_tokens: 196608
      input_price: 0.255
      output_price: 1
      supports_function_calling: true

# Links:
@@ -1442,8 +1524,8 @@
      supports_vision: true
      supports_function_calling: true
    - name: openai/gpt-5.1-chat
      max_input_tokens: 400000
      max_output_tokens: 128000
      max_input_tokens: 128000
      max_output_tokens: 16384
      input_price: 1.25
      output_price: 10
      supports_vision: true
@@ -1456,8 +1538,8 @@
      supports_vision: true
      supports_function_calling: true
    - name: openai/gpt-5-chat
      max_input_tokens: 400000
      max_output_tokens: 128000
      max_input_tokens: 128000
      max_output_tokens: 16384
      input_price: 1.25
      output_price: 10
      supports_vision: true
@@ -1498,18 +1580,21 @@
      supports_vision: true
      supports_function_calling: true
    - name: openai/gpt-4o
      max_output_tokens: 16384
      max_input_tokens: 128000
      input_price: 2.5
      output_price: 10
      supports_vision: true
      supports_function_calling: true
    - name: openai/gpt-4o-mini
      max_output_tokens: 16384
      max_input_tokens: 128000
      input_price: 0.15
      output_price: 0.6
      supports_vision: true
      supports_function_calling: true
    - name: openai/o4-mini
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 1.1
      output_price: 4.4
@@ -1522,6 +1607,7 @@
          temperature: null
          top_p: null
    - name: openai/o4-mini-high
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 1.1
      output_price: 4.4
@@ -1535,6 +1621,7 @@
          temperature: null
          top_p: null
    - name: openai/o3
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 2
      output_price: 8
@@ -1560,6 +1647,7 @@
          temperature: null
          top_p: null
    - name: openai/o3-mini
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 1.1
      output_price: 4.4
@@ -1571,6 +1659,7 @@
          temperature: null
          top_p: null
    - name: openai/o3-mini-high
      max_output_tokens: 100000
      max_input_tokens: 200000
      input_price: 1.1
      output_price: 4.4
@@ -1583,50 +1672,57 @@
          top_p: null
    - name: openai/gpt-oss-120b
      max_input_tokens: 131072
      input_price: 0.09
      output_price: 0.45
      input_price: 0.039
      output_price: 0.19
      supports_function_calling: true
    - name: openai/gpt-oss-20b
      max_input_tokens: 131072
      input_price: 0.04
      output_price: 0.16
      input_price: 0.03
      output_price: 0.14
      supports_function_calling: true
    - name: google/gemini-2.5-flash
      max_output_tokens: 65535
      max_input_tokens: 1048576
      input_price: 0.3
      output_price: 2.5
      supports_vision: true
      supports_function_calling: true
    - name: google/gemini-2.5-pro
      max_output_tokens: 65536
      max_input_tokens: 1048576
      input_price: 1.25
      output_price: 10
      supports_vision: true
      supports_function_calling: true
    - name: google/gemini-2.5-flash-lite
      max_output_tokens: 65535
      max_input_tokens: 1048576
      input_price: 0.3
      input_price: 0.1
      output_price: 0.4
      supports_vision: true
    - name: google/gemini-2.0-flash-001
      max_input_tokens: 1000000
      input_price: 0.15
      output_price: 0.6
      max_output_tokens: 8192
      max_input_tokens: 1048576
      input_price: 0.1
      output_price: 0.4
      supports_vision: true
      supports_function_calling: true
    - name: google/gemini-2.0-flash-lite-001
      max_output_tokens: 8192
      max_input_tokens: 1048576
      input_price: 0.075
      output_price: 0.3
      supports_vision: true
      supports_function_calling: true
    - name: google/gemma-3-27b-it
      max_input_tokens: 131072
      input_price: 0.1
      output_price: 0.2
      max_output_tokens: 65536
      supports_vision: true
      max_input_tokens: 128000
      input_price: 0.04
      output_price: 0.15
    - name: anthropic/claude-sonnet-4.5
      max_input_tokens: 200000
      max_output_tokens: 8192
      max_input_tokens: 1000000
      max_output_tokens: 64000
      require_max_tokens: true
      input_price: 3
      output_price: 15
@@ -1634,7 +1730,7 @@
      supports_function_calling: true
    - name: anthropic/claude-haiku-4.5
      max_input_tokens: 200000
      max_output_tokens: 8192
      max_output_tokens: 64000
      require_max_tokens: true
      input_price: 1
      output_price: 5
@@ -1642,7 +1738,7 @@
      supports_function_calling: true
    - name: anthropic/claude-opus-4.1
      max_input_tokens: 200000
      max_output_tokens: 8192
      max_output_tokens: 32000
      require_max_tokens: true
      input_price: 15
      output_price: 75
@@ -1650,15 +1746,15 @@
      supports_function_calling: true
    - name: anthropic/claude-opus-4
      max_input_tokens: 200000
      max_output_tokens: 8192
      max_output_tokens: 32000
      require_max_tokens: true
      input_price: 15
      output_price: 75
      supports_vision: true
      supports_function_calling: true
    - name: anthropic/claude-sonnet-4
      max_input_tokens: 200000
      max_output_tokens: 8192
      max_input_tokens: 1000000
      max_output_tokens: 64000
      require_max_tokens: true
      input_price: 3
      output_price: 15
@@ -1666,7 +1762,7 @@
      supports_function_calling: true
    - name: anthropic/claude-3.7-sonnet
      max_input_tokens: 200000
      max_output_tokens: 8192
      max_output_tokens: 64000
      require_max_tokens: true
      input_price: 3
      output_price: 15
@@ -1681,21 +1777,24 @@
      supports_vision: true
      supports_function_calling: true
    - name: meta-llama/llama-4-maverick
      max_output_tokens: 16384
      max_input_tokens: 1048576
      input_price: 0.18
      input_price: 0.15
      output_price: 0.6
      supports_vision: true
      supports_function_calling: true
    - name: meta-llama/llama-4-scout
      max_output_tokens: 16384
      max_input_tokens: 327680
      input_price: 0.08
      output_price: 0.3
      supports_vision: true
      supports_function_calling: true
    - name: meta-llama/llama-3.3-70b-instruct
      max_output_tokens: 16384
      max_input_tokens: 131072
      input_price: 0.12
      output_price: 0.3
      input_price: 0.1
      output_price: 0.32
    - name: mistralai/mistral-medium-3.1
      max_input_tokens: 131072
      input_price: 0.4
@@ -1703,9 +1802,10 @@
      supports_function_calling: true
      supports_vision: true
    - name: mistralai/mistral-small-3.2-24b-instruct
      max_output_tokens: 131072
      max_input_tokens: 131072
      input_price: 0.1
      output_price: 0.3
      input_price: 0.06
      output_price: 0.18
      supports_vision: true
    - name: mistralai/magistral-medium-2506
      max_input_tokens: 40960
@@ -1726,8 +1826,8 @@
      supports_function_calling: true
    - name: mistralai/devstral-small
      max_input_tokens: 131072
      input_price: 0.07
      output_price: 0.28
      input_price: 0.1
      output_price: 0.3
      supports_function_calling: true
    - name: mistralai/codestral-2508
      max_input_tokens: 256000
@@ -1735,6 +1835,7 @@
      output_price: 0.9
      supports_function_calling: true
    - name: ai21/jamba-large-1.7
      max_output_tokens: 4096
      max_input_tokens: 256000
      input_price: 2
      output_price: 8
@@ -1745,110 +1846,121 @@
      output_price: 0.4
      supports_function_calling: true
    - name: cohere/command-a
      max_output_tokens: 8192
      max_input_tokens: 256000
      input_price: 2.5
      output_price: 10
      supports_function_calling: true
    - name: cohere/command-r7b-12-2024
      max_input_tokens: 128000
      max_output_tokens: 4096
      max_output_tokens: 4000
      input_price: 0.0375
      output_price: 0.15
    - name: deepseek/deepseek-v3.2-exp
      max_output_tokens: 65536
      max_input_tokens: 163840
      input_price: 0.27
      output_price: 0.40
      output_price: 0.41
    - name: deepseek/deepseek-v3.1-terminus
      max_input_tokens: 163840
      input_price: 0.23
      output_price: 0.90
      input_price: 0.21
      output_price: 0.79
    - name: deepseek/deepseek-chat-v3.1
      max_input_tokens: 163840
      input_price: 0.2
      output_price: 0.8
      max_output_tokens: 7168
      max_input_tokens: 32768
      input_price: 0.15
      output_price: 0.75
    - name: deepseek/deepseek-r1-0528
      max_input_tokens: 128000
      input_price: 0.50
      output_price: 2.15
      max_output_tokens: 65536
      max_input_tokens: 163840
      input_price: 0.4
      output_price: 1.75
      patch:
        body:
          include_reasoning: true
    - name: qwen/qwen3-max
      max_output_tokens: 32768
      max_input_tokens: 262144
      input_price: 1.2
      output_price: 6
      supports_function_calling: true
    - name: qwen/qwen-plus
      max_input_tokens: 131072
      max_output_tokens: 8192
      max_input_tokens: 1000000
      max_output_tokens: 32768
      input_price: 0.4
      output_price: 1.2
      supports_function_calling: true
    - name: qwen/qwen3-next-80b-a3b-instruct
      max_input_tokens: 262144
      input_price: 0.1
      output_price: 0.8
      input_price: 0.09
      output_price: 1.1
      supports_function_calling: true
    - name: qwen/qwen3-next-80b-a3b-thinking
      max_input_tokens: 262144
      input_price: 0.1
      output_price: 0.8
      max_input_tokens: 128000
      input_price: 0.15
      output_price: 1.2
    - name: qwen/qwen5-235b-a22b-2507 # Qwen3 235B A22B Instruct 2507
      max_input_tokens: 262144
      input_price: 0.12
      output_price: 0.59
      supports_function_calling: true
    - name: qwen/qwen3-235b-a22b-thinking-2507
      max_input_tokens: 262144
      input_price: 0.118
      output_price: 0.118
    - name: qwen/qwen3-30b-a3b-instruct-2507
      max_input_tokens: 131072
      input_price: 0.2
      output_price: 0.8
      input_price: 0
      output_price: 0
    - name: qwen/qwen3-30b-a3b-instruct-2507
      max_output_tokens: 262144
      max_input_tokens: 262144
      input_price: 0.09
      output_price: 0.3
    - name: qwen/qwen3-30b-a3b-thinking-2507
      max_input_tokens: 262144
      input_price: 0.071
      output_price: 0.285
      max_input_tokens: 32768
      input_price: 0.051
      output_price: 0.34
    - name: qwen/qwen3-vl-32b-instruct
      max_input_tokens: 262144
      input_price: 0.35
      output_price: 1.1
      max_output_tokens: 32768
      max_input_tokens: 131072
      input_price: 0.104
      output_price: 0.416
      supports_vision: true
    - name: qwen/qwen3-vl-8b-instruct
      max_input_tokens: 262144
      max_output_tokens: 32768
      max_input_tokens: 131072
      input_price: 0.08
      output_price: 0.50
      output_price: 0.5
      supports_vision: true
    - name: qwen/qwen3-coder-plus
      max_input_tokens: 128000
      max_output_tokens: 65536
      max_input_tokens: 1000000
      input_price: 1
      output_price: 5
      supports_function_calling: true
    - name: qwen/qwen3-coder-flash
      max_input_tokens: 128000
      max_output_tokens: 65536
      max_input_tokens: 1000000
      input_price: 0.3
      output_price: 1.5
      supports_function_calling: true
    - name: qwen/qwen3-coder # Qwen3 Coder 480B A35B
    - name: qwen/qwen3-coder # Qwen3 Coder 480B A35B
      max_input_tokens: 262144
      input_price: 0.22
      output_price: 0.95
      supports_function_calling: true
    - name: qwen/qwen3-coder-30b-a3b-instruct
      max_input_tokens: 262144
      input_price: 0.052
      output_price: 0.207
      max_output_tokens: 32768
      max_input_tokens: 160000
      input_price: 0.07
      output_price: 0.27
      supports_function_calling: true
    - name: moonshotai/kimi-k2-0905
      max_input_tokens: 262144
      input_price: 0.296
      output_price: 1.185
      max_input_tokens: 131072
      input_price: 0.4
      output_price: 2
      supports_function_calling: true
    - name: moonshotai/kimi-k2-thinking
      max_input_tokens: 262144
      input_price: 0.45
      output_price: 2.35
      max_input_tokens: 131072
      input_price: 0.47
      output_price: 2
      supports_function_calling: true
    - name: moonshotai/kimi-dev-72b
      max_input_tokens: 131072
@@ -1856,21 +1968,26 @@
      output_price: 1.15
      supports_function_calling: true
    - name: x-ai/grok-4
      supports_vision: true
      max_input_tokens: 256000
      input_price: 3
      output_price: 15
      supports_function_calling: true
    - name: x-ai/grok-4-fast
      max_output_tokens: 30000
      supports_vision: true
      max_input_tokens: 2000000
      input_price: 0.2
      output_price: 0.5
      supports_function_calling: true
    - name: x-ai/grok-code-fast-1
      max_output_tokens: 10000
      max_input_tokens: 256000
      input_price: 0.2
      output_price: 1.5
      supports_function_calling: true
    - name: amazon/nova-premier-v1
      max_output_tokens: 32000
      max_input_tokens: 1000000
      input_price: 2.5
      output_price: 12.5
@@ -1893,14 +2010,18 @@
      input_price: 0.035
      output_price: 0.14
    - name: perplexity/sonar-pro
      max_output_tokens: 8000
      supports_vision: true
      max_input_tokens: 200000
      input_price: 3
      output_price: 15
    - name: perplexity/sonar
      supports_vision: true
      max_input_tokens: 127072
      input_price: 1
      output_price: 1
    - name: perplexity/sonar-reasoning-pro
      supports_vision: true
      max_input_tokens: 128000
      input_price: 2
      output_price: 8
@@ -1915,20 +2036,22 @@
        body:
          include_reasoning: true
    - name: perplexity/sonar-deep-research
      max_input_tokens: 200000
      max_input_tokens: 128000
      input_price: 2
      output_price: 8
      patch:
        body:
          include_reasoning: true
    - name: minimax/minimax-m2
      max_output_tokens: 65536
      max_input_tokens: 196608
      input_price: 0.15
      output_price: 0.45
      input_price: 0.255
      output_price: 1
    - name: z-ai/glm-4.6
      max_output_tokens: 131072
      max_input_tokens: 202752
      input_price: 0.5
      output_price: 1.75
      input_price: 0.35
      output_price: 1.71
      supports_function_calling: true

# Links:
@@ -2298,4 +2421,4 @@
    - name: rerank-2-lite
      type: reranker
      max_input_tokens: 8000
      input_price: 0.02
      input_price: 0.02

@@ -0,0 +1,2 @@
|
||||
requests
|
||||
ruamel.yaml
|
||||
@@ -0,0 +1,255 @@
import requests
import sys
import re
import json

# Provider mapping from models.yaml to OpenRouter prefixes
PROVIDER_MAPPING = {
    "openai": "openai",
    "claude": "anthropic",
    "gemini": "google",
    "mistral": "mistralai",
    "cohere": "cohere",
    "perplexity": "perplexity",
    "xai": "x-ai",
    "openrouter": "openrouter",
    "ai21": "ai21",
    "deepseek": "deepseek",
    "moonshot": "moonshotai",
    "qianwen": "qwen",
    "zhipuai": "zhipuai",
    "minimax": "minimax",
    "vertexai": "google",
    "groq": "groq",
    "bedrock": "amazon",
    "hunyuan": "tencent",
    "ernie": "baidu",
    "github": "github",
}

def fetch_openrouter_models():
    print("Fetching models from OpenRouter...")
    try:
        response = requests.get("https://openrouter.ai/api/v1/models")
        response.raise_for_status()
        data = response.json()["data"]
        print(f"Fetched {len(data)} models.")
        return data
    except Exception as e:
        print(f"Error fetching models: {e}")
        sys.exit(1)

def get_openrouter_model(models_data, provider_prefix, model_name, is_openrouter_provider=False):
    if is_openrouter_provider:
        # For the openrouter provider, the model_name in yaml is usually the full ID
        for model in models_data:
            if model["id"] == model_name:
                return model
        return None

    expected_id = f"{provider_prefix}/{model_name}"

    # 1. Try exact match on ID
    for model in models_data:
        if model["id"] == expected_id:
            return model

    # 2. Try match by suffix
    for model in models_data:
        if model["id"].split("/")[-1] == model_name:
            if model["id"].startswith(f"{provider_prefix}/"):
                return model

    return None

def format_price(price_per_token):
    if price_per_token is None:
        return None
    try:
        price_per_1m = float(price_per_token) * 1_000_000
        if price_per_1m.is_integer():
            return str(int(price_per_1m))
        else:
            return str(round(price_per_1m, 4))
    except (TypeError, ValueError):
        return None

def get_indentation(line):
    return len(line) - len(line.lstrip())

def process_model_block(block_lines, current_provider, or_models):
    if not block_lines:
        return []

    # 1. Identify model name and indentation
    name_line = block_lines[0]
    name_match = re.match(r"^(\s*)-\s*name:\s*(.+)$", name_line)
    if not name_match:
        return block_lines

    name_indent_str = name_match.group(1)
    model_name = name_match.group(2).strip()

    # 2. Find the matching OpenRouter model
    or_prefix = PROVIDER_MAPPING.get(current_provider)
    is_openrouter_provider = (current_provider == "openrouter")

    if not or_prefix and not is_openrouter_provider:
        return block_lines

    or_model = get_openrouter_model(or_models, or_prefix, model_name, is_openrouter_provider)
    if not or_model:
        return block_lines

    print(f" Updating {model_name}...")

    # 3. Prepare updates
    updates = {}

    # Pricing
    pricing = or_model.get("pricing", {})
    p_in = format_price(pricing.get("prompt"))
    p_out = format_price(pricing.get("completion"))
    if p_in: updates["input_price"] = p_in
    if p_out: updates["output_price"] = p_out

    # Context
    ctx = or_model.get("context_length")
    if ctx: updates["max_input_tokens"] = str(ctx)

    max_out = None
    if "top_provider" in or_model and or_model["top_provider"]:
        max_out = or_model["top_provider"].get("max_completion_tokens")
    if max_out: updates["max_output_tokens"] = str(max_out)

    # Capabilities
    arch = or_model.get("architecture", {})
    modality = arch.get("modality", "")
    if "image" in modality:
        updates["supports_vision"] = "true"

    # 4. Detect field indentation
    field_indent_str = None
    existing_fields = {}  # key -> line_index

    for i, line in enumerate(block_lines):
        if i == 0: continue  # Skip name line

        # Skip comments
        if line.strip().startswith("#"):
            continue

        # Look for "key: value"
        m = re.match(r"^(\s*)([\w_-]+):", line)
        if m:
            indent = m.group(1)
            key = m.group(2)
            # Must be deeper than the name line
            if len(indent) > len(name_indent_str):
                if field_indent_str is None:
                    field_indent_str = indent
                existing_fields[key] = i

    if field_indent_str is None:
        field_indent_str = name_indent_str + "  "

    # 5. Apply updates
    new_block = list(block_lines)

    # Update existing fields
    for key, value in updates.items():
        if key in existing_fields:
            idx = existing_fields[key]
            # Preserve original key indentation exactly
            original_line = new_block[idx]
            m = re.match(r"^(\s*)([\w_-]+):", original_line)
            if m:
                current_indent = m.group(1)
                new_block[idx] = f"{current_indent}{key}: {value}\n"

    # Insert missing fields after the name line
    insertion_idx = 1

    for key, value in updates.items():
        if key not in existing_fields:
            new_line = f"{field_indent_str}{key}: {value}\n"
            new_block.insert(insertion_idx, new_line)
            insertion_idx += 1

    return new_block

def main():
    or_models = fetch_openrouter_models()

    print("Reading models.yaml...")
    with open("models.yaml", "r") as f:
        lines = f.readlines()

    new_lines = []
    current_provider = None

    i = 0
    while i < len(lines):
        line = lines[i]

        # Check for a provider line:
        # - provider: name
        p_match = re.match(r"^\s*-?\s*provider:\s*(.+)$", line)
        if p_match:
            current_provider = p_match.group(1).strip()
            new_lines.append(line)
            i += 1
            continue

        # Check for the start of a model block:
        # - name: ...
        m_match = re.match(r"^(\s*)-\s*name:\s*.+$", line)
        if m_match:
            start_indent = len(m_match.group(1))

            # Collect block lines
            block_lines = [line]
            j = i + 1
            while j < len(lines):
                next_line = lines[j]
                stripped = next_line.strip()

                # If empty or comment, include it
                if not stripped or stripped.startswith("#"):
                    block_lines.append(next_line)
                    j += 1
                    continue

                # Check indentation
                next_indent = get_indentation(next_line)

                # If indentation is greater, it's part of the block (a property)
                if next_indent > start_indent:
                    block_lines.append(next_line)
                    j += 1
                    continue

                # If indentation is equal or less, it's the end of the block
                break

            # Process the block
            processed_block = process_model_block(block_lines, current_provider, or_models)
            new_lines.extend(processed_block)

            # Advance i
            i = j
            continue

        # Otherwise, just a regular line
        new_lines.append(line)
        i += 1

    print("Saving models.yaml...")
    with open("models.yaml", "w") as f:
        f.writelines(new_lines)
    print("Done.")

if __name__ == "__main__":
    main()
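As a quick sanity check on the script above, the per-token to per-million-token conversion in `format_price` behaves as follows; the sketch below is a standalone copy of that helper:

```python
# Standalone copy of the script's format_price helper:
# converts a per-token price into a per-1M-token price string,
# dropping the decimal point when the result is a whole number.
def format_price(price_per_token):
    if price_per_token is None:
        return None
    try:
        price_per_1m = float(price_per_token) * 1_000_000
        if price_per_1m.is_integer():
            return str(int(price_per_1m))
        return str(round(price_per_1m, 4))
    except (TypeError, ValueError):
        return None

print(format_price("0.5"))  # → 500000
print(format_price(None))   # → None
```

Non-numeric input also returns `None`, so a missing `pricing` field in the OpenRouter response simply skips the update rather than crashing the run.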
@@ -127,6 +127,9 @@ pub struct Cli {
    /// List all secrets stored in the Loki vault
    #[arg(long, exclusive = true)]
    pub list_secrets: bool,
    /// Authenticate with an LLM provider using OAuth (e.g., --authenticate client_name)
    #[arg(long, exclusive = true, value_name = "CLIENT_NAME")]
    pub authenticate: Option<Option<String>>,
    /// Generate static shell completion scripts
    #[arg(long, value_name = "SHELL", value_enum)]
    pub completions: Option<ShellCompletion>,

@@ -18,16 +18,16 @@ pub struct AzureOpenAIConfig {
impl AzureOpenAIClient {
    config_get_fn!(api_base, get_api_base);
    config_get_fn!(api_key, get_api_key);

    pub const PROMPTS: [PromptAction<'static>; 2] = [
    create_client_config!([
        (
            "api_base",
            "API Base",
            Some("e.g. https://{RESOURCE}.openai.azure.com"),
            false
            false,
        ),
        ("api_key", "API Key", None, true),
    ];
    ]);
}

impl_client_trait!(

@@ -32,11 +32,11 @@ impl BedrockClient {
    config_get_fn!(region, get_region);
    config_get_fn!(session_token, get_session_token);

    pub const PROMPTS: [PromptAction<'static>; 3] = [
    create_client_config!([
        ("access_key_id", "AWS Access Key ID", None, true),
        ("secret_access_key", "AWS Secret Access Key", None, true),
        ("region", "AWS Region", None, false),
    ];
    ]);

    fn chat_completions_builder(
        &self,
@@ -1,19 +1,24 @@
use super::access_token::get_access_token;
use super::claude_oauth::ClaudeOAuthProvider;
use super::oauth::{self, OAuthProvider};
use super::*;

use crate::utils::strip_think_tag;

use anyhow::{Context, Result, bail};
use reqwest::RequestBuilder;
use reqwest::{Client as ReqwestClient, RequestBuilder};
use serde::Deserialize;
use serde_json::{Value, json};

const API_BASE: &str = "https://api.anthropic.com/v1";
const CLAUDE_CODE_PREFIX: &str = "You are Claude Code, Anthropic's official CLI for Claude.";

#[derive(Debug, Clone, Deserialize)]
pub struct ClaudeConfig {
    pub name: Option<String>,
    pub api_key: Option<String>,
    pub api_base: Option<String>,
    pub auth: Option<String>,
    #[serde(default)]
    pub models: Vec<ModelData>,
    pub patch: Option<RequestPatch>,
@@ -24,25 +29,44 @@ impl ClaudeClient {
    config_get_fn!(api_key, get_api_key);
    config_get_fn!(api_base, get_api_base);

    pub const PROMPTS: [PromptAction<'static>; 1] = [("api_key", "API Key", None, true)];
    create_oauth_supported_client_config!();
}

impl_client_trait!(
    ClaudeClient,
    (
        prepare_chat_completions,
        claude_chat_completions,
        claude_chat_completions_streaming
    ),
    (noop_prepare_embeddings, noop_embeddings),
    (noop_prepare_rerank, noop_rerank),
);
#[async_trait::async_trait]
impl Client for ClaudeClient {
    client_common_fns!();

    fn prepare_chat_completions(
    fn supports_oauth(&self) -> bool {
        self.config.auth.as_deref() == Some("oauth")
    }

    async fn chat_completions_inner(
        &self,
        client: &ReqwestClient,
        data: ChatCompletionsData,
    ) -> Result<ChatCompletionsOutput> {
        let request_data = prepare_chat_completions(self, client, data).await?;
        let builder = self.request_builder(client, request_data);
        claude_chat_completions(builder, self.model()).await
    }

    async fn chat_completions_streaming_inner(
        &self,
        client: &ReqwestClient,
        handler: &mut SseHandler,
        data: ChatCompletionsData,
    ) -> Result<()> {
        let request_data = prepare_chat_completions(self, client, data).await?;
        let builder = self.request_builder(client, request_data);
        claude_chat_completions_streaming(builder, handler, self.model()).await
    }
}

async fn prepare_chat_completions(
    self_: &ClaudeClient,
    client: &ReqwestClient,
    data: ChatCompletionsData,
) -> Result<RequestData> {
    let api_key = self_.get_api_key()?;
    let api_base = self_
        .get_api_base()
        .unwrap_or_else(|_| API_BASE.to_string());
@@ -53,11 +77,75 @@ fn prepare_chat_completions(
    let mut request_data = RequestData::new(url, body);

    request_data.header("anthropic-version", "2023-06-01");
    request_data.header("x-api-key", api_key);

    let uses_oauth = self_.config.auth.as_deref() == Some("oauth");

    if uses_oauth {
        let provider = ClaudeOAuthProvider;
        let ready = oauth::prepare_oauth_access_token(client, &provider, self_.name()).await?;
        if !ready {
            bail!(
                "OAuth configured but no tokens found for '{}'. Run: 'loki --authenticate {}' or '.authenticate' in the REPL",
                self_.name(),
                self_.name()
            );
        }
        let token = get_access_token(self_.name())?;
        request_data.bearer_auth(token);
        for (key, value) in provider.extra_request_headers() {
            request_data.header(key, value);
        }
        inject_oauth_system_prompt(&mut request_data.body);
    } else if let Ok(api_key) = self_.get_api_key() {
        request_data.header("x-api-key", api_key);
    } else {
        bail!(
            "No authentication configured for '{}'. Set `api_key` or use `auth: oauth` with `loki --authenticate {}`.",
            self_.name(),
            self_.name()
        );
    }

    Ok(request_data)
}

/// Anthropic requires OAuth-authenticated requests to include a Claude Code
/// system prompt prefix in order to consider a request body as "valid".
///
/// This behavior was discovered 2026-03-17.
///
/// So this function injects the Claude Code system prompt into the request
/// body to make it a valid request.
fn inject_oauth_system_prompt(body: &mut Value) {
    let prefix_block = json!({
        "type": "text",
        "text": CLAUDE_CODE_PREFIX,
    });

    match body.get("system") {
        Some(Value::String(existing)) => {
            let existing_block = json!({
                "type": "text",
                "text": existing,
            });
            body["system"] = json!([prefix_block, existing_block]);
        }
        Some(Value::Array(_)) => {
            if let Some(arr) = body["system"].as_array_mut() {
                let already_injected = arr
                    .iter()
                    .any(|block| block["text"].as_str() == Some(CLAUDE_CODE_PREFIX));
                if !already_injected {
                    arr.insert(0, prefix_block);
                }
            }
        }
        _ => {
            body["system"] = json!([prefix_block]);
        }
    }
}

pub async fn claude_chat_completions(
    builder: RequestBuilder,
    _model: &Model,

@@ -0,0 +1,43 @@
use super::oauth::OAuthProvider;

pub const BETA_HEADER: &str = "oauth-2025-04-20";

pub struct ClaudeOAuthProvider;

impl OAuthProvider for ClaudeOAuthProvider {
    fn provider_name(&self) -> &str {
        "claude"
    }

    fn client_id(&self) -> &str {
        "9d1c250a-e61b-44d9-88ed-5944d1962f5e"
    }

    fn authorize_url(&self) -> &str {
        "https://claude.ai/oauth/authorize"
    }

    fn token_url(&self) -> &str {
        "https://console.anthropic.com/v1/oauth/token"
    }

    fn redirect_uri(&self) -> &str {
        "https://console.anthropic.com/oauth/code/callback"
    }

    fn scopes(&self) -> &str {
        "org:create_api_key user:profile user:inference"
    }

    fn extra_authorize_params(&self) -> Vec<(&str, &str)> {
        vec![("code", "true")]
    }

    fn extra_token_headers(&self) -> Vec<(&str, &str)> {
        vec![("anthropic-beta", BETA_HEADER)]
    }

    fn extra_request_headers(&self) -> Vec<(&str, &str)> {
        vec![("anthropic-beta", BETA_HEADER)]
    }
}
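The three-way match in `inject_oauth_system_prompt` (string system prompt, existing block array, or absent) is easier to see outside Rust. Here is a hedged Python sketch of the same logic, using plain dicts in place of `serde_json::Value`:

```python
CLAUDE_CODE_PREFIX = "You are Claude Code, Anthropic's official CLI for Claude."

def inject_oauth_system_prompt(body: dict) -> None:
    """Ensure the Claude Code prefix is the first system block,
    whatever shape `system` already has (mirrors the Rust match)."""
    prefix_block = {"type": "text", "text": CLAUDE_CODE_PREFIX}
    system = body.get("system")
    if isinstance(system, str):
        # A plain string becomes [prefix, original-as-block].
        body["system"] = [prefix_block, {"type": "text", "text": system}]
    elif isinstance(system, list):
        # An existing block array gets the prefix prepended, once.
        if not any(b.get("text") == CLAUDE_CODE_PREFIX for b in system):
            system.insert(0, prefix_block)
    else:
        # No system prompt at all: the prefix becomes the whole array.
        body["system"] = [prefix_block]
```

The duplicate check makes the injection idempotent, matching the `already_injected` guard in the Rust version.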
@@ -24,7 +24,7 @@ impl CohereClient {
    config_get_fn!(api_key, get_api_key);
    config_get_fn!(api_base, get_api_base);

    pub const PROMPTS: [PromptAction<'static>; 1] = [("api_key", "API Key", None, true)];
    create_client_config!([("api_key", "API Key", None, true)]);
}

impl_client_trait!(

@@ -47,6 +47,10 @@ pub trait Client: Sync + Send {

    fn model(&self) -> &Model;

    fn supports_oauth(&self) -> bool {
        false
    }

    fn build_client(&self) -> Result<ReqwestClient> {
        let mut builder = ReqwestClient::builder();
        let extra = self.extra_config();
@@ -489,14 +493,6 @@ pub async fn call_chat_completions_streaming(
    }
}

pub fn noop_prepare_embeddings<T>(_client: &T, _data: &EmbeddingsData) -> Result<RequestData> {
    bail!("The client doesn't support embeddings api")
}

pub async fn noop_embeddings(_builder: RequestBuilder, _model: &Model) -> Result<EmbeddingsOutput> {
    bail!("The client doesn't support embeddings api")
}

pub fn noop_prepare_rerank<T>(_client: &T, _data: &RerankData) -> Result<RequestData> {
    bail!("The client doesn't support rerank api")
}
@@ -554,7 +550,7 @@ pub fn json_str_from_map<'a>(
    map.get(field_name).and_then(|v| v.as_str())
}

async fn set_client_models_config(client_config: &mut Value, client: &str) -> Result<String> {
pub async fn set_client_models_config(client_config: &mut Value, client: &str) -> Result<String> {
    if let Some(provider) = ALL_PROVIDER_MODELS.iter().find(|v| v.provider == client) {
        let models: Vec<String> = provider
            .models
@@ -1,10 +1,13 @@
use super::access_token::get_access_token;
use super::gemini_oauth::GeminiOAuthProvider;
use super::oauth;
use super::vertexai::*;
use super::*;

use anyhow::{Context, Result};
use reqwest::RequestBuilder;
use anyhow::{Context, Result, bail};
use reqwest::{Client as ReqwestClient, RequestBuilder};
use serde::Deserialize;
use serde_json::{json, Value};
use serde_json::{Value, json};

const API_BASE: &str = "https://generativelanguage.googleapis.com/v1beta";

@@ -13,6 +16,7 @@ pub struct GeminiConfig {
    pub name: Option<String>,
    pub api_key: Option<String>,
    pub api_base: Option<String>,
    pub auth: Option<String>,
    #[serde(default)]
    pub models: Vec<ModelData>,
    pub patch: Option<RequestPatch>,
@@ -23,25 +27,64 @@ impl GeminiClient {
    config_get_fn!(api_key, get_api_key);
    config_get_fn!(api_base, get_api_base);

    pub const PROMPTS: [PromptAction<'static>; 1] = [("api_key", "API Key", None, true)];
    create_oauth_supported_client_config!();
}

impl_client_trait!(
    GeminiClient,
    (
        prepare_chat_completions,
        gemini_chat_completions,
        gemini_chat_completions_streaming
    ),
    (prepare_embeddings, embeddings),
    (noop_prepare_rerank, noop_rerank),
);
#[async_trait::async_trait]
impl Client for GeminiClient {
    client_common_fns!();

    fn prepare_chat_completions(
    fn supports_oauth(&self) -> bool {
        self.config.auth.as_deref() == Some("oauth")
    }

    async fn chat_completions_inner(
        &self,
        client: &ReqwestClient,
        data: ChatCompletionsData,
    ) -> Result<ChatCompletionsOutput> {
        let request_data = prepare_chat_completions(self, client, data).await?;
        let builder = self.request_builder(client, request_data);
        gemini_chat_completions(builder, self.model()).await
    }

    async fn chat_completions_streaming_inner(
        &self,
        client: &ReqwestClient,
        handler: &mut SseHandler,
        data: ChatCompletionsData,
    ) -> Result<()> {
        let request_data = prepare_chat_completions(self, client, data).await?;
        let builder = self.request_builder(client, request_data);
        gemini_chat_completions_streaming(builder, handler, self.model()).await
    }

    async fn embeddings_inner(
        &self,
        client: &ReqwestClient,
        data: &EmbeddingsData,
    ) -> Result<EmbeddingsOutput> {
        let request_data = prepare_embeddings(self, client, data).await?;
        let builder = self.request_builder(client, request_data);
        embeddings(builder, self.model()).await
    }

    async fn rerank_inner(
        &self,
        client: &ReqwestClient,
        data: &RerankData,
    ) -> Result<RerankOutput> {
        let request_data = noop_prepare_rerank(self, data)?;
        let builder = self.request_builder(client, request_data);
        noop_rerank(builder, self.model()).await
    }
}

async fn prepare_chat_completions(
    self_: &GeminiClient,
    client: &ReqwestClient,
    data: ChatCompletionsData,
) -> Result<RequestData> {
    let api_key = self_.get_api_key()?;
    let api_base = self_
        .get_api_base()
        .unwrap_or_else(|_| API_BASE.to_string());
@@ -59,26 +102,61 @@ fn prepare_chat_completions(
    );

    let body = gemini_build_chat_completions_body(data, &self_.model)?;

    let mut request_data = RequestData::new(url, body);

    request_data.header("x-goog-api-key", api_key);
    let uses_oauth = self_.config.auth.as_deref() == Some("oauth");

    if uses_oauth {
        let provider = GeminiOAuthProvider;
        let ready = oauth::prepare_oauth_access_token(client, &provider, self_.name()).await?;
        if !ready {
            bail!(
                "OAuth configured but no tokens found for '{}'. Run: 'loki --authenticate {}' or '.authenticate' in the REPL",
                self_.name(),
                self_.name()
            );
        }
        let token = get_access_token(self_.name())?;
        request_data.bearer_auth(token);
    } else if let Ok(api_key) = self_.get_api_key() {
        request_data.header("x-goog-api-key", api_key);
    } else {
        bail!(
            "No authentication configured for '{}'. Set `api_key` or use `auth: oauth` with `loki --authenticate {}`.",
            self_.name(),
            self_.name()
        );
    }

    Ok(request_data)
}

fn prepare_embeddings(self_: &GeminiClient, data: &EmbeddingsData) -> Result<RequestData> {
    let api_key = self_.get_api_key()?;
async fn prepare_embeddings(
    self_: &GeminiClient,
    client: &ReqwestClient,
    data: &EmbeddingsData,
) -> Result<RequestData> {
    let api_base = self_
        .get_api_base()
        .unwrap_or_else(|_| API_BASE.to_string());

    let url = format!(
        "{}/models/{}:batchEmbedContents?key={}",
        api_base.trim_end_matches('/'),
        self_.model.real_name(),
        api_key
    );
    let uses_oauth = self_.config.auth.as_deref() == Some("oauth");

    let url = if uses_oauth {
        format!(
            "{}/models/{}:batchEmbedContents",
            api_base.trim_end_matches('/'),
            self_.model.real_name(),
        )
    } else {
        let api_key = self_.get_api_key()?;
        format!(
            "{}/models/{}:batchEmbedContents?key={}",
            api_base.trim_end_matches('/'),
            self_.model.real_name(),
            api_key
        )
    };

    let model_id = format!("models/{}", self_.model.real_name());

@@ -89,21 +167,28 @@ fn prepare_embeddings(self_: &GeminiClient, data: &EmbeddingsData) -> Result<Req
        json!({
            "model": model_id,
            "content": {
                "parts": [
                    {
                        "text": text
                    }
                ]
                "parts": [{ "text": text }]
            },
        })
    })
    .collect();

    let body = json!({
        "requests": requests,
    });
    let body = json!({ "requests": requests });
    let mut request_data = RequestData::new(url, body);

    let request_data = RequestData::new(url, body);
    if uses_oauth {
        let provider = GeminiOAuthProvider;
        let ready = oauth::prepare_oauth_access_token(client, &provider, self_.name()).await?;
        if !ready {
            bail!(
                "OAuth configured but no tokens found for '{}'. Run: 'loki --authenticate {}' or '.authenticate' in the REPL",
                self_.name(),
                self_.name()
            );
        }
        let token = get_access_token(self_.name())?;
        request_data.bearer_auth(token);
    }

    Ok(request_data)
}

@@ -0,0 +1,49 @@
use super::oauth::{OAuthProvider, TokenRequestFormat};

pub struct GeminiOAuthProvider;

const GEMINI_CLIENT_ID: &str =
    "50826443741-upqcebrs4gctqht1f08ku46qlbirkdsj.apps.googleusercontent.com";
const GEMINI_CLIENT_SECRET: &str = "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";

impl OAuthProvider for GeminiOAuthProvider {
    fn provider_name(&self) -> &str {
        "gemini"
    }

    fn client_id(&self) -> &str {
        GEMINI_CLIENT_ID
    }

    fn authorize_url(&self) -> &str {
        "https://accounts.google.com/o/oauth2/v2/auth"
    }

    fn token_url(&self) -> &str {
        "https://oauth2.googleapis.com/token"
    }

    fn redirect_uri(&self) -> &str {
        ""
    }

    fn scopes(&self) -> &str {
        "https://www.googleapis.com/auth/generative-language.peruserquota https://www.googleapis.com/auth/generative-language.retriever https://www.googleapis.com/auth/userinfo.email"
    }

    fn client_secret(&self) -> Option<&str> {
        Some(GEMINI_CLIENT_SECRET)
    }

    fn extra_authorize_params(&self) -> Vec<(&str, &str)> {
        vec![("access_type", "offline"), ("prompt", "consent")]
    }

    fn token_request_format(&self) -> TokenRequestFormat {
        TokenRequestFormat::FormUrlEncoded
    }

    fn uses_localhost_redirect(&self) -> bool {
        true
    }
}
@@ -90,7 +90,7 @@ macro_rules! register_client {
    pub async fn create_client_config(client: &str, vault: &$crate::vault::Vault) -> anyhow::Result<(String, serde_json::Value)> {
        $(
            if client == $client::NAME && client != $crate::client::OpenAICompatibleClient::NAME {
                return create_config(&$client::PROMPTS, $client::NAME, vault).await
                return $client::create_client_config(vault).await
            }
        )+
        if let Some(ret) = create_openai_compatible_client_config(client).await? {
@@ -218,6 +218,44 @@ macro_rules! impl_client_trait {
    };
}

#[macro_export]
macro_rules! create_client_config {
    ($prompts:expr) => {
        pub async fn create_client_config(
            vault: &$crate::vault::Vault,
        ) -> anyhow::Result<(String, serde_json::Value)> {
            $crate::client::create_config(&$prompts, Self::NAME, vault).await
        }
    };
}

#[macro_export]
macro_rules! create_oauth_supported_client_config {
    () => {
        pub async fn create_client_config(vault: &$crate::vault::Vault) -> anyhow::Result<(String, serde_json::Value)> {
            let mut config = serde_json::json!({ "type": Self::NAME });

            let auth_method = inquire::Select::new(
                "Authentication method:",
                vec!["API Key", "OAuth"],
            )
            .prompt()?;

            if auth_method == "API Key" {
                let env_name = format!("{}_API_KEY", Self::NAME).to_ascii_uppercase();
                vault.add_secret(&env_name)?;
                config["api_key"] = format!("{{{{{env_name}}}}}").into();
            } else {
                config["auth"] = "oauth".into();
            }

            let model = $crate::client::set_client_models_config(&mut config, Self::NAME).await?;
            let clients = json!(vec![config]);
            Ok((model, clients))
        }
    }
}

#[macro_export]
macro_rules! config_get_fn {
    ($field_name:ident, $fn_name:ident) => {

@@ -1,6 +1,9 @@
mod access_token;
mod claude_oauth;
mod common;
mod gemini_oauth;
mod message;
pub mod oauth;
#[macro_use]
mod macros;
mod model;

@@ -177,6 +177,10 @@ impl Model {
        self.data.max_output_tokens
    }

    pub fn supports_function_calling(&self) -> bool {
        self.data.supports_function_calling
    }

    pub fn no_stream(&self) -> bool {
        self.data.no_stream
    }

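For context, the client entry that `create_oauth_supported_client_config!` writes ends up looking roughly like the following (hypothetical sketch: field names follow the macro above, the client type and secret name are illustrative):

```yaml
clients:
  # OAuth branch: the auth method is recorded instead of a key
  - type: claude
    auth: oauth
  # API-key branch: the secret is stored in the vault and templated in
  - type: claude
    api_key: "{{CLAUDE_API_KEY}}"
```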
@@ -0,0 +1,429 @@
use super::ClientConfig;
use super::access_token::{is_valid_access_token, set_access_token};
use crate::config::Config;
use anyhow::{Result, anyhow, bail};
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use chrono::Utc;
use inquire::Text;
use reqwest::{Client as ReqwestClient, RequestBuilder};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sha2::{Digest, Sha256};
use std::collections::HashMap;
use std::fs;
use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;
use url::Url;
use uuid::Uuid;

pub enum TokenRequestFormat {
    Json,
    FormUrlEncoded,
}

pub trait OAuthProvider: Send + Sync {
    fn provider_name(&self) -> &str;
    fn client_id(&self) -> &str;
    fn authorize_url(&self) -> &str;
    fn token_url(&self) -> &str;
    fn redirect_uri(&self) -> &str;
    fn scopes(&self) -> &str;

    fn client_secret(&self) -> Option<&str> {
        None
    }

    fn extra_authorize_params(&self) -> Vec<(&str, &str)> {
        vec![]
    }

    fn token_request_format(&self) -> TokenRequestFormat {
        TokenRequestFormat::Json
    }

    fn uses_localhost_redirect(&self) -> bool {
        false
    }

    fn extra_token_headers(&self) -> Vec<(&str, &str)> {
        vec![]
    }

    fn extra_request_headers(&self) -> Vec<(&str, &str)> {
        vec![]
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OAuthTokens {
    pub access_token: String,
    pub refresh_token: String,
    pub expires_at: i64,
}
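The PKCE pieces of `run_oauth_flow` (a random verifier, its S256 challenge, both base64url-encoded without padding per RFC 7636) can be sketched in Python for comparison with the Rust code:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge,
    both base64url-encoded with padding stripped (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The server recomputes SHA-256 over the verifier at token exchange, so the challenge must always be derived from the encoded verifier string, exactly as the Rust code hashes `code_verifier.as_bytes()`.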
pub async fn run_oauth_flow(provider: &dyn OAuthProvider, client_name: &str) -> Result<()> {
    let random_bytes: [u8; 32] = rand::random::<[u8; 32]>();
    let code_verifier = URL_SAFE_NO_PAD.encode(random_bytes);

    let mut hasher = Sha256::new();
    hasher.update(code_verifier.as_bytes());
    let code_challenge = URL_SAFE_NO_PAD.encode(hasher.finalize());

    let state = Uuid::new_v4().to_string();

    let redirect_uri = if provider.uses_localhost_redirect() {
        let listener = TcpListener::bind("127.0.0.1:0")?;
        let port = listener.local_addr()?.port();
        let uri = format!("http://127.0.0.1:{port}/callback");
        drop(listener);
        uri
    } else {
        provider.redirect_uri().to_string()
    };

    let encoded_scopes = urlencoding::encode(provider.scopes());
    let encoded_redirect = urlencoding::encode(&redirect_uri);

    let mut authorize_url = format!(
        "{}?client_id={}&response_type=code&scope={}&redirect_uri={}&code_challenge={}&code_challenge_method=S256&state={}",
        provider.authorize_url(),
        provider.client_id(),
        encoded_scopes,
        encoded_redirect,
        code_challenge,
        state
    );

    for (key, value) in provider.extra_authorize_params() {
        authorize_url.push_str(&format!(
            "&{}={}",
            urlencoding::encode(key),
            urlencoding::encode(value)
        ));
    }

    println!(
        "\nOpen this URL to authenticate with {} (client '{}'):\n",
        provider.provider_name(),
        client_name
    );
    println!("  {authorize_url}\n");

    let _ = open::that(&authorize_url);

    let (code, returned_state) = if provider.uses_localhost_redirect() {
        listen_for_oauth_callback(&redirect_uri)?
    } else {
        let input = Text::new("Paste the authorization code:").prompt()?;
        let parts: Vec<&str> = input.splitn(2, '#').collect();
        if parts.len() != 2 {
            bail!("Invalid authorization code format. Expected format: <code>#<state>");
|
||||
}
|
||||
(parts[0].to_string(), parts[1].to_string())
|
||||
};
|
||||
|
||||
if returned_state != state {
|
||||
bail!(
|
||||
"OAuth state mismatch: expected '{state}', got '{returned_state}'. \
|
||||
This may indicate a CSRF attack or a stale authorization attempt."
|
||||
);
|
||||
}
|
||||
|
||||
let client = ReqwestClient::new();
|
||||
let request = build_token_request(
|
||||
&client,
|
||||
provider,
|
||||
&[
|
||||
("grant_type", "authorization_code"),
|
||||
("client_id", provider.client_id()),
|
||||
("code", &code),
|
||||
("code_verifier", &code_verifier),
|
||||
("redirect_uri", &redirect_uri),
|
||||
("state", &state),
|
||||
],
|
||||
);
|
||||
|
||||
let response: Value = request.send().await?.json().await?;
|
||||
|
||||
let access_token = response["access_token"]
|
||||
.as_str()
|
||||
.ok_or_else(|| anyhow!("Missing access_token in response: {response}"))?
|
||||
.to_string();
|
||||
let refresh_token = response["refresh_token"]
|
||||
.as_str()
|
||||
.ok_or_else(|| anyhow!("Missing refresh_token in response: {response}"))?
|
||||
.to_string();
|
||||
let expires_in = response["expires_in"]
|
||||
.as_i64()
|
||||
.ok_or_else(|| anyhow!("Missing expires_in in response: {response}"))?;
|
||||
|
||||
let expires_at = Utc::now().timestamp() + expires_in;
|
||||
|
||||
let tokens = OAuthTokens {
|
||||
access_token,
|
||||
refresh_token,
|
||||
expires_at,
|
||||
};
|
||||
|
||||
save_oauth_tokens(client_name, &tokens)?;
|
||||
|
||||
println!(
|
||||
"Successfully authenticated client '{}' with {} via OAuth. Tokens saved.",
|
||||
client_name,
|
||||
provider.provider_name()
|
||||
);
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn load_oauth_tokens(client_name: &str) -> Option<OAuthTokens> {
|
||||
let path = Config::token_file(client_name);
|
||||
let content = fs::read_to_string(path).ok()?;
|
||||
serde_json::from_str(&content).ok()
|
||||
}
|
||||
|
||||
fn save_oauth_tokens(client_name: &str, tokens: &OAuthTokens) -> Result<()> {
|
||||
let path = Config::token_file(client_name);
|
||||
if let Some(parent) = path.parent() {
|
||||
fs::create_dir_all(parent)?;
|
||||
}
|
||||
let json = serde_json::to_string_pretty(tokens)?;
|
||||
fs::write(path, json)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub async fn refresh_oauth_token(
|
||||
client: &ReqwestClient,
|
||||
provider: &impl OAuthProvider,
|
||||
client_name: &str,
|
||||
tokens: &OAuthTokens,
|
||||
) -> Result<OAuthTokens> {
|
||||
let request = build_token_request(
|
||||
client,
|
||||
provider,
|
||||
&[
|
||||
("grant_type", "refresh_token"),
|
||||
("client_id", provider.client_id()),
|
||||
("refresh_token", &tokens.refresh_token),
|
||||
],
|
||||
);
|
||||
|
||||
let response: Value = request.send().await?.json().await?;
|
||||
|
||||
let access_token = response["access_token"]
|
||||
.as_str()
|
||||
.ok_or_else(|| anyhow!("Missing access_token in refresh response: {response}"))?
|
||||
.to_string();
|
||||
let refresh_token = response["refresh_token"]
|
||||
.as_str()
|
||||
.map(|s| s.to_string())
|
||||
.unwrap_or_else(|| tokens.refresh_token.clone());
|
||||
let expires_in = response["expires_in"]
|
||||
.as_i64()
|
||||
.ok_or_else(|| anyhow!("Missing expires_in in refresh response: {response}"))?;
|
||||
|
||||
let expires_at = Utc::now().timestamp() + expires_in;
|
||||
|
||||
let new_tokens = OAuthTokens {
|
||||
access_token,
|
||||
refresh_token,
|
||||
expires_at,
|
||||
};
|
||||
|
||||
save_oauth_tokens(client_name, &new_tokens)?;
|
||||
|
||||
Ok(new_tokens)
|
||||
}
|
||||
|
||||
pub async fn prepare_oauth_access_token(
|
||||
client: &ReqwestClient,
|
||||
provider: &impl OAuthProvider,
|
||||
client_name: &str,
|
||||
) -> Result<bool> {
|
||||
if is_valid_access_token(client_name) {
|
||||
return Ok(true);
|
||||
}
|
||||
|
||||
let tokens = match load_oauth_tokens(client_name) {
|
||||
Some(t) => t,
|
||||
None => return Ok(false),
|
||||
};
|
||||
|
||||
let tokens = if Utc::now().timestamp() >= tokens.expires_at {
|
||||
refresh_oauth_token(client, provider, client_name, &tokens).await?
|
||||
} else {
|
||||
tokens
|
||||
};
|
||||
|
||||
set_access_token(client_name, tokens.access_token.clone(), tokens.expires_at);
|
||||
|
||||
Ok(true)
|
||||
}
|
||||
|
||||
fn build_token_request(
|
||||
client: &ReqwestClient,
|
||||
provider: &(impl OAuthProvider + ?Sized),
|
||||
params: &[(&str, &str)],
|
||||
) -> RequestBuilder {
|
||||
let mut request = match provider.token_request_format() {
|
||||
TokenRequestFormat::Json => {
|
||||
let body: serde_json::Map<String, Value> = params
|
||||
.iter()
|
||||
.map(|(k, v)| (k.to_string(), Value::String(v.to_string())))
|
||||
.collect();
|
||||
if let Some(secret) = provider.client_secret() {
|
||||
let mut body = body;
|
||||
body.insert(
|
||||
"client_secret".to_string(),
|
||||
Value::String(secret.to_string()),
|
||||
);
|
||||
client.post(provider.token_url()).json(&body)
|
||||
} else {
|
||||
client.post(provider.token_url()).json(&body)
|
||||
}
|
||||
}
|
||||
TokenRequestFormat::FormUrlEncoded => {
|
||||
let mut form: HashMap<String, String> = params
|
||||
.iter()
|
||||
.map(|(k, v)| (k.to_string(), v.to_string()))
|
||||
.collect();
|
||||
if let Some(secret) = provider.client_secret() {
|
||||
form.insert("client_secret".to_string(), secret.to_string());
|
||||
}
|
||||
client.post(provider.token_url()).form(&form)
|
||||
}
|
||||
};
|
||||
|
||||
for (key, value) in provider.extra_token_headers() {
|
||||
request = request.header(key, value);
|
||||
}
|
||||
|
||||
request
|
||||
}
|
||||
|
||||
fn listen_for_oauth_callback(redirect_uri: &str) -> Result<(String, String)> {
|
||||
let url: Url = redirect_uri.parse()?;
|
||||
let host = url.host_str().unwrap_or("127.0.0.1");
|
||||
let port = url
|
||||
.port()
|
||||
.ok_or_else(|| anyhow!("No port in redirect URI"))?;
|
||||
let path = url.path();
|
||||
|
||||
println!("Waiting for OAuth callback on {redirect_uri} ...\n");
|
||||
|
||||
let listener = TcpListener::bind(format!("{host}:{port}"))?;
|
||||
let (mut stream, _) = listener.accept()?;
|
||||
|
||||
let mut reader = BufReader::new(&stream);
|
||||
let mut request_line = String::new();
|
||||
reader.read_line(&mut request_line)?;
|
||||
|
||||
let request_path = request_line
|
||||
.split_whitespace()
|
||||
.nth(1)
|
||||
.ok_or_else(|| anyhow!("Malformed HTTP request from OAuth callback"))?;
|
||||
|
||||
let full_url = format!("http://{host}:{port}{request_path}");
|
||||
let parsed: Url = full_url.parse()?;
|
||||
|
||||
if !parsed.path().starts_with(path) {
|
||||
bail!("Unexpected callback path: {}", parsed.path());
|
||||
}
|
||||
|
||||
let code = parsed
|
||||
.query_pairs()
|
||||
.find(|(k, _)| k == "code")
|
||||
.map(|(_, v)| v.to_string())
|
||||
.ok_or_else(|| {
|
||||
let error = parsed
|
||||
.query_pairs()
|
||||
.find(|(k, _)| k == "error")
|
||||
.map(|(_, v)| v.to_string())
|
||||
.unwrap_or_else(|| "unknown".to_string());
|
||||
anyhow!("OAuth callback returned error: {error}")
|
||||
})?;
|
||||
|
||||
let returned_state = parsed
|
||||
.query_pairs()
|
||||
.find(|(k, _)| k == "state")
|
||||
.map(|(_, v)| v.to_string())
|
||||
.ok_or_else(|| anyhow!("Missing state parameter in OAuth callback"))?;
|
||||
|
||||
let response_body = "<html><body><h2>Authentication successful!</h2><p>You can close this tab and return to your terminal.</p></body></html>";
|
||||
let response = format!(
|
||||
"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
|
||||
response_body.len(),
|
||||
response_body
|
||||
);
|
||||
stream.write_all(response.as_bytes())?;
|
||||
|
||||
Ok((code, returned_state))
|
||||
}
|
||||
|
||||
pub fn get_oauth_provider(provider_type: &str) -> Option<Box<dyn OAuthProvider>> {
|
||||
match provider_type {
|
||||
"claude" => Some(Box::new(super::claude_oauth::ClaudeOAuthProvider)),
|
||||
"gemini" => Some(Box::new(super::gemini_oauth::GeminiOAuthProvider)),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn resolve_provider_type(client_name: &str, clients: &[ClientConfig]) -> Option<&'static str> {
|
||||
for client_config in clients {
|
||||
let (config_name, provider_type, auth) = client_config_info(client_config);
|
||||
if config_name == client_name {
|
||||
if auth == Some("oauth") && get_oauth_provider(provider_type).is_some() {
|
||||
return Some(provider_type);
|
||||
}
|
||||
return None;
|
||||
}
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
pub fn list_oauth_capable_clients(clients: &[ClientConfig]) -> Vec<String> {
|
||||
clients
|
||||
.iter()
|
||||
.filter_map(|client_config| {
|
||||
let (name, provider_type, auth) = client_config_info(client_config);
|
||||
if auth == Some("oauth") && get_oauth_provider(provider_type).is_some() {
|
||||
Some(name.to_string())
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn client_config_info(client_config: &ClientConfig) -> (&str, &'static str, Option<&str>) {
|
||||
match client_config {
|
||||
ClientConfig::ClaudeConfig(c) => (
|
||||
c.name.as_deref().unwrap_or("claude"),
|
||||
"claude",
|
||||
c.auth.as_deref(),
|
||||
),
|
||||
ClientConfig::OpenAIConfig(c) => (c.name.as_deref().unwrap_or("openai"), "openai", None),
|
||||
ClientConfig::OpenAICompatibleConfig(c) => (
|
||||
c.name.as_deref().unwrap_or("openai-compatible"),
|
||||
"openai-compatible",
|
||||
None,
|
||||
),
|
||||
ClientConfig::GeminiConfig(c) => (
|
||||
c.name.as_deref().unwrap_or("gemini"),
|
||||
"gemini",
|
||||
c.auth.as_deref(),
|
||||
),
|
||||
ClientConfig::CohereConfig(c) => (c.name.as_deref().unwrap_or("cohere"), "cohere", None),
|
||||
ClientConfig::AzureOpenAIConfig(c) => (
|
||||
c.name.as_deref().unwrap_or("azure-openai"),
|
||||
"azure-openai",
|
||||
None,
|
||||
),
|
||||
ClientConfig::VertexAIConfig(c) => {
|
||||
(c.name.as_deref().unwrap_or("vertexai"), "vertexai", None)
|
||||
}
|
||||
ClientConfig::BedrockConfig(c) => (c.name.as_deref().unwrap_or("bedrock"), "bedrock", None),
|
||||
ClientConfig::Unknown => ("unknown", "unknown", None),
|
||||
}
|
||||
}
|
||||
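For orientation, a provider implementation against the `OAuthProvider` trait above would look roughly like the following. This is a minimal, self-contained sketch: the trait is trimmed to its required methods so the snippet compiles on its own, and `ExampleProvider` with all of its URLs and ids is hypothetical, not part of this diff.

```rust
// Trimmed-down copy of the OAuthProvider trait from the diff above,
// reduced to the required methods so this sketch is self-contained.
// The defaulted hooks (client_secret, extra params, token format, ...)
// are omitted here.
trait OAuthProvider {
    fn provider_name(&self) -> &str;
    fn client_id(&self) -> &str;
    fn authorize_url(&self) -> &str;
    fn token_url(&self) -> &str;
    fn redirect_uri(&self) -> &str;
    fn scopes(&self) -> &str;
}

// Hypothetical provider: every URL, id, and scope below is a placeholder.
struct ExampleProvider;

impl OAuthProvider for ExampleProvider {
    fn provider_name(&self) -> &str { "example" }
    fn client_id(&self) -> &str { "example-client-id" }
    fn authorize_url(&self) -> &str { "https://auth.example.com/oauth/authorize" }
    fn token_url(&self) -> &str { "https://auth.example.com/oauth/token" }
    fn redirect_uri(&self) -> &str { "https://auth.example.com/callback" }
    fn scopes(&self) -> &str { "profile offline_access" }
}

fn main() {
    let p = ExampleProvider;
    assert_eq!(p.provider_name(), "example");
    assert!(p.authorize_url().starts_with("https://"));
    assert_eq!(p.scopes(), "profile offline_access");
}
```

In the real module such a provider would be registered in `get_oauth_provider` and driven through `run_oauth_flow`.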
@@ -2,10 +2,10 @@ use super::*;

use crate::utils::strip_think_tag;

-use anyhow::{bail, Context, Result};
+use anyhow::{Context, Result, bail};
use reqwest::RequestBuilder;
use serde::Deserialize;
-use serde_json::{json, Value};
+use serde_json::{Value, json};

const API_BASE: &str = "https://api.openai.com/v1";

@@ -25,7 +25,7 @@ impl OpenAIClient {
    config_get_fn!(api_key, get_api_key);
    config_get_fn!(api_base, get_api_base);

-    pub const PROMPTS: [PromptAction<'static>; 1] = [("api_key", "API Key", None, true)];
+    create_client_config!([("api_key", "API Key", None, true)]);
}

impl_client_trait!(

@@ -114,7 +114,9 @@ pub async fn openai_chat_completions_streaming(
        function_arguments = String::from("{}");
    }
    let arguments: Value = function_arguments.parse().with_context(|| {
-        format!("Tool call '{function_name}' has non-JSON arguments '{function_arguments}'")
+        format!(
+            "Tool call '{function_name}' has non-JSON arguments '{function_arguments}'"
+        )
    })?;
    handler.tool_call(ToolCall::new(
        function_name.clone(),
@@ -21,7 +21,7 @@ impl OpenAICompatibleClient {
    config_get_fn!(api_base, get_api_base);
    config_get_fn!(api_key, get_api_key);

-    pub const PROMPTS: [PromptAction<'static>; 0] = [];
+    create_client_config!([]);
}

impl_client_trait!(
@@ -342,7 +342,7 @@ mod tests {

    use bytes::Bytes;
    use futures_util::stream;
-    use rand::Rng;
+    use rand::random_range;
    use serde_json::json;

    #[test]

@@ -392,10 +392,9 @@ mod tests {
    }

    fn split_chunks(text: &str) -> Vec<Vec<u8>> {
-        let mut rng = rand::rng();
        let len = text.len();
-        let cut1 = rng.random_range(1..len - 1);
-        let cut2 = rng.random_range(cut1 + 1..len);
+        let cut1 = random_range(1..len - 1);
+        let cut2 = random_range(cut1 + 1..len);
        let chunk1 = text.as_bytes()[..cut1].to_vec();
        let chunk2 = text.as_bytes()[cut1..cut2].to_vec();
        let chunk3 = text.as_bytes()[cut2..].to_vec();
+24 -16

@@ -3,11 +3,11 @@ use super::claude::*;
use super::openai::*;
use super::*;

-use anyhow::{anyhow, bail, Context, Result};
+use anyhow::{Context, Result, anyhow, bail};
use chrono::{Duration, Utc};
use reqwest::{Client as ReqwestClient, RequestBuilder};
use serde::Deserialize;
-use serde_json::{json, Value};
+use serde_json::{Value, json};
use std::{path::PathBuf, str::FromStr};

#[derive(Debug, Clone, Deserialize, Default)]

@@ -26,10 +26,10 @@ impl VertexAIClient {
    config_get_fn!(project_id, get_project_id);
    config_get_fn!(location, get_location);

-    pub const PROMPTS: [PromptAction<'static>; 2] = [
+    create_client_config!([
        ("project_id", "Project ID", None, false),
        ("location", "Location", None, false),
-    ];
+    ]);
}

#[async_trait::async_trait]

@@ -99,9 +99,13 @@ fn prepare_chat_completions(
    let access_token = get_access_token(self_.name())?;

    let base_url = if location == "global" {
-        format!("https://aiplatform.googleapis.com/v1/projects/{project_id}/locations/global/publishers")
+        format!(
+            "https://aiplatform.googleapis.com/v1/projects/{project_id}/locations/global/publishers"
+        )
    } else {
-        format!("https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/publishers")
+        format!(
+            "https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/publishers"
+        )
    };

    let model_name = self_.model.real_name();

@@ -158,9 +162,13 @@ fn prepare_embeddings(self_: &VertexAIClient, data: &EmbeddingsData) -> Result<R
    let access_token = get_access_token(self_.name())?;

    let base_url = if location == "global" {
-        format!("https://aiplatform.googleapis.com/v1/projects/{project_id}/locations/global/publishers")
+        format!(
+            "https://aiplatform.googleapis.com/v1/projects/{project_id}/locations/global/publishers"
+        )
    } else {
-        format!("https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/publishers")
+        format!(
+            "https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/publishers"
+        )
    };
    let url = format!(
        "{base_url}/google/models/{}:predict",

@@ -220,12 +228,12 @@ pub async fn gemini_chat_completions_streaming(
        part["functionCall"]["args"].as_object(),
    ) {
        let thought_signature = part["thoughtSignature"]
            .as_str()
            .or_else(|| part["thought_signature"].as_str())
            .map(|s| s.to_string());
        handler.tool_call(
            ToolCall::new(name.to_string(), json!(args), None)
                .with_thought_signature(thought_signature),
        )?;
    }
}

@@ -288,12 +296,12 @@ fn gemini_extract_chat_completions_text(data: &Value) -> Result<ChatCompletionsO
        part["functionCall"]["args"].as_object(),
    ) {
        let thought_signature = part["thoughtSignature"]
            .as_str()
            .or_else(|| part["thought_signature"].as_str())
            .map(|s| s.to_string());
        tool_calls.push(
            ToolCall::new(name.to_string(), json!(args), None)
                .with_thought_signature(thought_signature),
        );
    }
}
@@ -476,6 +476,11 @@ impl Agent {
        self.todo_list.mark_done(id)
    }

+    pub fn clear_todo_list(&mut self) {
+        self.todo_list.clear();
+        self.reset_continuation();
+    }
+
    pub fn continuation_prompt(&self) -> String {
        self.config.continuation_prompt.clone().unwrap_or_else(|| {
            formatdoc! {"
+10 -5

@@ -239,12 +239,17 @@ impl Input {
    patch_messages(&mut messages, model);
    model.guard_max_input_tokens(&messages)?;
    let (temperature, top_p) = (self.role().temperature(), self.role().top_p());
-    let functions = self.config.read().select_functions(self.role());
-    if let Some(vec) = &functions {
-        for def in vec {
-            debug!("Function definition: {:?}", def.name);
+    let functions = if model.supports_function_calling() {
+        let fns = self.config.read().select_functions(self.role());
+        if let Some(vec) = &fns {
+            for def in vec {
+                debug!("Function definition: {:?}", def.name);
+            }
        }
-    }
+        fns
+    } else {
+        None
+    };
    Ok(ChatCompletionsData {
        messages,
        temperature,
+15 -1

@@ -428,6 +428,14 @@ impl Config {
        base_dir.join(env!("CARGO_CRATE_NAME"))
    }

+    pub fn oauth_tokens_path() -> PathBuf {
+        Self::cache_path().join("oauth")
+    }
+
+    pub fn token_file(client_name: &str) -> PathBuf {
+        Self::oauth_tokens_path().join(format!("{client_name}_oauth_tokens.json"))
+    }
+
    pub fn log_path() -> PathBuf {
        Config::cache_path().join(format!("{}.log", env!("CARGO_CRATE_NAME")))
    }

@@ -576,7 +584,7 @@ impl Config {
    }

    pub fn agent_functions_file(name: &str) -> Result<PathBuf> {
-        let allowed = ["tools.sh", "tools.py", "tools.js"];
+        let allowed = ["tools.sh", "tools.py", "tools.ts", "tools.js"];

        for entry in read_dir(Self::agent_data_dir(name))? {
            let entry = entry?;

@@ -1834,6 +1842,12 @@ impl Config {
        bail!("Already in an agent, please run '.exit agent' first to exit the current agent.");
    }
    let agent = Agent::init(config, agent_name, abort_signal.clone()).await?;
+    if !agent.model().supports_function_calling() {
+        eprintln!(
+            "Warning: The model '{}' does not support function calling. Agent tools (including todo, spawning, and user interaction) will not be available.",
+            agent.model().id()
+        );
+    }
    let session = session_name.map(|v| v.to_string()).or_else(|| {
        if config.read().macro_flag {
            None
@@ -7,10 +7,12 @@ pub(in crate::config) const DEFAULT_TODO_INSTRUCTIONS: &str = indoc! {"
    - `todo__add`: Add individual tasks. Add all planned steps before starting work.
    - `todo__done`: Mark a task done by id. Call this immediately after completing each step.
    - `todo__list`: Show the current todo list.
+   - `todo__clear`: Clear the entire todo list and reset the goal. Use when the user cancels or changes direction.

    RULES:
    - Always create a todo list before starting work.
    - Mark each task done as soon as you finish it; do not batch.
+   - If the user cancels the current task or changes direction, call `todo__clear` immediately.
    - If you stop with incomplete tasks, the system will automatically prompt you to continue."
};
@@ -67,6 +67,11 @@ impl TodoList {
        self.todos.is_empty()
    }

+    pub fn clear(&mut self) {
+        self.goal.clear();
+        self.todos.clear();
+    }
+
    pub fn render_for_model(&self) -> String {
        let mut lines = Vec::new();
        if !self.goal.is_empty() {

@@ -149,6 +154,21 @@ mod tests {
        assert!(rendered.contains("○ 2. Map"));
    }

+    #[test]
+    fn test_clear() {
+        let mut list = TodoList::new("Some goal");
+        list.add("Task 1");
+        list.add("Task 2");
+        list.mark_done(1);
+        assert!(!list.is_empty());
+
+        list.clear();
+        assert!(list.is_empty());
+        assert!(list.goal.is_empty());
+        assert_eq!(list.todos.len(), 0);
+        assert!(!list.has_incomplete());
+    }
+
    #[test]
    fn test_serialization_roundtrip() {
        let mut list = TodoList::new("Roundtrip");
+193 -55

@@ -12,7 +12,7 @@ use crate::mcp::{
    MCP_DESCRIBE_META_FUNCTION_NAME_PREFIX, MCP_INVOKE_META_FUNCTION_NAME_PREFIX,
    MCP_SEARCH_META_FUNCTION_NAME_PREFIX,
};
-use crate::parsers::{bash, python};
+use crate::parsers::{bash, python, typescript};
use anyhow::{Context, Result, anyhow, bail};
use indexmap::IndexMap;
use indoc::formatdoc;

@@ -22,7 +22,7 @@ use serde_json::{Value, json};
use std::collections::VecDeque;
use std::ffi::OsStr;
use std::fs::File;
-use std::io::Write;
+use std::io::{Read, Write};
use std::{
    collections::{HashMap, HashSet},
    env, fs, io,

@@ -53,6 +53,7 @@ enum BinaryType<'a> {
enum Language {
    Bash,
    Python,
+    TypeScript,
    Unsupported,
}

@@ -61,6 +62,7 @@ impl From<&String> for Language {
    match s.to_lowercase().as_str() {
        "sh" => Language::Bash,
        "py" => Language::Python,
+        "ts" => Language::TypeScript,
        _ => Language::Unsupported,
    }
}

@@ -72,6 +74,7 @@ impl Language {
    match self {
        Language::Bash => "bash",
        Language::Python => "python",
+        Language::TypeScript => "npx tsx",
        Language::Unsupported => "sh",
    }
}

@@ -80,11 +83,32 @@ impl Language {
        match self {
            Language::Bash => "sh",
            Language::Python => "py",
+            Language::TypeScript => "ts",
            _ => "sh",
        }
    }
}

+fn extract_shebang_runtime(path: &Path) -> Option<String> {
+    let file = File::open(path).ok()?;
+    let reader = io::BufReader::new(file);
+    let first_line = io::BufRead::lines(reader).next()?.ok()?;
+    let shebang = first_line.strip_prefix("#!")?;
+    let cmd = shebang.trim();
+    if cmd.is_empty() {
+        return None;
+    }
+    if let Some(after_env) = cmd.strip_prefix("/usr/bin/env ") {
+        let runtime = after_env.trim();
+        if runtime.is_empty() {
+            return None;
+        }
+        Some(runtime.to_string())
+    } else {
+        Some(cmd.to_string())
+    }
+}
+
pub async fn eval_tool_calls(
    config: &GlobalConfig,
    mut calls: Vec<ToolCall>,

@@ -473,6 +497,11 @@ impl Functions {
            file_name,
            tools_file_path.parent(),
        ),
+        Language::TypeScript => typescript::generate_typescript_declarations(
+            tool_file,
+            file_name,
+            tools_file_path.parent(),
+        ),
        Language::Unsupported => {
            bail!("Unsupported tool file extension: {}", language.as_ref())
        }

@@ -513,7 +542,14 @@ impl Functions {
            bail!("Unsupported tool file extension: {}", language.as_ref());
        }

-        Self::build_binaries(binary_name, language, BinaryType::Tool(agent_name))?;
+        let tool_path = Config::global_tools_dir().join(tool);
+        let custom_runtime = extract_shebang_runtime(&tool_path);
+        Self::build_binaries(
+            binary_name,
+            language,
+            BinaryType::Tool(agent_name),
+            custom_runtime.as_deref(),
+        )?;
    }

    Ok(())

@@ -554,8 +590,9 @@ impl Functions {
    }

    fn build_agent_tool_binaries(name: &str) -> Result<()> {
+        let tools_file = Config::agent_functions_file(name)?;
        let language = Language::from(
-            &Config::agent_functions_file(name)?
+            &tools_file
                .extension()
                .and_then(OsStr::to_str)
                .map(|s| s.to_lowercase())

@@ -568,7 +605,8 @@ impl Functions {
            bail!("Unsupported tool file extension: {}", language.as_ref());
        }

-        Self::build_binaries(name, language, BinaryType::Agent)
+        let custom_runtime = extract_shebang_runtime(&tools_file);
+        Self::build_binaries(name, language, BinaryType::Agent, custom_runtime.as_deref())
    }

    #[cfg(windows)]
@@ -576,6 +614,7 @@ impl Functions {
        binary_name: &str,
        language: Language,
        binary_type: BinaryType,
+        custom_runtime: Option<&str>,
    ) -> Result<()> {
        use native::runtime;
        let (binary_file, binary_script_file) = match binary_type {

@@ -613,6 +652,7 @@ impl Functions {
            )
        })?;
        let content_template = unsafe { std::str::from_utf8_unchecked(&embedded_file.data) };
+        let to_script_path = |p: &str| -> String { p.replace('\\', "/") };
        let content = match binary_type {
            BinaryType::Tool(None) => {
                let root_dir = Config::functions_dir();

@@ -622,8 +662,8 @@ impl Functions {
                );
                content_template
                    .replace("{function_name}", binary_name)
-                    .replace("{root_dir}", &root_dir.to_string_lossy())
-                    .replace("{tool_path}", &tool_path)
+                    .replace("{root_dir}", &to_script_path(&root_dir.to_string_lossy()))
+                    .replace("{tool_path}", &to_script_path(&tool_path))
            }
            BinaryType::Tool(Some(agent_name)) => {
                let root_dir = Config::agent_data_dir(agent_name);

@@ -633,16 +673,19 @@ impl Functions {
                );
                content_template
                    .replace("{function_name}", binary_name)
-                    .replace("{root_dir}", &root_dir.to_string_lossy())
-                    .replace("{tool_path}", &tool_path)
+                    .replace("{root_dir}", &to_script_path(&root_dir.to_string_lossy()))
+                    .replace("{tool_path}", &to_script_path(&tool_path))
            }
            BinaryType::Agent => content_template
                .replace("{agent_name}", binary_name)
-                .replace("{config_dir}", &Config::config_dir().to_string_lossy()),
+                .replace(
+                    "{config_dir}",
+                    &to_script_path(&Config::config_dir().to_string_lossy()),
+                ),
        }
        .replace(
            "{prompt_utils_file}",
-            &Config::bash_prompt_utils_file().to_string_lossy(),
+            &to_script_path(&Config::bash_prompt_utils_file().to_string_lossy()),
        );
        if binary_script_file.exists() {
            fs::remove_file(&binary_script_file)?;

@@ -656,40 +699,48 @@ impl Functions {
            binary_file.display()
        );

-        let run = match language {
-            Language::Bash => {
-                let shell = runtime::bash_path().ok_or_else(|| anyhow!("Shell not found"))?;
-                format!("{shell} --noprofile --norc")
-            }
-            Language::Python if Path::new(".venv").exists() => {
-                let executable_path = env::current_dir()?
-                    .join(".venv")
-                    .join("Scripts")
-                    .join("activate.bat");
-                let canonicalized_path = fs::canonicalize(&executable_path)?;
-                format!(
-                    "call \"{}\" && {}",
-                    canonicalized_path.to_string_lossy(),
-                    language.to_cmd()
-                )
-            }
-            Language::Python => {
-                let executable_path = which::which("python")
-                    .or_else(|_| which::which("python3"))
-                    .map_err(|_| anyhow!("Python executable not found in PATH"))?;
-                let canonicalized_path = fs::canonicalize(&executable_path)?;
-                canonicalized_path.to_string_lossy().into_owned()
-            }
-            _ => bail!("Unsupported language: {}", language.as_ref()),
+        let run = if let Some(rt) = custom_runtime {
+            rt.to_string()
+        } else {
+            match language {
+                Language::Bash => {
+                    let shell = runtime::bash_path().ok_or_else(|| anyhow!("Shell not found"))?;
+                    format!("{shell} --noprofile --norc")
+                }
+                Language::Python if Path::new(".venv").exists() => {
+                    let executable_path = env::current_dir()?
+                        .join(".venv")
+                        .join("Scripts")
+                        .join("activate.bat");
+                    let canonicalized_path = dunce::canonicalize(&executable_path)?;
+                    format!(
+                        "call \"{}\" && {}",
+                        canonicalized_path.to_string_lossy(),
+                        language.to_cmd()
+                    )
+                }
+                Language::Python => {
+                    let executable_path = which::which("python")
+                        .or_else(|_| which::which("python3"))
+                        .map_err(|_| anyhow!("Python executable not found in PATH"))?;
+                    let canonicalized_path = dunce::canonicalize(&executable_path)?;
+                    canonicalized_path.to_string_lossy().into_owned()
+                }
+                Language::TypeScript => {
+                    let npx_path = which::which("npx").map_err(|_| {
+                        anyhow!("npx executable not found in PATH (required for TypeScript tools)")
+                    })?;
+                    let canonicalized_path = dunce::canonicalize(&npx_path)?;
+                    format!("{} tsx", canonicalized_path.to_string_lossy())
+                }
+                _ => bail!("Unsupported language: {}", language.as_ref()),
+            }
        };
        let bin_dir = binary_file
            .parent()
-            .expect("Failed to get parent directory of binary file")
-            .canonicalize()?
-            .to_string_lossy()
-            .into_owned();
-        let wrapper_binary = binary_script_file
-            .canonicalize()?
+            .expect("Failed to get parent directory of binary file");
+        let canonical_bin_dir = dunce::canonicalize(bin_dir)?.to_string_lossy().into_owned();
+        let wrapper_binary = dunce::canonicalize(&binary_script_file)?
            .to_string_lossy()
            .into_owned();
        let content = formatdoc!(

@@ -697,7 +748,7 @@ impl Functions {
            @echo off
            setlocal

-            set "bin_dir={bin_dir}"
+            set "bin_dir={canonical_bin_dir}"

            {run} "{wrapper_binary}" %*"#,
        );

@@ -713,6 +764,7 @@ impl Functions {
        binary_name: &str,
        language: Language,
        binary_type: BinaryType,
+        custom_runtime: Option<&str>,
    ) -> Result<()> {
        use std::os::unix::prelude::PermissionsExt;

@@ -741,7 +793,7 @@ impl Functions {
            )
        })?;
        let content_template = unsafe { std::str::from_utf8_unchecked(&embedded_file.data) };
-        let content = match binary_type {
+        let mut content = match binary_type {
            BinaryType::Tool(None) => {
                let root_dir = Config::functions_dir();
                let tool_path = format!(

@@ -772,13 +824,44 @@ impl Functions {
            "{prompt_utils_file}",
            &Config::bash_prompt_utils_file().to_string_lossy(),
        );
-        if binary_file.exists() {
-            fs::remove_file(&binary_file)?;
-        }
-        let mut file = File::create(&binary_file)?;
-        file.write_all(content.as_bytes())?;
-
-        fs::set_permissions(&binary_file, fs::Permissions::from_mode(0o755))?;
+        if let Some(rt) = custom_runtime
+            && let Some(newline_pos) = content.find('\n')
+        {
+            content = format!("#!/usr/bin/env {rt}{}", &content[newline_pos..]);
+        }
+
+        if language == Language::TypeScript {
+            let bin_dir = binary_file
+                .parent()
+                .expect("Failed to get parent directory of binary file");
+            let script_file = bin_dir.join(format!("run-{binary_name}.ts"));
+            if script_file.exists() {
+                fs::remove_file(&script_file)?;
+            }
+            let mut sf = File::create(&script_file)?;
+            sf.write_all(content.as_bytes())?;
+            fs::set_permissions(&script_file, fs::Permissions::from_mode(0o755))?;
+
+            let ts_runtime = custom_runtime.unwrap_or("tsx");
+            let wrapper = format!(
+                "#!/bin/sh\nexec {ts_runtime} \"{}\" \"$@\"\n",
+                script_file.display()
+            );
+            if binary_file.exists() {
+                fs::remove_file(&binary_file)?;
+            }
+            let mut wf = File::create(&binary_file)?;
+            wf.write_all(wrapper.as_bytes())?;
+            fs::set_permissions(&binary_file, fs::Permissions::from_mode(0o755))?;
+        } else {
+            if binary_file.exists() {
+                fs::remove_file(&binary_file)?;
+            }
+            let mut file = File::create(&binary_file)?;
|
||||
file.write_all(content.as_bytes())?;
|
||||
fs::set_permissions(&binary_file, fs::Permissions::from_mode(0o755))?;
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
@@ -1064,7 +1147,7 @@ impl ToolCall {
|
||||
function_name.clone(),
|
||||
function_name,
|
||||
vec![],
|
||||
Default::default(),
|
||||
agent.variable_envs(),
|
||||
))
|
||||
}
|
||||
}
|
||||
@@ -1117,18 +1200,73 @@ pub fn run_llm_function(
|
||||
#[cfg(windows)]
|
||||
let cmd_name = polyfill_cmd_name(&cmd_name, &bin_dirs);
|
||||
|
||||
let output = Command::new(&cmd_name)
|
||||
#[cfg(windows)]
|
||||
let cmd_args = {
|
||||
let mut args = cmd_args;
|
||||
if let Some(json_data) = args.pop() {
|
||||
let tool_data_file = temp_file("-tool-data-", ".json");
|
||||
fs::write(&tool_data_file, &json_data)?;
|
||||
envs.insert(
|
||||
"LLM_TOOL_DATA_FILE".into(),
|
||||
tool_data_file.display().to_string(),
|
||||
);
|
||||
}
|
||||
args
|
||||
};
|
||||
|
||||
envs.insert("CLICOLOR_FORCE".into(), "1".into());
|
||||
envs.insert("FORCE_COLOR".into(), "1".into());
|
||||
|
||||
let mut child = Command::new(&cmd_name)
|
||||
.args(&cmd_args)
|
||||
.envs(envs)
|
||||
.stdout(Stdio::inherit())
|
||||
.stdout(Stdio::piped())
|
||||
.stderr(Stdio::piped())
|
||||
.spawn()
|
||||
.and_then(|child| child.wait_with_output())
|
||||
.map_err(|err| anyhow!("Unable to run {command_name}, {err}"))?;
|
||||
|
||||
let exit_code = output.status.code().unwrap_or_default();
|
||||
let stdout = child.stdout.take().expect("Failed to capture stdout");
|
||||
let mut stderr = child.stderr.take().expect("Failed to capture stderr");
|
||||
|
||||
let stdout_thread = std::thread::spawn(move || {
|
||||
let mut buffer = [0; 1024];
|
||||
let mut reader = stdout;
|
||||
let mut out = io::stdout();
|
||||
while let Ok(n) = reader.read(&mut buffer) {
|
||||
if n == 0 {
|
||||
break;
|
||||
}
|
||||
let chunk = &buffer[0..n];
|
||||
let mut last_pos = 0;
|
||||
for (i, &byte) in chunk.iter().enumerate() {
|
||||
if byte == b'\n' {
|
||||
let _ = out.write_all(&chunk[last_pos..i]);
|
||||
let _ = out.write_all(b"\r\n");
|
||||
last_pos = i + 1;
|
||||
}
|
||||
}
|
||||
if last_pos < n {
|
||||
let _ = out.write_all(&chunk[last_pos..n]);
|
||||
}
|
||||
let _ = out.flush();
|
||||
}
|
||||
});
|
||||
|
||||
let stderr_thread = std::thread::spawn(move || {
|
||||
let mut buf = Vec::new();
|
||||
let _ = stderr.read_to_end(&mut buf);
|
||||
buf
|
||||
});
|
||||
|
||||
let status = child
|
||||
.wait()
|
||||
.map_err(|err| anyhow!("Unable to run {command_name}, {err}"))?;
|
||||
let _ = stdout_thread.join();
|
||||
let stderr_bytes = stderr_thread.join().unwrap_or_default();
|
||||
|
||||
let exit_code = status.code().unwrap_or_default();
|
||||
if exit_code != 0 {
|
||||
let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string();
|
||||
let stderr = String::from_utf8_lossy(&stderr_bytes).trim().to_string();
|
||||
if !stderr.is_empty() {
|
||||
eprintln!("{stderr}");
|
||||
}
|
||||
|
||||
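The stdout thread in this hunk rewrites bare `\n` to `\r\n` chunk by chunk while streaming child output. A minimal, dependency-free sketch of that rewriting logic (returning the bytes instead of writing directly to `io::stdout()`, so it can be tested in isolation; the function name is ours, not the repo's):

```rust
// Sketch of the per-chunk LF -> CRLF normalization used by the streaming
// stdout thread. The real code writes the pieces straight to io::stdout().
fn rewrite_lf_to_crlf(chunk: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(chunk.len() + 8);
    let mut last_pos = 0;
    for (i, &byte) in chunk.iter().enumerate() {
        if byte == b'\n' {
            out.extend_from_slice(&chunk[last_pos..i]); // bytes before the newline
            out.extend_from_slice(b"\r\n");             // normalized line ending
            last_pos = i + 1;
        }
    }
    if last_pos < chunk.len() {
        out.extend_from_slice(&chunk[last_pos..]); // trailing bytes, no newline yet
    }
    out
}

fn main() {
    let rewritten = rewrite_lf_to_crlf(b"line one\nline two");
    assert_eq!(rewritten, b"line one\r\nline two".to_vec());
}
```

Note the same quirk as the original: a chunk that already contains `\r\n` comes out as `\r\r\n`, since the `\r` before the `\n` is copied verbatim.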
@@ -76,6 +76,16 @@ pub fn todo_function_declarations() -> Vec<FunctionDeclaration> {
        },
        agent: false,
    },
    FunctionDeclaration {
        name: format!("{TODO_FUNCTION_PREFIX}clear"),
        description: "Clear the entire todo list and reset the goal. Use when the current task has been canceled or invalidated.".to_string(),
        parameters: JsonSchema {
            type_value: Some("object".to_string()),
            properties: Some(IndexMap::new()),
            ..Default::default()
        },
        agent: false,
    },
]
}

@@ -156,6 +166,17 @@ pub fn handle_todo_tool(config: &GlobalConfig, cmd_name: &str, args: &Value) ->
            None => bail!("No active agent"),
        }
    }
    "clear" => {
        let mut cfg = config.write();
        let agent = cfg.agent.as_mut();
        match agent {
            Some(agent) => {
                agent.clear_todo_list();
                Ok(json!({"status": "ok", "message": "Todo list cleared"}))
            }
            None => bail!("No active agent"),
        }
    }
    _ => bail!("Unknown todo action: {action}"),
}
}
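The new `"clear"` arm follows the same dispatch shape as the other todo actions: an active agent is required, the action mutates agent state, and unknown actions fail. A hypothetical, dependency-free sketch of that shape (the `TodoList` type and `handle_action` helper are ours, standing in for the agent and `handle_todo_tool`):

```rust
// Hypothetical stand-in for the agent's todo state.
#[derive(Default)]
struct TodoList {
    items: Vec<String>,
}

// Mirrors the dispatch in handle_todo_tool: require an agent, then act.
fn handle_action(agent: &mut Option<TodoList>, action: &str) -> Result<&'static str, String> {
    match (agent.as_mut(), action) {
        (Some(list), "clear") => {
            list.items.clear(); // like agent.clear_todo_list()
            Ok("Todo list cleared")
        }
        (None, _) => Err("No active agent".into()),
        (_, other) => Err(format!("Unknown todo action: {other}")),
    }
}

fn main() {
    let mut agent = Some(TodoList { items: vec!["step 1".into()] });
    assert_eq!(handle_action(&mut agent, "clear"), Ok("Todo list cleared"));
}
```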
@@ -12,6 +12,7 @@ use tokio::sync::oneshot;
pub const USER_FUNCTION_PREFIX: &str = "user__";

const DEFAULT_ESCALATION_TIMEOUT_SECS: u64 = 300;
const CUSTOM_MULTI_CHOICE_ANSWER_OPTION: &str = "Other (custom)";

pub fn user_interaction_function_declarations() -> Vec<FunctionDeclaration> {
    vec![

@@ -151,9 +152,14 @@ fn handle_direct_ask(args: &Value) -> Result<Value> {
        .get("question")
        .and_then(Value::as_str)
        .ok_or_else(|| anyhow!("'question' is required"))?;
    let options = parse_options(args)?;
    let mut options = parse_options(args)?;
    options.push(CUSTOM_MULTI_CHOICE_ANSWER_OPTION.to_string());

    let answer = Select::new(question, options).prompt()?;
    let mut answer = Select::new(question, options).prompt()?;

    if answer == CUSTOM_MULTI_CHOICE_ANSWER_OPTION {
        answer = Text::new("Custom response:").prompt()?
    }

    Ok(json!({ "answer": answer }))
}

@@ -175,7 +181,7 @@ fn handle_direct_input(args: &Value) -> Result<Value> {
        .and_then(Value::as_str)
        .ok_or_else(|| anyhow!("'question' is required"))?;

    let answer = Text::new(question).prompt()?;
    let answer = Text::new(&format!("{question}\nYour answer: ")).prompt()?;

    Ok(json!({ "answer": answer }))
}
+51 -4
@@ -16,28 +16,30 @@ mod vault;
extern crate log;

use crate::client::{
    ModelType, call_chat_completions, call_chat_completions_streaming, list_models,
    ModelType, call_chat_completions, call_chat_completions_streaming, list_models, oauth,
};
use crate::config::{
    Agent, CODE_ROLE, Config, EXPLAIN_SHELL_ROLE, GlobalConfig, Input, SHELL_ROLE,
    TEMP_SESSION_NAME, WorkingMode, ensure_parent_exists, list_agents, load_env_file,
    macro_execute,
};
use crate::render::render_error;
use crate::render::{prompt_theme, render_error};
use crate::repl::Repl;
use crate::utils::*;

use crate::cli::Cli;
use crate::vault::Vault;
use anyhow::{Result, bail};
use anyhow::{Result, anyhow, bail};
use clap::{CommandFactory, Parser};
use clap_complete::CompleteEnv;
use inquire::Text;
use client::ClientConfig;
use inquire::{Select, Text, set_global_render_config};
use log::LevelFilter;
use log4rs::append::console::ConsoleAppender;
use log4rs::append::file::FileAppender;
use log4rs::config::{Appender, Logger, Root};
use log4rs::encode::pattern::PatternEncoder;
use oauth::OAuthProvider;
use parking_lot::RwLock;
use std::path::PathBuf;
use std::{env, mem, process, sync::Arc};

@@ -81,6 +83,13 @@ async fn main() -> Result<()> {

    let log_path = setup_logger()?;

    if let Some(client_arg) = &cli.authenticate {
        let config = Config::init_bare()?;
        let (client_name, provider) = resolve_oauth_client(client_arg.as_deref(), &config.clients)?;
        oauth::run_oauth_flow(&*provider, &client_name).await?;
        return Ok(());
    }

    if vault_flags {
        return Vault::handle_vault_flags(cli, Config::init_bare()?);
    }

@@ -97,6 +106,14 @@ async fn main() -> Result<()> {
        )
        .await?,
    ));

    {
        let cfg = config.read();
        if cfg.highlight {
            set_global_render_config(prompt_theme(cfg.render_options()?)?)
        }
    }

    if let Err(err) = run(config, cli, text, abort_signal).await {
        render_error(err);
        process::exit(1);

@@ -504,3 +521,33 @@ fn init_console_logger(
        .build(Root::builder().appender("console").build(root_log_level))
        .unwrap()
}

fn resolve_oauth_client(
    explicit: Option<&str>,
    clients: &[ClientConfig],
) -> Result<(String, Box<dyn OAuthProvider>)> {
    if let Some(name) = explicit {
        let provider_type = oauth::resolve_provider_type(name, clients)
            .ok_or_else(|| anyhow!("Client '{name}' not found or doesn't support OAuth"))?;
        let provider = oauth::get_oauth_provider(provider_type).unwrap();
        return Ok((name.to_string(), provider));
    }

    let candidates = oauth::list_oauth_capable_clients(clients);
    match candidates.len() {
        0 => bail!("No OAuth-capable clients configured."),
        1 => {
            let name = &candidates[0];
            let provider_type = oauth::resolve_provider_type(name, clients).unwrap();
            let provider = oauth::get_oauth_provider(provider_type).unwrap();
            Ok((name.clone(), provider))
        }
        _ => {
            let choice =
                Select::new("Select a client to authenticate:", candidates.clone()).prompt()?;
            let provider_type = oauth::resolve_provider_type(&choice, clients).unwrap();
            let provider = oauth::get_oauth_provider(provider_type).unwrap();
            Ok((choice, provider))
        }
    }
}
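`resolve_oauth_client` applies a three-way fallback when no client is named explicitly: fail on zero candidates, auto-select a single candidate, and prompt when there are several. A standalone sketch of just that selection policy (the `Resolution` enum and `resolve` function are ours; the real code resolves providers and calls `inquire::Select` in the multi-candidate branch):

```rust
// Outcome of the candidate-selection policy in resolve_oauth_client.
#[derive(Debug, PartialEq)]
enum Resolution {
    None,                // zero candidates: the real code bails
    Single(String),      // one candidate: use it without prompting
    Prompt(Vec<String>), // several: the real code prompts with Select
}

fn resolve(candidates: Vec<String>) -> Resolution {
    match candidates.len() {
        0 => Resolution::None,
        1 => Resolution::Single(candidates.into_iter().next().unwrap()),
        _ => Resolution::Prompt(candidates),
    }
}

fn main() {
    assert_eq!(resolve(vec![]), Resolution::None);
    assert_eq!(
        resolve(vec!["claude".into()]),
        Resolution::Single("claude".into())
    );
}
```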
+76 -51
@@ -158,27 +158,31 @@ impl McpRegistry {
}

pub async fn reinit(
    registry: McpRegistry,
    mut registry: McpRegistry,
    enabled_mcp_servers: Option<String>,
    abort_signal: AbortSignal,
) -> Result<Self> {
    debug!("Reinitializing MCP registry");
    debug!("Stopping all MCP servers");
    let mut new_registry = abortable_run_with_spinner(
        registry.stop_all_servers(),
        "Stopping MCP servers",

    let desired_ids = registry.resolve_server_ids(enabled_mcp_servers.clone());
    let desired_set: HashSet<String> = desired_ids.iter().cloned().collect();

    debug!("Stopping unused MCP servers");
    abortable_run_with_spinner(
        registry.stop_unused_servers(&desired_set),
        "Stopping unused MCP servers",
        abort_signal.clone(),
    )
    .await?;

    abortable_run_with_spinner(
        new_registry.start_select_mcp_servers(enabled_mcp_servers),
        registry.start_select_mcp_servers(enabled_mcp_servers),
        "Loading MCP servers",
        abort_signal,
    )
    .await?;

    Ok(new_registry)
    Ok(registry)
}

async fn start_select_mcp_servers(

@@ -192,43 +196,30 @@ impl McpRegistry {
        return Ok(());
    }

    if let Some(servers) = enabled_mcp_servers {
        debug!("Starting selected MCP servers: {:?}", servers);
        let config = self
            .config
            .as_ref()
            .with_context(|| "MCP Config not defined. Cannot start servers")?;
        let mcp_servers = config.mcp_servers.clone();
    let desired_ids = self.resolve_server_ids(enabled_mcp_servers);
    let ids_to_start: Vec<String> = desired_ids
        .into_iter()
        .filter(|id| !self.servers.contains_key(id))
        .collect();

        let enabled_servers: HashSet<String> =
            servers.split(',').map(|s| s.trim().to_string()).collect();
        let server_ids: Vec<String> = if servers == "all" {
            mcp_servers.into_keys().collect()
        } else {
            mcp_servers
                .into_keys()
                .filter(|id| enabled_servers.contains(id))
                .collect()
        };
    if ids_to_start.is_empty() {
        return Ok(());
    }

        let results: Vec<(String, Arc<_>, ServerCatalog)> = stream::iter(
            server_ids
                .into_iter()
                .map(|id| async { self.start_server(id).await }),
        )
        .buffer_unordered(num_cpus::get())
        .try_collect()
        .await?;
    debug!("Starting selected MCP servers: {:?}", ids_to_start);

        self.servers = results
            .clone()
    let results: Vec<(String, Arc<_>, ServerCatalog)> = stream::iter(
        ids_to_start
            .into_iter()
            .map(|(id, server, _)| (id, server))
            .collect();
        self.catalogs = results
            .into_iter()
            .map(|(id, _, catalog)| (id, catalog))
            .collect();
            .map(|id| async { self.start_server(id).await }),
    )
    .buffer_unordered(num_cpus::get())
    .try_collect()
    .await?;

    for (id, server, catalog) in results {
        self.servers.insert(id.clone(), server);
        self.catalogs.insert(id, catalog);
    }

    Ok(())

@@ -309,19 +300,53 @@ impl McpRegistry {
    Ok((id.to_string(), service, catalog))
}

pub async fn stop_all_servers(mut self) -> Result<Self> {
    for (id, server) in self.servers {
        Arc::try_unwrap(server)
            .map_err(|_| anyhow!("Failed to unwrap Arc for MCP server: {id}"))?
            .cancel()
            .await
            .with_context(|| format!("Failed to stop MCP server: {id}"))?;
        info!("Stopped MCP server: {id}");
fn resolve_server_ids(&self, enabled_mcp_servers: Option<String>) -> Vec<String> {
    if let Some(config) = &self.config
        && let Some(servers) = enabled_mcp_servers
    {
        if servers == "all" {
            config.mcp_servers.keys().cloned().collect()
        } else {
            let enabled_servers: HashSet<String> =
                servers.split(',').map(|s| s.trim().to_string()).collect();
            config
                .mcp_servers
                .keys()
                .filter(|id| enabled_servers.contains(*id))
                .cloned()
                .collect()
        }
    } else {
        vec![]
    }
}

pub async fn stop_unused_servers(&mut self, keep_ids: &HashSet<String>) -> Result<()> {
    let mut ids_to_remove = Vec::new();
    for (id, _) in self.servers.iter() {
        if !keep_ids.contains(id) {
            ids_to_remove.push(id.clone());
        }
    }

    self.servers = HashMap::new();

    Ok(self)
    for id in ids_to_remove {
        if let Some(server) = self.servers.remove(&id) {
            match Arc::try_unwrap(server) {
                Ok(server_inner) => {
                    server_inner
                        .cancel()
                        .await
                        .with_context(|| format!("Failed to stop MCP server: {id}"))?;
                    info!("Stopped MCP server: {id}");
                }
                Err(_) => {
                    info!("Detaching from MCP server: {id} (still in use)");
                }
            }
            self.catalogs.remove(&id);
        }
    }
    Ok(())
}

pub fn list_started_servers(&self) -> Vec<String> {
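The new `resolve_server_ids` centralizes the selection rule used in both `reinit` and `start_select_mcp_servers`: `"all"` selects every configured server, otherwise a comma-separated list is trimmed and intersected with the configured ids. A standalone sketch of that rule over plain slices (the function signature here is simplified from the method in the diff):

```rust
use std::collections::HashSet;

// Sketch of the resolve_server_ids selection rule over plain string slices.
fn resolve_server_ids(configured: &[&str], enabled: Option<&str>) -> Vec<String> {
    match enabled {
        // "all" keeps every configured server id.
        Some("all") => configured.iter().map(|s| s.to_string()).collect(),
        // Otherwise: trim each comma-separated entry and intersect.
        Some(list) => {
            let wanted: HashSet<String> =
                list.split(',').map(|s| s.trim().to_string()).collect();
            configured
                .iter()
                .filter(|id| wanted.contains(**id))
                .map(|s| s.to_string())
                .collect()
        }
        // No selection means no servers.
        None => vec![],
    }
}

fn main() {
    assert_eq!(resolve_server_ids(&["fs", "web"], Some("all")).len(), 2);
}
```

Filtering the *configured* ids (rather than iterating the user's list) preserves configuration order and silently drops unknown names, matching the diff's behavior.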
@@ -0,0 +1,236 @@
use crate::function::{FunctionDeclaration, JsonSchema};
use anyhow::{Context, Result, anyhow, bail};
use indexmap::IndexMap;
use serde_json::Value;
use tree_sitter::Node;

#[derive(Debug)]
pub(crate) struct Param {
    pub name: String,
    pub ty_hint: String,
    pub required: bool,
    pub default: Option<Value>,
    pub doc_type: Option<String>,
    pub doc_desc: Option<String>,
}

pub(crate) trait ScriptedLanguage {
    fn ts_language(&self) -> tree_sitter::Language;

    fn lang_name(&self) -> &str;

    fn find_functions<'a>(&self, root: Node<'a>, src: &str) -> Vec<(Node<'a>, Node<'a>)>;

    fn function_name<'a>(&self, func_node: Node<'a>, src: &'a str) -> Result<&'a str>;

    fn extract_description(
        &self,
        wrapper_node: Node<'_>,
        func_node: Node<'_>,
        src: &str,
    ) -> Option<String>;

    fn extract_params(
        &self,
        func_node: Node<'_>,
        src: &str,
        description: &str,
    ) -> Result<Vec<Param>>;
}

pub(crate) fn build_param(
    name: &str,
    mut ty: String,
    mut required: bool,
    default: Option<Value>,
) -> Param {
    if ty.ends_with('?') {
        ty.pop();
        required = false;
    }

    Param {
        name: name.to_string(),
        ty_hint: ty,
        required,
        default,
        doc_type: None,
        doc_desc: None,
    }
}

pub(crate) fn build_parameters_schema(params: &[Param], _description: &str) -> JsonSchema {
    let mut props: IndexMap<String, JsonSchema> = IndexMap::new();
    let mut req: Vec<String> = Vec::new();

    for p in params {
        let name = p.name.replace('-', "_");
        let mut schema = JsonSchema::default();

        let ty = if !p.ty_hint.is_empty() {
            p.ty_hint.as_str()
        } else if let Some(t) = &p.doc_type {
            t.as_str()
        } else {
            "str"
        };

        if let Some(d) = &p.doc_desc
            && !d.is_empty()
        {
            schema.description = Some(d.clone());
        }

        apply_type_to_schema(ty, &mut schema);

        if p.default.is_none() && p.required {
            req.push(name.clone());
        }

        props.insert(name, schema);
    }

    JsonSchema {
        type_value: Some("object".into()),
        description: None,
        properties: Some(props),
        items: None,
        any_of: None,
        enum_value: None,
        default: None,
        required: if req.is_empty() { None } else { Some(req) },
    }
}

pub(crate) fn apply_type_to_schema(ty: &str, s: &mut JsonSchema) {
    let t = ty.trim_end_matches('?');
    if let Some(rest) = t.strip_prefix("list[") {
        s.type_value = Some("array".into());
        let inner = rest.trim_end_matches(']');
        let mut item = JsonSchema::default();

        apply_type_to_schema(inner, &mut item);

        if item.type_value.is_none() {
            item.type_value = Some("string".into());
        }
        s.items = Some(Box::new(item));

        return;
    }

    if let Some(rest) = t.strip_prefix("literal:") {
        s.type_value = Some("string".into());
        let vals = rest
            .split('|')
            .map(|x| x.trim().trim_matches('"').trim_matches('\'').to_string())
            .collect::<Vec<_>>();
        if !vals.is_empty() {
            s.enum_value = Some(vals);
        }
        return;
    }

    s.type_value = Some(
        match t {
            "bool" => "boolean",
            "int" => "integer",
            "float" => "number",
            "str" | "any" | "" => "string",
            _ => "string",
        }
        .into(),
    );
}

pub(crate) fn underscore(s: &str) -> String {
    s.chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() {
                c.to_ascii_lowercase()
            } else {
                '_'
            }
        })
        .collect::<String>()
        .split('_')
        .filter(|t| !t.is_empty())
        .collect::<Vec<_>>()
        .join("_")
}

pub(crate) fn node_text<'a>(node: Node<'_>, src: &'a str) -> Result<&'a str> {
    node.utf8_text(src.as_bytes())
        .map_err(|err| anyhow!("invalid utf-8 in source: {err}"))
}

pub(crate) fn named_child(node: Node<'_>, index: usize) -> Option<Node<'_>> {
    let mut cursor = node.walk();
    node.named_children(&mut cursor).nth(index)
}

pub(crate) fn generate_declarations<L: ScriptedLanguage>(
    lang: &L,
    src: &str,
    file_name: &str,
    is_tool: bool,
) -> Result<Vec<FunctionDeclaration>> {
    let mut parser = tree_sitter::Parser::new();
    let language = lang.ts_language();
    parser.set_language(&language).with_context(|| {
        format!(
            "failed to initialize {} tree-sitter parser",
            lang.lang_name()
        )
    })?;

    let tree = parser
        .parse(src.as_bytes(), None)
        .ok_or_else(|| anyhow!("failed to parse {}: {file_name}", lang.lang_name()))?;

    if tree.root_node().has_error() {
        bail!(
            "failed to parse {}: syntax error in {file_name}",
            lang.lang_name()
        );
    }

    let mut out = Vec::new();
    for (wrapper, func) in lang.find_functions(tree.root_node(), src) {
        let func_name = lang.function_name(func, src)?;

        if func_name.starts_with('_') && func_name != "_instructions" {
            continue;
        }
        if is_tool && func_name != "run" {
            continue;
        }

        let description = lang
            .extract_description(wrapper, func, src)
            .unwrap_or_default();
        let params = lang
            .extract_params(func, src, &description)
            .with_context(|| format!("in function '{func_name}' in {file_name}"))?;
        let schema = build_parameters_schema(&params, &description);

        let name = if is_tool && func_name == "run" {
            underscore(file_name)
        } else {
            underscore(func_name)
        };

        let desc_trim = description.trim().to_string();
        if desc_trim.is_empty() {
            bail!("Missing or empty description on function: {func_name}");
        }

        out.push(FunctionDeclaration {
            name,
            description: desc_trim,
            parameters: schema,
            agent: !is_tool,
        });
    }
    Ok(out)
}
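The `underscore` helper above normalizes an arbitrary file or function name into a declaration name: lowercase the ASCII alphanumerics, map everything else to `_`, then collapse runs of underscores. Copied here verbatim as a standalone, testable function:

```rust
// Verbatim copy of the `underscore` normalizer from parsers/common.rs.
fn underscore(s: &str) -> String {
    s.chars()
        .map(|c| {
            // Keep alphanumerics (lowercased); everything else becomes '_'.
            if c.is_ascii_alphanumeric() {
                c.to_ascii_lowercase()
            } else {
                '_'
            }
        })
        .collect::<String>()
        // Splitting on '_' and rejoining collapses runs and trims the edges.
        .split('_')
        .filter(|t| !t.is_empty())
        .collect::<Vec<_>>()
        .join("_")
}

fn main() {
    assert_eq!(underscore("My Tool.ts"), "my_tool_ts");
    assert_eq!(underscore("--weird--name"), "weird_name");
}
```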
@@ -1,2 +1,4 @@
pub(crate) mod bash;
pub(crate) mod common;
pub(crate) mod python;
pub(crate) mod typescript;

+680 -334
File diff suppressed because it is too large
@@ -0,0 +1,789 @@
use crate::function::FunctionDeclaration;
use crate::parsers::common::{self, Param, ScriptedLanguage};
use anyhow::{Context, Result, anyhow, bail};
use indexmap::IndexMap;
use serde_json::Value;
use std::fs::File;
use std::io::Read;
use std::path::Path;
use tree_sitter::Node;

pub(crate) struct TypeScriptLanguage;

impl ScriptedLanguage for TypeScriptLanguage {
    fn ts_language(&self) -> tree_sitter::Language {
        tree_sitter_typescript::LANGUAGE_TYPESCRIPT.into()
    }

    fn lang_name(&self) -> &str {
        "typescript"
    }

    fn find_functions<'a>(&self, root: Node<'a>, _src: &str) -> Vec<(Node<'a>, Node<'a>)> {
        let mut cursor = root.walk();
        root.named_children(&mut cursor)
            .filter_map(|stmt| match stmt.kind() {
                "export_statement" => unwrap_exported_function(stmt).map(|fd| (stmt, fd)),
                _ => None,
            })
            .collect()
    }

    fn function_name<'a>(&self, func_node: Node<'a>, src: &'a str) -> Result<&'a str> {
        let name_node = func_node
            .child_by_field_name("name")
            .ok_or_else(|| anyhow!("function_declaration missing name"))?;
        common::node_text(name_node, src)
    }

    fn extract_description(
        &self,
        wrapper_node: Node<'_>,
        func_node: Node<'_>,
        src: &str,
    ) -> Option<String> {
        let text = jsdoc_text(wrapper_node, func_node, src)?;
        let lines = clean_jsdoc_lines(text);
        let mut description = Vec::new();
        for line in lines {
            if line.starts_with('@') {
                break;
            }
            description.push(line);
        }

        let description = description.join("\n").trim().to_string();
        (!description.is_empty()).then_some(description)
    }

    fn extract_params(
        &self,
        func_node: Node<'_>,
        src: &str,
        _description: &str,
    ) -> Result<Vec<Param>> {
        let parameters = func_node
            .child_by_field_name("parameters")
            .ok_or_else(|| anyhow!("function_declaration missing parameters"))?;
        let mut out = Vec::new();
        let mut cursor = parameters.walk();

        for param in parameters.named_children(&mut cursor) {
            match param.kind() {
                "required_parameter" | "optional_parameter" => {
                    let name = parameter_name(param, src)?;
                    let ty = get_arg_type(param.child_by_field_name("type"), src)?;
                    let required = param.kind() == "required_parameter"
                        && param.child_by_field_name("value").is_none();
                    let default = param.child_by_field_name("value").map(|_| Value::Null);
                    out.push(common::build_param(name, ty, required, default));
                }
                "rest_parameter" => {
                    let line = param.start_position().row + 1;
                    bail!("line {line}: rest parameters (...) are not supported in tool functions")
                }
                "object_pattern" => {
                    let line = param.start_position().row + 1;
                    bail!(
                        "line {line}: destructured object parameters (e.g. '{{ a, b }}: {{ a: string }}') \
                        are not supported in tool functions. Use flat parameters instead (e.g. 'a: string, b: string')."
                    )
                }
                other => {
                    let line = param.start_position().row + 1;
                    bail!("line {line}: unsupported parameter type: {other}")
                }
            }
        }

        let wrapper = match func_node.parent() {
            Some(parent) if parent.kind() == "export_statement" => parent,
            _ => func_node,
        };
        if let Some(doc) = jsdoc_text(wrapper, func_node, src) {
            let meta = parse_jsdoc_params(doc);
            for p in &mut out {
                if let Some(desc) = meta.get(&p.name)
                    && !desc.is_empty()
                {
                    p.doc_desc = Some(desc.clone());
                }
            }
        }

        Ok(out)
    }
}

pub fn generate_typescript_declarations(
    mut tool_file: File,
    file_name: &str,
    parent: Option<&Path>,
) -> Result<Vec<FunctionDeclaration>> {
    let mut src = String::new();
    tool_file
        .read_to_string(&mut src)
        .with_context(|| format!("Failed to load script at '{tool_file:?}'"))?;

    let is_tool = parent
        .and_then(|p| p.file_name())
        .is_some_and(|n| n == "tools");

    common::generate_declarations(&TypeScriptLanguage, &src, file_name, is_tool)
}

fn unwrap_exported_function(node: Node<'_>) -> Option<Node<'_>> {
    node.child_by_field_name("declaration")
        .filter(|child| child.kind() == "function_declaration")
        .or_else(|| {
            let mut cursor = node.walk();
            node.named_children(&mut cursor)
                .find(|child| child.kind() == "function_declaration")
        })
}

fn jsdoc_text<'a>(wrapper_node: Node<'_>, func_node: Node<'_>, src: &'a str) -> Option<&'a str> {
    wrapper_node
        .prev_named_sibling()
        .or_else(|| func_node.prev_named_sibling())
        .filter(|node| node.kind() == "comment")
        .and_then(|node| common::node_text(node, src).ok())
        .filter(|text| text.trim_start().starts_with("/**"))
}

fn clean_jsdoc_lines(doc: &str) -> Vec<String> {
    let trimmed = doc.trim();
    let inner = trimmed
        .strip_prefix("/**")
        .unwrap_or(trimmed)
        .strip_suffix("*/")
        .unwrap_or(trimmed);

    inner
        .lines()
        .map(|line| {
            let line = line.trim();
            let line = line.strip_prefix('*').unwrap_or(line).trim_start();
            line.to_string()
        })
        .collect()
}

fn parse_jsdoc_params(doc: &str) -> IndexMap<String, String> {
    let mut out = IndexMap::new();

    for line in clean_jsdoc_lines(doc) {
        let Some(rest) = line.strip_prefix("@param") else {
            continue;
        };

        let mut rest = rest.trim();
        if rest.starts_with('{')
            && let Some(end) = rest.find('}')
        {
            rest = rest[end + 1..].trim_start();
        }

        if rest.is_empty() {
            continue;
        }

        let name_end = rest.find(char::is_whitespace).unwrap_or(rest.len());
        let mut name = rest[..name_end].trim();
        if let Some(stripped) = name.strip_suffix('?') {
            name = stripped;
        }

        if name.is_empty() {
            continue;
        }

        let mut desc = rest[name_end..].trim();
        if let Some(stripped) = desc.strip_prefix('-') {
            desc = stripped.trim_start();
        }

        out.insert(name.to_string(), desc.to_string());
    }

    out
}
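The per-line logic of `parse_jsdoc_params` is: strip the `@param` tag, drop an optional `{type}` group, split the remainder into a name (with any trailing `?` removed) and a description (with a leading `-` removed). A dependency-free sketch of that single-line step, returning a pair instead of inserting into an `IndexMap` (the function name here is ours):

```rust
// Sketch of parse_jsdoc_params' handling of one cleaned JSDoc line.
fn parse_param_line(line: &str) -> Option<(String, String)> {
    let rest = line.strip_prefix("@param")?;
    let mut rest = rest.trim();
    // Optional "{type}" group before the parameter name.
    if rest.starts_with('{') {
        if let Some(end) = rest.find('}') {
            rest = rest[end + 1..].trim_start();
        }
    }
    if rest.is_empty() {
        return None;
    }
    // Name runs until the first whitespace; a trailing '?' marks optionality.
    let name_end = rest.find(char::is_whitespace).unwrap_or(rest.len());
    let mut name = rest[..name_end].trim();
    if let Some(stripped) = name.strip_suffix('?') {
        name = stripped;
    }
    if name.is_empty() {
        return None;
    }
    // Description is the remainder, minus an optional leading '-'.
    let mut desc = rest[name_end..].trim();
    if let Some(stripped) = desc.strip_prefix('-') {
        desc = stripped.trim_start();
    }
    Some((name.to_string(), desc.to_string()))
}

fn main() {
    let parsed = parse_param_line("@param {string} name - the user name");
    assert_eq!(parsed, Some(("name".to_string(), "the user name".to_string())));
}
```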
fn parameter_name<'a>(node: Node<'_>, src: &'a str) -> Result<&'a str> {
|
||||
if let Some(name) = node.child_by_field_name("name") {
|
||||
return match name.kind() {
|
||||
"identifier" => common::node_text(name, src),
|
||||
"rest_pattern" => {
|
||||
let line = node.start_position().row + 1;
|
||||
bail!("line {line}: rest parameters (...) are not supported in tool functions")
|
||||
}
|
||||
"object_pattern" | "array_pattern" => {
|
||||
let line = node.start_position().row + 1;
|
||||
bail!(
|
||||
"line {line}: destructured parameters are not supported in tool functions. \
|
||||
Use flat parameters instead (e.g. 'a: string, b: string')."
|
||||
)
|
||||
}
|
||||
other => {
|
||||
let line = node.start_position().row + 1;
|
||||
bail!("line {line}: unsupported parameter type: {other}")
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
let pattern = node
|
||||
.child_by_field_name("pattern")
|
||||
.ok_or_else(|| anyhow!("parameter missing pattern"))?;
|
||||
|
||||
match pattern.kind() {
|
||||
"identifier" => common::node_text(pattern, src),
|
||||
"rest_pattern" => {
|
||||
let line = node.start_position().row + 1;
|
||||
bail!("line {line}: rest parameters (...) are not supported in tool functions")
|
||||
}
|
||||
"object_pattern" | "array_pattern" => {
|
||||
let line = node.start_position().row + 1;
|
||||
bail!(
|
||||
"line {line}: destructured parameters are not supported in tool functions. \
|
||||
Use flat parameters instead (e.g. 'a: string, b: string')."
|
||||
)
|
||||
}
|
||||
other => {
|
||||
let line = node.start_position().row + 1;
|
||||
bail!("line {line}: unsupported parameter type: {other}")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn get_arg_type(annotation: Option<Node<'_>>, src: &str) -> Result<String> {
    let Some(annotation) = annotation else {
        return Ok(String::new());
    };

    match annotation.kind() {
        "type_annotation" | "type" => get_arg_type(common::named_child(annotation, 0), src),
        "predefined_type" => Ok(match common::node_text(annotation, src)? {
            "string" => "str",
            "number" => "float",
            "boolean" => "bool",
            "any" | "unknown" | "void" | "undefined" => "any",
            _ => "any",
        }
        .to_string()),
        "type_identifier" | "nested_type_identifier" => Ok("any".to_string()),
        "generic_type" => {
            let name = annotation
                .child_by_field_name("name")
                .ok_or_else(|| anyhow!("generic_type missing name"))?;
            let type_name = common::node_text(name, src)?;
            let type_args = annotation
                .child_by_field_name("type_arguments")
                .ok_or_else(|| anyhow!("generic_type missing type arguments"))?;
            let inner = common::named_child(type_args, 0)
                .ok_or_else(|| anyhow!("generic_type missing inner type"))?;

            match type_name {
                "Array" => Ok(format!("list[{}]", get_arg_type(Some(inner), src)?)),
                _ => Ok("any".to_string()),
            }
        }
        "array_type" => {
            let inner = common::named_child(annotation, 0)
                .ok_or_else(|| anyhow!("array_type missing inner type"))?;
            Ok(format!("list[{}]", get_arg_type(Some(inner), src)?))
        }
        "union_type" => resolve_union_type(annotation, src),
        "literal_type" => resolve_literal_type(annotation, src),
        "parenthesized_type" => get_arg_type(common::named_child(annotation, 0), src),
        _ => Ok("any".to_string()),
    }
}

fn resolve_union_type(annotation: Node<'_>, src: &str) -> Result<String> {
    let members = flatten_union_members(annotation);
    let has_null = members.iter().any(|member| is_nullish_type(*member, src));

    let mut literal_values = Vec::new();
    let mut all_string_literals = true;
    for member in &members {
        match string_literal_member(*member, src) {
            Some(value) => literal_values.push(value),
            None => {
                all_string_literals = false;
                break;
            }
        }
    }

    if all_string_literals && !literal_values.is_empty() {
        return Ok(format!("literal:{}", literal_values.join("|")));
    }

    let mut first_non_null = None;
    for member in members {
        if is_nullish_type(member, src) {
            continue;
        }
        first_non_null = Some(get_arg_type(Some(member), src)?);
        break;
    }

    let mut ty = first_non_null.unwrap_or_else(|| "any".to_string());
    if has_null && !ty.ends_with('?') {
        ty.push('?');
    }
    Ok(ty)
}

fn flatten_union_members(node: Node<'_>) -> Vec<Node<'_>> {
    let node = if node.kind() == "type" {
        match common::named_child(node, 0) {
            Some(inner) => inner,
            None => return vec![],
        }
    } else {
        node
    };

    if node.kind() != "union_type" {
        return vec![node];
    }

    let mut cursor = node.walk();
    let mut out = Vec::new();
    for child in node.named_children(&mut cursor) {
        out.extend(flatten_union_members(child));
    }
    out
}

fn resolve_literal_type(annotation: Node<'_>, src: &str) -> Result<String> {
    let inner = common::named_child(annotation, 0)
        .ok_or_else(|| anyhow!("literal_type missing inner literal"))?;

    match inner.kind() {
        "string" | "number" | "true" | "false" | "unary_expression" => {
            Ok(format!("literal:{}", common::node_text(inner, src)?.trim()))
        }
        "null" | "undefined" => Ok("any".to_string()),
        _ => Ok("any".to_string()),
    }
}

fn string_literal_member(node: Node<'_>, src: &str) -> Option<String> {
    let node = if node.kind() == "type" {
        common::named_child(node, 0)?
    } else {
        node
    };

    if node.kind() != "literal_type" {
        return None;
    }

    let inner = common::named_child(node, 0)?;
    if inner.kind() != "string" {
        return None;
    }

    Some(common::node_text(inner, src).ok()?.to_string())
}

fn is_nullish_type(node: Node<'_>, src: &str) -> bool {
    let node = if node.kind() == "type" {
        match common::named_child(node, 0) {
            Some(inner) => inner,
            None => return false,
        }
    } else {
        node
    };

    match node.kind() {
        "literal_type" => common::named_child(node, 0)
            .is_some_and(|inner| matches!(inner.kind(), "null" | "undefined")),
        "predefined_type" => common::node_text(node, src)
            .map(|text| matches!(text, "undefined" | "void"))
            .unwrap_or(false),
        _ => false,
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::function::JsonSchema;
    use std::fs;
    use std::time::{SystemTime, UNIX_EPOCH};

    fn parse_ts_source(
        source: &str,
        file_name: &str,
        parent: &Path,
    ) -> Result<Vec<FunctionDeclaration>> {
        let unique = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("time")
            .as_nanos();
        let path = std::env::temp_dir().join(format!("loki_ts_parser_{file_name}_{unique}.ts"));
        fs::write(&path, source).expect("write");
        let file = File::open(&path).expect("open");
        let result = generate_typescript_declarations(file, file_name, Some(parent));
        let _ = fs::remove_file(&path);
        result
    }

    fn properties(schema: &JsonSchema) -> &IndexMap<String, JsonSchema> {
        schema
            .properties
            .as_ref()
            .expect("missing schema properties")
    }

    fn property<'a>(schema: &'a JsonSchema, name: &str) -> &'a JsonSchema {
        properties(schema)
            .get(name)
            .unwrap_or_else(|| panic!("missing property: {name}"))
    }

    #[test]
    fn test_ts_tool_demo() {
        let source = r#"
/**
 * Demonstrates how to create a tool using TypeScript.
 *
 * @param query - The search query string
 * @param format - Output format
 * @param count - Maximum results to return
 * @param verbose - Enable verbose output
 * @param tags - List of tags to filter by
 * @param language - Optional language filter
 * @param extra_tags - Optional extra tags
 */
export function run(
    query: string,
    format: "json" | "csv" | "xml",
    count: number,
    verbose: boolean,
    tags: string[],
    language?: string,
    extra_tags?: Array<string>,
): string {
    return "result";
}
"#;

        let declarations = parse_ts_source(source, "demo_ts", Path::new("tools")).unwrap();
        assert_eq!(declarations.len(), 1);

        let decl = &declarations[0];
        assert_eq!(decl.name, "demo_ts");
        assert!(!decl.agent);

        let params = &decl.parameters;
        assert_eq!(params.type_value.as_deref(), Some("object"));
        assert_eq!(
            params.required.as_ref().unwrap(),
            &vec![
                "query".to_string(),
                "format".to_string(),
                "count".to_string(),
                "verbose".to_string(),
                "tags".to_string(),
            ]
        );

        assert_eq!(
            property(params, "query").type_value.as_deref(),
            Some("string")
        );

        let format = property(params, "format");
        assert_eq!(format.type_value.as_deref(), Some("string"));
        assert_eq!(
            format.enum_value.as_ref().unwrap(),
            &vec!["json".to_string(), "csv".to_string(), "xml".to_string()]
        );

        assert_eq!(
            property(params, "count").type_value.as_deref(),
            Some("number")
        );
        assert_eq!(
            property(params, "verbose").type_value.as_deref(),
            Some("boolean")
        );

        let tags = property(params, "tags");
        assert_eq!(tags.type_value.as_deref(), Some("array"));
        assert_eq!(
            tags.items.as_ref().unwrap().type_value.as_deref(),
            Some("string")
        );

        let language = property(params, "language");
        assert_eq!(language.type_value.as_deref(), Some("string"));
        assert!(
            !params
                .required
                .as_ref()
                .unwrap()
                .contains(&"language".to_string())
        );

        let extra_tags = property(params, "extra_tags");
        assert_eq!(extra_tags.type_value.as_deref(), Some("array"));
        assert_eq!(
            extra_tags.items.as_ref().unwrap().type_value.as_deref(),
            Some("string")
        );
        assert!(
            !params
                .required
                .as_ref()
                .unwrap()
                .contains(&"extra_tags".to_string())
        );
    }

    #[test]
    fn test_ts_tool_simple() {
        let source = r#"
/**
 * Execute the given code.
 *
 * @param code - The code to execute
 */
export function run(code: string): string {
    return eval(code);
}
"#;

        let declarations = parse_ts_source(source, "execute_code", Path::new("tools")).unwrap();
        assert_eq!(declarations.len(), 1);

        let decl = &declarations[0];
        assert_eq!(decl.name, "execute_code");
        assert!(!decl.agent);

        let params = &decl.parameters;
        assert_eq!(params.required.as_ref().unwrap(), &vec!["code".to_string()]);
        assert_eq!(
            property(params, "code").type_value.as_deref(),
            Some("string")
        );
    }

    #[test]
    fn test_ts_agent_tools() {
        let source = r#"
/** Get user info by ID */
export function get_user(id: string): string {
    return "";
}

/** List all users */
export function list_users(): string {
    return "";
}
"#;

        let declarations = parse_ts_source(source, "tools", Path::new("demo")).unwrap();
        assert_eq!(declarations.len(), 2);
        assert_eq!(declarations[0].name, "get_user");
        assert_eq!(declarations[1].name, "list_users");
        assert!(declarations[0].agent);
        assert!(declarations[1].agent);
    }

    #[test]
    fn test_ts_reject_rest_params() {
        let source = r#"
/**
 * Has rest params
 */
export function run(...args: string[]): string {
    return "";
}
"#;

        let err = parse_ts_source(source, "rest_params", Path::new("tools")).unwrap_err();
        let msg = format!("{err:#}");
        assert!(msg.contains("rest parameters"));
        assert!(msg.contains("in function 'run'"));
    }

    #[test]
    fn test_ts_missing_jsdoc() {
        let source = r#"
export function run(x: string): string {
    return x;
}
"#;

        let err = parse_ts_source(source, "missing_jsdoc", Path::new("tools")).unwrap_err();
        assert!(
            err.to_string()
                .contains("Missing or empty description on function: run")
        );
    }

    #[test]
    fn test_ts_syntax_error() {
        let source = "export function run(: broken";
        let err = parse_ts_source(source, "syntax_error", Path::new("tools")).unwrap_err();
        assert!(err.to_string().contains("failed to parse typescript"));
    }

    #[test]
    fn test_ts_underscore_skipped() {
        let source = r#"
/** Private helper */
function _helper(): void {}

/** Public function */
export function do_stuff(): string {
    return "";
}
"#;

        let declarations = parse_ts_source(source, "tools", Path::new("demo")).unwrap();
        assert_eq!(declarations.len(), 1);
        assert_eq!(declarations[0].name, "do_stuff");
        assert!(declarations[0].agent);
    }

    #[test]
    fn test_ts_non_exported_helpers_skipped() {
        let source = r#"
#!/usr/bin/env tsx

import { appendFileSync } from 'fs';

/**
 * Get the current weather in a given location
 * @param location - The city
 */
export function get_current_weather(location: string): string {
    return fetchSync("https://example.com/" + location);
}

function fetchSync(url: string): string {
    return "sunny";
}
"#;

        let declarations = parse_ts_source(source, "tools", Path::new("demo")).unwrap();
        assert_eq!(declarations.len(), 1);
        assert_eq!(declarations[0].name, "get_current_weather");
    }

    #[test]
    fn test_ts_instructions_not_skipped() {
        let source = r#"
/** Help text for the agent */
export function _instructions(): string {
    return "";
}
"#;

        let declarations = parse_ts_source(source, "tools", Path::new("demo")).unwrap();
        assert_eq!(declarations.len(), 1);
        assert_eq!(declarations[0].name, "instructions");
        assert!(declarations[0].agent);
    }

    #[test]
    fn test_ts_optional_with_null_union() {
        let source = r#"
/**
 * Fetch data with optional filter
 *
 * @param url - The URL to fetch
 * @param filter - Optional filter string
 */
export function run(url: string, filter: string | null): string {
    return "";
}
"#;

        let declarations = parse_ts_source(source, "fetch_data", Path::new("tools")).unwrap();
        let params = &declarations[0].parameters;
        assert!(
            params
                .required
                .as_ref()
                .unwrap()
                .contains(&"url".to_string())
        );
        assert!(
            !params
                .required
                .as_ref()
                .unwrap()
                .contains(&"filter".to_string())
        );
        assert_eq!(
            property(params, "filter").type_value.as_deref(),
            Some("string")
        );
    }

    #[test]
    fn test_ts_optional_with_default() {
        let source = r#"
/**
 * Search with limit
 *
 * @param query - Search query
 * @param limit - Max results
 */
export function run(query: string, limit: number = 10): string {
    return "";
}
"#;

        let declarations =
            parse_ts_source(source, "search_with_limit", Path::new("tools")).unwrap();
        let params = &declarations[0].parameters;
        assert!(
            params
                .required
                .as_ref()
                .unwrap()
                .contains(&"query".to_string())
        );
        assert!(
            !params
                .required
                .as_ref()
                .unwrap()
                .contains(&"limit".to_string())
        );
        assert_eq!(
            property(params, "limit").type_value.as_deref(),
            Some("number")
        );
    }

    #[test]
    fn test_ts_shebang_parses() {
        let source = r#"#!/usr/bin/env tsx

/**
 * Get weather
 * @param location - The city
 */
export function run(location: string): string {
    return location;
}
"#;

        let result = parse_ts_source(source, "get_weather", Path::new("tools"));
        eprintln!("shebang parse result: {result:?}");
        assert!(result.is_ok(), "shebang should not cause parse failure");
        let declarations = result.unwrap();
        assert_eq!(declarations.len(), 1);
        assert_eq!(declarations[0].name, "get_weather");
    }
}

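The recursive union flattening in `flatten_union_members` can be sketched against a toy stand-in for the tree-sitter nodes; the `Ty` enum below is hypothetical and only illustrates the flattening shape, not the crate's real types:

```rust
// Hypothetical toy model of a TypeScript type annotation: either a leaf
// type name or a union of sub-types (stand-in for tree-sitter nodes).
#[derive(Debug, PartialEq)]
enum Ty {
    Leaf(&'static str),
    Union(Vec<Ty>),
}

// Mirrors the recursion in flatten_union_members: nested unions are
// flattened, so `string | (null | number)` yields all three members.
fn flatten(ty: &Ty) -> Vec<&'static str> {
    match ty {
        Ty::Leaf(name) => vec![name],
        Ty::Union(members) => members.iter().flat_map(flatten).collect(),
    }
}
```

This is why `resolve_union_type` can treat parenthesized unions and flat unions uniformly: by the time it inspects members, the nesting is gone.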
@@ -0,0 +1,53 @@
use crate::render::RenderOptions;
use anyhow::Result;
use inquire::ui::{Attributes, Color, RenderConfig, StyleSheet};
use syntect::highlighting::{Highlighter, Theme};
use syntect::parsing::Scope;

const DEFAULT_INQUIRE_PROMPT_THEME: Color = Color::DarkYellow;

pub fn prompt_theme<'a>(render_options: RenderOptions) -> Result<RenderConfig<'a>> {
    let theme = render_options.theme.as_ref();
    let mut render_config = RenderConfig::default();

    if let Some(theme_ref) = theme {
        let prompt_color = resolve_foreground(theme_ref, "markup.heading")?
            .unwrap_or(DEFAULT_INQUIRE_PROMPT_THEME);

        render_config.prompt = StyleSheet::new()
            .with_fg(prompt_color)
            .with_attr(Attributes::BOLD);
        render_config.selected_option = Some(
            render_config
                .selected_option
                .unwrap_or(render_config.option)
                .with_attr(
                    render_config
                        .selected_option
                        .unwrap_or(render_config.option)
                        .att
                        | Attributes::BOLD,
                ),
        );
        render_config.selected_checkbox = render_config
            .selected_checkbox
            .with_attr(render_config.selected_checkbox.style.att | Attributes::BOLD);
        render_config.option = render_config
            .option
            .with_attr(render_config.option.att | Attributes::BOLD);
    }

    Ok(render_config)
}

fn resolve_foreground(theme: &Theme, scope_str: &str) -> Result<Option<Color>> {
    let scope = Scope::new(scope_str)?;
    let style_mod = Highlighter::new(theme).style_mod_for_stack(&[scope]);
    let fg = style_mod.foreground.or(theme.settings.foreground);

    Ok(fg.map(|c| Color::Rgb {
        r: c.r,
        g: c.g,
        b: c.b,
    }))
}

@@ -1,6 +1,9 @@
mod inquire;
mod markdown;
mod stream;

pub use inquire::prompt_theme;

pub use self::markdown::{MarkdownRender, RenderOptions};
use self::stream::{markdown_stream, raw_stream};

@@ -111,12 +111,14 @@ fn create_suggestion(value: &str, description: &str, span: Span) -> Suggestion {
        Some(description.to_string())
    };
    Suggestion {
        display_override: None,
        value: value.to_string(),
        description,
        style: None,
        extra: None,
        span,
        append_whitespace: false,
        match_indices: None,
    }
}

+45
-2
@@ -6,7 +6,7 @@ use self::completer::ReplCompleter;
use self::highlighter::ReplHighlighter;
use self::prompt::ReplPrompt;

use crate::client::{call_chat_completions, call_chat_completions_streaming};
use crate::client::{call_chat_completions, call_chat_completions_streaming, init_client, oauth};
use crate::config::{
    AgentVariables, AssertState, Config, GlobalConfig, Input, LastMessage, StateFlags,
    macro_execute,

@@ -17,6 +17,7 @@ use crate::utils::{
};

use crate::mcp::McpRegistry;
use crate::resolve_oauth_client;
use anyhow::{Context, Result, bail};
use crossterm::cursor::SetCursorStyle;
use fancy_regex::Regex;

@@ -32,10 +33,15 @@ use std::{env, mem, process};

const MENU_NAME: &str = "completion_menu";

static REPL_COMMANDS: LazyLock<[ReplCommand; 37]> = LazyLock::new(|| {
static REPL_COMMANDS: LazyLock<[ReplCommand; 39]> = LazyLock::new(|| {
    [
        ReplCommand::new(".help", "Show this help guide", AssertState::pass()),
        ReplCommand::new(".info", "Show system info", AssertState::pass()),
        ReplCommand::new(
            ".authenticate",
            "Authenticate the current model client via OAuth (if configured)",
            AssertState::pass(),
        ),
        ReplCommand::new(
            ".edit config",
            "Modify configuration file",

@@ -131,6 +137,11 @@ static REPL_COMMANDS: LazyLock<[ReplCommand; 37]> = LazyLock::new(|| {
            "Leave agent",
            AssertState::True(StateFlags::AGENT),
        ),
        ReplCommand::new(
            ".clear todo",
            "Clear the todo list and stop auto-continuation",
            AssertState::True(StateFlags::AGENT),
        ),
        ReplCommand::new(
            ".rag",
            "Initialize or access RAG",

@@ -421,6 +432,19 @@ pub async fn run_repl_command(
            }
            None => println!("Usage: .model <name>"),
        },
        ".authenticate" => {
            let current_model = config.read().current_model().clone();
            let client = init_client(config, Some(current_model))?;
            if !client.supports_oauth() {
                bail!(
                    "Client '{}' doesn't support OAuth or isn't configured to use it (i.e. it uses an API key instead)",
                    client.name()
                );
            }
            let clients = config.read().clients.clone();
            let (client_name, provider) = resolve_oauth_client(Some(client.name()), &clients)?;
            oauth::run_oauth_flow(&*provider, &client_name).await?;
        }
        ".prompt" => match args {
            Some(text) => {
                config.write().use_prompt(text)?;

@@ -785,6 +809,25 @@ pub async fn run_repl_command(
            Some("messages") => {
                bail!("Use '.empty session' instead");
            }
            Some("todo") => {
                let mut cfg = config.write();
                match cfg.agent.as_mut() {
                    Some(agent) => {
                        if !agent.auto_continue_enabled() {
                            bail!(
                                "The todo system is not enabled for this agent. Set 'auto_continue: true' in the agent's config.yaml to enable it."
                            );
                        }
                        if agent.todo_list().is_empty() {
                            println!("Todo list is already empty.");
                        } else {
                            agent.clear_todo_list();
                            println!("Todo list cleared.");
                        }
                    }
                    None => bail!("No active agent"),
                }
            }
            _ => unknown_command()?,
        },
        ".vault" => match split_first_arg(args) {

+22
-11
@@ -6,7 +6,6 @@ use gman::providers::local::LocalProvider;
use indoc::formatdoc;
use inquire::validator::Validation;
use inquire::{Confirm, Password, PasswordDisplayMode, Text, min_length, required};
use std::borrow::Cow;
use std::path::PathBuf;

pub fn ensure_password_file_initialized(local_provider: &mut LocalProvider) -> Result<()> {

@@ -166,18 +165,30 @@ pub fn create_vault_password_file(vault: &mut Vault) -> Result<()> {
    Ok(())
}

pub fn interpolate_secrets<'a>(content: &'a str, vault: &Vault) -> (Cow<'a, str>, Vec<String>) {
pub fn interpolate_secrets(content: &str, vault: &Vault) -> (String, Vec<String>) {
    let mut missing_secrets = vec![];
    let parsed_content = SECRET_RE.replace_all(content, |caps: &fancy_regex::Captures<'_>| {
        let secret = vault.get_secret(caps[1].trim(), false);
        match secret {
            Ok(s) => s,
            Err(_) => {
                missing_secrets.push(caps[1].to_string());
                "".to_string()
            }
        }
    });
    let parsed_content: String = content
        .lines()
        .map(|line| {
            if line.trim_start().starts_with('#') {
                return line.to_string();
            }

            SECRET_RE
                .replace_all(line, |caps: &fancy_regex::Captures<'_>| {
                    let secret = vault.get_secret(caps[1].trim(), false);
                    match secret {
                        Ok(s) => s,
                        Err(_) => {
                            missing_secrets.push(caps[1].to_string());
                            "".to_string()
                        }
                    }
                })
                .to_string()
        })
        .collect::<Vec<_>>()
        .join("\n");

    (parsed_content, missing_secrets)
}

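The patched `interpolate_secrets` behavior (comment lines pass through untouched; placeholders elsewhere are replaced, with unknown names collected) can be sketched with the standard library only. The `{{name}}` placeholder syntax and the `HashMap` vault are assumptions for illustration; the real code uses `SECRET_RE` and the `Vault` type:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the vault: a map of secret names to values.
// The `{{name}}` placeholder syntax is assumed here, not taken from SECRET_RE.
fn interpolate(content: &str, secrets: &HashMap<&str, &str>) -> (String, Vec<String>) {
    let mut missing = Vec::new();
    let out = content
        .lines()
        .map(|line| {
            // Comment lines pass through untouched, as in the patched function.
            if line.trim_start().starts_with('#') {
                return line.to_string();
            }
            let mut result = String::new();
            let mut rest = line;
            // Replace each `{{name}}` placeholder on the line; unknown names
            // are recorded and replaced with the empty string.
            while let Some(start) = rest.find("{{") {
                result.push_str(&rest[..start]);
                let after = &rest[start + 2..];
                match after.find("}}") {
                    Some(end) => {
                        let name = after[..end].trim();
                        match secrets.get(name) {
                            Some(value) => result.push_str(value),
                            None => missing.push(name.to_string()),
                        }
                        rest = &after[end + 2..];
                    }
                    None => {
                        // Unterminated placeholder: keep the text verbatim.
                        result.push_str(&rest[start..]);
                        rest = "";
                    }
                }
            }
            result.push_str(rest);
            result
        })
        .collect::<Vec<_>>()
        .join("\n");
    (out, missing)
}
```

Doing the replacement per line (rather than over the whole content, as before) is what makes the comment-skipping rule cheap to express.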