Tool Reference
The partner submission pipeline consists of five MCP tools. You call them in order, branching on each tool's response. This page documents each tool's inputs, outputs, and at least one realistic failure mode.
Pipeline order:
1. `admin_submit_skill` or `admin_submit_workflow` — create a draft submission.
2. `admin_validate_submission` — run Stage-1 lint (and optionally a dry-run AI review).
3. `admin_test_submission` — execute the submission in a sandbox to catch runtime errors.
4. `admin_request_review` — run the full gate (lint + sandbox + AI review) and, on `verdict === "pass"`, publish the artifact.
Operational admin tools — admin_list_users, admin_invite_user, admin_get_consumption, admin_list_integrations, and the others — are not part of the submission pipeline and are documented elsewhere.
All examples below use JSON-RPC 2.0 (the MCP wire format). Every request has the same envelope:
{
"jsonrpc": "2.0",
"id": "<request-id>",
"method": "tools/call",
"params": {
"name": "<tool_name>",
"arguments": { /* tool-specific */ }
}
}
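Since every call reuses this envelope, it is worth wrapping once in client code. A minimal Python sketch — the `make_tool_call` helper is ours, not part of any SDK, and sending the payload over your MCP transport is left out:

```python
import json

def make_tool_call(request_id: str, tool_name: str, arguments: dict) -> str:
    """Serialize the shared JSON-RPC 2.0 envelope for a tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,          # request ids in the examples below are strings
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })
```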
admin_submit_skill
Creates or upserts a draft skill submission. Skills are reusable knowledge or task modules — a markdown body plus an optional tool allowlist — that any agent can compose with.
Pass submissionId on resubmission to update an existing draft or rejected row; omit it the first time to create a new submission. Submissions in submitted, in_review, or approved state are locked and cannot be edited — attempting to upsert against them returns HTTP 409 submission_locked (see Troubleshooting).
Inputs
| Field | Type | Required | Description |
|---|---|---|---|
| submissionId | string | no | Server-assigned id from a prior call. Omit on first submit; include to upsert a draft or rejected submission. |
| name | string (1–255) | yes | Human-readable name shown in the catalog. |
| slug | string (1–200, ^[a-z0-9]+(?:-[a-z0-9]+)*$) | yes | URL-safe identifier; must be unique across the catalog. |
| description | string (1–20000) | yes | Long-form description for the catalog detail page. |
| heroCopy | string (1–500) | yes | One-line tagline shown in catalog cards. |
| markdownBody | string (1–50000) | yes | The SKILL.md body. Distributed verbatim to consumers. |
| toolAllowlist | string[] \| null | no | Tools the skill is allowed to call. Each entry must be a valid tool name (validated in Stage 1). |
| marketingUseCases | object[] | no | { title, description } entries displayed on the catalog detail page. |
| useCases | string[] | no | Free-text use-case tags. |
| meshCategory | string (≤50) | no | Catalog mesh category for navigation. |
| screenshotUrl | URI \| null | no | Optional hosted screenshot. |
| version | int32 (≥1) | no | Caller-managed version number. |
| changelog | string | no | Changelog entry for this version. |
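The slug pattern and length bound can be checked client-side before submitting, which avoids a round-trip for the most common schema rejection. A sketch mirroring the regex in the table above (the helper name is ours):

```python
import re

# Pattern copied from the slug row above: lowercase alphanumeric runs
# separated by single hyphens, no leading or trailing hyphen.
SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def valid_slug(slug: str) -> bool:
    """True when the slug satisfies both the 1-200 length bound and the pattern."""
    return 1 <= len(slug) <= 200 and SLUG_RE.fullmatch(slug) is not None
```

Note this checks shape only; catalog-wide uniqueness is still enforced server-side.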
Output (happy path)
{
"submissionId": "9d2a4b8c-...",
"status": "draft",
"validationSummary": {
"status": "pass",
"issues": []
}
}
status is always "draft" on a fresh create (or "submitted" if you used the reserved shortcut path — not generally available at launch). The optional validationSummary block reports the result of the same Stage-1 lint that admin_validate_submission runs; the draft is structurally valid when validationSummary.status === "pass" and issues is empty.
submissionId is null only in one case: a pre-write slug_collision was detected, so no row was inserted. In that case validationSummary carries a single slug_collision error and your client should pick a different slug before retrying.
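These rules give a submit response exactly three outcomes, which client code can branch on directly. A sketch (the function name and return labels are ours):

```python
def classify_submit(result: dict) -> str:
    """Map an admin_submit_* result to the caller's next step."""
    summary = result.get("validationSummary") or {}
    if result.get("submissionId") is None:
        # The one pre-write failure: slug_collision, no row inserted.
        return "pick_new_slug"
    if summary.get("status") == "pass":
        return "proceed_to_validate"
    # Draft persisted despite lint errors; fix and resubmit with the same id.
    return "fix_and_resubmit"
```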
Example — happy path
Request:
{
"jsonrpc": "2.0",
"id": "1",
"method": "tools/call",
"params": {
"name": "admin_submit_skill",
"arguments": {
"name": "Competitor lookup",
"slug": "competitor-lookup",
"description": "Returns a short briefing on a named competitor, including HQ, employee count, and recent funding events.",
"heroCopy": "Quick competitor briefings on demand.",
"markdownBody": "# Competitor lookup\n\nGiven a company name, returns ...",
"toolAllowlist": ["company_lookup", "company_funding_events"]
}
}
}
Response:
{
"jsonrpc": "2.0",
"id": "1",
"result": {
"submissionId": "9d2a4b8c-4f17-4e51-9b8e-7b3a5c2e1d04",
"status": "draft",
"validationSummary": {
"status": "pass",
"issues": []
}
}
}
Example — failure (unknown_tool_in_allowlist)
If toolAllowlist includes a tool name that isn't a known Phoenix tool, Stage 1 reports it as an error and validationSummary.status is fail. The draft is still persisted (the row is written before lint runs); you just need to fix the offending tool name and resubmit with the same submissionId.
{
"jsonrpc": "2.0",
"id": "2",
"result": {
"submissionId": "9d2a4b8c-4f17-4e51-9b8e-7b3a5c2e1d04",
"status": "draft",
"validationSummary": {
"status": "fail",
"issues": [
{
"code": "unknown_tool_in_allowlist",
"severity": "error",
"field": "toolAllowlist[2]",
"message": "Tool \"company_secret_lookup\" is not a known Phoenix tool. Drop it from toolAllowlist or use a valid tool name."
}
]
}
}
}
Other Stage-1 failure codes (slug collisions, schema violations) follow the same shape — see Troubleshooting → Stage-1 lint failures.
admin_submit_workflow
Creates or upserts a draft workflow submission. Workflows are prompt-driven multi-step agents — they have a prompt body, a list of MCP servers they need at runtime, and optionally a list of skills they compose with.
Inputs
Shared with admin_submit_skill: submissionId, name, slug, description, heroCopy, useCases, meshCategory, screenshotUrl, version, changelog. (Note: marketingUseCases is also shared but carries a stricter rule on workflows — see the workflow-specific row below.)
Workflow-specific:
| Field | Type | Required | Description |
|---|---|---|---|
| promptBody | string (1–6000) | yes | Raw markdown with YAML frontmatter. The frontmatter block must declare three keys: description (non-empty string), alwaysApply (boolean), and parameters (array of { name, description, example, required }). Missing or malformed frontmatter causes admin_test_submission and admin_request_review to reject with a parse error. The prompt's total length after composing in recommended skills is also capped at 60,000 bytes — enforced at validate time, not by the input schema. |
| requiredMcpServers | string[] | no | Slugs of MCP integrations the workflow needs at runtime. Cross-referenced against the partner org's connected integrations during validation. |
| recommendedSkills | string[] (≤8) | no | Slugs of HG knowledge skills the workflow composes in. Cross-referenced against published skills. |
| marketingUseCases | object[] | yes (≥1) | Must include at least one entry titled exactly "Sample output" — the preview surface the catalog detail page renders. Missing it triggers missing_sample_output. |
| allowedTools | string[] | no | Tools the workflow may call at runtime. |
| preferredModel | string | no | Provider-prefixed model id (e.g. anthropic/claude-sonnet-4.6, openai/gpt-4.6). |
| defaultParams | object | no | Default values for the workflow's prompt arguments. |
| outputSchema | object | no | JSON Schema describing the workflow's structured output. |
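Because malformed frontmatter isn't caught until the test and review stages, assembling promptBody programmatically is safer than hand-editing the YAML. A sketch (our helper; it emits naively unquoted YAML, so quote any values that contain YAML-special characters):

```python
def build_prompt_body(description: str, always_apply: bool,
                      parameters: list, body: str) -> str:
    """Assemble a promptBody whose frontmatter declares the three
    required keys: description, alwaysApply, parameters."""
    lines = [
        "---",
        f"description: {description}",
        f"alwaysApply: {str(always_apply).lower()}",
        "parameters:",
    ]
    for p in parameters:
        lines.append(f"  - name: {p['name']}")
        lines.append(f"    description: {p['description']}")
        lines.append(f"    example: {p['example']}")
        lines.append(f"    required: {str(p['required']).lower()}")
    lines += ["---", "", body]
    text = "\n".join(lines)
    if not 1 <= len(text) <= 6000:
        raise ValueError("promptBody must be 1-6000 characters")
    return text
```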
Output (happy path)
Same envelope as admin_submit_skill:
{
"submissionId": "1f3b62c7-...",
"status": "draft",
"validationSummary": {
"status": "pass",
"issues": []
}
}
Example — happy path
{
"jsonrpc": "2.0",
"id": "3",
"method": "tools/call",
"params": {
"name": "admin_submit_workflow",
"arguments": {
"name": "Buying-committee briefing",
"slug": "buying-committee-briefing",
"description": "Produces a one-page briefing on the likely buying committee at a target account.",
"heroCopy": "Map the buying committee in under a minute.",
"promptBody": "---\ndescription: Buying-committee briefing for a target account\nalwaysApply: false\nparameters:\n - name: domain\n description: Company domain to research\n example: acme.com\n required: true\n---\n# Buying-committee briefing\n\nGiven a domain, produce ...",
"requiredMcpServers": ["zoominfo", "hg-insights"],
"recommendedSkills": ["company-overview", "key-contacts"],
"marketingUseCases": [
{
"title": "Sample output",
"description": "**Acme Corp** buying committee: VP Eng (champion), CFO (economic buyer), ..."
}
],
"preferredModel": "anthropic/claude-sonnet-4.6"
}
}
}
Example — failure (missing_sample_output)
If marketingUseCases doesn't include an entry titled exactly "Sample output":
{
"jsonrpc": "2.0",
"id": "4",
"result": {
"submissionId": "1f3b62c7-...",
"status": "draft",
"validationSummary": {
"status": "fail",
"issues": [
{
"code": "missing_sample_output",
"severity": "error",
"field": "marketingUseCases",
"message": "Workflow submissions must include a marketing use case titled 'Sample output' for the catalog preview surface."
}
]
}
}
}
admin_validate_submission
Runs Stage-1 lint against an existing submission and, optionally, a dry-run AI review. Stage 1 is fast (synchronous, no LLM calls). The AI-review dry-run lets you see the rubric verdict without committing to the final review gate.
Inputs
| Field | Type | Required | Description |
|---|---|---|---|
| submissionId | string | yes | Id returned by admin_submit_skill or admin_submit_workflow. |
| includeAiReview | boolean | no | When true, also run the AI review. Defaults to false (Stage 1 only). |
Output (happy path)
{
"status": "pass",
"issues": [],
"aiReview": {
"status": "completed",
"runId": "ai-...",
"verdict": "pass",
"findings": [],
"summary": "No issues found.",
"modelVersion": "anthropic/claude-sonnet-4.6"
}
}
status summarizes the Stage-1 result (pass, warnings, fail). The aiReview block is present only when includeAiReview: true. Its own status reports the run's lifecycle:
| aiReview.status | Meaning |
|---|---|
| completed | Verdict ready in the verdict field. |
| queued | The synchronous timeout was hit; a background job is running. Poll later by re-validating or calling admin_request_review. |
| failed | The review agent errored. Re-run; if it persists, contact support. |
| skipped | The review agent isn't configured (operator alarm — not a partner-fixable condition). |
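A queued review resolves asynchronously, so a simple polling loop covers it. A sketch where `validate` stands in for your admin_validate_submission call (the helper name and polling budget are ours):

```python
import time

def wait_for_review(validate, submission_id: str,
                    attempts: int = 10, delay_s: float = 5.0) -> dict:
    """Re-run validation until aiReview leaves the 'queued' state,
    then return the aiReview block."""
    for _ in range(attempts):
        result = validate(submission_id)   # calls admin_validate_submission
        review = result.get("aiReview") or {}
        if review.get("status") != "queued":
            return review
        time.sleep(delay_s)
    raise TimeoutError("AI review still queued after polling budget")
```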
Example — happy path
{
"jsonrpc": "2.0",
"id": "5",
"method": "tools/call",
"params": {
"name": "admin_validate_submission",
"arguments": {
"submissionId": "9d2a4b8c-4f17-4e51-9b8e-7b3a5c2e1d04",
"includeAiReview": true
}
}
}
{
"jsonrpc": "2.0",
"id": "5",
"result": {
"status": "pass",
"issues": [],
"aiReview": {
"status": "completed",
"runId": "ai-2e9d...",
"verdict": "pass",
"findings": [],
"summary": "Skill meets all rubric criteria."
}
}
}
Example — failure (validation_schema_error)
When the submission's stored payload no longer matches the TypeSpec schema (e.g., because a required field was removed in an earlier upsert):
{
"jsonrpc": "2.0",
"id": "6",
"result": {
"status": "fail",
"issues": [
{
"code": "validation_schema_error",
"severity": "error",
"field": "description",
"message": "Required field 'description' is missing or empty."
}
]
}
}
admin_test_submission
Executes the submission in an isolated sandbox using the partner-supplied sample inputs. Produces a trace plus the final output. Catches runtime issues (missing integration credentials, downstream tool errors, prompt logic bugs) before they reach the AI-review stage.
Inputs
| Field | Type | Required | Description |
|---|---|---|---|
| submissionId | string | yes | Id from submit/validate. |
| sampleInputs | object | yes | Keys must be a subset of the submission's derived prompt-argument names. Values are restricted to string, number, boolean, or null. |
| timeoutSeconds | int32 (10–120, default 60) | no | Wall-clock budget. The sandbox enforces a hard cap of 120s regardless. |
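Both constraints on sampleInputs are cheap to check before calling the tool. A sketch (our helper; prompt_args is the set of parameter names declared in your promptBody frontmatter):

```python
# The value types the sandbox accepts: string, number, boolean, or null.
SCALARS = (str, int, float, bool, type(None))

def check_sample_inputs(sample_inputs: dict, prompt_args: set) -> list:
    """Return the reasons a sampleInputs payload would be rejected:
    unknown keys, or values that aren't scalar."""
    problems = []
    for key, value in sample_inputs.items():
        if key not in prompt_args:
            problems.append(f"unknown argument: {key}")
        if not isinstance(value, SCALARS):
            problems.append(f"non-scalar value for: {key}")
    return problems
```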
Output
{
"status": "succeeded",
"trace": [
{ "kind": "llm", "ts": "2026-05-14T19:32:14Z", "data": { /* ... */ } },
{ "kind": "tool_call", "ts": "...", "data": { "tool": "company_lookup", "args": {/*...*/} } },
{ "kind": "tool_result", "ts": "...", "data": { "ok": true, "result": {/*...*/} } }
],
"finalOutput": "**Acme Corp** is headquartered in ...",
"durationMs": 14238
}
status is succeeded, failed, timed_out, or denied. On failed/timed_out/denied, an error object describes the cause.
Example — happy path
{
"jsonrpc": "2.0",
"id": "7",
"method": "tools/call",
"params": {
"name": "admin_test_submission",
"arguments": {
"submissionId": "1f3b62c7-...",
"sampleInputs": { "domain": "acme.com" },
"timeoutSeconds": 60
}
}
}
{
"jsonrpc": "2.0",
"id": "7",
"result": {
"status": "succeeded",
"trace": [/* ... */],
"finalOutput": "**Acme Corp** buying committee: VP Eng ...",
"durationMs": 14238
}
}
Example — failure (timed_out)
{
"jsonrpc": "2.0",
"id": "8",
"result": {
"status": "timed_out",
"trace": [/* partial */],
"durationMs": 60012,
"error": {
"code": "sandbox_timed_out",
"message": "Sandbox run exceeded the 60s budget."
}
}
}
Other failure shapes are documented in Troubleshooting → Test failures.
admin_request_review
Composes Stage-1 lint, a fresh sandbox run, and the AI review into a single gate. This is the only call that can transition a submission to approved and publish it to the catalog.
Inputs
| Field | Type | Required | Description |
|---|---|---|---|
| submissionId | string | yes | Id from submit/validate. |
Output
{
"status": "approved",
"gate": {
"stage1": { "status": "pass", "issues": [] },
"sandbox": { "status": "succeeded", "runId": "test-...", "durationMs": 14238 },
"aiReview": { "status": "completed", "verdict": "pass", "runId": "ai-...", "summary": "..." }
},
"publishedBlueprintId": "blueprint-uuid"
}
Top-level fields:
| Field | Type | When present | Description |
|---|---|---|---|
| status | "approved" \| "rejected" \| "in_review" | always | Gate decision (see table below); in_review is the one non-terminal value. |
| gate | object | always | Sub-shape per stage: stage1, sandbox, aiReview. The aiReview sub-shape carries only status, verdict, runId, summary (the full finding list isn't embedded — fetch it via admin_validate_submission). |
| publishedBlueprintId | string | only on status: "approved" | UUID of the newly minted agent_blueprints row that the catalog now serves. |
| rejectionReason | string | always on status: "rejected" | Human-readable explanation of why the submission was rejected. This is your primary diagnostic signal — especially for skill submissions, where the sandbox stage is skipped so the per-stage objects carry no detail. The same text is also persisted to partner_submissions.validationSummary, so polling admin_validate_submission echoes the reason. |
status values:
| status | Meaning | Action |
|---|---|---|
| approved | All three gates passed AND aiReview.verdict === "pass". The submission is now live in the catalog, attributed to your org. publishedBlueprintId is the id of the newly minted agent_blueprints row. | None — it's published. |
| rejected | At least one gate failed, or aiReview.verdict !== "pass". The submission is back to an editable state. | Read rejectionReason for the high-level cause; inspect gate.* blocks for per-stage detail; fetch detailed AI-review findings via admin_validate_submission. |
| in_review | AI review hit the synchronous timeout (look for gate.aiReview.status === "queued"), or advisory mode is on and the run is held for human review (gate.aiReview.advisoryMode === true). Phoenix will finalize the verdict asynchronously. | Poll by calling admin_request_review again, or admin_validate_submission with includeAiReview: true and read aiReview.status. |
Launch autopublish gate: At launch, only aiReview.verdict === "pass" autopublishes. A warnings verdict does not autopublish — it currently returns status: "rejected" with the findings, so you can address the warnings and resubmit. (An operator-only advisory-mode toggle exists but isn't exposed to partners.)
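The three status values reduce to a straightforward client-side branch. A sketch (the function name and return labels are ours):

```python
def handle_gate(result: dict) -> str:
    """Map an admin_request_review result to the caller's next step."""
    status = result["status"]
    if status == "approved":
        # Live in the catalog; record the published blueprint id.
        return f"published:{result['publishedBlueprintId']}"
    if status == "rejected":
        # Submission is editable again; surface the high-level cause.
        return f"fix:{result['rejectionReason']}"
    # in_review: verdict will be finalized asynchronously; poll later.
    return "poll_later"
```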
Example — happy path
{
"jsonrpc": "2.0",
"id": "9",
"method": "tools/call",
"params": {
"name": "admin_request_review",
"arguments": {
"submissionId": "9d2a4b8c-..."
}
}
}
{
"jsonrpc": "2.0",
"id": "9",
"result": {
"status": "approved",
"gate": {
"stage1": { "status": "pass", "issues": [] },
"sandbox": { "status": "succeeded", "runId": "test-7a2c", "durationMs": 11823 },
"aiReview": { "status": "completed", "verdict": "pass", "runId": "ai-3c8d", "summary": "Skill meets all rubric criteria." }
},
"publishedBlueprintId": "ab78c2e1-9f04-4d62-8e7a-1f3b62c75d04"
}
}
Example — failure (rejection on rubric_brand_alignment)
{
"jsonrpc": "2.0",
"id": "10",
"result": {
"status": "rejected",
"gate": {
"stage1": { "status": "pass", "issues": [] },
"sandbox": { "status": "succeeded", "runId": "test-8b3d", "durationMs": 12451 },
"aiReview": {
"status": "completed",
"verdict": "fail",
"runId": "ai-9c4e",
"summary": "Skill description references a competitor product in the sample output."
}
},
"rejectionReason": "AI review failed: brand-alignment finding on marketingUseCases[0].description (references competitor product)."
}
}
The gate response intentionally carries only summary fields for the AI review (status, verdict, runId, summary); the full finding list is not embedded here. To inspect individual findings, call admin_validate_submission with includeAiReview: true against the same submissionId — the response's top-level aiReview.findings[] contains each finding's code, severity, field, message, and optional evidence. See Lifecycle → AI-review rubric for what each rubric code means and how to fix it.