Best Practices
This guide covers best practices for using Phoenix MCP tools effectively and efficiently.
Rate Limiting
Understanding Limits
Phoenix enforces the following rate limits per API key:
- 100 requests per minute
- 1,000 requests per hour
- 10,000 requests per day
Rate limits apply across all tools and REST API endpoints combined.
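These limits can also be tracked client-side so your application throttles itself before the server rejects a call. A minimal sketch of a sliding-window counter (the `SlidingWindowLimiter` class and the idea of pre-checking each call are illustrative, not part of the Phoenix API):

```javascript
// Minimal sliding-window counter: records request timestamps and
// reports whether another request would exceed the window limit.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;       // e.g. 100 requests
    this.windowMs = windowMs; // e.g. 60_000 ms
    this.timestamps = [];
  }

  // Returns true and records the request if under the limit.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have fallen outside the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}

// Mirror the per-minute limit from the list above
const perMinute = new SlidingWindowLimiter(100, 60_000);
```

Checking `tryAcquire()` before each call lets you queue or shed load locally instead of spending quota on requests that would only return `RATE_LIMIT_EXCEEDED`.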
Best Practices
1. Batch Your Requests
Instead of making sequential calls, batch independent requests:
❌ Bad - Sequential:
```javascript
const companies = ['example1.com', 'example2.com', 'example3.com'];

for (const domain of companies) {
  const data = await mcp.call('company_firmographic', { companyDomain: domain });
  console.log(data);
}
// Takes 3+ seconds if each request takes 1s
```
✅ Good - Parallel:
```javascript
const companies = ['example1.com', 'example2.com', 'example3.com'];

const promises = companies.map(domain =>
  mcp.call('company_firmographic', { companyDomain: domain })
);
const results = await Promise.all(promises);
// Takes ~1 second total
```
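With many domains, even the parallel pattern can exceed the per-minute limit if every request fires at once. One hedged option is to cap concurrency by processing in fixed-size chunks (the chunk size of 10 below is an arbitrary example value; tune it to your quota):

```javascript
// Run async tasks in fixed-size chunks so at most `size`
// requests are in flight at any one time.
async function mapInChunks(items, size, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += size) {
    const chunk = items.slice(i, i + size);
    results.push(...await Promise.all(chunk.map(fn)));
  }
  return results;
}

// Usage (assumes the same mcp.call interface as above):
// const results = await mapInChunks(companies, 10, domain =>
//   mcp.call('company_firmographic', { companyDomain: domain })
// );
```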
2. Implement Exponential Backoff
When you hit rate limits, use exponential backoff:
```javascript
async function callWithRetry(tool, params, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await mcp.call(tool, params);
    } catch (error) {
      if (error.code === 'RATE_LIMIT_EXCEEDED' && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
```
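When many clients hit the limit together, a fixed 1s/2s/4s schedule makes them all retry in lockstep. Adding random jitter spreads retries out; a small helper you could substitute for the `delay` calculation (the 30-second cap is an assumption, not a documented Phoenix value):

```javascript
// Exponential backoff with "full jitter": pick a random delay
// between 0 and the capped exponential value.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000, random = Math.random) {
  const exp = Math.min(capMs, baseMs * Math.pow(2, attempt));
  return Math.floor(random() * exp);
}
```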
3. Use Company Search Wisely
company_search can return hundreds of results. Limit results to what you need:
❌ Bad:
```json
{
  "tool": "company_search",
  "parameters": {
    "filters": { "industry": "Technology" },
    "limit": 1000
  }
}
```
Returns 1,000 companies and uses 1,000 quota units.
✅ Good:
```json
{
  "tool": "company_search",
  "parameters": {
    "filters": {
      "industry": "Technology",
      "employeeRange": "1000-5000"
    },
    "limit": 50
  }
}
```
Returns 50 targeted companies and uses 50 quota units.
Caching Strategy
Understanding Phoenix Caching
Phoenix automatically caches responses:
- Company data: 1 hour
- Product catalogs: 24 hours
- Search results: 15 minutes
Only the initial request that populates the cache counts toward rate limits; responses served from cache do not.
Best Practices
1. Implement Client-Side Caching
Add your own caching layer for frequently accessed data:
```javascript
const cache = new Map();
const CACHE_TTL = 60 * 60 * 1000; // 1 hour

async function getCachedCompanyData(domain) {
  const cacheKey = `firmographic:${domain}`;
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const data = await mcp.call('company_firmographic', {
    companyDomain: domain
  });
  cache.set(cacheKey, { data, timestamp: Date.now() });
  return data;
}
```
2. Avoid Redundant Calls
Track what you've already fetched:
```javascript
const fetchedCompanies = new Set();

async function fetchIfNeeded(domain) {
  if (fetchedCompanies.has(domain)) {
    console.log(`Already fetched ${domain}, skipping`);
    return null;
  }
  fetchedCompanies.add(domain);
  return await mcp.call('company_firmographic', { companyDomain: domain });
}
```
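One drawback of the Set approach: duplicate callers get `null` instead of the data. Caching the promise itself deduplicates concurrent and repeated calls while still giving every caller the result. A sketch (the `fetchOnce` helper is illustrative; it assumes the same `mcp.call` interface used above):

```javascript
// Cache the in-flight (or settled) promise per domain so concurrent
// callers share one request and later callers reuse the result.
const inFlight = new Map();

function fetchOnce(domain, fetcher) {
  if (!inFlight.has(domain)) {
    const promise = fetcher(domain).catch(err => {
      inFlight.delete(domain); // allow a retry after failure
      throw err;
    });
    inFlight.set(domain, promise);
  }
  return inFlight.get(domain);
}

// Usage:
// const data = await fetchOnce('example.com', d =>
//   mcp.call('company_firmographic', { companyDomain: d })
// );
```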
Error Handling
Common Errors
| Error Code | Meaning | Solution |
|---|---|---|
| MISSING_INTEGRATION | Required integration not configured | Configure the integration in settings |
| INVALID_PARAMETERS | Parameters don't match the schema | Check parameter types and requirements |
| RATE_LIMIT_EXCEEDED | Too many requests | Implement backoff, reduce request rate |
| COMPANY_NOT_FOUND | Company doesn't exist in the database | Verify the domain is correct |
| INTERNAL_ERROR | Server error | Retry once, then contact support |
Robust Error Handling
```javascript
async function robustToolCall(tool, params) {
  try {
    return await mcp.call(tool, params);
  } catch (error) {
    // Log error details
    console.error(`Tool ${tool} failed:`, {
      code: error.code,
      message: error.message,
      params: params
    });
    // Handle specific errors
    switch (error.code) {
      case 'COMPANY_NOT_FOUND':
        return { notFound: true, domain: params.companyDomain };
      case 'RATE_LIMIT_EXCEEDED':
        // Wait and retry once
        await new Promise(r => setTimeout(r, 5000));
        return await mcp.call(tool, params);
      case 'MISSING_INTEGRATION':
        throw new Error(
          `Required integration missing. Please configure: ${error.requiredIntegrations.join(', ')}`
        );
      default:
        // Re-throw unknown errors
        throw error;
    }
  }
}
```
Tool Composition
Effective Tool Chaining
Chain tools logically from broad to specific:
Research Workflow Pattern
1. company_search (broad discovery)
↓
2. company_firmographic (basic validation)
↓
3. company_technographic (tech fit check)
↓
4. company_intent (timing check)
↓
5. company_spend (budget validation)
Implementation
```javascript
async function qualifyCompany(domain) {
  // Step 1: Basic info
  const firmographic = await mcp.call('company_firmographic', {
    companyDomain: domain
  });

  // Early exit if the company is too small
  if (firmographic.data.employeeCount < 100) {
    return { qualified: false, reason: 'Too small' };
  }

  // Step 2: Technology fit
  const technographic = await mcp.call('company_technographic', {
    companyDomain: domain
  });
  const hasTargetTech = technographic.data.technologies.some(
    tech => tech.name === 'Salesforce'
  );
  if (!hasTargetTech) {
    return { qualified: false, reason: 'No target technology' };
  }

  // Step 3: Buying signals
  const intent = await mcp.call('company_intent', {
    companyDomain: domain
  });
  if (intent.data.overallIntentScore > 70) {
    return {
      qualified: true,
      score: intent.data.overallIntentScore,
      firmographic: firmographic.data,
      technologies: technographic.data
    };
  }
  return { qualified: false, reason: 'Low intent' };
}
```
Parallel Processing
When data isn't dependent, fetch in parallel:
```javascript
async function getCompleteProfile(domain) {
  // These calls don't depend on each other
  const [firmographic, technographic, intent, spending] = await Promise.all([
    mcp.call('company_firmographic', { companyDomain: domain }),
    mcp.call('company_technographic', { companyDomain: domain }),
    mcp.call('company_intent', { companyDomain: domain }),
    mcp.call('company_spend', { companyDomain: domain })
  ]);
  return {
    firmographic: firmographic.data,
    technographic: technographic.data,
    intent: intent.data,
    spending: spending.data
  };
}
```
Data Quality
Validating Results
Always validate data before using it:
```javascript
function validateFirmographic(data) {
  const required = ['companyName', 'domain', 'employeeCount'];
  const missing = required.filter(field => !data[field]);
  if (missing.length > 0) {
    console.warn(`Missing fields: ${missing.join(', ')}`);
    return false;
  }
  // Validate ranges
  if (data.employeeCount < 0) {
    console.warn('Invalid employee count');
    return false;
  }
  return true;
}

const result = await mcp.call('company_firmographic', {
  companyDomain: 'example.com'
});
if (validateFirmographic(result.data)) {
  // Use the data
  processCompany(result.data);
} else {
  // Handle invalid data
  logDataQualityIssue(result);
}
```
Handling Missing Data
Not all companies have complete data. Handle missing fields gracefully:
```javascript
function safelyAccessData(firmographic) {
  return {
    name: firmographic.companyName || 'Unknown',
    employees: firmographic.employeeCount || 'Not available',
    revenue: firmographic.revenueRange || 'Not disclosed',
    industry: firmographic.industry || 'Unknown',
    location: firmographic.location?.city
      ? `${firmographic.location.city}, ${firmographic.location.state}`
      : 'Location not available'
  };
}
```
Performance Optimization
1. Request Only What You Need
Use specific filters to reduce data transfer:
❌ Bad:
```javascript
const all = await mcp.call('company_technographic', {
  companyDomain: 'example.com'
});
// Returns all 500+ technologies
const salesforceOnly = all.data.technologies.filter(
  t => t.name === 'Salesforce'
);
```
✅ Good:
```javascript
// Use list tools first to get IDs, then fetch only the data you need
const categories = await mcp.call('list_product_categories', {});
const crmCategory = categories.data.find(c => c.name === 'CRM');

const tech = await mcp.call('company_technographic', {
  companyDomain: 'example.com',
  categoryId: crmCategory.id
});
```
2. Monitor Performance
Track tool execution times:
```javascript
async function timedCall(tool, params) {
  const start = Date.now();
  try {
    const result = await mcp.call(tool, params);
    const duration = Date.now() - start;
    console.log(`${tool} completed in ${duration}ms`);
    // Alert on slow calls
    if (duration > 5000) {
      console.warn(`Slow call detected: ${tool} took ${duration}ms`);
    }
    return result;
  } catch (error) {
    const duration = Date.now() - start;
    console.error(`${tool} failed after ${duration}ms`);
    throw error;
  }
}
```
3. Use Streaming for Large Datasets
For large result sets, page through them with an async generator and process each batch as it arrives:
```javascript
async function* streamCompanySearch(filters) {
  const BATCH_SIZE = 50;
  let offset = 0;
  let hasMore = true;

  while (hasMore) {
    const result = await mcp.call('company_search', {
      filters: filters,
      limit: BATCH_SIZE,
      offset: offset
    });
    for (const company of result.data.companies) {
      yield company;
    }
    offset += BATCH_SIZE;
    hasMore = result.data.hasMore;
  }
}

// Usage
for await (const company of streamCompanySearch({ industry: 'Technology' })) {
  await processCompany(company);
}
```
Security
1. Protect API Keys
Never expose API keys in client-side code:
❌ Bad:
```javascript
// Client-side code
const apiKey = 'pk_live_abc123'; // Exposed in the browser!
```
✅ Good:
```javascript
// Server-side only
const apiKey = process.env.PHOENIX_API_KEY;

// The client calls your backend instead
fetch('/api/company-data', {
  method: 'POST',
  body: JSON.stringify({ domain: 'example.com' })
});
```
2. Validate User Input
Always validate domains and parameters from users:
```javascript
function isValidDomain(domain) {
  // Basic validation
  const domainRegex = /^[a-z0-9]+([-.][a-z0-9]+)*\.[a-z]{2,}$/i;
  return domainRegex.test(domain);
}

async function safeCompanyLookup(userInput) {
  const domain = userInput.trim().toLowerCase();
  if (!isValidDomain(domain)) {
    throw new Error('Invalid domain format');
  }
  return await mcp.call('company_firmographic', {
    companyDomain: domain
  });
}
```
3. Implement Usage Limits
Add application-level rate limiting:
```javascript
const userLimits = new Map();

async function rateLimitedCall(userId, tool, params) {
  const key = `${userId}:${Date.now() / 1000 / 60 | 0}`; // Per minute
  const count = userLimits.get(key) || 0;
  if (count >= 10) { // 10 requests per minute per user
    throw new Error('User rate limit exceeded');
  }
  userLimits.set(key, count + 1);
  return await mcp.call(tool, params);
}
```
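Note that the per-minute keys above are never deleted, so `userLimits` grows for as long as the process runs. A hedged sketch of a periodic sweep that drops expired buckets (the sweep interval is arbitrary; `pruneStaleEntries` is an illustrative helper, not part of any Phoenix API):

```javascript
// Remove counters whose minute-bucket is older than the current one.
// Keys follow the `${userId}:${minute}` format used above.
function pruneStaleEntries(limits, now = Date.now()) {
  const currentMinute = now / 1000 / 60 | 0;
  for (const key of limits.keys()) {
    const minute = Number(key.split(':')[1]);
    if (minute < currentMinute) limits.delete(key);
  }
}

// Sweep once a minute; unref() keeps the timer from holding
// the Node.js process open.
// setInterval(() => pruneStaleEntries(userLimits), 60_000).unref();
```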