Anthropic Claude Provider
Use Anthropic's Claude models through SimplerLLM's unified interface with consistent, easy-to-use methods.
Overview
SimplerLLM provides seamless integration with Anthropic's Claude API, allowing you to use their models with just a few lines of code. The unified interface handles authentication, request formatting, and response parsing automatically.
Using Claude Models
Anthropic regularly releases new models and updates. You can use any Claude model by specifying its name in the model_name parameter. SimplerLLM works with all Claude models through the same unified interface.
Model Examples
Here are some commonly used models (as examples):
# Claude 3.5 Sonnet (example)
model_name="claude-3-5-sonnet-20241022"
# Claude 3 Opus (example)
model_name="claude-3-opus-20240229"
# Claude 3 Haiku (example)
model_name="claude-3-haiku-20240307"
# You can use any model name from Anthropic's catalog
Finding Available Models
For the complete list of available models and their capabilities, visit Anthropic's official model documentation. New models are released regularly, so check their docs for the latest options.
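If you prefer to query the catalog programmatically, the official Anthropic Python SDK (a separate install, not part of SimplerLLM) exposes a model-listing endpoint. A minimal sketch, assuming the anthropic package is installed and ANTHROPIC_API_KEY is set:

# Sketch: list available Claude models with the Anthropic SDK (separate from SimplerLLM).
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically
for model in client.models.list():
    print(model.id)  # e.g. "claude-3-5-sonnet-20241022"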
Setup and Authentication
To use Anthropic Claude with SimplerLLM, you need an API key from Anthropic:
1. Get Your API Key
1. Go to Anthropic Console
2. Sign up or log in to your account
3. Navigate to the API Keys section
4. Create a new API key
2. Configure Environment Variables
Add your API key to a .env file in your project root:
# .env file
ANTHROPIC_API_KEY=sk-ant-your-api-key-here
Security Best Practice
Never commit your .env file to version control. Add it to .gitignore to keep your API keys secure.
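SimplerLLM reads ANTHROPIC_API_KEY from the environment. If you keep the key in a .env file, a minimal sketch for loading it at startup (assumes the python-dotenv package is installed; the explicit check is optional):

# Sketch: load ANTHROPIC_API_KEY from a .env file before creating the LLM.
# Assumes `pip install python-dotenv`.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root into the process environment

if not os.getenv("ANTHROPIC_API_KEY"):
    raise ValueError("ANTHROPIC_API_KEY is not set; check your .env file")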
Basic Usage
Here's how to use Claude models with SimplerLLM:
Simple Text Generation
from SimplerLLM.language.llm import LLM, LLMProvider
# Create Claude LLM instance
llm = LLM.create(
    provider=LLMProvider.ANTHROPIC,
    model_name="claude-3-5-sonnet-20241022"
)

# Generate a response
response = llm.generate_response(
    prompt="Explain the concept of recursion in programming with a simple example"
)
print(response)
With Custom Parameters
llm = LLM.create(
    provider=LLMProvider.ANTHROPIC,
    model_name="claude-3-5-sonnet-20241022",  # Use any Claude model name
    temperature=0.7,   # Controls creativity (0.0 to 1.0)
    max_tokens=1000,   # Maximum response length
    top_p=0.9          # Nucleus sampling parameter
)

response = llm.generate_response(
    prompt="Write a creative short story about AI"
)
print(response)
With System Message
response = llm.generate_response(
    prompt="How do I optimize database queries?",
    system_message="You are an expert database administrator. Provide detailed, technical answers with examples."
)
print(response)
Advanced Features
Structured JSON Output
from pydantic import BaseModel, Field
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm_addons import generate_pydantic_json_model

class ProductInfo(BaseModel):
    name: str = Field(description="Product name")
    category: str = Field(description="Product category")
    price: float = Field(description="Price in USD")
    features: list[str] = Field(description="Key features")

llm = LLM.create(provider=LLMProvider.ANTHROPIC, model_name="claude-3-5-sonnet-20241022")

product = generate_pydantic_json_model(
    llm_instance=llm,
    prompt="Generate information for a premium wireless headphone",
    model_class=ProductInfo
)

print(f"Name: {product.name}")
print(f"Category: {product.category}")
print(f"Price: ${product.price}")
print(f"Features: {', '.join(product.features)}")
Conversation with Context
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.prompts.messages_template import MessagesTemplate

# Create conversation
messages = MessagesTemplate()
messages.add_system_message("You are a helpful Python programming tutor")
messages.add_user_message("What is a list comprehension?")
messages.add_assistant_message("A list comprehension is a concise way to create lists in Python...")
messages.add_user_message("Can you show me an example?")

# Generate response with conversation context
llm = LLM.create(provider=LLMProvider.ANTHROPIC, model_name="claude-3-5-sonnet-20241022")
response = llm.generate_response(messages=messages.messages)
print(response)
Prompt Caching
Anthropic supports prompt caching to reduce costs for repeated prompts:
llm = LLM.create(
    provider=LLMProvider.ANTHROPIC,
    model_name="claude-3-5-sonnet-20241022"
)

# Enable prompt caching for large, repeated context
response = llm.generate_response(
    prompt="Based on the documentation above, how do I implement feature X?",
    prompt_caching=True,
    cached_input="[Large documentation text that will be reused across multiple requests]"
)
print(response)
Prompt Caching Benefits
When you have large context that's reused across multiple requests (like documentation or knowledge bases), prompt caching can significantly reduce costs by caching that context on Anthropic's servers.
Configuration Options
Fine-tune Claude model behavior with these parameters:
Parameter Reference
temperature (0.0 to 1.0, default: 0.7)
Controls randomness. Lower values = focused, higher values = creative
temperature=0.7
max_tokens (default: 300)
Maximum number of tokens to generate in the response
max_tokens=1000
top_p (0.0 to 1.0, default: 1.0)
Nucleus sampling: considers tokens with top_p probability mass
top_p=0.9
prompt_caching (optional, default: False)
Enable prompt caching for repeated context
prompt_caching=True
api_key (optional)
Override the environment variable API key
api_key="your-key-here"
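Putting these parameters together, a sketch of a fully configured instance (the api_key value is a placeholder; in practice, prefer the ANTHROPIC_API_KEY environment variable):

llm = LLM.create(
    provider=LLMProvider.ANTHROPIC,
    model_name="claude-3-5-sonnet-20241022",   # any Claude model name
    temperature=0.3,                # focused, mostly deterministic output
    max_tokens=800,                 # cap response length (and cost)
    top_p=0.9,                      # nucleus sampling
    api_key="sk-ant-your-api-key-here"  # optional override of the environment variable
)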
Pricing Considerations
Anthropic charges based on token usage (both input and output tokens). Different models have different pricing tiers. For current pricing information, visit Anthropic's pricing page.
Cost Optimization Tips
- Use prompt caching for large, repeated context to reduce costs by up to 90%
- Set reasonable max_tokens limits to control response length and costs
- Use temperature=0 for deterministic outputs (reduces need for retries)
- Cache responses when possible to avoid duplicate API calls (see the sketch after this list)
- Monitor token usage through Anthropic's console
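For the response-caching tip, a minimal in-memory sketch keyed by prompt; cached_generate and _response_cache are illustrative names, not part of SimplerLLM, and the snippet assumes an existing llm instance:

# Sketch: avoid duplicate API calls for identical prompts.
# Works best with temperature=0 so repeated prompts would return the same answer anyway.
_response_cache = {}

def cached_generate(llm, prompt):
    if prompt not in _response_cache:
        _response_cache[prompt] = llm.generate_response(prompt=prompt)
    return _response_cache[prompt]

answer = cached_generate(llm, "Summarize the key ideas of prompt caching")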
Best Practices
1. Configure Temperature Appropriately
Use lower temperature (0-0.3) for factual tasks, medium (0.7) for balanced responses, and higher (0.9-1.0) for creative writing.
2. Use System Messages Effectively
Provide clear instructions via system messages to guide Claude's behavior and tone.
3. Leverage Prompt Caching
When working with large context that's reused (documentation, knowledge bases), use prompt caching to significantly reduce costs.
4. Set Reasonable Token Limits
Use the max_tokens parameter to control response length and manage costs effectively.
5. Implement Error Handling
Always wrap API calls in try-except blocks and implement retry logic for production applications.
Error Handling
from SimplerLLM.language.llm import LLM, LLMProvider
try:
    llm = LLM.create(
        provider=LLMProvider.ANTHROPIC,
        model_name="claude-3-5-sonnet-20241022"
    )

    response = llm.generate_response(
        prompt="Your prompt here",
        max_tokens=500
    )
    print(f"Response: {response}")

except ValueError as e:
    # Configuration errors
    print(f"Configuration error: {e}")

except Exception as e:
    # API errors (rate limits, network issues, etc.)
    print(f"API error: {e}")
    # Implement retry logic or fallback here
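For production use, a minimal retry sketch with exponential backoff; call_with_retries is an illustrative helper, not part of SimplerLLM, and it assumes an existing llm instance:

import time

def call_with_retries(llm, prompt, max_attempts=3):
    # Retry transient API failures (rate limits, network errors) with exponential backoff.
    for attempt in range(1, max_attempts + 1):
        try:
            return llm.generate_response(prompt=prompt, max_tokens=500)
        except Exception as e:
            if attempt == max_attempts:
                raise
            wait = 2 ** attempt  # 2s, 4s, 8s, ...
            print(f"Attempt {attempt} failed ({e}); retrying in {wait}s")
            time.sleep(wait)

response = call_with_retries(llm, "Your prompt here")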
What's Next?
- OpenAI Provider: Learn about OpenAI integration
- Google Gemini: Use Gemini models with large context windows
- Reliable LLM: Add automatic failover between providers
- LLM Interface: Learn about the unified interface
Official Resources
For more details about Claude models and pricing, visit Anthropic's official documentation.