Cohere Provider
Use Cohere's Command models through SimplerLLM's unified interface.
Overview
SimplerLLM provides seamless integration with Cohere's API, allowing you to use their Command models with just a few lines of code.
Using Cohere Models
Model Examples
# Command R+ (example)
model_name="command-r-plus"
# Command R (example)
model_name="command-r"
# You can use any model from Cohere's catalog
Finding Available Models
Visit Cohere's official model documentation for the complete list.
Setup and Authentication
Get Your API Key
Create an account on the Cohere dashboard and generate an API key.
Configure Environment Variables
# .env file
COHERE_API_KEY=your-cohere-api-key-here
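After saving the `.env` file, you can sanity-check that the key is visible to your process (this assumes SimplerLLM reads `COHERE_API_KEY` from the environment, e.g. via python-dotenv):

```python
import os

# Quick sanity check before creating the LLM instance
key_is_set = os.getenv("COHERE_API_KEY") is not None
print("COHERE_API_KEY set:", key_is_set)
```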
Basic Usage
from SimplerLLM.language.llm import LLM, LLMProvider
# Create Cohere LLM instance
llm = LLM.create(
provider=LLMProvider.COHERE,
model_name="command-r-plus"
)
# Generate a response
response = llm.generate_response(
prompt="Explain machine learning in simple terms"
)
print(response)
With Custom Parameters
llm = LLM.create(
provider=LLMProvider.COHERE,
model_name="command-r-plus", # Use any Cohere model name
temperature=0.7,
max_tokens=1000
)
response = llm.generate_response(
prompt="Write a creative short story"
)
print(response)
Advanced Features
Structured JSON Output
from pydantic import BaseModel, Field
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm_addons import generate_pydantic_json_model
class Summary(BaseModel):
title: str = Field(description="Title")
points: list[str] = Field(description="Key points")
llm = LLM.create(provider=LLMProvider.COHERE, model_name="command-r-plus")
result = generate_pydantic_json_model(
llm_instance=llm,
prompt="Summarize the benefits of cloud computing",
model_class=Summary
)
print(f"Title: {result.title}")
print(f"Points: {result.points}")
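Conceptually, `generate_pydantic_json_model` prompts the model for JSON matching the schema and validates the reply into the Pydantic class. A minimal stdlib sketch of that validation step, using a hypothetical raw response string:

```python
import json

# Hypothetical raw JSON string as a Command model might return it
raw = '{"title": "Benefits of Cloud Computing", "points": ["Scalability", "Cost savings", "Reliability"]}'

data = json.loads(raw)
# These shape checks mirror the Summary model's fields
assert isinstance(data["title"], str)
assert all(isinstance(p, str) for p in data["points"])
print(data["title"])
```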
Pricing Considerations
Cohere charges based on token usage. For current pricing, visit Cohere's pricing page.
Best Practices
1. Configure Temperature
Use lower values for factual tasks and higher values for creative ones
2. Set Token Limits
Use max_tokens to control costs
3. Implement Error Handling
Wrap API calls in try-except blocks
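As a sketch of the third practice, here is a generic retry wrapper with exponential backoff that you could put around `llm.generate_response`. The broad `except Exception` is a placeholder; narrow it to the provider's actual exception types in real code.

```python
import time

def generate_with_retries(generate, prompt, max_attempts=3, base_delay=1.0):
    """Call `generate(prompt=...)` with retries and exponential backoff.

    `generate` stands in for a callable such as llm.generate_response.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return generate(prompt=prompt)
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example with a stand-in callable that fails once, then succeeds:
calls = {"n": 0}
def flaky(prompt):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient API error")
    return f"response to: {prompt}"

print(generate_with_retries(flaky, "Explain machine learning", base_delay=0.01))
```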