Google Gemini Provider
Use Google's Gemini models through SimplerLLM's unified interface with consistent, easy-to-use methods.
Overview
SimplerLLM provides seamless integration with Google's Gemini API, allowing you to use their models with just a few lines of code. The unified interface handles authentication, request formatting, and response parsing automatically.
Using Gemini Models
Google regularly releases new Gemini models and updates. You can use any Gemini model by specifying its name in the model_name parameter.
Model Examples
# Gemini 1.5 Pro (example)
model_name="gemini-1.5-pro"
# Gemini 1.5 Flash (example)
model_name="gemini-1.5-flash"
# You can use any model name from Google's catalog
Finding Available Models
For the complete list of available models, visit Google's official model documentation.
Setup and Authentication
1. Get Your API Key
Create an API key in Google AI Studio (https://aistudio.google.com).
2. Configure Environment Variables
# .env file
GEMINI_API_KEY=your-gemini-api-key-here
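SimplerLLM reads the key from the environment. A minimal stdlib sketch of how you might verify the key is set before creating an LLM instance (the helper name `get_gemini_key` is illustrative, not part of SimplerLLM):

```python
import os

def get_gemini_key() -> str:
    """Return the Gemini API key from the environment, failing fast if missing."""
    key = os.getenv("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY environment variable is not set")
    return key
```

Failing fast here gives a clear configuration error instead of an opaque authentication failure on the first API call.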
Basic Usage
Simple Text Generation
from SimplerLLM.language.llm import LLM, LLMProvider
# Create Gemini LLM instance
llm = LLM.create(
    provider=LLMProvider.GEMINI,
    model_name="gemini-1.5-pro"
)
# Generate a response
response = llm.generate_response(
    prompt="Explain the concept of recursion in programming"
)
print(response)
With Custom Parameters
llm = LLM.create(
    provider=LLMProvider.GEMINI,
    model_name="gemini-1.5-pro",  # Use any Gemini model name
    temperature=0.7,              # Controls creativity
    max_tokens=1000,              # Maximum response length
    top_p=0.9                     # Nucleus sampling
)
response = llm.generate_response(
    prompt="Write a creative short story about AI"
)
print(response)
Advanced Features
Structured JSON Output
from pydantic import BaseModel, Field
from SimplerLLM.language.llm_addons import generate_pydantic_json_model
class ArticleSummary(BaseModel):
    title: str = Field(description="Article title")
    summary: str = Field(description="Brief summary")
    key_points: list[str] = Field(description="Main points")
llm = LLM.create(provider=LLMProvider.GEMINI, model_name="gemini-1.5-pro")
result = generate_pydantic_json_model(
    llm_instance=llm,
    prompt="Summarize an article about renewable energy",
    model_class=ArticleSummary
)
print(f"Title: {result.title}")
print(f"Summary: {result.summary}")
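Conceptually, structured output means parsing the model's JSON reply into a typed object and discarding anything that doesn't fit the schema. A minimal stdlib sketch of that idea (using dataclasses instead of Pydantic; `parse_reply` is illustrative, not SimplerLLM's implementation):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class ArticleSummary:
    title: str
    summary: str
    key_points: list

def parse_reply(raw: str) -> ArticleSummary:
    """Parse a JSON reply into the typed model, ignoring unexpected keys."""
    data = json.loads(raw)
    allowed = {f.name for f in fields(ArticleSummary)}
    return ArticleSummary(**{k: v for k, v in data.items() if k in allowed})
```

In practice generate_pydantic_json_model also handles prompting the model to emit valid JSON and validating field types, which Pydantic does for you.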
Configuration Options
Parameter Reference
temperature (0.0 to 1.0, default: 0.7)
Controls randomness in generation
max_tokens (default: 300)
Maximum tokens to generate
top_p (0.0 to 1.0, default: 1.0)
Nucleus sampling parameter
Pricing Considerations
Google Gemini charges based on token usage. For current pricing, visit Google's pricing page.
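To budget ahead of time, you can estimate cost from expected token counts. The rates below are placeholders for illustration only, not Google's actual prices; always check the official pricing page:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_1k: float = 0.001,
                  output_rate_per_1k: float = 0.002) -> float:
    """Estimate request cost in dollars. Rates are PLACEHOLDERS, not real pricing."""
    return (input_tokens / 1000) * input_rate_per_1k \
         + (output_tokens / 1000) * output_rate_per_1k
```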
Best Practices
1. Configure Temperature Appropriately
Lower for factual tasks, higher for creative tasks
2. Set Reasonable Token Limits
Use max_tokens to control costs
3. Implement Error Handling
Always wrap API calls in try-except blocks
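Practice #1 can be captured in a small helper that maps task type to a temperature preset. The function and its preset values are illustrative choices, not part of SimplerLLM:

```python
def pick_temperature(task: str) -> float:
    """Return a temperature preset for a task type (illustrative values)."""
    presets = {
        "extraction": 0.0,     # factual, deterministic output
        "summarization": 0.3,  # mostly factual, slight variation
        "chat": 0.7,           # balanced default
        "creative": 0.9,       # stories, brainstorming
    }
    return presets.get(task, 0.7)  # fall back to the balanced default
```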
Error Handling
from SimplerLLM.language.llm import LLM, LLMProvider
try:
llm = LLM.create(
provider=LLMProvider.GEMINI,
model_name="gemini-1.5-pro"
)
response = llm.generate_response(prompt="Your prompt here")
print(f"Response: {response}")
except ValueError as e:
print(f"Configuration error: {e}")
except Exception as e:
print(f"API error: {e}")
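For transient API failures (rate limits, timeouts), a retry loop with exponential backoff often helps. A generic sketch, not tied to SimplerLLM's internals (`call_with_retries` is an illustrative helper name):

```python
import time

def call_with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))
```

Usage: `call_with_retries(lambda: llm.generate_response(prompt="..."))`. In production you would retry only on errors known to be transient rather than on every Exception.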