DeepSeek Provider

Use DeepSeek models through SimplerLLM's unified interface.

Overview

SimplerLLM provides seamless integration with DeepSeek's API, allowing you to use their models with just a few lines of code.

Using DeepSeek Models

Model Examples

# DeepSeek Chat (example)
model_name="deepseek-chat"

# DeepSeek Coder (example)
model_name="deepseek-coder"

# You can use any model from DeepSeek's catalog

Finding Available Models

Visit DeepSeek's official documentation for the complete, up-to-date list of available models.

Setup and Authentication

Get Your API Key

1. Go to DeepSeek Platform

2. Sign up or log in

3. Create a new API key

Configure Environment Variables

# .env file
DEEPSEEK_API_KEY=your-deepseek-api-key-here
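Before creating an LLM instance, you can verify the key is visible to Python. This is a small sketch of my own; `deepseek_key_configured` is a hypothetical helper name, and it assumes your shell or a loader such as python-dotenv has exported `DEEPSEEK_API_KEY` into the environment:

```python
import os

def deepseek_key_configured(env=os.environ):
    """Return True if DEEPSEEK_API_KEY is present and non-empty."""
    return bool(env.get("DEEPSEEK_API_KEY"))

# Usage: fail fast with a clear message instead of a cryptic auth error later.
if not deepseek_key_configured():
    print("Warning: DEEPSEEK_API_KEY is not set")
```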

Basic Usage

from SimplerLLM.language.llm import LLM, LLMProvider

# Create DeepSeek LLM instance
llm = LLM.create(
    provider=LLMProvider.DEEPSEEK,
    model_name="deepseek-chat"
)

# Generate a response
response = llm.generate_response(
    prompt="Explain machine learning in simple terms"
)

print(response)

With Custom Parameters

llm = LLM.create(
    provider=LLMProvider.DEEPSEEK,
    model_name="deepseek-chat",  # Use any DeepSeek model name
    temperature=0.7,
    max_tokens=1000
)

response = llm.generate_response(
    prompt="Write a Python function to sort a list"
)

print(response)

Advanced Features

Structured JSON Output

from pydantic import BaseModel, Field
from SimplerLLM.language.llm_addons import generate_pydantic_json_model

class CodeAnalysis(BaseModel):
    language: str = Field(description="Programming language")
    complexity: str = Field(description="Code complexity level")
    suggestions: list[str] = Field(description="Improvement suggestions")

llm = LLM.create(provider=LLMProvider.DEEPSEEK, model_name="deepseek-chat")

result = generate_pydantic_json_model(
    llm_instance=llm,
    prompt="Analyze this Python code: def add(a, b): return a + b",
    model_class=CodeAnalysis
)

print(f"Language: {result.language}")
print(f"Complexity: {result.complexity}")
print(f"Suggestions: {result.suggestions}")

Pricing Considerations

DeepSeek charges based on token usage. For current pricing, visit DeepSeek's pricing page.

Best Practices

1. Configure Temperature

Use lower values (e.g. 0.2) for factual or deterministic tasks and higher values (e.g. 0.8) for creative ones.

2. Set Token Limits

Set max_tokens to cap response length and keep costs predictable.

3. Implement Error Handling

Wrap API calls in try-except blocks so network failures, rate limits, and timeouts don't crash your application.
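The error-handling advice above can be sketched as a small retry helper. This is an illustrative pattern, not part of SimplerLLM; `safe_generate` is a hypothetical name, and it works with any callable matching the `generate_response(prompt=...)` signature shown earlier:

```python
import time

def safe_generate(generate_fn, prompt, retries=3, backoff=1.0):
    """Call generate_fn(prompt=...), retrying with exponential backoff on failure."""
    last_err = None
    for attempt in range(retries):
        try:
            return generate_fn(prompt=prompt)
        except Exception as err:  # in real code, catch the provider's specific errors
            last_err = err
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"All {retries} attempts failed") from last_err
```

Usage with the instance created above would look like `safe_generate(llm.generate_response, "Explain machine learning in simple terms")`.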

What's Next?

Official Resources