OpenAI Provider

Use OpenAI's GPT models through SimplerLLM's unified, easy-to-use interface.

Overview

SimplerLLM provides seamless integration with OpenAI's API, allowing you to use their GPT models with just a few lines of code. The unified interface handles authentication, request formatting, and response parsing automatically.

Using OpenAI Models

OpenAI regularly releases new models and updates. You can use any OpenAI model by specifying its name in the model_name parameter. SimplerLLM works with all OpenAI models through the same unified interface.

Model Examples

Here are some commonly used models (as examples):

# GPT-4 series
model_name="gpt-4o"          # Example: Current flagship model
model_name="gpt-4-turbo"     # Example: Optimized for speed

# GPT-3.5 series
model_name="gpt-3.5-turbo"   # Example: Cost-effective option

# You can use any model name from OpenAI's catalog

Finding Available Models

For the complete list of available models and their capabilities, visit OpenAI's official model documentation. New models are released regularly, so check their docs for the latest options.
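
If you want to check programmatically which models your API key can access, the official openai Python package (a separate install from SimplerLLM; this sketch assumes OPENAI_API_KEY is already configured as described in the next section) can list them:

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
for model in client.models.list():
    print(model.id)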

Setup and Authentication

To use OpenAI with SimplerLLM, you need an API key from OpenAI:

1. Get Your API Key

1. Go to OpenAI Platform (platform.openai.com)

2. Sign up or log in to your account

3. Navigate to API Keys section

4. Create a new secret key

2. Configure Environment Variables

Add your API key to a .env file in your project root:

# .env file
OPENAI_API_KEY=sk-proj-your-api-key-here

Security Best Practice

Never commit your .env file to version control. Add it to .gitignore to keep your API keys secure.
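
SimplerLLM reads the key from the environment. As a quick sanity check that your .env file is being picked up, you can load it yourself with python-dotenv (install it with pip install python-dotenv if it is not already present):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads the .env file in the current directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"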

Basic Usage

Here's how to use OpenAI models with SimplerLLM:

Simple Text Generation

from SimplerLLM.language.llm import LLM, LLMProvider

# Create OpenAI LLM instance
llm = LLM.create(
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o"
)

# Generate a response
response = llm.generate_response(
    prompt="Explain the concept of recursion in programming with a simple example"
)

print(response)

With Custom Parameters

llm = LLM.create(
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",  # Use any OpenAI model name
    temperature=0.7,      # Controls creativity (0.0 to 2.0)
    max_tokens=1000,      # Maximum response length
    top_p=0.9            # Nucleus sampling parameter
)

response = llm.generate_response(
    prompt="Write a creative short story about AI"
)

print(response)

With System Message

response = llm.generate_response(
    prompt="How do I optimize database queries?",
    system_message="You are an expert database administrator. Provide detailed, technical answers with examples."
)

print(response)

Advanced Features

Structured JSON Output

from pydantic import BaseModel, Field
from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.language.llm_addons import generate_pydantic_json_model

class ProductInfo(BaseModel):
    name: str = Field(description="Product name")
    category: str = Field(description="Product category")
    price: float = Field(description="Price in USD")
    features: list[str] = Field(description="Key features")

llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")

product = generate_pydantic_json_model(
    llm_instance=llm,
    prompt="Generate information for a premium wireless headphone",
    model_class=ProductInfo
)

print(f"Name: {product.name}")
print(f"Category: {product.category}")
print(f"Price: ${product.price}")
print(f"Features: {', '.join(product.features)}")

Conversation with Context

from SimplerLLM.language.llm import LLM, LLMProvider
from SimplerLLM.prompts.messages_template import MessagesTemplate

# Create conversation
messages = MessagesTemplate()
messages.add_system_message("You are a helpful Python programming tutor")
messages.add_user_message("What is a list comprehension?")
messages.add_assistant_message("A list comprehension is a concise way to create lists in Python...")
messages.add_user_message("Can you show me an example?")

# Generate response with conversation context
llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
response = llm.generate_response(messages=messages.messages)

print(response)
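
If you prefer plain data structures over MessagesTemplate, the messages parameter appears to accept OpenAI-style role/content dictionaries directly (as implied by messages=messages.messages above). Treat this as an assumption to verify against your SimplerLLM version:

# Assumption: messages takes OpenAI chat-format dictionaries
conversation = [
    {"role": "system", "content": "You are a helpful Python programming tutor"},
    {"role": "user", "content": "What is a list comprehension?"},
]

response = llm.generate_response(messages=conversation)
print(response)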

Configuration Options

Fine-tune OpenAI model behavior with these parameters:

Parameter Reference

temperature (0.0 to 2.0, default: 0.7)

Controls randomness. Lower values (e.g., 0.2) give focused, repeatable output; higher values (e.g., 1.5) give more varied, creative output

temperature=0.7

max_tokens (default: 300)

Maximum number of tokens to generate in the response

max_tokens=1000

top_p (0.0 to 1.0, default: 1.0)

Nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability reaches top_p

top_p=0.9

api_key (optional)

Overrides the API key loaded from the OPENAI_API_KEY environment variable

api_key="your-key-here"

Pricing Considerations

OpenAI charges based on token usage (both input and output tokens). Different models have different pricing tiers. For current pricing information, visit OpenAI's pricing page.

Cost Optimization Tips

  • Set reasonable max_tokens limits to control response length and costs
  • Use temperature=0 for deterministic outputs (reduces need for retries)
  • Cache responses when possible to avoid duplicate API calls (see the sketch after this list)
  • Monitor token usage through OpenAI's dashboard
  • Set up billing alerts to track spending
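
As a sketch of the caching tip above, a simple in-memory dictionary keyed by prompt avoids paying for repeated identical calls. This wrapper is illustrative, not part of SimplerLLM:

from SimplerLLM.language.llm import LLM, LLMProvider

llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o")
_cache = {}  # in-memory only; use a persistent store for real workloads

def cached_response(prompt: str) -> str:
    # Only call the API for prompts we have not seen before
    if prompt not in _cache:
        _cache[prompt] = llm.generate_response(prompt=prompt)
    return _cache[prompt]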

Best Practices

1. Configure Temperature Appropriately

Use lower temperature (0-0.3) for factual tasks, medium (0.7) for balanced responses, and higher (1.0-1.5) for creative writing.
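
For instance, you can keep separate instances tuned for different tasks using the temperature parameter documented above:

from SimplerLLM.language.llm import LLM, LLMProvider

# Low temperature for factual tasks, higher for creative writing
factual_llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o", temperature=0.2)
creative_llm = LLM.create(provider=LLMProvider.OPENAI, model_name="gpt-4o", temperature=1.2)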

2. Use System Messages Effectively

Provide clear instructions via system messages to guide the model's behavior and tone.

3. Set Reasonable Token Limits

Use the max_tokens parameter to control response length and manage costs effectively.

4. Implement Error Handling

Always wrap API calls in try-except blocks and implement retry logic for production applications.

5. Monitor Usage and Costs

Track token usage and set up billing alerts in your OpenAI dashboard to avoid unexpected charges.

Error Handling

from SimplerLLM.language.llm import LLM, LLMProvider

try:
    llm = LLM.create(
        provider=LLMProvider.OPENAI,
        model_name="gpt-4o"
    )

    response = llm.generate_response(
        prompt="Your prompt here",
        max_tokens=500
    )

    print(f"Response: {response}")

except ValueError as e:
    # Configuration errors
    print(f"Configuration error: {e}")

except Exception as e:
    # API errors (rate limits, network issues, etc.)
    print(f"API error: {e}")
    # Implement retry logic or fallback here
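
For production use, you can wrap calls in a simple retry helper with exponential backoff to smooth over transient rate-limit and network errors. This pattern is illustrative, not a built-in SimplerLLM feature:

import time

def generate_with_retry(llm, prompt, max_retries=3):
    # Retry transient failures, waiting 1s, 2s, 4s, ... between attempts
    for attempt in range(max_retries):
        try:
            return llm.generate_response(prompt=prompt)
        except Exception:
            if attempt == max_retries - 1:
                raise  # retries exhausted; surface the last error
            time.sleep(2 ** attempt)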

Official Resources

For more details about OpenAI's models and pricing, visit their official documentation.