Quick Start: Creating Your First LLM Instance
Learn how to create and use an LLM instance with SimplerLLM in just a few lines of code.
Installation
First, install SimplerLLM using pip:
```bash
pip install simplerllm
```
Setting Up API Keys
Create a `.env` file in your project root and add your API keys:

```
# .env file
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
# Add other provider keys as needed
```
💡 Tip: You can also pass API keys directly when creating an LLM instance using the `api_key` parameter.
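For example, a key can be supplied per instance instead of through the environment. A minimal sketch (the key string is a placeholder):

```python
from SimplerLLM.language.llm import LLM, LLMProvider

# Passing api_key explicitly overrides OPENAI_API_KEY from .env
llm = LLM.create(
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    api_key="sk-...",  # placeholder, not a real key
)
```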
Creating Your First LLM Instance
Here's how to create an LLM instance and generate your first response:
```python
from SimplerLLM.language.llm import LLM, LLMProvider

# Create an LLM instance with OpenAI
llm = LLM.create(
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o"
)

# Generate a response
response = llm.generate_response(
    prompt="Explain quantum computing in simple terms"
)

print(response)
```
Step-by-Step Explanation
1. Import Required Classes

```python
from SimplerLLM.language.llm import LLM, LLMProvider
```

Import the `LLM` class for creating instances and the `LLMProvider` enum for specifying providers.
2. Create LLM Instance
llm = LLM.create(
provider=LLMProvider.OPENAI,
model_name="gpt-4o"
)
Use LLM.create()
to instantiate an LLM with your chosen provider and model.
3. Generate Response

```python
response = llm.generate_response(
    prompt="Explain quantum computing in simple terms"
)
```

Call `generate_response()` with your prompt to get a response from the LLM.
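Since `generate_response()` makes a network call, it can fail at runtime (invalid key, rate limit, timeout). SimplerLLM's specific exception classes aren't covered in this quick start, so this sketch catches broadly:

```python
# Hedged sketch: catch broadly since the library's exception
# hierarchy isn't documented here.
try:
    response = llm.generate_response(
        prompt="Explain quantum computing in simple terms"
    )
    print(response)
except Exception as exc:  # e.g., auth errors, rate limits, timeouts
    print(f"Generation failed: {exc}")
```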
Trying Different Providers
SimplerLLM makes it easy to switch between providers. Here are examples for each supported provider:
OpenAI

```python
llm = LLM.create(
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o"
)
```

Anthropic Claude

```python
llm = LLM.create(
    provider=LLMProvider.ANTHROPIC,
    model_name="claude-3-5-sonnet-20241022"
)
```

Google Gemini

```python
llm = LLM.create(
    provider=LLMProvider.GEMINI,
    model_name="gemini-1.5-pro"
)
```

Cohere

```python
llm = LLM.create(
    provider=LLMProvider.COHERE,
    model_name="command-r-plus"
)
```

DeepSeek

```python
llm = LLM.create(
    provider=LLMProvider.DEEPSEEK,
    model_name="deepseek-chat"
)
```

OpenRouter (100+ Models)

```python
llm = LLM.create(
    provider=LLMProvider.OPENROUTER,
    model_name="openai/gpt-4o"
)
```

Ollama (Local Models)

```python
llm = LLM.create(
    provider=LLMProvider.OLLAMA,
    model_name="llama2"
)
```
Customizing Parameters
You can customize the LLM behavior with additional parameters:
```python
llm = LLM.create(
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    temperature=0.7,    # Controls randomness (0.0 to 2.0)
    top_p=1.0,          # Controls diversity via nucleus sampling
    api_key="your_key"  # Optional: overrides the environment variable
)
```
Parameter Guide
- `temperature`: Higher values (e.g., 1.2) make output more creative; lower values (e.g., 0.2) make it more focused and deterministic.
- `top_p`: Controls diversity by sampling only from the tokens that make up the top_p probability mass (nucleus sampling). Default is 1.0.
- `api_key`: Overrides the API key from environment variables.
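To see the effect of temperature in practice, generate the same prompt at a low and a high setting and compare. A minimal sketch (outputs will differ between runs by design):

```python
from SimplerLLM.language.llm import LLM, LLMProvider

prompt = "Suggest a name for a coffee shop."

for temp in (0.2, 1.2):
    llm = LLM.create(
        provider=LLMProvider.OPENAI,
        model_name="gpt-4o",
        temperature=temp,
    )
    # Low temperature -> focused and repeatable; high -> varied and creative
    print(f"temperature={temp}: {llm.generate_response(prompt=prompt)}")
```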
Complete Example
Here's a complete working example you can run:
```python
from SimplerLLM.language.llm import LLM, LLMProvider

def main():
    # Create LLM instance
    llm = LLM.create(
        provider=LLMProvider.OPENAI,
        model_name="gpt-4o",
        temperature=0.7
    )

    # Define your prompt
    prompt = """
    Write a short poem about artificial intelligence
    and its impact on humanity. Make it inspiring.
    """

    # Generate response
    print("Generating response...")
    response = llm.generate_response(prompt=prompt)

    # Display result
    print("\nResponse:")
    print(response)

if __name__ == "__main__":
    main()
```
Expected Output
When you run the example above, you'll get a response like:
```
Generating response...

Response:
In circuits bright and minds of steel,
A future dawns with dreams surreal.
Artificial thoughts that learn and grow,
Expanding all we'll ever know.

Through data streams and neural fire,
We build machines that never tire.
Not to replace the human heart,
But amplify our creative art...
```
What's Next?
Now that you know how to create an LLM instance, explore these advanced features:
- Reliable LLM → Add automatic failover between multiple providers
- Structured Output → Generate validated JSON responses with Pydantic
- Async Support → Use async/await for better performance
- Vector Operations → Work with embeddings and semantic search
🎓 Need More Help?
Check out our full documentation, join the Discord community, or browse example code on GitHub.