Plexe: Build Machine Learning Models from Natural Language Prompts

Summary
Plexe is an innovative Python library that empowers developers to build machine learning models using natural language descriptions. It automates the entire model creation process, from intent to deployment, through an intelligent multi-agent architecture. This allows for rapid development and experimentation, making ML accessible and efficient.
Introduction
Plexe is a Python library that dramatically simplifies the creation of machine learning models. With Plexe, you describe the model you want in natural language, eliminating the need for extensive manual coding. It leverages a multi-agent architecture to automate the entire model development lifecycle, from intent definition to deployment.
Whether you're predicting sentiment, housing prices, or customer churn, Plexe transforms your description into a fully functional ML model, making AI development more accessible and efficient.
Installation
Getting started with Plexe is straightforward. You can install it via pip:
```bash
pip install plexe                   # Standard installation, minimal dependencies
pip install "plexe[transformers]"   # Support for transformers, tokenizers, etc.
pip install "plexe[chatui]"         # Local chat UI for model interaction
pip install "plexe[all]"            # All optional dependencies
```
Plexe integrates with various LLM providers. Ensure you set your preferred provider's API key as an environment variable:
```bash
export OPENAI_API_KEY=<your-key>
export ANTHROPIC_API_KEY=<your-key>
export GEMINI_API_KEY=<your-key>
```
For a comprehensive list of providers and their environment variables, refer to the LiteLLM documentation.
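If you prefer to configure keys from within Python rather than exporting them in the shell, you can set the same environment variables via `os.environ` before building a model. This is a minimal sketch using the `OPENAI_API_KEY` variable from the list above; check the LiteLLM documentation for the variable name your provider expects:

```python
import os

# Set the provider API key programmatically instead of exporting it in the shell.
# The value below is a placeholder, not a real key.
os.environ["OPENAI_API_KEY"] = "sk-..."

# Sanity-check that the key is visible before building a model.
assert os.environ.get("OPENAI_API_KEY"), "API key not set"
```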
Examples
Defining and using a model with Plexe is intuitive:
```python
import plexe

# 1. Define the model using natural language and schemas
model = plexe.Model(
    intent="Predict sentiment from news articles",
    input_schema={"headline": str, "content": str},
    output_schema={"sentiment": str},
)

# 2. Build and train the model
model.build(
    datasets=[your_dataset],
    provider="openai/gpt-4o-mini",
    max_iterations=10,
)

# 3. Use the model to make predictions
prediction = model.predict({
    "headline": "New breakthrough in renewable energy",
    "content": "Scientists announced a major advancement...",
})
print(f"Sentiment prediction: {prediction['sentiment']}")

# 4. Save and load the model
plexe.save_model(model, "sentiment-model")
loaded_model = plexe.load_model("sentiment-model.tar.gz")
```
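The `provider` argument follows LiteLLM's `provider/model` naming convention. A minimal sketch of how such a string decomposes (plain string handling, not part of Plexe's API):

```python
# LiteLLM-style provider strings take the form "<provider>/<model-name>".
provider = "openai/gpt-4o-mini"
provider_name, model_name = provider.split("/", 1)

print(provider_name)  # openai
print(model_name)     # gpt-4o-mini
```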
You can also define models with more complex intents, and Plexe will infer the schema if not provided:
```python
model = plexe.Model(
    intent="Predict housing prices based on features like size, location, number of bedrooms, etc."
)

# Schema can be automatically inferred during build if not explicitly provided
model.build(provider="openai/gpt-4o-mini")
```
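The `input_schema` and `output_schema` dictionaries map field names to Python types, as in the sentiment example above. A minimal sketch of checking a sample record against such a schema (plain Python for illustration, independent of Plexe's own validation):

```python
def matches_schema(record: dict, schema: dict) -> bool:
    """Check that a record has exactly the schema's fields, with the right types."""
    if set(record) != set(schema):
        return False
    return all(isinstance(record[name], typ) for name, typ in schema.items())

input_schema = {"headline": str, "content": str}

print(matches_schema({"headline": "Breaking news", "content": "..."}, input_schema))  # True
print(matches_schema({"headline": 42}, input_schema))                                 # False
```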
Why Use Plexe?
Plexe offers a suite of features that make it a powerful tool for ML development:
- Natural Language Model Definition: Describe your models in plain English, and Plexe handles the implementation complexity.
- Multi-Agent Architecture: A system of specialized AI agents analyzes requirements, plans solutions, generates code, tests and evaluates, and packages the model for deployment.
- Automated Model Building: Build complete models with a single method call, significantly accelerating the experimentation and development process.
- Distributed Training with Ray: Leverage Ray for distributed model training and evaluation, enabling faster parallel processing and efficient exploration of model variants.
- Data Generation & Schema Inference: Generate synthetic data or let Plexe automatically infer input and output schemas based on your intent.
- Multi-Provider Support: Use your preferred LLM provider, including OpenAI, Anthropic, Ollama, and Hugging Face, with seamless integration via LiteLLM.
Links
- GitHub Repository: plexe-ai/plexe
- Official Documentation: docs.plexe.ai
- Discord Community: Join our Discord
- YouTube Demo: Watch the demo on YouTube