gpt-engineer: AI-Powered CLI for Code Generation and Experimentation

Summary

gpt-engineer is a CLI platform for experimenting with AI-driven code generation. It lets users specify software requirements in natural language and then watch as an AI writes, executes, and refines the code. The project is the precursor to lovable.dev and supports both creating new projects and improving existing code.

Introduction

gpt-engineer is a popular CLI platform for experimenting with AI-driven code generation. With more than 54,000 stars on GitHub, the project lets developers describe the desired software in natural language and then watch as an AI writes, executes, and refines the code automatically, acting as a precursor to more advanced platforms like lovable.dev.

While gptengineer.app offers a managed service for web app generation and aider provides a hackable CLI, gpt-engineer remains the original experimentation platform for AI-driven code creation.

Key capabilities include:

  • Specifying software requirements using natural language.
  • Automated code writing and execution by an AI.
  • Iterative improvements to existing code based on AI suggestions.

Installation

Getting started with gpt-engineer is straightforward.

For a stable release:

python -m pip install gpt-engineer

For development purposes:

git clone https://github.com/gpt-engineer-org/gpt-engineer.git
cd gpt-engineer
poetry install
poetry shell

gpt-engineer actively supports Python versions 3.10 to 3.12.
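
After installation, you can confirm the CLI is available on your PATH (a minimal check; the exact help output depends on the installed version):

gpte --help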

Setting up your API Key:
You will need an OpenAI API key. Choose one of the following methods (a short example follows the list):

  • Environment Variable: export OPENAI_API_KEY=[your api key]
  • .env file: Create a .env file from .env.template and add your OPENAI_API_KEY.
  • Custom Model: Refer to the official documentation for details on using local, Azure, or other models.
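
For example, the environment-variable approach in a bash shell looks like the following (the key value is a placeholder):

export OPENAI_API_KEY=sk-your-key-here

The .env approach is equivalent: copy .env.template to .env and set OPENAI_API_KEY to your key inside that file.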

Examples

gpt-engineer offers versatile usage patterns for both new and existing projects.

Creating new code (default usage):

  • Create an empty folder for your project.
  • Inside, create a file named prompt (no extension) and fill it with your instructions.
  • Run gpte <project_dir> (e.g., gpte projects/my-new-project).
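
For example, a complete first run might look like this (the project path and prompt text are illustrative):

mkdir -p projects/my-new-project
echo "Build a CLI todo app that stores tasks in a JSON file" > projects/my-new-project/prompt
gpte projects/my-new-project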

Improving existing code:

  • Locate a folder with code you wish to improve.
  • Create a prompt file within that folder, detailing your improvement instructions.
  • Run gpte <project_dir> -i (e.g., gpte projects/my-old-project -i).
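
For example (again, the path and prompt text are illustrative):

echo "Add input validation and clearer error messages" > projects/my-old-project/prompt
gpte projects/my-old-project -i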

Benchmarking custom agents:
The bench binary, installed with gpt-engineer, provides an interface for benchmarking your agent implementations against datasets like APPS and MBPP. Check out the template repository for detailed instructions.

Advanced Features:

  • Pre Prompts: Customize the AI agent's "identity" by overriding the preprompts folder.
  • Vision: Provide image inputs (e.g., UX or architecture diagrams) for vision-capable models using the --image_directory flag; see the sketch after this list.
  • Open Source and Local Models: Beyond OpenAI and Anthropic, gpt-engineer supports open-source models like WizardCoder. See the documentation for setup.
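
For the Vision feature, a hedged sketch of an invocation might look like the following; the project path and image folder are illustrative, a vision-capable model must be configured, and the exact flags may vary between versions, so consult the documentation:

gpte projects/my-design --image_directory images -i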

Why Use gpt-engineer?

gpt-engineer is a useful tool for anyone exploring AI-driven software development. Translating natural language into working code speeds up prototyping and iterative refinement, and the platform handles both new project generation and improvements to existing codebases. Its open-source license, support for multiple AI models, and active community make it a solid foundation for coding agent builders.

Links