## Installation
RL4CO is now available for installation via `pip`:

```bash
pip install rl4co
```
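To check that the install worked, you can import the package and print the installed version. This is a minimal sanity check using only the standard library; it assumes `pip` installed into the currently active environment:

```python
# Sanity check: RL4CO imports and its installed version is visible
from importlib.metadata import version

import rl4co  # noqa: F401  # fails here if the install is broken

print(version("rl4co"))
```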
### Local install and development
If you want to develop RL4CO or access the latest builds, you may install it locally after cloning the repository:

```bash
git clone https://github.com/ai4co/rl4co && cd rl4co
```
The simplest way is via `pip` in editable mode:

```bash
pip install -e .
```

To install optional dependencies, you may specify them as extras:

```bash
pip install -e ".[dev,graph,routing,docs]"
```
We recommend installing in a virtual environment with a package manager such as the blazing-fast `uv`, `poetry`, or `conda`, with quickstart commands below:
**Install with `uv`**

You first need to install `uv`, e.g., with `pip`:

```bash
pip install uv
```

Then, sync the dependencies into a local virtual environment and activate it:

```bash
uv sync
source .venv/bin/activate
```
Install with `poetry`
Make sure that you have `poetry` installed from the [official website](https://python-poetry.org/docs/). Then, you can create a virtual environment locally:poetry install
poetry env activate # poetry shell removed in poetry 2.0.0
**Install with `conda`**

After [installing `conda`](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html), you can create a virtual environment locally with:

```bash
conda create -n rl4co python=3.12
conda activate rl4co
```
## Minimalistic Example
Here is a minimalistic example training an Attention Model policy with POMO on TSP in less than 30 lines of code:
```python
from rl4co.envs.routing import TSPEnv, TSPGenerator
from rl4co.models import AttentionModelPolicy, POMO
from rl4co.utils import RL4COTrainer

# Instantiate generator and environment
generator = TSPGenerator(num_loc=50, loc_distribution="uniform")
env = TSPEnv(generator)

# Create policy and RL model
policy = AttentionModelPolicy(env_name=env.name, num_encoder_layers=6)
model = POMO(env, policy, batch_size=64, optimizer_kwargs={"lr": 1e-4})

# Instantiate Trainer and fit
trainer = RL4COTrainer(max_epochs=10, accelerator="gpu", precision="16-mixed")
trainer.fit(model)
```
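After training, you can roll out the learned policy on fresh instances. Below is a minimal inference sketch following the same API as the example above; it assumes the trained `model` and `env` objects are still in scope and selects greedy decoding via `decode_type="greedy"`:

```python
import torch

# Move the trained policy to the available device
device = "cuda" if torch.cuda.is_available() else "cpu"
policy = model.policy.to(device)

# Sample a small batch of new TSP instances from the environment
td_init = env.reset(batch_size=[4]).to(device)

# Decode greedily with the trained policy (no gradients needed at test time)
with torch.no_grad():
    out = policy(td_init.clone(), env, phase="test", decode_type="greedy")

print(out["reward"])  # negative tour lengths, one per instance
```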
**Tip**: We recommend checking out our quickstart notebook!