Constructive Autoregressive Methods¶
Attention Model (AM)¶
Classes:
- AttentionModel – Attention Model based on REINFORCE: https://arxiv.org/abs/1803.08475.
AttentionModel¶
AttentionModel(
env: RL4COEnvBase,
policy: AttentionModelPolicy = None,
baseline: Union[REINFORCEBaseline, str] = "rollout",
policy_kwargs={},
baseline_kwargs={},
**kwargs
)
Bases: REINFORCE
Attention Model based on REINFORCE: https://arxiv.org/abs/1803.08475.
Check :class:`REINFORCE` and :class:`rl4co.models.RL4COLitModule` for more details such as additional parameters including batch size.
Parameters:
- env (RL4COEnvBase) – Environment to use for the algorithm
- policy (AttentionModelPolicy, default: None) – Policy to use for the algorithm
- baseline (Union[REINFORCEBaseline, str], default: 'rollout') – REINFORCE baseline. Defaults to rollout (1 epoch of exponential, then greedy rollout baseline)
- policy_kwargs – Keyword arguments for policy
- baseline_kwargs – Keyword arguments for baseline
- **kwargs – Keyword arguments passed to the superclass
Source code in rl4co/models/zoo/am/model.py
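Below is a minimal training sketch in the spirit of the RL4CO quickstart; TSPEnv, its generator_params argument, and RL4COTrainer are assumed from typical RL4CO usage rather than documented on this page.

```python
from rl4co.envs import TSPEnv
from rl4co.models import AttentionModel
from rl4co.utils.trainer import RL4COTrainer

# Environment and model; baseline="rollout" matches the default documented above
env = TSPEnv(generator_params={"num_loc": 20})
model = AttentionModel(env, baseline="rollout", policy_kwargs={"embed_dim": 128})

# Lightning-style training wrapper provided by RL4CO
trainer = RL4COTrainer(max_epochs=3)
trainer.fit(model)
```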
Classes:
- AttentionModelPolicy – Attention Model Policy based on Kool et al. (2019): https://arxiv.org/abs/1803.08475.
AttentionModelPolicy¶
AttentionModelPolicy(
encoder: Module = None,
decoder: Module = None,
embed_dim: int = 128,
num_encoder_layers: int = 3,
num_heads: int = 8,
normalization: str = "batch",
feedforward_hidden: int = 512,
env_name: str = "tsp",
encoder_network: Module = None,
init_embedding: Module = None,
context_embedding: Module = None,
dynamic_embedding: Module = None,
use_graph_context: bool = True,
linear_bias_decoder: bool = False,
sdpa_fn: Callable = None,
sdpa_fn_encoder: Callable = None,
sdpa_fn_decoder: Callable = None,
mask_inner: bool = True,
out_bias_pointer_attn: bool = False,
check_nan: bool = True,
temperature: float = 1.0,
tanh_clipping: float = 10.0,
mask_logits: bool = True,
train_decode_type: str = "sampling",
val_decode_type: str = "greedy",
test_decode_type: str = "greedy",
moe_kwargs: dict = {"encoder": None, "decoder": None},
**unused_kwargs
)
Bases: AutoregressivePolicy
Attention Model Policy based on Kool et al. (2019): https://arxiv.org/abs/1803.08475.
This model first encodes the input graph using a Graph Attention Network (GAT) (:class:`AttentionModelEncoder`) and then decodes the solution using a pointer network (:class:`AttentionModelDecoder`). A cache stores the node embeddings so the decoder can reuse them and save computation.
See :class:`rl4co.models.common.constructive.autoregressive.policy.AutoregressivePolicy` for more details on the inference process.
Parameters:
- encoder (Module, default: None) – Encoder module, defaults to :class:`AttentionModelEncoder`
- decoder (Module, default: None) – Decoder module, defaults to :class:`AttentionModelDecoder`
- embed_dim (int, default: 128) – Dimension of the node embeddings
- num_encoder_layers (int, default: 3) – Number of layers in the encoder
- num_heads (int, default: 8) – Number of heads in the attention layers
- normalization (str, default: 'batch') – Normalization type in the attention layers
- feedforward_hidden (int, default: 512) – Dimension of the hidden layer in the feedforward network
- env_name (str, default: 'tsp') – Name of the environment used to initialize embeddings
- encoder_network (Module, default: None) – Network to use for the encoder
- init_embedding (Module, default: None) – Module to use for the initialization of the embeddings
- context_embedding (Module, default: None) – Module to use for the context embedding
- dynamic_embedding (Module, default: None) – Module to use for the dynamic embedding
- use_graph_context (bool, default: True) – Whether to use the graph context
- linear_bias_decoder (bool, default: False) – Whether to use a bias in the linear layer of the decoder
- sdpa_fn_encoder (Callable, default: None) – Function to use for the scaled dot product attention in the encoder
- sdpa_fn_decoder (Callable, default: None) – Function to use for the scaled dot product attention in the decoder
- sdpa_fn (Callable, default: None) – (deprecated) Function to use for the scaled dot product attention
- mask_inner (bool, default: True) – Whether to mask the inner product
- out_bias_pointer_attn (bool, default: False) – Whether to use a bias in the pointer attention
- check_nan (bool, default: True) – Whether to check for NaN values during decoding
- temperature (float, default: 1.0) – Temperature for the softmax
- tanh_clipping (float, default: 10.0) – Tanh clipping value (see Bello et al., 2016)
- mask_logits (bool, default: True) – Whether to mask the logits during decoding
- train_decode_type (str, default: 'sampling') – Type of decoding to use during training
- val_decode_type (str, default: 'greedy') – Type of decoding to use during validation
- test_decode_type (str, default: 'greedy') – Type of decoding to use during testing
- moe_kwargs (dict, default: {'encoder': None, 'decoder': None}) – Keyword arguments for MoE, e.g., {"encoder": {"hidden_act": "ReLU", "num_experts": 4, "k": 2, "noisy_gating": True}, "decoder": {"light_version": True, ...}}
Source code in rl4co/models/zoo/am/policy.py
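As a usage sketch, the policy can also be constructed explicitly and handed to the model; TSPEnv and its generator_params argument are assumptions here.

```python
from rl4co.envs import TSPEnv
from rl4co.models import AttentionModel
from rl4co.models.zoo.am import AttentionModelPolicy

env = TSPEnv(generator_params={"num_loc": 20})

# Override a few of the hyperparameters documented above
policy = AttentionModelPolicy(
    env_name="tsp",
    embed_dim=128,
    num_encoder_layers=3,
    num_heads=8,
    normalization="batch",
)
model = AttentionModel(env, policy=policy, baseline="rollout")
```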
Attention Model - PPO (AM-PPO)¶
Classes:
- AMPPO – PPO Model based on Proximal Policy Optimization (PPO) with an attention model policy.
AMPPO¶
AMPPO(
env: RL4COEnvBase,
policy: Module = None,
critic: CriticNetwork = None,
policy_kwargs: dict = {},
critic_kwargs: dict = {},
**kwargs
)
Bases: PPO
PPO Model based on Proximal Policy Optimization (PPO) with an attention model policy. We default to the attention model policy and the Attention Critic Network.
Parameters:
- env (RL4COEnvBase) – Environment to use for the algorithm
- policy (Module, default: None) – Policy to use for the algorithm
- critic (CriticNetwork, default: None) – Critic to use for the algorithm
- policy_kwargs (dict, default: {}) – Keyword arguments for policy
- critic_kwargs (dict, default: {}) – Keyword arguments for critic
Source code in rl4co/models/zoo/amppo/model.py
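A brief construction sketch; CVRPEnv and its generator_params argument are assumptions, and the critic is created automatically when critic=None per the defaults above.

```python
from rl4co.envs import CVRPEnv
from rl4co.models import AMPPO

env = CVRPEnv(generator_params={"num_loc": 20})
model = AMPPO(env, policy_kwargs={"embed_dim": 128})
```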
Heterogeneous Attention Model (HAM)¶
Classes:
- HeterogeneousAttentionModel – Heterogeneous Attention Model for solving the Pickup and Delivery Problem based on REINFORCE.
HeterogeneousAttentionModel¶
HeterogeneousAttentionModel(
env: RL4COEnvBase,
policy: HeterogeneousAttentionModelPolicy = None,
baseline: Union[REINFORCEBaseline, str] = "rollout",
policy_kwargs={},
baseline_kwargs={},
**kwargs
)
Bases: REINFORCE
Heterogeneous Attention Model for solving the Pickup and Delivery Problem based on REINFORCE: https://arxiv.org/abs/2110.02634.
Parameters:
- env (RL4COEnvBase) – Environment to use for the algorithm
- policy (HeterogeneousAttentionModelPolicy, default: None) – Policy to use for the algorithm
- baseline (Union[REINFORCEBaseline, str], default: 'rollout') – REINFORCE baseline. Defaults to rollout (1 epoch of exponential, then greedy rollout baseline)
- policy_kwargs – Keyword arguments for policy
- baseline_kwargs – Keyword arguments for baseline
- **kwargs – Keyword arguments passed to the superclass
Source code in rl4co/models/zoo/ham/model.py
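For illustration, a hedged construction sketch; PDPEnv and its generator_params argument are assumptions here.

```python
from rl4co.envs import PDPEnv
from rl4co.models import HeterogeneousAttentionModel

# The policy defaults to HeterogeneousAttentionModelPolicy when policy=None
env = PDPEnv(generator_params={"num_loc": 20})
model = HeterogeneousAttentionModel(env, baseline="rollout")
```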
Classes:
- HeterogeneousAttentionModelPolicy – Heterogeneous Attention Model Policy based on https://ieeexplore.ieee.org/document/9352489.
HeterogeneousAttentionModelPolicy¶
HeterogeneousAttentionModelPolicy(
encoder: Module = None,
env_name: str = "pdp",
init_embedding: Module = None,
embed_dim: int = 128,
num_encoder_layers: int = 3,
num_heads: int = 8,
normalization: str = "batch",
feedforward_hidden: int = 512,
sdpa_fn: Optional[Callable] = None,
**kwargs
)
Bases: AttentionModelPolicy
Heterogeneous Attention Model Policy based on https://ieeexplore.ieee.org/document/9352489.
We re-declare the most important arguments here for convenience, as in the paper.
See :class:`rl4co.models.zoo.am.AttentionModelPolicy` for more details.
Parameters:
- encoder (Module, default: None) – Encoder module. Can be passed by sub-classes
- env_name (str, default: 'pdp') – Name of the environment used to initialize embeddings
- init_embedding (Module, default: None) – Model to use for the initial embedding. If None, use the default embedding for the environment
- embed_dim (int, default: 128) – Dimension of the embeddings
- num_encoder_layers (int, default: 3) – Number of layers in the encoder
- num_heads (int, default: 8) – Number of heads for the attention in the encoder
- normalization (str, default: 'batch') – Normalization to use for the attention layers
- feedforward_hidden (int, default: 512) – Dimension of the hidden layer in the feedforward network
- sdpa_fn (Optional[Callable], default: None) – Function to use for the scaled dot product attention
- **kwargs – Keyword arguments passed to :class:`rl4co.models.zoo.am.AttentionModelPolicy`
Source code in rl4co/models/zoo/ham/policy.py
Classes:
- HeterogenousMHA
HeterogenousMHA¶
HeterogenousMHA(
num_heads,
input_dim,
embed_dim=None,
val_dim=None,
key_dim=None,
)
Bases: Module
Methods:
- forward
Source code in rl4co/models/zoo/ham/attention.py
forward¶
forward(q, h=None, mask=None)
Parameters:
- q – queries (batch_size, n_query, input_dim)
- h – data (batch_size, graph_size, input_dim)
- mask – mask (batch_size, n_query, graph_size) or viewable as that (i.e., can be 2-dim if n_query == 1). Mask should contain 1 if attention is not possible (i.e., mask is negative adjacency)
Source code in rl4co/models/zoo/ham/attention.py
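A hypothetical shape check for the module above; the import path follows the "Source code" note, and the node layout (one depot plus paired pickups and deliveries) is an assumption about the intended PDP structure.

```python
import torch
from rl4co.models.zoo.ham.attention import HeterogenousMHA

mha = HeterogenousMHA(num_heads=8, input_dim=128, embed_dim=128)

# Self-attention over 1 depot + 10 pickups + 10 deliveries
q = torch.randn(2, 21, 128)  # (batch_size, n_query, input_dim)
out = mha(q)                 # h defaults to q; expected shape (batch_size, n_query, embed_dim)

# An optional mask of shape (batch_size, n_query, graph_size) would mark
# disallowed query/node pairs with 1, as described above.
```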
Matrix Encoding Network (MatNet)¶
Classes:
- MatNetPolicy – MatNet Policy from Kwon et al., 2021.
- MultiStageFFSPPolicy – Policy for solving the FFSP using a separate encoder and decoder for each stage.
MatNetPolicy¶
MatNetPolicy(
env_name: str = "atsp",
embed_dim: int = 256,
num_encoder_layers: int = 5,
num_heads: int = 16,
normalization: str = "instance",
init_embedding_kwargs: dict = {"mode": "RandomOneHot"},
use_graph_context: bool = False,
bias: bool = False,
**kwargs
)
Bases: AutoregressivePolicy
MatNet Policy from Kwon et al., 2021. Reference: https://arxiv.org/abs/2106.11113
Warning
This implementation is under development and subject to change.
Parameters:
- env_name (str, default: 'atsp') – Name of the environment used to initialize embeddings
- embed_dim (int, default: 256) – Dimension of the node embeddings
- num_encoder_layers (int, default: 5) – Number of layers in the encoder
- num_heads (int, default: 16) – Number of heads in the attention layers
- normalization (str, default: 'instance') – Normalization type in the attention layers
- **kwargs – Keyword arguments passed to the AutoregressivePolicy
Default parameters are adopted from the original implementation.
Source code in rl4co/models/zoo/matnet/policy.py
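A hedged construction sketch; ATSPEnv, its generator_params argument, and the MatNet model wrapper are assumptions, since only the policy itself is documented on this page.

```python
from rl4co.envs import ATSPEnv
from rl4co.models import MatNet
from rl4co.models.zoo.matnet.policy import MatNetPolicy

env = ATSPEnv(generator_params={"num_loc": 20})
policy = MatNetPolicy(env_name="atsp", embed_dim=256, num_encoder_layers=5, num_heads=16)
model = MatNet(env, policy=policy)
```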
MultiStageFFSPPolicy¶
MultiStageFFSPPolicy(
stage_cnt: int,
embed_dim: int = 512,
num_heads: int = 16,
num_encoder_layers: int = 5,
use_graph_context: bool = False,
normalization: str = "instance",
feedforward_hidden: int = 512,
bias: bool = False,
train_decode_type: str = "sampling",
val_decode_type: str = "sampling",
test_decode_type: str = "sampling",
)
Bases: Module
Policy for solving the FFSP using a separate encoder and decoder for each stage. This requires the 'while not td["done"].all()' loop to be at the policy level (instead of the decoder level).
Source code in rl4co/models/zoo/matnet/policy.py
Classes:
- MixedScoresSDPA
MixedScoresSDPA¶
MixedScoresSDPA(
num_heads: int,
num_scores: int = 1,
mixer_hidden_dim: int = 16,
mix1_init: float = (1 / 2) ** (1 / 2),
mix2_init: float = (1 / 16) ** (1 / 2),
)
Bases: Module
Methods:
- forward – Scaled Dot-Product Attention with MatNet Scores Mixer
Source code in rl4co/models/zoo/matnet/encoder.py
forward¶
forward(q, k, v, attn_mask=None, dmat=None, dropout_p=0.0)
Scaled Dot-Product Attention with MatNet Scores Mixer
Source code in rl4co/models/zoo/matnet/encoder.py
MatNetMHA¶
Bases: Module
Methods:
- forward
Source code in rl4co/models/zoo/matnet/encoder.py
forward¶
forward(row_emb, col_emb, dmat, attn_mask=None)
Parameters:
- row_emb (Tensor) – [b, m, d]
- col_emb (Tensor) – [b, n, d]
- dmat (Tensor) – [b, m, n]
Returns:
- Updated row_emb (Tensor): [b, m, d]
- Updated col_emb (Tensor): [b, n, d]
Source code in rl4co/models/zoo/matnet/encoder.py
MatNetLayer¶
MatNetLayer(
embed_dim: int,
num_heads: int,
bias: bool = False,
feedforward_hidden: int = 512,
normalization: Optional[str] = "instance",
)
Bases: Module
Methods:
- forward
Source code in rl4co/models/zoo/matnet/encoder.py
forward¶
forward(row_emb, col_emb, dmat, attn_mask=None)
Parameters:
- row_emb (Tensor) – [b, m, d]
- col_emb (Tensor) – [b, n, d]
- dmat (Tensor) – [b, m, n]
Returns:
- Updated row_emb (Tensor): [b, m, d]
- Updated col_emb (Tensor): [b, n, d]
Source code in rl4co/models/zoo/matnet/encoder.py
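A hypothetical shape check for MatNetLayer; the import path follows the "Source code" note and the shapes mirror the forward signature above.

```python
import torch
from rl4co.models.zoo.matnet.encoder import MatNetLayer

layer = MatNetLayer(embed_dim=256, num_heads=16)

row_emb = torch.randn(4, 10, 256)  # [b, m, d]
col_emb = torch.randn(4, 10, 256)  # [b, n, d]
dmat = torch.randn(4, 10, 10)      # [b, m, n], e.g. a cost/distance matrix
row_emb, col_emb = layer(row_emb, col_emb, dmat)  # updated [b, m, d], [b, n, d]
```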
Classes:
- MultiStageFFSPDecoder – Decoder class for solving the FFSP using a separate MatNet decoder for each stage
MultiStageFFSPDecoder¶
MultiStageFFSPDecoder(
embed_dim: int,
num_heads: int,
use_graph_context: bool = True,
tanh_clipping: float = 10,
**kwargs
)
Bases: MatNetFFSPDecoder
Decoder class for solving the FFSP using a separate MatNet decoder for each stage, as originally implemented by Kwon et al. (2021).
Source code in rl4co/models/zoo/matnet/decoder.py
Multi-Decoder Attention Model (MDAM)¶
Classes:
- MDAM – Multi-Decoder Attention Model (MDAM) is a model to train multiple diverse policies.
Functions:
- rollout – In this case the reward from the model is [batch, num_paths]
MDAM¶
MDAM(
env: RL4COEnvBase,
policy: MDAMPolicy = None,
baseline: Union[REINFORCEBaseline, str] = "rollout",
policy_kwargs={},
baseline_kwargs={},
**kwargs
)
Bases: REINFORCE
Multi-Decoder Attention Model (MDAM) is a model to train multiple diverse policies, which effectively increases the chance of finding good solutions compared with existing methods that train only one policy. Reference link: https://arxiv.org/abs/2012.10638; Implementation reference: https://github.com/liangxinedu/MDAM.
Parameters:
- env (RL4COEnvBase) – Environment to use for the algorithm
- policy (MDAMPolicy, default: None) – Policy to use for the algorithm
- baseline (Union[REINFORCEBaseline, str], default: 'rollout') – REINFORCE baseline. Defaults to rollout (1 epoch of exponential, then greedy rollout baseline)
- policy_kwargs – Keyword arguments for policy
- baseline_kwargs – Keyword arguments for baseline
- **kwargs – Keyword arguments passed to the superclass
Methods:
- calculate_loss – Calculate loss for REINFORCE algorithm.
Source code in rl4co/models/zoo/mdam/model.py
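A short construction sketch; CVRPEnv and its generator_params argument are assumptions here.

```python
from rl4co.envs import CVRPEnv
from rl4co.models import MDAM

# MDAM trains several decoders, so rewards and log-likelihoods carry an extra
# num_paths dimension, handled by calculate_loss and rollout documented below
env = CVRPEnv(generator_params={"num_loc": 20})
model = MDAM(env, baseline="rollout")
```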
calculate_loss¶
calculate_loss(
td, batch, policy_out, reward=None, log_likelihood=None
)
Calculate loss for REINFORCE algorithm.
Same as in :class:`REINFORCE`, but the bl_val is simply unsqueezed to match the reward shape (i.e., [batch, num_paths])
Parameters:
- td – TensorDict containing the current state of the environment
- batch – Batch of data. This is used to get the extra loss terms, e.g., REINFORCE baseline
- policy_out – Output of the policy network
- reward – Reward tensor. If None, it is taken from policy_out
- log_likelihood – Log-likelihood tensor. If None, it is taken from policy_out
Source code in rl4co/models/zoo/mdam/model.py
rollout¶
rollout(
self,
model,
env,
batch_size=64,
device="cpu",
dataset=None,
)
In this case the reward from the model is [batch, num_paths], and the maximum reward across paths is used as the baseline. https://github.com/liangxinedu/MDAM/blob/19b0bf813fb2dbec2fcde9e22eb50e04675400cd/train.py#L38C29-L38C33
Source code in rl4co/models/zoo/mdam/model.py
Classes:
- MDAMPolicy – Multi-Decoder Attention Model (MDAM) policy.
MDAMPolicy¶
MDAMPolicy(
encoder: MDAMGraphAttentionEncoder = None,
decoder: MDAMDecoder = None,
embed_dim: int = 128,
env_name: str = "tsp",
num_encoder_layers: int = 3,
num_heads: int = 8,
normalization: str = "batch",
**decoder_kwargs
)
Bases: AutoregressivePolicy
Multi-Decoder Attention Model (MDAM) policy.
Source code in rl4co/models/zoo/mdam/policy.py
Classes:
- MDAMGraphAttentionEncoder
MDAMGraphAttentionEncoder¶
MDAMGraphAttentionEncoder(
num_heads,
embed_dim,
num_layers,
node_dim=None,
normalization="batch",
feedforward_hidden=512,
sdpa_fn: Optional[Callable] = None,
)
Bases: Module
Methods:
- forward
Source code in rl4co/models/zoo/mdam/encoder.py
forward¶
forward(x, mask=None, return_transform_loss=False)
Returns:
- h [batch_size, graph_size, embed_dim]
- attn [num_head, batch_size, graph_size, graph_size]
- V [num_head, batch_size, graph_size, key_dim]
- h_old [batch_size, graph_size, embed_dim]
Source code in rl4co/models/zoo/mdam/encoder.py
POMO¶
Classes:
- POMO – POMO Model for neural combinatorial optimization based on REINFORCE
POMO¶
POMO(
env: RL4COEnvBase,
policy: Module = None,
policy_kwargs={},
baseline: str = "shared",
num_augment: int = 8,
augment_fn: Union[str, callable] = "dihedral8",
first_aug_identity: bool = True,
feats: list = None,
num_starts: int = None,
**kwargs
)
Bases: REINFORCE
POMO Model for neural combinatorial optimization based on REINFORCE. Based on Kwon et al. (2020): http://arxiv.org/abs/2010.16011.
Note
If no policy kwargs are passed, we use the Attention Model policy with the following arguments, which differ from the base class:
- num_encoder_layers=6 (instead of 3)
- normalization="instance" (instead of "batch")
- use_graph_context=False (instead of True)
The latter is because the paper does not use the graph context in the policy, which otherwise seems to encourage overfitting to the training graph size.
Parameters:
- env (RL4COEnvBase) – TorchRL Environment
- policy (Module, default: None) – Policy to use for the algorithm
- policy_kwargs – Keyword arguments for policy
- baseline (str, default: 'shared') – Baseline to use for the algorithm. Note that POMO only supports the shared baseline, so we will throw an error if anything else is passed.
- num_augment (int, default: 8) – Number of augmentations (used only for validation and test)
- augment_fn (Union[str, callable], default: 'dihedral8') – Function to use for augmentation, defaulting to dihedral8
- first_aug_identity (bool, default: True) – Whether to include the identity augmentation in the first position
- feats (list, default: None) – List of features to augment
- num_starts (int, default: None) – Number of starts for multi-start. If None, use the number of available actions
- **kwargs – Keyword arguments passed to the superclass
Source code in rl4co/models/zoo/pomo/model.py
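A brief construction sketch; TSPEnv and its generator_params argument are assumptions here.

```python
from rl4co.envs import TSPEnv
from rl4co.models import POMO

# Shared baseline over multi-starts; augmentations apply only at validation/test
env = TSPEnv(generator_params={"num_loc": 50})
model = POMO(env, num_augment=8, num_starts=None)
```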
Pointer Network (PtrNet)¶
Classes:
- PointerNetwork – Pointer Network for neural combinatorial optimization based on REINFORCE
PointerNetwork¶
PointerNetwork(
env: RL4COEnvBase,
policy: PointerNetworkPolicy = None,
baseline: Union[REINFORCEBaseline, str] = "rollout",
policy_kwargs={},
baseline_kwargs={},
**kwargs
)
Bases: REINFORCE
Pointer Network for neural combinatorial optimization based on REINFORCE. Based on Vinyals et al. (2015): https://arxiv.org/abs/1506.03134. Refactored from the reference implementation: https://github.com/wouterkool/attention-learn-to-route
Parameters:
- env (RL4COEnvBase) – Environment to use for the algorithm
- policy (PointerNetworkPolicy, default: None) – Policy to use for the algorithm
- baseline (Union[REINFORCEBaseline, str], default: 'rollout') – REINFORCE baseline. Defaults to rollout (1 epoch of exponential, then greedy rollout baseline)
- policy_kwargs – Keyword arguments for policy
- baseline_kwargs – Keyword arguments for baseline
- **kwargs – Keyword arguments passed to the superclass
Source code in rl4co/models/zoo/ptrnet/model.py
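For illustration, a minimal construction sketch with the same assumptions about TSPEnv and generator_params as above.

```python
from rl4co.envs import TSPEnv
from rl4co.models import PointerNetwork

env = TSPEnv(generator_params={"num_loc": 20})
model = PointerNetwork(env, baseline="rollout")
```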
Classes:
- Encoder – Maps a graph represented as an input sequence to a hidden vector
Encoder¶
Encoder(input_dim, hidden_dim)
Bases: Module
Maps a graph represented as an input sequence to a hidden vector
Methods:
- init_hidden – Trainable initial hidden state
Source code in rl4co/models/zoo/ptrnet/encoder.py
init_hidden¶
init_hidden(hidden_dim)
Trainable initial hidden state
Source code in rl4co/models/zoo/ptrnet/encoder.py
Classes:
- SimpleAttention – A generic attention module for a decoder in seq2seq
- Decoder
SimpleAttention¶
SimpleAttention(dim, use_tanh=False, C=10)
Bases: Module
A generic attention module for a decoder in seq2seq
Methods:
- forward
Source code in rl4co/models/zoo/ptrnet/decoder.py
forward¶
forward(query, ref)
Parameters:
- query – the hidden state of the decoder at the current time step. batch x dim
- ref – the set of hidden states from the encoder. sourceL x batch x hidden_dim
Source code in rl4co/models/zoo/ptrnet/decoder.py
Decoder¶
Decoder(
embed_dim: int = 128,
hidden_dim: int = 128,
tanh_exploration: float = 10.0,
use_tanh: bool = True,
num_glimpses=1,
mask_glimpses=True,
mask_logits=True,
)
Bases: Module
Methods:
- forward
Source code in rl4co/models/zoo/ptrnet/decoder.py
forward¶
forward(
decoder_input,
embedded_inputs,
hidden,
context,
decode_type="sampling",
eval_tours=None,
)
Parameters:
- decoder_input – The initial input to the decoder; size is [batch_size x embed_dim]. Trainable parameter.
- embedded_inputs – [sourceL x batch_size x embed_dim]
- hidden – the previous hidden state; size is [batch_size x hidden_dim]. Initially this is set to (enc_h[-1], enc_c[-1])
- context – encoder outputs, [sourceL x batch_size x hidden_dim]
Source code in rl4co/models/zoo/ptrnet/decoder.py
Classes:
- CriticNetworkLSTM – Useful as a baseline in REINFORCE updates
CriticNetworkLSTM¶
CriticNetworkLSTM(
embed_dim,
hidden_dim,
n_process_block_iters,
tanh_exploration,
use_tanh,
)
Bases: Module
Useful as a baseline in REINFORCE updates
Methods:
- forward
Source code in rl4co/models/zoo/ptrnet/critic.py
forward¶
forward(inputs)
Parameters:
- inputs – [embed_dim x batch_size x sourceL] of embedded inputs
Source code in rl4co/models/zoo/ptrnet/critic.py
SymNCO¶
Classes:
- SymNCO – SymNCO Model based on REINFORCE with shared baselines.
SymNCO¶
SymNCO(
env: RL4COEnvBase,
policy: Union[Module, SymNCOPolicy] = None,
policy_kwargs: dict = {},
baseline: str = "symnco",
num_augment: int = 4,
augment_fn: Union[str, callable] = "symmetric",
feats: list = None,
alpha: float = 0.2,
beta: float = 1,
num_starts: int = 0,
**kwargs
)
Bases: REINFORCE
SymNCO Model based on REINFORCE with shared baselines. Based on Kim et al. (2022) https://arxiv.org/abs/2205.13209.
Parameters:
- env (RL4COEnvBase) – TorchRL environment to use for the algorithm
- policy (Union[Module, SymNCOPolicy], default: None) – Policy to use for the algorithm
- policy_kwargs (dict, default: {}) – Keyword arguments for policy
- num_augment (int, default: 4) – Number of augmentations
- augment_fn (Union[str, callable], default: 'symmetric') – Function to use for augmentation, defaulting to dihedral_8_augmentation
- feats (list, default: None) – List of features to augment
- alpha (float, default: 0.2) – weight for invariance loss
- beta (float, default: 1) – weight for solution symmetricity loss
- num_starts (int, default: 0) – Number of starts for multi-start. If None, use the number of available actions
- **kwargs – Keyword arguments passed to the superclass
Source code in rl4co/models/zoo/symnco/model.py
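A short construction sketch; TSPEnv and its generator_params argument are again assumptions.

```python
from rl4co.envs import TSPEnv
from rl4co.models import SymNCO

# alpha weights the invariance loss, beta the solution symmetricity loss
env = TSPEnv(generator_params={"num_loc": 20})
model = SymNCO(env, num_augment=4, alpha=0.2, beta=1.0)
```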
Classes:
- SymNCOPolicy – SymNCO Policy based on AutoregressivePolicy.
SymNCOPolicy¶
SymNCOPolicy(
embed_dim: int = 128,
env_name: str = "tsp",
num_encoder_layers: int = 3,
num_heads: int = 8,
normalization: str = "batch",
projection_head: Module = None,
use_projection_head: bool = True,
**kwargs
)
Bases: AttentionModelPolicy
SymNCO Policy based on AutoregressivePolicy.
This differs from the default :class:`AutoregressivePolicy` in that it projects the initial embeddings to a lower dimension using a projection head and returns the projection. This is used in the SymNCO algorithm to compute the invariance loss.
Based on Kim et al. (2022) https://arxiv.org/abs/2205.13209.
Parameters:
- embed_dim (int, default: 128) – Dimension of the embedding
- env_name (str, default: 'tsp') – Name of the environment
- num_encoder_layers (int, default: 3) – Number of layers in the encoder
- num_heads (int, default: 8) – Number of heads in the encoder
- normalization (str, default: 'batch') – Normalization to use in the encoder
- projection_head (Module, default: None) – Projection head to use
- use_projection_head (bool, default: True) – Whether to use projection head
- **kwargs – Keyword arguments passed to the superclass
Source code in rl4co/models/zoo/symnco/policy.py
Functions:
- problem_symmetricity_loss – REINFORCE loss for problem symmetricity
- solution_symmetricity_loss – REINFORCE loss for solution symmetricity
- invariance_loss – Loss for invariant representation on projected nodes
problem_symmetricity_loss¶
problem_symmetricity_loss(reward, log_likelihood, dim=1)
REINFORCE loss for problem symmetricity. The baseline is the average reward for all augmented problems. Corresponds to L_ps in the SymNCO paper.
Source code in rl4co/models/zoo/symnco/losses.py
solution_symmetricity_loss¶
solution_symmetricity_loss(reward, log_likelihood, dim=-1)
REINFORCE loss for solution symmetricity. The baseline is the average reward for all start nodes. Corresponds to L_ss in the SymNCO paper.
Source code in rl4co/models/zoo/symnco/losses.py
invariance_loss¶
invariance_loss(proj_embed, num_augment)
Loss for invariant representation on projected nodes. Corresponds to L_inv in the SymNCO paper.
Source code in rl4co/models/zoo/symnco/losses.py
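A hedged sketch of how these losses could be combined, following the alpha and beta weights documented for the SymNCO model above; tensor shapes are illustrative assumptions.

```python
import torch
from rl4co.models.zoo.symnco.losses import (
    invariance_loss,
    problem_symmetricity_loss,
    solution_symmetricity_loss,
)

reward = torch.randn(32, 4, 10)          # [batch, num_augment, num_starts]
log_likelihood = torch.randn(32, 4, 10)  # matching log-likelihoods
proj_embed = torch.randn(32 * 4, 128)    # projected embeddings for all augmentations

loss_ps = problem_symmetricity_loss(reward, log_likelihood, dim=1)    # L_ps
loss_ss = solution_symmetricity_loss(reward, log_likelihood, dim=-1)  # L_ss
loss_inv = invariance_loss(proj_embed, num_augment=4)                 # L_inv

alpha, beta = 0.2, 1.0  # defaults documented for the SymNCO model above
loss = loss_ps + beta * loss_ss + alpha * loss_inv
```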