Data
Datasets¶
Classes:

- FastTdDataset – see the class note below.
- TensorDictDataset – Dataset compatible with TensorDicts with low CPU usage.
- ExtraKeyDataset – Dataset that includes an extra key to add to the data dict.
- TensorDictDatasetFastGeneration – Dataset compatible with TensorDicts.
FastTdDataset¶
FastTdDataset(td: TensorDict)
Bases: Dataset
Note
Check out the issue on tensordict for more details: https://github.com/pytorch-labs/tensordict/issues/374.
Methods:

- collate_fn – Collate function compatible with TensorDicts that reassembles a list of dicts.
Source code in rl4co/data/dataset.py
collate_fn (staticmethod)¶
Collate function compatible with TensorDicts that reassembles a list of dicts.
Source code in rl4co/data/dataset.py
TensorDictDataset¶
TensorDictDataset(td: TensorDict)
Bases: Dataset
Dataset compatible with TensorDicts with low CPU usage. Fast loading but somewhat slow instantiation due to list comprehension since we "disassemble" the TensorDict into a list of dicts.
Note
Check out the issue on tensordict for more details: https://github.com/pytorch-labs/tensordict/issues/374.
Methods:

- collate_fn – Collate function compatible with TensorDicts that reassembles a list of dicts.
Source code in rl4co/data/dataset.py
collate_fn (staticmethod)¶
Collate function compatible with TensorDicts that reassembles a list of dicts.
Source code in rl4co/data/dataset.py
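The disassemble/reassemble pattern described above can be sketched with plain dicts and lists instead of a TensorDict; the function names here are illustrative, not the rl4co API:

```python
# Illustrative sketch of the disassemble/collate pattern, using plain dicts
# and lists instead of TensorDict; function names are ours, not rl4co's.
def disassemble(batch: dict) -> list:
    """Turn a dict of equal-length sequences into a list of per-item dicts."""
    n = len(next(iter(batch.values())))
    return [{k: v[i] for k, v in batch.items()} for i in range(n)]

def collate(items: list) -> dict:
    """Reassemble a list of dicts into a dict of lists (the inverse)."""
    return {k: [it[k] for it in items] for k in items[0]}

batch = {"locs": [[0.1, 0.2], [0.3, 0.4]], "demand": [3, 5]}
items = disassemble(batch)
assert items[0] == {"locs": [0.1, 0.2], "demand": 3}
assert collate(items) == batch
```

The instantiation cost noted above comes from the `disassemble` step (one dict built per item), while loading stays fast because `__getitem__` only indexes a list.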
ExtraKeyDataset¶
ExtraKeyDataset(
dataset: TensorDictDataset,
extra: Tensor,
key_name="extra",
)
Bases: TensorDictDataset
Dataset that includes an extra key to add to the data dict. This is useful for adding a REINFORCE baseline reward to the data dict. Note that this is faster to instantiate than using list comprehension.
Source code in rl4co/data/dataset.py
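The extra-key mechanism can be sketched as follows; this is a simplified stand-in (a plain list-of-dicts instead of a TensorDict-backed dataset), not the actual rl4co implementation:

```python
# Simplified stand-in for the extra-key idea: attach one extra entry per item
# (e.g. a REINFORCE baseline reward) at indexing time. Not the rl4co code.
class ExtraKeySketch:
    def __init__(self, items, extra, key_name="extra"):
        assert len(items) == len(extra), "need one extra value per item"
        self.items, self.extra, self.key_name = items, extra, key_name

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        out = dict(self.items[idx])  # copy so the base item stays unchanged
        out[self.key_name] = self.extra[idx]
        return out

ds = ExtraKeySketch([{"x": 1}, {"x": 2}], extra=[0.5, 0.7])
assert ds[1] == {"x": 2, "extra": 0.7}
```

Instantiation is cheap because nothing is copied up front; the extra value is merged in per item on access.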
TensorDictDatasetFastGeneration¶
TensorDictDatasetFastGeneration(td: TensorDict)
Bases: Dataset
Dataset compatible with TensorDicts. Loading performance is similar to the list-comprehension approach, but instantiation is more than 10x faster than TensorDictDatasetList.
Warning
Directly indexing TensorDicts may be faster when creating the dataset, but uses more than 3x the CPU. We generally recommend using TensorDictDatasetList instead.
Note
Check out the issue on tensordict for more details: https://github.com/pytorch-labs/tensordict/issues/374.
Methods:

- collate_fn – Equivalent to collating with lambda x: x.
Source code in rl4co/data/dataset.py
Data Generation¶
Functions:

- generate_env_data – Generate data for a given environment type in the form of a dictionary.
- generate_mdpp_data – Generate data for the nDPP problem.
- generate_dataset – We keep a similar structure as in Kool et al. 2019 but save and load the data as npz.
- generate_default_datasets – Generate the default datasets used in the paper and save them to data_dir/problem.
generate_env_data¶
generate_env_data(env_type, *args, **kwargs)
Generate data for a given environment type in the form of a dictionary
Source code in rl4co/data/generate_data.py
generate_mdpp_data¶
generate_mdpp_data(
dataset_size,
size=10,
num_probes_min=2,
num_probes_max=5,
num_keepout_min=1,
num_keepout_max=50,
lock_size=True,
)
Generate data for the nDPP problem.
If lock_size is True, then the size is fixed and we skip the size argument if it is not 10. This is because the RL environment is based on a real-world PCB (parametrized with data).
Source code in rl4co/data/generate_data.py
generate_dataset¶
generate_dataset(
filename: Union[str, List[str]] = None,
data_dir: str = "data",
name: str = None,
problem: Union[str, List[str]] = "all",
data_distribution: str = "all",
dataset_size: int = 10000,
graph_sizes: Union[int, List[int]] = [20, 50, 100],
overwrite: bool = False,
seed: int = 1234,
disable_warning: bool = True,
distributions_per_problem: Union[int, dict] = None,
)
We keep a similar structure as in Kool et al. 2019 but save and load the data as npz. This is much faster and more memory-efficient than pickle, and also allows for easy transfer to TensorDict.
Parameters:

- filename (Union[str, List[str]], default: None) – Filename to save the data to. If None, the data is saved to data_dir/problem/problem_graph_size_seed.npz. Defaults to None.
- data_dir (str, default: 'data') – Directory to save the data to. Defaults to "data".
- name (str, default: None) – Name of the dataset. Defaults to None.
- problem (Union[str, List[str]], default: 'all') – Problem to generate data for. Defaults to "all".
- data_distribution (str, default: 'all') – Data distribution to generate data for. Defaults to "all".
- dataset_size (int, default: 10000) – Number of datasets to generate. Defaults to 10000.
- graph_sizes (Union[int, List[int]], default: [20, 50, 100]) – Graph size to generate data for. Defaults to [20, 50, 100].
- overwrite (bool, default: False) – Whether to overwrite existing files. Defaults to False.
- seed (int, default: 1234) – Random seed. Defaults to 1234.
- disable_warning (bool, default: True) – Whether to disable warnings. Defaults to True.
- distributions_per_problem (Union[int, dict], default: None) – Number of distributions to generate per problem. Defaults to None.
Source code in rl4co/data/generate_data.py
generate_default_datasets¶
generate_default_datasets(data_dir, generate_eda=False)
Generate the default datasets used in the paper and save them to data_dir/problem
Source code in rl4co/data/generate_data.py
Transforms¶
Classes:

- StateAugmentation – Augment state by N times via symmetric rotation/reflection transform.

Functions:

- dihedral_8_augmentation – Augmentation (x8) for grid-based data (x, y) as done in POMO.
- dihedral_8_augmentation_wrapper – Wrapper for dihedral_8_augmentation. If reduce, only return the first 1/8 of the augmented data.
- symmetric_transform – SR group transform with rotation and reflection.
- symmetric_augmentation – Augment xy data by num_augment times via symmetric rotation transform and concatenate to original data.
StateAugmentation¶
StateAugmentation(
num_augment: int = 8,
augment_fn: Union[str, callable] = "symmetric",
first_aug_identity: bool = True,
normalize: bool = False,
feats: list = None,
)
Bases: object
Augment state by N times via symmetric rotation/reflection transform
Parameters:

- num_augment (int, default: 8) – number of augmentations
- augment_fn (Union[str, callable], default: 'symmetric') – augmentation function to use, e.g. 'symmetric' (default) or 'dihedral8'; if callable, use the function directly. If 'dihedral8', then num_augment must be 8.
- first_aug_identity (bool, default: True) – whether to augment the first data point too
- normalize (bool, default: False) – whether to normalize the augmented data
- feats (list, default: None) – list of features to augment
Source code in rl4co/data/transforms.py
dihedral_8_augmentation¶
Augmentation (x8) for grid-based data (x, y) as done in POMO. This is the dihedral group of order 8 (rotations and reflections): https://en.wikipedia.org/wiki/Examples_of_groups#dihedral_group_of_order_8
Parameters:

- xy (Tensor) – [batch, graph, 2] tensor of x and y coordinates
Source code in rl4co/data/transforms.py
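The eight transforms (identity, three rotations, four reflections of the unit square) can be sketched in NumPy; this mirrors the POMO recipe on [batch, graph, 2] coordinates but is not the rl4co implementation:

```python
import numpy as np

# Sketch of the x8 dihedral augmentation: the 8 symmetries of the unit
# square applied to [batch, graph, 2] coordinates, as in POMO.
def dihedral_8_sketch(xy: np.ndarray) -> np.ndarray:
    x, y = xy[..., 0], xy[..., 1]
    variants = [(x, y), (1 - x, y), (x, 1 - y), (1 - x, 1 - y),
                (y, x), (1 - y, x), (y, 1 - x), (1 - y, 1 - x)]
    # concatenate along the batch dimension: output batch is 8x the input
    return np.concatenate([np.stack(v, axis=-1) for v in variants], axis=0)

xy = np.random.rand(2, 5, 2)
aug = dihedral_8_sketch(xy)
assert aug.shape == (16, 5, 2)
```

The wrapper's "reduce" behavior then corresponds to keeping only the first 1/8 of the augmented batch, i.e. the original data.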
dihedral_8_augmentation_wrapper¶
Wrapper for dihedral_8_augmentation. If reduce, only return the first 1/8 of the augmented data since the augmentation augments the data 8 times.
Source code in rl4co/data/transforms.py
symmetric_transform¶
SR group transform with rotation and reflection, like the one in SymNCO, but vectorized.
Parameters:

- x (Tensor) – [batch, graph, 1] tensor of x coordinates
- y (Tensor) – [batch, graph, 1] tensor of y coordinates
- phi (Tensor) – [batch, 1] tensor of random rotation angles
- offset (float, default: 0.5) – offset for x and y coordinates
Source code in rl4co/data/transforms.py
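The rotation part of the transform, with the parameter shapes documented above, can be sketched like this; the reflection part and the exact rl4co formula are omitted, so treat it as an assumption-laden illustration:

```python
import numpy as np

# Hedged sketch: rotate (x, y) by angle phi around (offset, offset).
# The actual rl4co symmetric_transform also handles reflection.
def rotate_about_offset(x, y, phi, offset=0.5):
    xc, yc = x - offset, y - offset
    x_new = np.cos(phi) * xc - np.sin(phi) * yc + offset
    y_new = np.sin(phi) * xc + np.cos(phi) * yc + offset
    return x_new, y_new

x = np.array([[[1.0]], [[0.5]]])         # [batch, graph, 1]
y = np.array([[[0.5]], [[0.5]]])
phi = np.full((2, 1, 1), np.pi / 2)      # one angle per batch element
xr, yr = rotate_about_offset(x, y, phi)
# rotating (1.0, 0.5) by 90 degrees about (0.5, 0.5) gives (0.5, 1.0)
assert np.allclose(xr[0], 0.5) and np.allclose(yr[0], 1.0)
# the offset point itself is a fixed point of any rotation
assert np.allclose(xr[1], 0.5) and np.allclose(yr[1], 0.5)
```

Because a rotation about the center of the unit square preserves pairwise distances, the cost of a tour is unchanged, which is what makes this a valid symmetry augmentation.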
symmetric_augmentation¶
Augment xy data by num_augment times via symmetric rotation transform and concatenate to original data.
Parameters:

- xy (Tensor) – [batch, graph, 2] tensor of x and y coordinates
- num_augment (int, default: 8) – number of augmentations
- first_augment (bool, default: False) – whether to augment the first data point
Source code in rl4co/data/transforms.py
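The "augment num_augment times and concatenate" step can be sketched on top of the rotation transform; this is an illustration, not the rl4co code, and only covers rotation (no reflection):

```python
import numpy as np

# Sketch: tile the batch num_augment times, rotate each copy by a random
# angle about (0.5, 0.5), and concatenate along the batch dimension.
def symmetric_augmentation_sketch(xy, num_augment=8, first_augment=False):
    batch = xy.shape[0]
    tiled = np.tile(xy, (num_augment, 1, 1))   # [batch * num_augment, graph, 2]
    phi = np.random.uniform(0, 2 * np.pi, (num_augment * batch, 1))
    if not first_augment:
        phi[:batch] = 0.0                      # keep the original data first
    xc = tiled[..., 0] - 0.5
    yc = tiled[..., 1] - 0.5
    x_new = np.cos(phi) * xc - np.sin(phi) * yc + 0.5
    y_new = np.sin(phi) * xc + np.cos(phi) * yc + 0.5
    return np.stack([x_new, y_new], axis=-1)

xy = np.random.rand(3, 7, 2)
aug = symmetric_augmentation_sketch(xy, num_augment=4)
assert aug.shape == (12, 7, 2)
assert np.allclose(aug[:3], xy)  # first copy left unchanged
```

With first_augment=False (the documented default), the first batch-sized slice is the identity copy, matching the "concatenate to original data" behavior described above.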
Utils¶
Functions:

- load_npz_to_tensordict – Load a npz file directly into a TensorDict.
- save_tensordict_to_npz – Save a TensorDict to a npz file.
- check_extension – Check that filename has extension, otherwise add it.
- load_solomon_instance – Load a Solomon instance from a file.
- load_solomon_solution – Load a Solomon solution from a file.
load_npz_to_tensordict¶
load_npz_to_tensordict(filename)
Load a npz file directly into a TensorDict. We assume that the npz file contains a dictionary of numpy arrays. This is at least an order of magnitude faster than pickle.
Source code in rl4co/data/utils.py
save_tensordict_to_npz¶
save_tensordict_to_npz(
tensordict, filename, compress: bool = False
)
Save a TensorDict to a npz file. We assume that the TensorDict contains a dictionary of tensors.
Source code in rl4co/data/utils.py
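Under the assumption stated above (the npz holds a dictionary of numpy arrays), the round trip these two utilities build on looks like this; the TensorDict wrapping itself is omitted so the sketch stays dependency-free:

```python
import os
import tempfile

import numpy as np

# Sketch of the npz round trip: save a dict of arrays with np.savez, then
# load it back as a plain dict (rl4co would wrap this in a TensorDict).
data = {"locs": np.random.rand(4, 10, 2), "demand": np.random.rand(4, 10)}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "instances.npz")
    np.savez(path, **data)  # np.savez_compressed would cover compress=True
    with np.load(path) as npz:
        loaded = {k: npz[k] for k in npz.files}

assert set(loaded) == {"locs", "demand"}
assert np.allclose(loaded["locs"], data["locs"])
```

Because npz stores each array in its native binary layout, loading avoids the per-object deserialization that makes pickle slow on large batches.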
check_extension¶
check_extension(filename, extension='.npz')
Check that filename has extension, otherwise add it
Source code in rl4co/data/utils.py
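The behavior documented above amounts to a one-liner; a minimal sketch under our own helper name, not the rl4co source:

```python
# Minimal sketch of the extension check: append the extension only if missing.
def check_extension_sketch(filename: str, extension: str = ".npz") -> str:
    return filename if filename.endswith(extension) else filename + extension

assert check_extension_sketch("tsp50") == "tsp50.npz"
assert check_extension_sketch("tsp50.npz") == "tsp50.npz"
```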
load_solomon_instance¶
load_solomon_instance(name, path=None, edge_weights=False)
Load a Solomon instance from a file.
Source code in rl4co/data/utils.py
load_solomon_solution¶
load_solomon_solution(name, path=None)
Load a Solomon solution from a file.
Source code in rl4co/data/utils.py