synthcity.plugins.generic.plugin_nflow module

class NormalizingFlowsPlugin(n_iter: int = 1000, n_layers_hidden: int = 1, n_units_hidden: int = 100, batch_size: int = 200, num_transform_blocks: int = 1, dropout: float = 0.1, batch_norm: bool = False, num_bins: int = 8, tail_bound: float = 3, lr: float = 0.001, apply_unconditional_transform: bool = True, base_distribution: str = 'standard_normal', linear_transform_type: str = 'permutation', base_transform_type: str = 'rq-autoregressive', encoder_max_clusters: int = 10, tabular: bool = True, n_iter_min: int = 100, n_iter_print: int = 50, patience: int = 5, patience_metric: Optional[synthcity.metrics.weighted_metrics.WeightedMetrics] = None, workspace: pathlib.Path = PosixPath('workspace'), compress_dataset: bool = False, sampling_patience: int = 500, random_state: int = 0, device: Any = device(type='cpu'), **kwargs: Any)

Bases: synthcity.plugins.core.plugin.Plugin

Normalizing Flows methods.

Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact.
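
For intuition, the snippet below is a minimal, illustrative sketch built directly on the open-source nflows library, which provides the transform families listed below; it is not this plugin's internal code. It stacks permutations with affine autoregressive transforms, trains by maximizing the exact log-likelihood, and then draws exact samples:

>>> import torch
>>> from nflows.flows.base import Flow
>>> from nflows.distributions.normal import StandardNormal
>>> from nflows.transforms.base import CompositeTransform
>>> from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform
>>> from nflows.transforms.permutations import RandomPermutation
>>>
>>> # Invertible map f = f_k ∘ ... ∘ f_1 on top of a standard-normal base.
>>> features = 2
>>> layers = []
>>> for _ in range(4):
>>>     layers.append(RandomPermutation(features=features))
>>>     layers.append(MaskedAffineAutoregressiveTransform(features=features, hidden_features=32))
>>> flow = Flow(CompositeTransform(layers), StandardNormal(shape=[features]))
>>>
>>> # Exact maximum likelihood: log p(x) = log p(f(x)) + log |det df/dx|.
>>> x = torch.randn(256, features)  # toy training data
>>> opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
>>> for _ in range(200):
>>>     opt.zero_grad()
>>>     loss = -flow.log_prob(inputs=x).mean()  # exact density evaluation
>>>     loss.backward()
>>>     opt.step()
>>>
>>> samples = flow.sample(100)  # exact sampling via the inverse transforms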

Parameters
  • n_iter – int. Number of training iterations.

  • n_layers_hidden – int. Number of transformation layers.

  • n_units_hidden – int. Number of hidden units in each layer.

  • batch_size – int. Size of the batches used during training.

  • num_transform_blocks – int. Number of blocks to use in the coupling/autoregressive nets.

  • dropout – float. Dropout probability for the coupling/autoregressive nets.

  • batch_norm – bool. Whether to use batch normalization in the coupling/autoregressive nets.

  • num_bins – int. Number of bins to use for the piecewise transforms.

  • tail_bound – float. The piecewise transforms act on the box [-tail_bound, tail_bound]; outside it, the tails are the identity (see the sketch after this list).

  • lr – float. Learning rate for the optimizer.

  • apply_unconditional_transform – bool. Whether to unconditionally transform the "identity" features in the coupling layer.

  • base_distribution – str. Base distribution of the flow. Possible values: "standard_normal".

  • linear_transform_type

    str. Type of linear transform to use between the base transforms. Possible values:

    • lu: A linear transform that parameterizes the LU decomposition of the weight matrix.

    • permutation: Permutes the features using a random, but fixed, permutation.

    • svd: A linear transform that parameterizes the SVD of the weight matrix.

  • base_transform_type

    str. Type of transform to use between the linear layers. Possible values:

    • affine-coupling: An affine coupling layer that scales and shifts part of the variables. Ref: L. Dinh et al., "Density estimation using Real NVP".

    • quadratic-coupling: A quadratic coupling transform. Ref: Müller et al., "Neural Importance Sampling".

    • rq-coupling: A rational-quadratic coupling transform. Ref: Durkan et al., "Neural Spline Flows".

    • affine-autoregressive: An affine autoregressive transform. Ref: Durkan et al., "Neural Spline Flows".

    • quadratic-autoregressive: A quadratic autoregressive transform. Ref: Durkan et al., "Neural Spline Flows".

    • rq-autoregressive: A rational-quadratic autoregressive transform. Ref: Durkan et al., "Neural Spline Flows".

  # Early stopping

  • n_iter_print – int. Number of iterations after which to print updates and check the validation loss.

  • n_iter_min – int. Minimum number of iterations to run before early stopping can trigger.

  • patience – int. Maximum number of iterations without improvement before early stopping is triggered.

  • patience_metric – Optional[WeightedMetrics]. If not None, this metric is used as the evaluation criterion for early stopping.

  # Core Plugin arguments

  • workspace – Path. Optional path for caching intermediary results.

  • compress_dataset – bool. Default = False. Drop redundant features before training the generator.

  • sampling_patience – int. Maximum number of inference iterations to wait for the generated data to match the training schema.

  • random_state – int. Random seed to use.
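
To show how several of the network hyperparameters above fit together, the sketch below instantiates the default "rq-autoregressive" base transform with the nflows library, using this plugin's default values; the parameter mapping is an editorial assumption for illustration, not a description of the plugin's internals:

>>> from nflows.transforms.autoregressive import (
>>>     MaskedPiecewiseRationalQuadraticAutoregressiveTransform,
>>> )
>>>
>>> rq_layer = MaskedPiecewiseRationalQuadraticAutoregressiveTransform(
>>>     features=5,               # data dimensionality
>>>     hidden_features=100,      # n_units_hidden
>>>     num_blocks=1,             # num_transform_blocks
>>>     num_bins=8,               # num_bins
>>>     tails="linear",           # identity tails outside the bounded box
>>>     tail_bound=3.0,           # tail_bound
>>>     dropout_probability=0.1,  # dropout
>>>     use_batch_norm=False,     # batch_norm
>>> )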

Example

>>> from sklearn.datasets import load_iris
>>> from synthcity.plugins import Plugins
>>>
>>> X, y = load_iris(as_frame=True, return_X_y=True)
>>> X["target"] = y
>>>
>>> plugin = Plugins().get("nflow", n_iter=100)
>>> plugin.fit(X)
>>>
>>> plugin.generate(50)
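
generate returns a DataLoader; chain .dataframe() to obtain a pandas DataFrame (the same pattern appears in the constraints example below):

>>> X_syn = plugin.generate(50).dataframe()
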
class Config

Bases: object

arbitrary_types_allowed = True
validate_assignment = True
fit(X: Union[synthcity.plugins.core.dataloader.DataLoader, pandas.core.frame.DataFrame], *args: Any, **kwargs: Any) → Any

Training method for the synthetic data plugin.

Parameters
  • X – DataLoader. The reference dataset.

  • cond

    Optional, Union[pd.DataFrame, pd.Series, np.ndarray]. Optional training conditional. The training conditional can be used to control the output of some models, like GANs or VAEs. The content can be anything, as long as it maps to the training dataset X. Usage example:

    >>> from sklearn.datasets import load_iris
    >>> from synthcity.plugins.core.dataloader import GenericDataLoader
    >>> import numpy as np
    >>> from synthcity.plugins.core.constraints import Constraints
    >>>
    >>> # Load in `test_plugin` the generative model of choice
    >>> # ....
    >>>
    >>> X, y = load_iris(as_frame=True, return_X_y=True)
    >>> X["target"] = y
    >>>
    >>> X = GenericDataLoader(X)
    >>> test_plugin.fit(X, cond=y)
    >>>
    >>> count = 10
    >>> X_gen = test_plugin.generate(count, cond=np.ones(count))
    >>>
    >>> # The Conditional only optimizes the output generation
    >>> # for GANs and VAEs, but does NOT guarantee the samples
    >>> # are only from that condition.
    >>> # If you want to guarantee that output contains only
    >>> # "target" == 1 samples, use Constraints.
    >>>
    >>> constraints = Constraints(
    >>>     rules=[
    >>>         ("target", "==", 1),
    >>>     ]
    >>> )
    >>> X_gen = test_plugin.generate(count,
    >>>         cond=np.ones(count),
    >>>         constraints=constraints
    >>>        )
    >>> assert (X_gen["target"] == 1).all()
    

Returns

self

classmethod fqdn() → str

The Fully-Qualified name of the plugin.

generate(count: Optional[int] = None, constraints: Optional[synthcity.plugins.core.constraints.Constraints] = None, random_state: Optional[int] = None, **kwargs: Any) → synthcity.plugins.core.dataloader.DataLoader

Synthetic data generation method.

Parameters
  • count – Optional int. The number of samples to generate. If None, it generates len(reference_dataset) samples.

  • cond – Optional, Union[pd.DataFrame, pd.Series, np.ndarray]. Optional generation conditional. The conditional can be used only if the model was trained with a conditional too. If provided, it must have length count. Not all models support conditionals. The conditionals can be used in VAEs or GANs to speed up generation under some constraints. For model-agnostic solutions, check out the constraints parameter.

  • constraints

    optional Constraints. Optional constraints to apply to the generated data. If None, the reference schema constraints are applied. The constraints are model agnostic and will filter the output of the generative model. The constraints are a list of rules. Each rule is a tuple of the form (<feature>, <operation>, <value>).

    Valid operations:
    • "<", "lt": less than <value>

    • "<=", "le": less than or equal to <value>

    • ">", "gt": greater than <value>

    • ">=", "ge": greater than or equal to <value>

    • "==", "eq": equal to <value>

    • "in": valid for categorical features; <value> must be an array. For example, ("target", "in", [0, 1])

    • "dtype": <value> can be a data type. For example, ("target", "dtype", "int")

    Usage example:
    >>> from synthcity.plugins.core.constraints import Constraints
    >>> constraints = Constraints(
    >>>   rules=[
    >>>             ("InterestingFeature", "==", 0),
    >>>         ]
    >>>     )
    >>>
    >>> syn_data = syn_model.generate(
    >>>         count=count,
    >>>         constraints=constraints
    >>>     ).dataframe()
    >>>
    >>> assert (syn_data["InterestingFeature"] == 0).all()
    

  • random_state – Optional int. Random seed to use.

Returns

<count> synthetic samples

static hyperparameter_space(**kwargs: Any) → List[synthcity.plugins.core.distribution.Distribution]

Returns the hyperparameter space for the derived plugin.
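
A small illustrative sketch; it assumes each returned Distribution exposes a name attribute:

>>> from synthcity.plugins.generic.plugin_nflow import NormalizingFlowsPlugin
>>>
>>> space = NormalizingFlowsPlugin.hyperparameter_space()
>>> [d.name for d in space]  # names of the tunable hyperparameters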

static load(buff: bytes) → Any
static load_dict(representation: dict) → Any
static name() → str

The name of the plugin.

plot(plt: Any, X: synthcity.plugins.core.dataloader.DataLoader, count: Optional[int] = None, plots: list = ['marginal', 'associations', 'tsne'], **kwargs: Any) → Any

Plot the real versus synthetic distributions.

Parameters
  • plt – Any. The plotting module used to render the output (e.g., matplotlib.pyplot).

  • X – DataLoader. The reference dataset.

Returns

self
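
A minimal usage sketch, assuming matplotlib as the plotting backend and the fitted plugin from the example above:

>>> import matplotlib.pyplot as plt
>>> from synthcity.plugins.core.dataloader import GenericDataLoader
>>>
>>> plugin.plot(plt, GenericDataLoader(X))
>>> plt.show()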

classmethod sample_hyperparameters(*args: Any, **kwargs: Any) → Dict[str, Any]

Sample values from the hyperparameter space for the current plugin.
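
A short sketch of drawing one random configuration and instantiating the plugin with it:

>>> from synthcity.plugins import Plugins
>>> from synthcity.plugins.generic.plugin_nflow import NormalizingFlowsPlugin
>>>
>>> params = NormalizingFlowsPlugin.sample_hyperparameters()
>>> plugin = Plugins().get("nflow", **params)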

classmethod sample_hyperparameters_optuna(trial: Any, *args: Any, **kwargs: Any) → Dict[str, Any]
save() → bytes
save_dict() → dict
save_to_file(path: pathlib.Path) → bytes
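
A minimal round-trip sketch using the save/load pair documented on this page, assuming a fitted plugin:

>>> from synthcity.plugins.generic.plugin_nflow import NormalizingFlowsPlugin
>>>
>>> buff = plugin.save()
>>> restored = NormalizingFlowsPlugin.load(buff)
>>> restored.name()
'nflow'
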
schema() → synthcity.plugins.core.schema.Schema

The reference schema

schema_includes(other: Union[synthcity.plugins.core.dataloader.DataLoader, pandas.core.frame.DataFrame]) → bool

Helper method to test whether the reference schema includes a given dataset.

Parameters

other – DataLoader. The dataset to test

Returns

bool. True if the schema includes the dataset, False otherwise.
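
A short sketch, assuming the fitted plugin from the example above:

>>> X_gen = plugin.generate(10)
>>> plugin.schema_includes(X_gen)
True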

training_schema() → synthcity.plugins.core.schema.Schema

The internal schema

static type() → str

The type of the plugin.

static version() → str

API version

plugin

alias of synthcity.plugins.generic.plugin_nflow.NormalizingFlowsPlugin