:mod:`base_model`
=================

.. py:module:: base_model

.. autoapi-nested-parse::

   Base model implementing helper methods.


Module Contents
---------------

.. py:class:: BaseModel(hparams)

   Bases: :class:`pytorch_lightning.core.LightningModule`

   .. autoapi-inheritance-diagram:: base_model.BaseModel
      :parts: 1

   The primary class containing all the training functionality. It is
   equivalent to PyTorch nn.Module in all aspects.

   :param LightningModule: The PyTorch Lightning module derived from nn.Module
       with useful hooks
   :type LightningModule: nn.Module

   :raises NotImplementedError: Some methods must be overridden

   .. method:: forward(self)
      :abstractmethod:

      Dummy method to do the forward pass on the model.

      :raises NotImplementedError: The method must be overridden in the derived models

   .. method:: training_step(self, batch, batch_idx)

      Called inside the training loop with the data from the training
      dataloader passed in as `batch`. The implementation is delegated to
      the dataloader. For performance-critical use cases, prefer
      monkey-patching instead.

      :param batch: Batch of input and ground truth variables
      :type batch: tuple
      :param batch_idx: Index of the current batch
      :type batch_idx: int
      :return: Loss and logs
      :rtype: dict

   .. method:: validation_step(self, batch, batch_idx)

      Called inside the validation loop with the data from the validation
      dataloader passed in as `batch`. The implementation is delegated to
      the dataloader. For performance-critical use cases, prefer
      monkey-patching instead.

      :param batch: Batch of input and ground truth variables
      :type batch: tuple
      :param batch_idx: Index of the current batch
      :type batch_idx: int
      :return: Loss and logs
      :rtype: dict

   .. method:: test_step(self, batch, batch_idx)

      Called inside the testing loop with the data from the testing
      dataloader passed in as `batch`. The implementation is delegated to
      the dataloader. For performance-critical use cases, prefer
      monkey-patching instead.

      :param batch: Batch of input and ground truth variables
      :type batch: tuple
      :param batch_idx: Index of the current batch
      :type batch_idx: int
      :return: Loss and logs
      :rtype: dict

   .. method:: training_epoch_end(self, outputs)

      Called at the end of the training epoch to aggregate outputs.

      :param outputs: List of individual outputs of each training step
      :type outputs: list
      :return: Loss and logs
      :rtype: dict

   .. method:: validation_epoch_end(self, outputs)

      Called at the end of the validation epoch to aggregate outputs.

      :param outputs: List of individual outputs of each validation step
      :type outputs: list
      :return: Loss and logs
      :rtype: dict

   .. method:: test_epoch_end(self, outputs)

      Called at the end of the testing epoch to aggregate outputs.

      :param outputs: List of individual outputs of each testing step
      :type outputs: list
      :return: Loss and logs
      :rtype: dict

   .. method:: configure_optimizers(self)

      Decide the optimizers and learning rate schedulers. At least one
      optimizer is required.

      :return: Optimizer and the scheduler
      :rtype: tuple

   .. method:: add_bias(self, bias)

      Initialize the bias parameter of the last layer with the output
      variable's mean.

      :param bias: Mean of the output variable
      :type bias: float

   .. method:: prepare_data(self, ModelDataset=None, force=False)

      Load and split the data for training and testing on the first call.
      Behavior on subsequent calls is determined by the `force` parameter.

      :param ModelDataset: The dataset class to be used with the model,
          defaults to None
      :type ModelDataset: class, optional
      :param force: Force the data preparation even if already prepared,
          defaults to False
      :type force: bool, optional
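   A minimal sketch of a derived model, for illustration only: since the
   inherited hooks above already carry working implementations, a subclass
   only needs to override the abstract :meth:`forward`. The layer sizes and
   the bias value passed to :meth:`add_bias` are hypothetical, and the sketch
   assumes ``BaseModel.__init__`` accepts the `hparams` object as documented.

   .. code-block:: python

      import torch

      from base_model import BaseModel


      class LinearModel(BaseModel):
          """Hypothetical regression model; only the abstract hook is overridden."""

          def __init__(self, hparams):
              super().__init__(hparams)
              # Layer sizes are illustrative only.
              self.layer = torch.nn.Linear(8, 1)
              # Illustrative use of the documented helper: seed the last
              # layer's bias with the (hypothetical) target mean.
              self.add_bias(0.5)

          def forward(self, x):
              # Required override: BaseModel.forward raises NotImplementedError.
              return self.layer(x)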
   .. method:: train_dataloader(self)

      Create the training dataloader from the training dataset.

      :return: The training dataloader
      :rtype: DataLoader

   .. method:: val_dataloader(self)

      Create the validation dataloader from the validation dataset.

      :return: The validation dataloader
      :rtype: DataLoader

   .. method:: test_dataloader(self)

      Create the testing dataloader from the testing dataset.

      :return: The testing dataloader
      :rtype: DataLoader
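   To close, a hedged sketch of how these hooks fit together with a
   PyTorch Lightning ``Trainer``. ``ToyDataset``, the `hparams` attribute
   names, and the assumption that :meth:`prepare_data` instantiates the
   passed dataset class without extra arguments are all hypothetical;
   ``LinearModel`` is the subclass sketched earlier on this page.

   .. code-block:: python

      from argparse import Namespace

      import pytorch_lightning as pl
      import torch
      from torch.utils.data import Dataset


      class ToyDataset(Dataset):
          """Hypothetical dataset of random (input, target) pairs."""

          def __init__(self):
              self.x = torch.randn(100, 8)
              self.y = torch.randn(100, 1)

          def __len__(self):
              return len(self.x)

          def __getitem__(self, idx):
              return self.x[idx], self.y[idx]


      # Hypothetical hyperparameter names.
      hparams = Namespace(lr=1e-3, batch_size=32)

      model = LinearModel(hparams)
      # First call loads and splits the data; force=True would redo it.
      model.prepare_data(ModelDataset=ToyDataset)

      trainer = pl.Trainer(max_epochs=10)
      # fit() drives training_step/validation_step via the dataloaders above.
      trainer.fit(model)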