
Initializers

mlpoppyns.learning.initializers.initializer_base

Base initializer.

This is an abstract class that contains the skeleton for any weight initialization scheme. Note that instances of these weight initializer classes behave as callable functions.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)

InitializerBase

Base abstract class for all weight initializers.

This class serves as a blueprint for creating various initialization methods. It defines the essential methods that all weight initializers must implement, ensuring consistency.

Source code in mlpoppyns/learning/initializers/initializer_base.py
import abc

import torch


class InitializerBase(abc.ABC):
    """
    Base abstract class for all weight initializers.

    This class serves as a blueprint for creating various initialization methods.
    It defines the essential methods that all weight initializers must implement, ensuring consistency.
    """

    @abc.abstractmethod
    def __call__(self, m: torch.nn.Module) -> None:
        """
        Custom call operator for initializing the parameters of a torch module.

        Args:
            m (torch.nn.Module): Module with parameters to be initialized. Could
                be anything from a linear layer to a convolutional one.
        """
        raise NotImplementedError

    @abc.abstractmethod
    def __str__(self) -> str:
        """
        String representation of the weight initializer.

        This method should provide a human-readable description of the weight initializer,
        including its name and any relevant parameters or characteristics.

        Returns:
            (str): String representation of the weight initializer.
        """
        raise NotImplementedError

__call__(m) abstractmethod

Custom call operator for initializing the parameters of a torch module.

Parameters:

    m (torch.nn.Module): Module with parameters to be initialized. Could be
        anything from a linear layer to a convolutional one. Required.
Source code in mlpoppyns/learning/initializers/initializer_base.py
@abc.abstractmethod
def __call__(self, m: torch.nn.Module) -> None:
    """
    Custom call operator for initializing the parameters of a torch module.

    Args:
        m (torch.nn.Module): Module with parameters to be initialized. Could
            be anything from a linear layer to a convolutional one.
    """
    raise NotImplementedError

__str__() abstractmethod

String representation of the weight initializer.

This method should provide a human-readable description of the weight initializer, including its name and any relevant parameters or characteristics.

Returns:

    str: String representation of the weight initializer.

Source code in mlpoppyns/learning/initializers/initializer_base.py
@abc.abstractmethod
def __str__(self) -> str:
    """
    String representation of the weight initializer.

    This method should provide a human-readable description of the weight initializer,
    including its name and any relevant parameters or characteristics.

    Returns:
        (str): String representation of the weight initializer.
    """
    raise NotImplementedError
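
Because InitializerBase only fixes the interface, adding a new scheme means subclassing it and implementing both abstract methods. Below is a minimal, hypothetical sketch; the constant-fill class and its value parameter are illustrative and not part of the library:

import torch

from mlpoppyns.learning.initializers.initializer_base import InitializerBase


class InitializerConstant(InitializerBase):
    """Hypothetical initializer: fills Linear weights with a constant value."""

    def __init__(self, value: float = 0.5) -> None:
        self._value = value

    def __call__(self, m: torch.nn.Module) -> None:
        # Mirror the concrete initializers below: only touch Linear layers.
        if type(m) is torch.nn.Linear:
            torch.nn.init.constant_(m.weight, self._value)
            m.bias.data.fill_(0.0)

    def __str__(self) -> str:
        return f"Constant weight initializer (value={self._value})"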

mlpoppyns.learning.initializers.initializer_kaiming

Kaiming uniform initializer.

Class for a Kaiming uniform weight initializer.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)

InitializerKaiming

Bases: InitializerBase

Kaiming uniform weight initializer class.

Any instance of this class is a callable function that will initialize a given torch module which contains trainable parameters (weights and biases) with a Kaiming uniform distribution for the weights and a constant value for the biases.

Source code in mlpoppyns/learning/initializers/initializer_kaiming.py
import torch

from mlpoppyns.learning.initializers.initializer_base import InitializerBase


class InitializerKaiming(InitializerBase):
    """
    Kaiming uniform weight initializer class.

    Any instance of this class is a callable function that will initialize a
    given torch module which contains trainable parameters (weights and biases)
    with a Kaiming uniform distribution for the weights and a constant value for
    the biases.
    """

    def __call__(self, m: torch.nn.Module) -> None:
        """
        Custom call operator for initializing the parameters of a module.

        Weights are initialized using a Kaiming uniform distribution (see "Delving
        deep into rectifiers: Surpassing human-level performance on ImageNet
        classification" - He, K. et al. (2015)) in which the values are
        sampled from a uniform distribution U(-bound, bound) where:
        bound = gain * sqrt(3 / fan_mode).
        Biases are just filled with a constant close-to-zero value (0.01).

        Args:
            m (torch.nn.Module): Module with parameters to be initialized. Could
                be anything from a linear layer to a convolutional one. Right now,
                only initialization of Linear layers is performed.
        """
        if type(m) is torch.nn.Linear:
            torch.nn.init.kaiming_uniform_(m.weight, mode="fan_in")
            m.bias.data.fill_(0.01)

    def __str__(self) -> str:
        """
        Custom to string operator for the weight initializer.

        Returns:
            (str): A string which describes the weight initializer for output purposes.
        """
        return "Kaiming Uniform weight initializer"

__call__(m)

Custom call operator for initializing the parameters of a module.

Weights are initialized using a Kaiming uniform distribution (see "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. (2015)) in which the values are sampled from a uniform distribution U(-bound, bound) where bound = gain * sqrt(3 / fan_mode). Biases are just filled with a constant close-to-zero value (0.01).

Parameters:

    m (torch.nn.Module): Module with parameters to be initialized. Could be
        anything from a linear layer to a convolutional one. Right now, only
        initialization of Linear layers is performed. Required.
Source code in mlpoppyns/learning/initializers/initializer_kaiming.py
def __call__(self, m: torch.nn.Module) -> None:
    """
    Custom call operator for initializing the parameters of a module.

    Weights are initialized using a Kaiming uniform distribution (see "Delving
    deep into rectifiers: Surpassing human-level performance on ImageNet
    classification" - He, K. et al. (2015)) in which the values are
    sampled from a uniform distribution U(-bound, bound) where:
    bound = gain * sqrt(3 / fan_mode).
    Biases are just filled with a constant close-to-zero value (0.01).

    Args:
        m (torch.nn.Module): Module with parameters to be initialized. Could
            be anything from a linear layer to a convolutional one. Right now,
            only initialization of Linear layers is performed.
    """
    if type(m) is torch.nn.Linear:
        torch.nn.init.kaiming_uniform_(m.weight, mode="fan_in")
        m.bias.data.fill_(0.01)

__str__()

Custom to string operator for the weight initializer.

Returns:

    str: A string which describes the weight initializer for output purposes.

Source code in mlpoppyns/learning/initializers/initializer_kaiming.py
def __str__(self) -> str:
    """
    Custom to string operator for the weight initializer.

    Returns:
        (str): A string which describes the weight initializer for output purposes.
    """
    return "Kaiming Uniform weight initializer"

mlpoppyns.learning.initializers.initializer_normal

Normal initializer.

Class for a normal weight initializer.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)

InitializerNormal

Bases: InitializerBase

Normal weight initializer class.

Any instance of this class is a callable function that will initialize a given torch module which contains trainable parameters (weights and biases) with a normal distribution for the weights and zero for the biases.

Source code in mlpoppyns/learning/initializers/initializer_normal.py
import numpy as np
import torch

from mlpoppyns.learning.initializers.initializer_base import InitializerBase


class InitializerNormal(InitializerBase):
    """
    Normal weight initializer class.

    Any instance of this class is a callable function that will initialize a
    given torch module which contains trainable parameters (weights and biases)
    with a normal distribution for the weights and zero for the biases.
    """

    def __call__(self, m: torch.nn.Module) -> None:
        """
        Custom call operator for initializing the parameters of a module.

        Weights are initialized using a normal distribution (with values taken
        from the N(mean, std^2) distribution) whilst biases are just filled with
        a constant zero value. In this case, std = 1 / sqrt(y) where y is the
        number of input features.

        Args:
            m (torch.nn.Module): Module with parameters to be initialized. Could
                be anything from a linear layer to a convolutional one. Right now,
                only initialization of Linear layers is performed.
        """
        if type(m) is torch.nn.Linear:
            y = m.in_features
            torch.nn.init.normal_(m.weight, 0.0, 1.0 / np.sqrt(y))
            m.bias.data.fill_(0.0)

    def __str__(self) -> str:
        """
        Custom to string operator for the weight initializer.

        Returns:
            (str): A string which describes the weight initializer for output purposes.
        """

        return "Normal weight initializer"

__call__(m)

Custom call operator for initializing the parameters of a module.

Weights are initialized using a normal distribution (with values taken from the N(mean, std^2) distribution) whilst biases are just filled with a constant zero value. In this case, std = 1 / sqrt(y) where y is the number of input features.

Parameters:

    m (torch.nn.Module): Module with parameters to be initialized. Could be
        anything from a linear layer to a convolutional one. Right now, only
        initialization of Linear layers is performed. Required.
Source code in mlpoppyns/learning/initializers/initializer_normal.py
def __call__(self, m: torch.nn.Module) -> None:
    """
    Custom call operator for initializing the parameters of a module.

    Weights are initialized using a normal distribution (with values taken
    from the N(mean, std^2) distribution) whilst biases are just filled with
    a constant zero value. In this case, std = 1 / sqrt(y) where y is the
    number of input features.

    Args:
        m (torch.nn.Module): Module with parameters to be initialized. Could
            be anything from a linear layer to a convolutional one. Right now,
            only initialization of Linear layers is performed.
    """
    if type(m) is torch.nn.Linear:
        y = m.in_features
        torch.nn.init.normal_(m.weight, 0.0, 1.0 / np.sqrt(y))
        m.bias.data.fill_(0.0)

__str__()

Custom to string operator for the weight initializer.

Returns:

    str: A string which describes the weight initializer for output purposes.

Source code in mlpoppyns/learning/initializers/initializer_normal.py
def __str__(self) -> str:
    """
    Custom to string operator for the weight initializer.

    Returns:
        (str): A string which describes the weight initializer for output purposes.
    """

    return "Normal weight initializer"

mlpoppyns.learning.initializers.initializer_uniform

Uniform initializer.

Class for a uniform weight initializer.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)

InitializerUniform

Bases: InitializerBase

Uniform weight initializer class.

Any instance of this class is a callable function that will initialize a given torch module which contains trainable parameters (weights and biases) with a uniform distribution for the weights and zero for the biases.

Source code in mlpoppyns/learning/initializers/initializer_uniform.py
import torch

from mlpoppyns.learning.initializers.initializer_base import InitializerBase


class InitializerUniform(InitializerBase):
    """
    Uniform weight initializer class.

    Any instance of this class is a callable function that will initialize a
    given torch module which contains trainable parameters (weights and biases)
    with a uniform distribution for the weights and zero for the biases.
    """

    def __call__(self, m: torch.nn.Module) -> None:
        """
        Custom call operator for initializing the parameters of a module.

        Weights are initialized using a uniform distribution (with values taken
        from the U(0,1) distribution) whilst biases are just filled with a
        constant zero value.

        Args:
            m (torch.nn.Module): Module with parameters to be initialized. Could
                be anything from a linear layer to a convolutional one. Right now,
                only initialization of Linear layers is performed.
        """

        if type(m) is torch.nn.Linear:
            torch.nn.init.uniform_(m.weight)
            m.bias.data.fill_(0.0)

    def __str__(self) -> str:
        """
        Custom to string operator for the weight initializer.

        Returns:
            (str): A string which describes the weight initializer for output purposes.
        """

        return "Uniform weight initializer."

__call__(m)

Custom call operator for initializing the parameters of a module.

Weights are initialized using a uniform distribution (with values taken from the U(0,1) distribution) whilst biases are just filled with a constant zero value.

Parameters:

    m (torch.nn.Module): Module with parameters to be initialized. Could be
        anything from a linear layer to a convolutional one. Right now, only
        initialization of Linear layers is performed. Required.
Source code in mlpoppyns/learning/initializers/initializer_uniform.py
def __call__(self, m: torch.nn.Module) -> None:
    """
    Custom call operator for initializing the parameters of a module.

    Weights are initialized using a uniform distribution (with values taken
    from the U(0,1) distribution) whilst biases are just filled with a
    constant zero value.

    Args:
        m (torch.nn.Module): Module with parameters to be initialized. Could
            be anything from a linear layer to a convolutional one. Right now,
            only initialization of Linear layers is performed.
    """

    if type(m) is torch.nn.Linear:
        torch.nn.init.uniform_(m.weight)
        m.bias.data.fill_(0.0)

__str__()

Custom to string operator for the weight initializer.

Returns:

    str: A string which describes the weight initializer for output purposes.

Source code in mlpoppyns/learning/initializers/initializer_uniform.py
def __str__(self) -> str:
    """
    Custom to string operator for the weight initializer.

    Returns:
        (str): A string which describes the weight initializer for output purposes.
    """

    return "Uniform weight initializer."

mlpoppyns.learning.initializers.initializer_uniform_rule

Uniform rule initializer.

Class for a uniform rule weight initializer.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)

InitializerUniformRule

Bases: InitializerBase

Uniform rule weight initializer class.

Any instance of this class is a callable function that will initialize a given torch module which contains trainable parameters (weights and biases) with a uniform rule distribution for the weights and zero for the biases.

Source code in mlpoppyns/learning/initializers/initializer_uniform_rule.py
import numpy as np
import torch

from mlpoppyns.learning.initializers.initializer_base import InitializerBase


class InitializerUniformRule(InitializerBase):
    """
    Uniform rule weight initializer class.

    Any instance of this class is a callable function that will initialize a
    given torch module which contains trainable parameters (weights and biases)
    with a uniform rule distribution for the weights and zero for the biases.
    """

    def __call__(self, m: torch.nn.Module) -> None:
        """
        Custom call operator for initializing the parameters of a module.

        Weights are initialized using a uniform rule (with values drawn from
        the distribution U(-y, y), where y = 1 / sqrt(n) and n is the number of
        input features to the module), whilst biases are just filled with a
        constant zero value.

        Args:
            m (torch.nn.Module): Module with parameters to be initialized. Could
                be anything from a linear layer to a convolutional one. Right now,
                only initialization of Linear layers is performed.
        """

        if type(m) is torch.nn.Linear:
            n = m.in_features
            y = 1.0 / np.sqrt(n)
            torch.nn.init.uniform_(m.weight, -y, y)
            m.bias.data.fill_(0.0)

    def __str__(self) -> str:
        """
        Custom to string operator for the weight initializer.

        Returns:
            (str): A string which describes the weight initializer for output purposes.
        """

        return "Uniform Rule weight initializer."

__call__(m)

Custom call operator for initializing the parameters of a module.

Weights are initialized using a uniform rule (with values drawn from the distribution U(-y, y), where y = 1 / sqrt(n) and n is the number of input features to the module), whilst biases are just filled with a constant zero value.

Parameters:

    m (torch.nn.Module): Module with parameters to be initialized. Could be
        anything from a linear layer to a convolutional one. Right now, only
        initialization of Linear layers is performed. Required.
Source code in mlpoppyns/learning/initializers/initializer_uniform_rule.py
def __call__(self, m: torch.nn.Module) -> None:
    """
    Custom call operator for initializing the parameters of a module.

    Weights are initialized using a uniform rule (with values drawn from
    the distribution U(-y, y), where y = 1 / sqrt(n) and n is the number of
    input features to the module), whilst biases are just filled with a
    constant zero value.

    Args:
        m (torch.nn.Module): Module with parameters to be initialized. Could
            be anything from a linear layer to a convolutional one. Right now,
            only initialization of Linear layers is performed.
    """

    if type(m) is torch.nn.Linear:
        n = m.in_features
        y = 1.0 / np.sqrt(n)
        torch.nn.init.uniform_(m.weight, -y, y)
        m.bias.data.fill_(0.0)

__str__()

Custom to string operator for the weight initializer.

Returns:

    str: A string which describes the weight initializer for output purposes.

Source code in mlpoppyns/learning/initializers/initializer_uniform_rule.py
def __str__(self) -> str:
    """
    Custom to string operator for the weight initializer.

    Returns:
        (str): A string which describes the weight initializer for output purposes.
    """

    return "Uniform Rule weight initializer."

mlpoppyns.learning.initializers.initializer_xavier

Xavier uniform initializer.

Class for a Xavier uniform weight initializer.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)

InitializerXavier

Bases: InitializerBase

Xavier uniform weight initializer class.

Any instance of this class is a callable function that will initialize a given torch module which contains trainable parameters (weights and biases) with a Xavier uniform distribution for the weights and a constant value for the biases.

Source code in mlpoppyns/learning/initializers/initializer_xavier.py
import torch

from mlpoppyns.learning.initializers.initializer_base import InitializerBase


class InitializerXavier(InitializerBase):
    """
    Xavier uniform weight initializer class.

    Any instance of this class is a callable function that will initialize a
    given torch module which contains trainable parameters (weights and biases)
    with a Xavier uniform distribution for the weights and a constant value for
    the biases.
    """

    def __call__(self, m: torch.nn.Module) -> None:
        """
        Custom call operator for initializing the parameters of a module.

        Weights are initialized using a Xavier uniform distribution (see
        "Understanding the difficulty of training deep feedforward neural
        networks" - Glorot, X. & Bengio, Y. (2010)) in which the values are
        sampled from a uniform distribution U(-a, a) where:
        a = gain * sqrt(6 / (fan_in + fan_out)).
        Biases are just filled with a constant close-to-zero value (0.01).

        Args:
            m (torch.nn.Module): Module with parameters to be initialized. Could
                be anything from a linear layer to a convolutional one. Right now,
                only initialization of Linear layers is performed.
        """

        if type(m) is torch.nn.Linear:
            torch.nn.init.xavier_uniform_(m.weight)
            m.bias.data.fill_(0.01)

    def __str__(self) -> str:
        """
        Custom to string operator for the weight initializer.

        Returns:
            (str): A string which describes the weight initializer for output purposes.
        """
        return "Xavier Uniform weight initializer"

__call__(m)

Custom call operator for initializing the parameters of a module.

Weights are initialized using a Xavier uniform distribution (see "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. & Bengio, Y. (2010)) in which the values are sampled from a uniform distribution U(-a, a) where a = gain * sqrt(6 / (fan_in + fan_out)). Biases are just filled with a constant close-to-zero value (0.01).

Parameters:

    m (torch.nn.Module): Module with parameters to be initialized. Could be
        anything from a linear layer to a convolutional one. Right now, only
        initialization of Linear layers is performed. Required.
Source code in mlpoppyns/learning/initializers/initializer_xavier.py
def __call__(self, m: torch.nn.Module) -> None:
    """
    Custom call operator for initializing the parameters of a module.

    Weights are initialized using a Xavier uniform distribution (see
    "Understanding the difficulty of training deep feedforward neural
    networks" - Glorot, X. & Bengio, Y. (2010)) in which the values are
    sampled from a uniform distribution U(-a, a) where:
    a = gain * sqrt(6 / (fan_in + fan_out)).
    Biases are just filled with a constant close-to-zero value (0.01).

    Args:
        m (torch.nn.Module): Module with parameters to be initialized. Could
            be anything from a linear layer to a convolutional one. Right now,
            only initialization of Linear layers is performed.
    """

    if type(m) is torch.nn.Linear:
        torch.nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

__str__()

Custom to string operator for the weight initializer.

Returns:

    str: A string which describes the weight initializer for output purposes.

Source code in mlpoppyns/learning/initializers/initializer_xavier.py
def __str__(self) -> str:
    """
    Custom to string operator for the weight initializer.

    Returns:
        (str): A string which describes the weight initializer for output purposes.
    """
    return "Xavier Uniform weight initializer"

mlpoppyns.learning.initializers.initializers

Initializers.

This module defines nothing of its own; it simply gathers all the available weight initializers in one place.

Authors:

Alberto Garcia Garcia (garciagarcia@ice.csic.es)
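
What this module re-exports is not shown here, so a conservative sketch imports each initializer from its own documented module; the name-to-class registry below is purely illustrative:

from mlpoppyns.learning.initializers.initializer_kaiming import InitializerKaiming
from mlpoppyns.learning.initializers.initializer_normal import InitializerNormal
from mlpoppyns.learning.initializers.initializer_uniform import InitializerUniform
from mlpoppyns.learning.initializers.initializer_uniform_rule import (
    InitializerUniformRule,
)
from mlpoppyns.learning.initializers.initializer_xavier import InitializerXavier

# Illustrative registry: pick an initializer by name, e.g. from a config file,
# then apply it with net.apply(INITIALIZERS[name]()).
INITIALIZERS = {
    "kaiming": InitializerKaiming,
    "normal": InitializerNormal,
    "uniform": InitializerUniform,
    "uniform_rule": InitializerUniformRule,
    "xavier": InitializerXavier,
}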