RL4CRN.utils.ffnn
Feed-forward neural network utilities.
This module defines a lightweight fully-connected neural network backbone used
throughout RL4CRN (e.g., policy encoders, parameter heads, and other small MLP
components). It provides a single FFNN class that builds an MLP with a
configurable number of hidden layers and activations, intended as a simple,
general-purpose function approximator.
FFNN
Bases: Module
Simple feed-forward neural network (MLP) backbone.
The network is a stack of fully-connected layers mapping an input vector to an output vector. It is primarily used as a reusable building block for encoders and heads in policies/value functions.
Architecture
- Linear(input_size -> hidden_size) + ReLU
- num_layers blocks of: Linear(hidden_size -> hidden_size) + Tanh
- Linear(hidden_size -> output_size)
Notes
- This class does not apply any output activation; callers should apply any required squashing (e.g., tanh/softplus) externally.
- Input tensors are expected to be of shape (N, input_size), where N is the batch dimension.
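The architecture above can be sketched as a small PyTorch module. This is a hypothetical reconstruction based solely on the description in this page (first Linear + ReLU, then num_layers Linear + Tanh blocks, then an unactivated output Linear); the actual RL4CRN implementation may differ in detail.

```python
import torch
import torch.nn as nn


class FFNN(nn.Module):
    """Sketch of the described MLP backbone (assumed, not the canonical source)."""

    def __init__(self, input_size: int, output_size: int,
                 hidden_size: int, num_layers: int):
        super().__init__()
        self.input_size = input_size
        self.output_size = output_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers

        # First block: Linear(input_size -> hidden_size) + ReLU
        layers = [nn.Linear(input_size, hidden_size), nn.ReLU()]
        # num_layers additional hidden blocks, each Linear + Tanh
        for _ in range(num_layers):
            layers += [nn.Linear(hidden_size, hidden_size), nn.Tanh()]
        # Output layer: no activation, as noted above
        layers.append(nn.Linear(hidden_size, output_size))
        self.model = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is expected to have shape (N, input_size)
        return self.model(x)
```

With num_layers=2 this yields four Linear layers in total: the input block, two hidden blocks, and the output layer.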
| PARAMETER | DESCRIPTION |
|---|---|
| input_size | Dimensionality of the input features. TYPE: int |
| output_size | Dimensionality of the output features. TYPE: int |
| hidden_size | Width of the hidden layers. TYPE: int |
| num_layers | Number of additional hidden blocks after the first layer (each block is Linear + Tanh). TYPE: int |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| input_size | Stored input dimensionality. TYPE: int |
| output_size | Stored output dimensionality. TYPE: int |
| hidden_size | Stored hidden width. TYPE: int |
| num_layers | Stored number of hidden blocks. TYPE: int |
| model | The assembled PyTorch module. TYPE: torch.nn.Module |
Example

```python
import torch

net = FFNN(input_size=16, output_size=4, hidden_size=64, num_layers=2)
x = torch.randn(32, 16)
y = net(x)  # shape (32, 4)
```
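Since the network applies no output activation, any required squashing is the caller's responsibility, as noted above. A minimal sketch of the two squashings mentioned (tanh and softplus), using a random tensor as a stand-in for the network's raw output:

```python
import torch
import torch.nn.functional as F

raw = torch.randn(32, 4)      # stand-in for the unactivated FFNN output
bounded = torch.tanh(raw)     # squash into (-1, 1), e.g. for bounded actions
positive = F.softplus(raw)    # strictly positive, e.g. for scales or rates
```

Keeping the backbone activation-free lets the same FFNN serve heads with different output constraints.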