Moodle PHP Documentation 4.3
Moodle 4.3.5 (Build: 20240610) (7dcfaa79f78)
Phpml\Helper\Optimizer\StochasticGD Class Reference
Inheritance diagram for Phpml\Helper\Optimizer\StochasticGD:
Phpml\Helper\Optimizer\Optimizer
  └─ Phpml\Helper\Optimizer\StochasticGD
      └─ Phpml\Helper\Optimizer\GD
          └─ Phpml\Helper\Optimizer\ConjugateGradient

Public Member Functions

 __construct (int $dimensions)
 Initializes the SGD optimizer for the given number of dimensions.
 
 getCostValues ()
 Returns the list of cost values for each iteration executed in the last run of the optimization.
 
 runOptimization (array $samples, array $targets, Closure $gradientCb)
 Optimization procedure that finds the unknown variables ϴ for the equation A·ϴ = y.
 
 setChangeThreshold (float $threshold=1e-5)
 Sets the minimum change in the theta values between iterations required to continue iterating.
 
 setEarlyStop (bool $enable=true)
 Enables/disables early stopping, which halts the optimization when the changes in theta or in the cost value between iterations become too small.
 
 setLearningRate (float $learningRate)
 
 setMaxIterations (int $maxIterations)
 
 setTheta (array $theta)
 
 theta ()
 

Protected Member Functions

 clear ()
 Clears the optimizer's internal variables after the optimization process.
 
 earlyStop (array $oldTheta)
 Checks whether the optimization has stalled and can be stopped because the solution is no longer changing significantly between iterations.
 
 updateTheta ()
 

Protected Attributes

array $costValues = []
 List of values obtained by evaluating the cost function at each iteration of the algorithm.
 
int $dimensions
 Number of dimensions.
 
bool $enableEarlyStop = true
 Enables/disables early stopping by checking whether the weight and cost values changed enough between iterations to justify continuing the optimization.
 
Closure null $gradientCb
 Callback function to get the gradient and cost value for a specific set of theta (ϴ) and a pair of sample & target.
 
float $learningRate = 0.001
 The learning rate controls the step size of the optimization.
 
int $maxIterations = 1000
 Maximum number of iterations used to train the model.
 
array $samples = []
 A (samples)
 
array $targets = []
 y (targets)
 
array $theta = []
 
float $threshold = 1e-4
 Minimum change in the weights and error values between iterations required to continue training.
 

Constructor & Destructor Documentation

◆ __construct()

Phpml\Helper\Optimizer\StochasticGD::__construct ( int $dimensions)

Initializes the SGD optimizer for the given number of dimensions.

Reimplemented from Phpml\Helper\Optimizer\Optimizer.

Member Function Documentation

◆ clear()

Phpml\Helper\Optimizer\StochasticGD::clear ( )
protected

Clears the optimizer's internal variables after the optimization process.

Reimplemented in Phpml\Helper\Optimizer\GD.

◆ runOptimization()

Phpml\Helper\Optimizer\StochasticGD::runOptimization ( array $samples,
array $targets,
Closure $gradientCb )

The optimization procedure finds the unknown variables ϴ for the equation A·ϴ = y, given the samples (A) and targets (y).

The cost function to minimize and the gradient of the function are to be handled by the callback function provided as the third parameter of the method.

Reimplemented from Phpml\Helper\Optimizer\Optimizer.

Reimplemented in Phpml\Helper\Optimizer\ConjugateGradient, and Phpml\Helper\Optimizer\GD.
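At its core, stochastic gradient descent repeatedly applies the update θ := θ − learningRate · gradient, one sample at a time. A minimal self-contained sketch of that update rule, fitting a line y = θ₀ + θ₁·x on toy data (this illustrates the loop the class runs internally; the real class additionally tracks cost values and supports early stopping, and the gradient comes from the supplied callback):

```php
<?php
// Minimal sketch of the per-sample SGD update rule: theta := theta - lr * gradient.
// Toy data generated from y = 1 + 2x, so theta should approach [1, 2].
$samples = [[1.0], [2.0], [3.0], [4.0]];
$targets = [3.0, 5.0, 7.0, 9.0];
$theta = [0.0, 0.0];          // [intercept, slope]
$learningRate = 0.01;

for ($iter = 0; $iter < 5000; $iter++) {
    foreach ($samples as $i => $sample) {
        $prediction = $theta[0] + $theta[1] * $sample[0];
        $error = $prediction - $targets[$i];
        // Gradient of the squared error 0.5 * error^2 w.r.t. each component:
        $theta[0] -= $learningRate * $error;
        $theta[1] -= $learningRate * $error * $sample[0];
    }
}

printf("theta0 = %.3f, theta1 = %.3f\n", $theta[0], $theta[1]);
```

With an exact linear fit available, the per-sample gradients all vanish at θ = [1, 2], so the iteration settles there.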

◆ setChangeThreshold()

Phpml\Helper\Optimizer\StochasticGD::setChangeThreshold ( float $threshold = 1e-5)

Sets the minimum change in the theta values between iterations required to continue iterating.

If the change in theta is less than the given value, the algorithm stops training.

Return values
$this

◆ setEarlyStop()

Phpml\Helper\Optimizer\StochasticGD::setEarlyStop ( bool $enable = true)

Enables/disables early stopping, which halts the optimization when the changes in theta or in the cost value between iterations become too small.

Return values
$this

◆ setLearningRate()

Phpml\Helper\Optimizer\StochasticGD::setLearningRate ( float $learningRate)
Return values
$this

◆ setMaxIterations()

Phpml\Helper\Optimizer\StochasticGD::setMaxIterations ( int $maxIterations)
Return values
$this
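Since each setter returns $this, the optimizer can be configured fluently. A configuration sketch, assuming the php-ml library is installed via Composer (the autoload path is illustrative):

```php
<?php
// Sketch only: assumes php-ml is installed (composer require php-ai/php-ml).
require_once 'vendor/autoload.php';

use Phpml\Helper\Optimizer\StochasticGD;

$optimizer = new StochasticGD(2);   // two unknowns in theta
$optimizer->setLearningRate(0.01)   // each setter returns $this,
    ->setMaxIterations(5000)        // so calls can be chained
    ->setChangeThreshold(1e-6)
    ->setEarlyStop(true);
```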

◆ setTheta()

Phpml\Helper\Optimizer\StochasticGD::setTheta ( array $theta)

Reimplemented from Phpml\Helper\Optimizer\Optimizer.

Member Data Documentation

◆ $learningRate

float Phpml\Helper\Optimizer\StochasticGD::$learningRate = 0.001
protected

The learning rate controls the step size of the optimization.

Larger values may overshoot the optimum or even cause divergence, while smaller values slow down convergence and increase the time required for training.
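This trade-off is easy to see on a one-dimensional example. A self-contained sketch using plain gradient descent on f(θ) = θ² (gradient 2θ), which has its optimum at θ = 0: a small learning rate shrinks θ toward zero, while one slightly above 1.0 makes each step overshoot and grow, i.e. diverge.

```php
<?php
// Gradient descent on f(t) = t^2 (gradient 2t), starting from t = 1.0.
function descend(float $learningRate, int $steps = 20): float
{
    $theta = 1.0;
    for ($i = 0; $i < $steps; $i++) {
        $theta -= $learningRate * 2.0 * $theta; // theta := theta - lr * f'(theta)
    }
    return $theta;
}

printf("lr = 0.1 -> |theta| = %.6f\n", abs(descend(0.1))); // shrinks toward 0
printf("lr = 1.1 -> |theta| = %.1f\n", abs(descend(1.1))); // grows without bound
```

Each step multiplies θ by (1 − 2·lr): 0.8 per step for lr = 0.1 (convergent), −1.2 per step for lr = 1.1 (divergent).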

