16.3 Mastering Stochastic Gradient Descent for Smarter Optimization

# negative log-likelihood for a linear model with Gaussian errors;
# par = c(log_sigma, beta), i.e. par[1] is log(sigma), the rest are coefficients
lm_ll = function(par, X, y) {
  # setup
  X = cbind(1, X)       # add a column of 1s for the intercept
  beta  = par[-1]       # coefficients
  sigma = exp(par[1])   # error sd; exp keeps it positive
  N = nrow(X)           # number of observations (not used below)

  LP = X %*% beta       # linear predictor
  mu = LP               # identity link, in the GLM sense

  # calculate the log-likelihood for each observation
  ll = dnorm(y, mean = mu, sd = sigma, log = TRUE)

  value = -sum(ll)      # negative, since optimizers minimize by default

  return(value)
}
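To see the function in action, here is a minimal end-to-end sketch. The function name `lm_ll`, the simulated data, and the choice of `optim()` with `method = "BFGS"` are illustrative assumptions; the function body matches the one above, and the result is checked against `lm()`.

```r
# Negative log-likelihood as above; par = c(log_sigma, beta)
lm_ll = function(par, X, y) {
  X = cbind(1, X)                 # add intercept column
  beta  = par[-1]
  sigma = exp(par[1])             # exp keeps the sd positive
  mu = X %*% beta                 # linear predictor (identity link)
  -sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))
}

# simulated data (assumed for illustration)
set.seed(123)
N = 500
X = matrix(rnorm(N * 2), ncol = 2)
y = 1 + 0.5 * X[, 1] - 0.8 * X[, 2] + rnorm(N, sd = 2)

# minimize the negative log-likelihood; start at zero for all parameters
init = rep(0, 4)                  # log_sigma, intercept, b1, b2
fit  = optim(par = init, fn = lm_ll, X = X, y = y, method = "BFGS")

# compare with lm(): coefficients should agree closely
cbind(mle = fit$par[-1], lm = coef(lm(y ~ X)))
exp(fit$par[1])                   # estimated error sd, near the true value 2
```

Note that the MLE of sigma divides by N rather than N - p, so it will be slightly smaller than `sigma(lm(y ~ X))` in small samples.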

