# Likelihood and entropy. An example

*Sunday, July 20, 2014*

As an example of the ideas in the previous post, I run here an empirical likelihood estimation in the simplest possible setting, following this example.

Assume $n=50$ observations are drawn from $x_i \sim N(4, 1)$, $i=1,\dots,n$:
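The simulation snippet is missing from this copy of the post (the original used R); a minimal Python sketch of the same draw, where the seed is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(42)             # arbitrary seed, not from the post
n = 50
x = rng.normal(loc=4.0, scale=1.0, size=n)  # x_i ~ N(4, 1), i = 1, ..., n
```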

The maximum likelihood estimate of the mean is simply
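The original snippet is not in this copy; the MLE of the mean of a normal sample is the sample mean, so a Python equivalent (simulated data and seed are assumptions) is just:

```python
import numpy as np

rng = np.random.default_rng(42)      # arbitrary seed
x = rng.normal(4.0, 1.0, size=50)    # x_i ~ N(4, 1)

mu_ml = x.mean()                     # MLE of the mean of a normal sample
```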

Maximizing the program in the post with respect to $\lambda_2$^{1} results in the following condition:
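The display itself is missing from this copy; with the usual empirical-likelihood weights $\omega_i$ and the moment condition $\sum_i \omega_i (x_i - \mu) = 0$, the first-order condition for $\lambda_2$ would read

$$\sum_{i=1}^{n} \frac{x_i - \mu}{1 + \lambda_2 (x_i - \mu)} = 0.$$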

and we can find the zeroes of the previous implicit function by minimizing:
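The minimization code is not in this copy (the post's code is R); a self-contained Python sketch, in which the squared first-order condition as objective, the bounds that keep all weights positive, and the simulated data are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)      # arbitrary seed
x = rng.normal(4.0, 1.0, size=50)    # x_i ~ N(4, 1)

def foc(lam2, mu):
    # implicit function: sum_i (x_i - mu) / (1 + lam2 * (x_i - mu))
    return np.sum((x - mu) / (1.0 + lam2 * (x - mu)))

def lambda2_hat(mu):
    # find the zero of the implicit function by minimizing its square,
    # over the interval of lambda_2 where every weight stays positive
    d = x - mu
    lo = (-1.0 / d.max() + 1e-6) if d.max() > 0 else -10.0
    hi = (-1.0 / d.min() - 1e-6) if d.min() < 0 else 10.0
    res = minimize_scalar(lambda l: foc(l, mu) ** 2,
                          bounds=(lo, hi), method="bounded")
    return res.x
```

At $\mu = \bar{x}$ the implicit function is zero at $\lambda_2 = 0$, which is a quick sanity check on the sketch.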

The (inverse) of the probability of each observation is given by:
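The expression is missing here; in the standard empirical-likelihood solution (up to the post's normalization of the multipliers, with the footnote pinning $\lambda_1 = 1$), each weight satisfies

$$\frac{1}{\omega_i(\lambda_2, \mu)} = n\left(1 + \lambda_2 (x_i - \mu)\right),$$

so the weights are positive whenever $1 + \lambda_2 (x_i - \mu) > 0$.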

which means that the log-likelihood program to maximize is $-\sum_i\ln(1/\omega_i(\lambda_2,\mu))$. Therefore, we can solve the partially optimized problem with respect to $\mu$. Three things are worth noting. First, given the simplicity of the problem, grid search is a reasonable option. Second, as in any grid search problem, we should check the behavior of the program at the boundaries. Third, the interval in which the optimizer is going to be looking is defined by the upper and lower bounds that we can establish on $x$.

In this case, the vector of candidates is defined by
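The grid definition is missing from this copy; a Python sketch, where the grid size of 200 points and the use of the sample range as bounds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)      # arbitrary seed
x = rng.normal(4.0, 1.0, size=50)    # x_i ~ N(4, 1)

# candidate values for mu, bounded by the observed sample range
mu_grid = np.linspace(x.min(), x.max(), num=200)
```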

and the actual optimization happens here:
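The optimization code is also missing; a self-contained Python sketch of the whole grid search (it repeats the setup so the snippet runs on its own; all names, the seed, and the grid size are assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)      # arbitrary seed
n = 50
x = rng.normal(4.0, 1.0, size=n)     # x_i ~ N(4, 1)

def foc(lam2, mu):
    # sum_i (x_i - mu) / (1 + lam2 * (x_i - mu))
    return np.sum((x - mu) / (1.0 + lam2 * (x - mu)))

def lambda2_hat(mu):
    # zero of the implicit function, via minimizing its square
    d = x - mu
    lo = (-1.0 / d.max() + 1e-6) if d.max() > 0 else -10.0
    hi = (-1.0 / d.min() - 1e-6) if d.min() < 0 else 10.0
    return minimize_scalar(lambda l: foc(l, mu) ** 2,
                           bounds=(lo, hi), method="bounded").x

def neg_loglik(mu):
    # sum_i ln(1 / omega_i) = sum_i ln( n (1 + lam2 (x_i - mu)) ), to be minimized
    t = 1.0 + lambda2_hat(mu) * (x - mu)
    if np.any(t <= 0):
        return np.inf                # guard: weights must stay positive
    return np.sum(np.log(n * t))

mu_grid = np.linspace(x.min(), x.max(), num=200)
values = np.array([neg_loglik(m) for m in mu_grid])
mu_hat = mu_grid[np.argmin(values)]  # empirical likelihood estimate of mu
```

Since the empirical likelihood for a mean is maximized at the sample mean, the grid search should land on the grid point closest to $\bar{x}$.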

Therefore the estimate is

which coincides with the ML estimate shown above. As a matter of fact, we can also see that the empirical probability assigned to each observation follows a nice normal distribution:
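The figure itself is not in this copy (the post drew it with base R graphics); a Python sketch that recovers the empirical probabilities $\omega_i$ at the estimate, which is the quantity the plot displays (the seed and variable names are assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)      # arbitrary seed
n = 50
x = rng.normal(4.0, 1.0, size=n)     # x_i ~ N(4, 1)

def foc(lam2, mu):
    return np.sum((x - mu) / (1.0 + lam2 * (x - mu)))

def lambda2_hat(mu):
    d = x - mu
    lo = (-1.0 / d.max() + 1e-6) if d.max() > 0 else -10.0
    hi = (-1.0 / d.min() - 1e-6) if d.min() < 0 else 10.0
    return minimize_scalar(lambda l: foc(l, mu) ** 2,
                           bounds=(lo, hi), method="bounded").x

mu_hat = x.mean()                    # the EL estimate coincides with the MLE here
w = 1.0 / (n * (1.0 + lambda2_hat(mu_hat) * (x - mu_hat)))  # omega_i
# plotting x against w (e.g. with matplotlib) reproduces the figure
```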

And this is probably the first time I use the `base` library for graphics in `R` in 8 months.

1. It can be shown that $\lambda_1=1$.

RSTATS · STATS