Performing MLE

  1. Write down the likelihood function $L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)$.
  2. Take the logarithm to obtain the log-likelihood $\ell(\theta) = \ln L(\theta)$.
  3. Solve for $\hat{\theta}$ in $\frac{d}{d\theta} \ell(\theta) = 0$.

Note

MLE can be extended from 1 parameter to multiple parameters, as shown in this example.
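The example referenced above is not reproduced here, so as a hedged illustration of the multi-parameter case, consider the normal distribution: setting both partial derivatives of the log-likelihood to zero yields closed-form estimates for $\mu$ and $\sigma^2$ (the data values below are made up):

```python
# Hypothetical data; the note's original example is not reproduced here,
# so multi-parameter MLE is illustrated with a normal distribution instead.
data = [2.1, 1.9, 2.4, 2.0, 1.8, 2.3]
n = len(data)

# For the normal distribution, setting both partial derivatives of the
# log-likelihood to zero gives closed-form estimates:
#   mu_hat     = sample mean
#   sigma2_hat = average squared deviation (note: divides by n, not n - 1)
mu_hat = sum(data) / n
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n

print(mu_hat, sigma2_hat)
```

Note that the MLE of the variance divides by $n$, not $n - 1$, so it differs from the usual unbiased sample variance.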

Warning

Step 2 is done because the logarithm turns a product of probabilities into a sum, which usually makes the derivative easier to compute in step 3.

Note that depending on the form of $L(\theta)$, applying step 2 might overcomplicate the equation instead. In such cases, step 2 may be skipped, and we would maximize $L(\theta)$ instead of $\ln L(\theta)$.
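The benefit of the log step can also be seen numerically: multiplying many probabilities underflows floating-point arithmetic to zero, while summing their logarithms stays well-behaved. A minimal sketch with made-up per-observation likelihoods:

```python
import math

# A product of many small probabilities underflows to 0.0 in floating
# point, while the sum of their logarithms remains finite.
probs = [0.01] * 200  # hypothetical per-observation likelihoods

product = 1.0
for p in probs:
    product *= p  # 0.01**200 = 1e-400, far below the smallest double

log_sum = sum(math.log(p) for p in probs)  # finite: 200 * ln(0.01)

print(product)   # 0.0
print(log_sum)   # about -921.03
```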

About MLE

Imagine you have a dataset, and you know it’s generated by some underlying process (distribution) that depends on an unknown parameter $\theta$. MLE helps you find the best estimate of $\theta$, denoted by $\hat{\theta}$, that maximizes the likelihood of observing the given data.

Read further: https://www.youtube.com/watch?v=XepXtl9YKwc

For example, suppose you’re working in a factory that produces light bulbs. You want to estimate the probability that a randomly selected bulb is defective. You’ve observed that out of 100 randomly sampled bulbs, 5 are defective.

You recognize that the sample comes from a binomial distribution, whose MLE estimate is $\hat{p} = \frac{k}{n}$, where $k$ is the number of defective bulbs and $n$ is the sample size.

So, based on the data, the MLE estimate suggests that the probability of a bulb being defective is $\hat{p} = \frac{5}{100} = 0.05$.
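As a sanity check, the closed-form estimate $\hat{p} = k/n$ can be compared against a brute-force grid search over the binomial log-likelihood (a sketch; the grid resolution is arbitrary):

```python
import math

# Check the closed-form binomial MLE p_hat = k/n against a grid search
# over the log-likelihood  k*ln(p) + (n - k)*ln(1 - p)
# (the constant binomial coefficient term is dropped, as it does not
# depend on p and so does not affect the maximizer).
n, k = 100, 5          # 5 defective bulbs out of 100 sampled
p_hat = k / n          # closed-form MLE: 0.05

def log_likelihood(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Grid search over candidate values of p in (0, 1).
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=log_likelihood)

print(p_hat, best)  # both 0.05
```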

Example

![image](<assets/image 11.png>)

Let $X_1, X_2, \ldots, X_n$ represent a random sample from the distribution having the probability density function shown above, zero elsewhere.

Find $\hat{\theta}$, the MLE of $\theta$.

So, the MLE of $\theta$, which we will denote by $\hat{\theta}$, is the solution of $\frac{d}{d\theta} \ln L(\theta) = 0$.
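The example’s density appears only in the image above, so the sketch below assumes the common textbook density $f(x; \theta) = \theta x^{\theta - 1}$ for $0 < x < 1$, $\theta > 0$ (an assumption, not necessarily the pdf in the image). For that density, setting the derivative of $\ln L(\theta) = n \ln \theta + (\theta - 1) \sum_i \ln x_i$ to zero gives $\hat{\theta} = -n / \sum_i \ln x_i$:

```python
import math
import random

# Assumed density (NOT necessarily the one in the image above):
#   f(x; theta) = theta * x**(theta - 1),  0 < x < 1,  theta > 0
# Its log-likelihood is n*ln(theta) + (theta - 1) * sum(ln x_i),
# and setting the derivative to zero yields
#   theta_hat = -n / sum(ln x_i)
random.seed(0)
true_theta = 3.0

# Inverse-CDF sampling: F(x) = x**theta, so X = U**(1/theta).
sample = [random.random() ** (1 / true_theta) for _ in range(10_000)]

theta_hat = -len(sample) / sum(math.log(x) for x in sample)
print(theta_hat)  # close to the true value 3.0
```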