Reference: a very intuitive example is shown in this blog post:

https://wiseodd.github.io/techblog/2016/12/21/forward-reverse-kl/

Let the true distribution be defined as $P(X)$ and the approximate distribution as $Q(X)$.
Forward KL: $\sum_{x \in X} P(x) \log\left(\frac{P(x)}{Q(x)}\right)$
Discussion of the different cases:

  1. If P(x)=0, the corresponding term vanishes (by the convention $0 \log 0 = 0$), so Q(x) is unconstrained wherever P(x)=0: Q is free to assign any probability to those points.
  2. If P(x)>0, the log term contributes to the objective, so optimization pushes Q(x) as close as possible to P(x) wherever P(x)>0. In particular, letting Q(x) go to 0 at a point where P(x)>0 makes the term blow up, so forward KL forces Q to cover the full support of P (see the sketch after this list).
  3. The figure in the referenced blog shows the optimal Q for forward KL: it stretches to cover every region where P has mass, so forward KL tends to over-estimate the support of P (zero-avoiding / mean-seeking behavior).
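
To make case 2 concrete, here is a minimal NumPy sketch (the toy distributions p, q1, q2 and the eps guard are illustrative assumptions, not from the blog): forward KL stays small when Q covers the support of P, and blows up when Q assigns near-zero probability to an outcome where P(x) > 0.

```python
import numpy as np

def forward_kl(p, q, eps=1e-12):
    """KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)).
    Terms with P(x) == 0 contribute nothing (0 * log 0 := 0),
    so Q is free to place mass wherever P is zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # drop P(x) = 0 terms per the convention above
    return np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps)))

# P puts mass on two of four outcomes.
p  = [0.5, 0.5, 0.0, 0.0]
q1 = [0.4, 0.4, 0.1, 0.1]     # covers the support of P
q2 = [0.9, 1e-9, 0.05, 0.05]  # nearly misses a mode of P

print(forward_kl(p, q1))  # ~0.22, modest
print(forward_kl(p, q2))  # ~9.7, huge: P(x)>0 forces Q(x)>0
```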

Reverse KL: $\sum_{x \in X} Q(x) \log\left(\frac{Q(x)}{P(x)}\right)$

  1. If Q(x)=0, the term vanishes, so Q(x) may assign zero probability even to points where P(x)>0; it can drop modes of P at no cost.
  2. If Q(x)>0, the log term is taken into account during optimization, so Q(x) assigns probabilities as close as possible to P(x) wherever Q(x)>0. Placing Q mass where P(x) is near zero makes the term blow up, so reverse KL keeps Q inside the support of P (see the sketch after this list).
  3. The figure in the referenced blog shows the optimal Q for reverse KL: it collapses onto a single mode of P, under-estimating the support of P (zero-forcing / mode-seeking behavior).
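
A matching sketch for reverse KL under the same toy setup (p, q3, and eps are illustrative assumptions): the divergence stays finite even when Q drops an entire mode of P, which is exactly the zero-forcing behavior in case 1.

```python
import numpy as np

def reverse_kl(q, p, eps=1e-12):
    """KL(Q || P) = sum_x Q(x) * log(Q(x) / P(x)).
    Terms with Q(x) == 0 vanish, so Q may assign zero
    probability where P(x) > 0 (mode dropping) at no cost."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = q > 0  # drop Q(x) = 0 terms
    return np.sum(q[mask] * np.log(q[mask] / (p[mask] + eps)))

# Bimodal P; Q locks onto a single mode and ignores the other.
p  = [0.5, 0.5, 0.0, 0.0]
q3 = [1.0, 0.0, 0.0, 0.0]  # mode-seeking: one mode of P only

print(reverse_kl(q3, p))  # log(1.0 / 0.5) ~ 0.69, finite
# Forward KL for the same pair would be infinite, since
# P(x) = 0.5 at an outcome where Q(x) = 0.
```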
