Information Entropy: the uncertainty of information. Conditional Entropy: the remaining uncertainty of random variable X given random variable Y.
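For reference, a minimal sketch of the standard definitions these terms point to (assuming discrete X and Y with joint distribution p(x, y)):

H(X) = -\sum_{x} p(x)\log p(x), \qquad
H(X \mid Y) = -\sum_{x,y} p(x,y)\log p(x \mid y) = \sum_{y} p(y)\, H(X \mid Y = y)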
Archive:
Machine Learning
Infinity in Deep Learning
Infinite Width: Neural Networks as Gaussian Process and Neural Tangent Kernel (NTK). A detailed derivation of neural networks as (1) a Gaussian process, via the central limit theorem, and (2) the Neural Tangent Kernel (NTK).
Infinite Depth: NeuralODE and Adjoint Equation. Discusses Neural ODEs and, in particular, the use of the adjoint equation in parameter training.
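A minimal sketch (not from either post) of the central-limit-theorem argument behind the infinite-width Gaussian-process view: with 1/sqrt(width) output scaling, the outputs of a randomly initialized one-hidden-layer ReLU network at two fixed inputs become closer to jointly Gaussian as the width grows, and their empirical covariance (the NNGP kernel) stabilizes. The network shape, inputs, and sample sizes below are hypothetical and chosen only for illustration.

import numpy as np

def random_network_outputs(x1, x2, width, n_samples=5000, seed=0):
    """Sample outputs f(x1), f(x2) of freshly initialized 1-hidden-layer ReLU networks."""
    rng = np.random.default_rng(seed)
    d = x1.shape[0]
    # First-layer weights: (n_samples, width, d); readout weights: (n_samples, width).
    W1 = rng.standard_normal((n_samples, width, d))
    W2 = rng.standard_normal((n_samples, width))
    h1 = np.maximum(W1 @ x1, 0.0)  # hidden activations for x1, shape (n_samples, width)
    h2 = np.maximum(W1 @ x2, 0.0)  # hidden activations for x2
    # 1/sqrt(width) scaling: the output is a normalized sum of width i.i.d. terms,
    # so the CLT drives it toward a Gaussian as width grows.
    f1 = (W2 * h1).sum(axis=1) / np.sqrt(width)
    f2 = (W2 * h2).sum(axis=1) / np.sqrt(width)
    return f1, f2

x1 = np.array([1.0, 0.5, -0.3])
x2 = np.array([0.2, -1.0, 0.7])
for width in (16, 128, 1024):
    f1, f2 = random_network_outputs(x1, x2, width)
    cov = np.cov(f1, f2)
    excess_kurtosis = np.mean((f1 - f1.mean()) ** 4) / f1.var() ** 2 - 3.0
    print(f"width={width:5d}  Var f(x1)={cov[0, 0]:.3f}  "
          f"Cov(f(x1), f(x2))={cov[0, 1]:.3f}  excess kurtosis={excess_kurtosis:.3f}")

For small widths the excess kurtosis is visibly nonzero; it shrinks toward 0 (the Gaussian value) as the width grows, while the covariance entries settle to fixed values, which is the central-limit-theorem behavior the first blurb above refers to.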
Bayesian: P(A|B)=\frac{P(B|A)P(A)}{P(B)}
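A purely illustrative worked example of the formula above, with hypothetical numbers (not from the post): take P(A)=0.01, P(B \mid A)=0.9, and P(B \mid \neg A)=0.05, and expand P(B) by the law of total probability:

P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}
            = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99}
            = \frac{0.009}{0.0585} \approx 0.154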