^"We typically use z to denote such latent variables." Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^“A nonlinear mixed-effects model for simultaneous smoothing and registration of functional data”. Pattern Recognition Letters 38: 1–7. (2014). doi:10.1016/j.patrec.2013.10.018.
^Tabachnick, B.G.; Fidell, L.S. (2001). Using Multivariate Statistics. Boston: Allyn and Bacon. ISBN 978-0-321-05677-1 [page needed]
^Greene, Jeffrey A.; Brown, Scott C. (2009). “The Wisdom Development Scale: Further Validity Investigations”. International Journal of Aging and Human Development 68 (4): 289–320 (at p. 291). doi:10.2190/AG.68.4.b. PMID 19711618.
^"a latent variable model" Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"This is also called the (single datapoint) marginal likelihood or the model evidence, when taken as a function of θ." Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"Perhaps the simplest, and most common, DLVM is one that is specified as factorization" Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"The distribution p(z) is often called the prior distribution over z, since it is not conditioned on any observations." Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"This is due to the integral ... for computing the marginal likelihood ..., not having an analytic solution or efficient estimator." Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"The intractability of pθ(x), is related to the intractability of the posterior distribution pθ(z|x). ... Since pθ(x, z) is tractable to compute, a tractable marginal likelihood pθ(x) leads to a tractable posterior pθ(z|x), and vice versa. Both are intractable in DLVMs." Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"We use the term deep latent variable model (DLVM) to denote a latent variable model pθ(x, z) whose distributions are parameterized by neural networks." Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.
^"One important advantage of DLVMs, is that even when each factor (prior or conditional distribution) in the directed model is relatively simple (such as conditional Gaussian), the marginal distribution pθ(x) can be very complex" Kingma. (2019). An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning.