====== Table of Contents ======

  - [[~:OneDeepLearningRevolution|The Deep Learning Revolution]]
    - The Impact of Deep Learning
    - A Tutorial Example
    - A Brief History of Machine Learning
  - [[~:TwoProbabilities|Probabilities]]
    - The Rules of Probability
    - Probability Densities
    - The Gaussian Distribution
    - Transformation of Densities
    - Information Theory
    - Bayesian Probabilities
  - [[~:ThreStandardDistributions|Standard Distributions]]
    - Discrete Variables
    - The Multivariate Gaussian
    - Periodic Variables
    - The Exponential Family
    - Nonparametric Methods
      - Histograms
      - Kernel densities
      - Nearest-neighbours
  - [[~:FourSingleLayerNetworksRegression|Single-layer Networks: Regression]]
    - Linear Regression
    - Decision Theory
    - The Bias-Variance Trade-off
  - [[~:FiveSingleLayerNetworksClassification|Single-layer Networks: Classification]]
    - Discriminant Functions
    - Decision Theory
    - Generative Classifiers
    - Discriminative Classifiers
  - [[~:SixDeepNeuralNetworks|Deep Neural Networks]]
    - Limitations of Fixed Basis Functions
    - Multilayer Networks
    - Deep Networks
    - Error Functions
    - Mixture Density Networks
  - [[~:SevenGradientDescent|Gradient Descent]]
    - Error Surfaces
    - Gradient Descent Optimization
    - Convergence
    - Normalization
      - Batch Normalization
      - Layer Normalization
  - [[~:EightBackpropegation|Backpropagation]]
    - Evaluation of Gradients
    - Automatic Differentiation
  - [[~:NineRegularization|Regularization]]
    - Inductive Bias
    - Weight Decay
    - Learning Curves
    - Parameter Sharing
    - Residual Connections
    - Model Averaging
  - [[~:TenConvolutionalNetworks|Convolutional Networks]]
    - Computer Vision
    - Convolutional Filters
    - General Graph Networks
  - [[~:EleventStructuredDistributions|Structured Distributions]]
    - Graphical Models
    - Conditional Independence
    - Sequence Models
  - [[~:TwelveTransforms|Transformers]]
    - Attention
    - Natural Language
    - Transformer Language Models
    - Multimodal Transformers
  - [[~:ThirteenGraphNeuralNetworks|Graph Neural Networks]]
    - Machine Learning on Graphs
    - Neural Message-Passing
    - General Graph Networks
  - [[~:FourteenSampling|Sampling]]
    - Basic Sampling Algorithms
    - Markov Chain Monte Carlo
    - Langevin Sampling
  - [[~:FifteenLatentVariables|Latent Variables]]
    - k-means Clustering
    - Mixture of Gaussians
    - Expectation-Maximization Algorithm
    - Evidence Lower Bound
  - [[~:SixteenContinuousLatentVariables|Continuous Latent Variables]]
    - Principal Component Analysis
    - Probabilistic Latent Variables
    - Evidence Lower Bound
    - Nonlinear Latent Variable Models
  - [[~:SeventeenGenerativeAdversarialNetworks|Generative Adversarial Networks]]
    - Adversarial Training
    - Image GANs
  - [[~:EighteenNormalizingFlows|Normalizing Flows]]
    - Coupling Flows
    - Autoregressive Flows
    - Continuous Flows
  - [[~:NineteenAutoencoders|Autoencoders]]
    - Deterministic Autoencoders
    - Variational Autoencoders
  - [[~:TwentyDiffusionModels|Diffusion Models]]
    - Forward Encoder
    - Reverse Decoder
    - Score Matching
    - Guided Diffusion
  * Appendices
    - [[~:A1LinearAlgebra|Linear Algebra]]
    - [[~:A2CalculusOfVariations|Calculus of Variations]]
    - [[~:A3LagrangeMultipliers|Lagrange Multipliers]]