Blog post introducing the paper "Run Away from Your Teacher", which proposes a new self-supervised learning framework aimed at solving the puzzle of BYOL.
Latent variables and the EM algorithm.
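As a quick illustration of the topic (a minimal sketch, not taken from the note itself; the function name `em_gmm` and the equal-weight initialization are my own choices), here is EM for a two-component 1-D Gaussian mixture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a mixture of two 1-D Gaussians with means -2 and 3.
data = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

def em_gmm(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture model."""
    mu = np.array([x.min(), x.max()])      # crude initialization at the extremes
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])              # mixing weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

mu, sigma, pi = em_gmm(data)
```

The E-step treats the component assignments as latent variables and computes their posterior; the M-step maximizes the expected complete-data log-likelihood in closed form.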
Some experience about customizing environments and developping simple baseline RL models using stable-baselines, which is a fork of the openai/baselines project.
A proof of the bias-variance tradeoff for statistical models.
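For reference, the standard decomposition such a proof establishes (stated here as a reminder, assuming squared error and an additive zero-mean noise model, not quoted from the note):

```latex
% For y = f(x) + \varepsilon with \mathbb{E}[\varepsilon] = 0,
% \operatorname{Var}(\varepsilon) = \sigma^2, and estimator \hat{f}:
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]}_{\text{variance}}
  + \sigma^2 .
```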
The bootstrap aggregation (bagging) method and the boosting method.
Simple notes on the preprint "Learning Representations by Maximizing Mutual Information Across Views".
Notes on MoCo (Momentum Contrast).
Notes on the paper "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning".
Definition of Lipschitz continuity.
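As a quick reminder, the standard statement (included here for reference, not quoted from the note):

```latex
% A function f is L-Lipschitz on a set S if there exists L \ge 0 with
\|f(x) - f(y)\| \le L\,\|x - y\| \quad \text{for all } x, y \in S .
```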
Some basic concepts in convex optimization.
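The most basic of these concepts, convexity of a function, for reference (a standard definition, not quoted from the note):

```latex
% f is convex on a convex set C if, for all x, y \in C and \lambda \in [0, 1],
f\big(\lambda x + (1 - \lambda) y\big) \le \lambda f(x) + (1 - \lambda) f(y) .
```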