posts
-
Y Combinator startup schools
I’m reading a lot of startup materials these days. I find the Startup School materials from Y Combinator short yet very useful. They have a series of video lectures, each lasting about 20 to 30 minutes. More importantly, their advice is very actionable: you can apply it immediately. A good example is the lecture on customer development.
-
YouCompleteMe with Vim8 on MacOS
I’m switching my IDE from Visual Studio Code to Vim8. Vim8, coupled with plugins like FZF or CtrlP (fuzzy file search) and jedi-vim (Python editing), provides much of the functionality of a modern IDE. The nice thing is that it is quite easy to remember all those shortcuts.
-
Entrepreneurship
I recently joined Entrepreneur First (joinef.com). The program recruits about 50 people from all over the world to join the Berlin 2020 cohort. Once admitted, your mission is to team up with somebody, find an idea, develop it into a business model, and, in the best case, find a customer. If everything goes well, after 3 months you will graduate from the program with your own company. EF invests 90K euros in exchange for 10% of your company.
-
Noisy Gradients I - The Theory
In variational inference, reinforcement learning, and sensitivity analysis, it is very common for the loss function to have the form $E_{p(x;\theta)}[f(x)]$. This loss is typically minimized using stochastic optimization, which requires us to compute noisy estimates of its gradient $\nabla_{\theta}E_{p(x;\theta)}[f(x)]$.
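The post's derivation isn't reproduced in this excerpt, but the best-known such estimator is the score-function (REINFORCE) identity, $\nabla_{\theta}E_{p(x;\theta)}[f(x)] = E_{p(x;\theta)}[f(x)\nabla_{\theta}\log p(x;\theta)]$. A minimal sketch, with $p(x;\theta)=\mathcal{N}(\theta,1)$ and $f(x)=x^2$ both chosen here purely for illustration:

```python
import numpy as np

def score_function_grad(theta, n_samples=200_000, seed=0):
    """Monte Carlo estimate of d/dtheta E_{x ~ N(theta, 1)}[x^2]
    using the score-function (REINFORCE) estimator."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, 1.0, size=n_samples)
    # For N(theta, 1): grad_theta log p(x; theta) = x - theta
    score = x - theta
    return np.mean((x ** 2) * score)

# Analytically E[x^2] = theta^2 + 1, so the true gradient is 2*theta.
```

For theta = 1.0 the estimate should land near the analytic value 2.0, but with noticeable Monte Carlo noise even at 200K samples; that high variance is exactly what makes these gradients "noisy".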
-
spectrum - A truth discovery python library
spectrum is a library that provides implementations of truth discovery algorithms, which jointly estimate the correct values of objects and the reliabilities of data sources. An object’s value can be discrete or continuous. For example, a person’s birthplace has a discrete domain (the set of possible birth locations), whereas a stock price is continuous.
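spectrum's actual API isn't shown in this excerpt; the following is only a toy sketch of the underlying idea — iteratively re-weighted voting, where value scores and source reliabilities reinforce each other — with every name made up for illustration:

```python
from collections import defaultdict

def truth_discovery(claims, n_iters=10):
    """Toy truth discovery over discrete values.

    claims: list of (source, obj, value) triples.
    Alternates between scoring each candidate value by the total
    weight of the sources asserting it, and re-weighting each source
    by the fraction of its claims that match the current best values.
    """
    sources = {s for s, _, _ in claims}
    weight = {s: 1.0 for s in sources}
    truths = {}
    for _ in range(n_iters):
        # Score each (object, value) pair by weighted votes
        votes = defaultdict(float)
        for s, obj, val in claims:
            votes[(obj, val)] += weight[s]
        truths = {}
        for (obj, val), score in votes.items():
            if obj not in truths or score > votes[(obj, truths[obj])]:
                truths[obj] = val
        # Source reliability = agreement with the current truths
        correct, total = defaultdict(int), defaultdict(int)
        for s, obj, val in claims:
            total[s] += 1
            correct[s] += (truths[obj] == val)
        weight = {s: correct[s] / total[s] for s in sources}
    return truths, weight
```

With two reliable sources and one unreliable one, the iteration quickly drives the unreliable source's weight to zero and keeps the majority values as truths.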
-
I love maps
I have always loved maps. Information displayed on a map can reveal a lot of insights. So here is the map of mobile towers in Egypt.
-
Prior construction
In Bayesian analysis, one needs to construct a prior distribution to encode prior beliefs. Here is, again, the notebook :D.
-
Markov Chain Monte Carlo inference
In the previous post, I wrote about variational inference as a type of approximate inference. Today let’s talk about another type of approximate inference, namely Markov Chain Monte Carlo. Here is, again, the notebook.
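The notebook itself isn't reproduced in this excerpt; as a taste of the method, here is a minimal random-walk Metropolis sampler (the simplest MCMC algorithm), targeting a standard normal known only up to a constant — target, step size, and sample count are all chosen here just for illustration:

```python
import numpy as np

def metropolis(log_target, x0=0.0, n_samples=50_000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        # Accept or reject based on the log density ratio
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Unnormalized log density of a standard normal
samples = metropolis(lambda x: -0.5 * x * x)
```

The empirical mean and standard deviation of the chain should be close to the target's 0 and 1, though the samples are correlated, so they are worth less than independent draws.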
-
Variational inference
Variational inference is a type of approximate inference. It allows us to approximate the posterior distribution. Here is a small notebook that I wrote.
-
Expectation maximization
Let’s talk about the expectation maximization algorithm.
-
Box analysis process
What does it mean to do data analysis?
-
Welcome to my blog!
It is often hard to see the relationships among machine learning methods. For example, one can view the popular k-means clustering algorithm as an instance of expectation maximization, which, in turn, can be viewed as a special case of variational inference.
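To make the k-means example concrete, here is a minimal sketch of k-means written as "hard" EM: the E-step collapses the posterior over cluster membership to a point mass on the nearest centroid, and the M-step re-estimates each centroid as the mean of its assigned points. The toy two-cluster data in the usage note is invented for illustration.

```python
import numpy as np

def kmeans_as_hard_em(X, k, n_iters=20, seed=0):
    """k-means as EM with hard assignments.

    E-step: assign each point to its nearest centroid (a degenerate
    posterior over cluster membership).
    M-step: set each centroid to the mean of its assigned points.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # E-step: hard assignment to the closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M-step: update each non-empty cluster's centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```

Replacing the hard argmin with soft responsibilities under a Gaussian mixture recovers ordinary EM, which is exactly the relationship the post alludes to.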