
Grothendieck's Inequality and Constant

Yesterday, in the High Dimensional Probability class, I encountered the proof of Grothendieck's inequality, which relates the optimal value of an integer linear program (ILP) to that of its corresponding real linear program (LP).

The inequality states the following.

Grothendieck's Inequality. There exists a universal constant \(K>0\) such that for every \(l,m,n\in \mathbb N\) and every matrix \(M=(M_{ij})\in \mathbb F^{m\times n}\) (where \(\mathbb F\) is either \(\mathbb R\) or \(\mathbb C\)), \[ \max\limits_{\|x_i\|=\|y_j\|=1 } \left|\sum\limits_{i=1}^m\sum\limits_{j=1}^n M_{ij}\langle x_i,y_j\rangle\right| \le K\max\limits_{|\varepsilon_i|=|\delta_j|=1} \left|\sum\limits_{i=1}^m\sum\limits_{j=1}^n M_{ij}\varepsilon_i \delta_j\right|, \] where the maximum on the left is taken over unit vectors \(x_i,y_j\in \mathbb F^l\) (with the inner product taken in \(\mathbb F^l\)) and the maximum on the right over scalars \(\varepsilon_i,\delta_j\in\mathbb F\) of modulus one.
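To connect this with the ILP/LP picture above: in the real case, the constraint \(|\varepsilon_i|=|\delta_j|=1\) forces the scalars to be signs \(\pm 1\), so the right-hand side is a discrete (integer) optimization problem, while the left-hand side relaxes those signs to unit vectors in \(\mathbb R^l\): \[ \max_{\varepsilon\in\{-1,1\}^m,\ \delta\in\{-1,1\}^n} \left|\sum_{i=1}^m\sum_{j=1}^n M_{ij}\varepsilon_i\delta_j\right| = \max_{\varepsilon,\delta} \left|\varepsilon^{\top} M\, \delta\right|. \]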

The smallest constant \(K_G\) that satisfies the inequality is called the Grothendieck constant. From the (beautiful) proof of Krivine, \(K_G\le \frac{\pi}{ 2\ln(1+\sqrt{2}) }\) in the real case. What is more surprising is that this is not the sharpest constant: Braverman, Makarychev, Makarychev, and Naor (2011) proved that \(K_G\) is strictly smaller than Krivine's bound, and its exact value is still unknown. This is despite the fact that Krivine's proof uses a very sharp argument (one that is often referred to as the kernel trick in machine learning).
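Numerically, Krivine's bound evaluates to \[ \frac{\pi}{2\ln(1+\sqrt 2)} = \frac{\pi}{2\sinh^{-1}(1)} \approx \frac{3.1416}{2\times 0.8814} \approx 1.7822. \]

As a sanity check of the inequality itself, here is a minimal numerical sketch in Python (real case, NumPy only). The sign maximum on the right-hand side is computed by brute force over all \(\pm1\) assignments, while the vector maximum on the left-hand side is only lower-bounded by alternating maximization (a local search), so the comparison against Krivine's bound is a consistency check rather than a proof. The function names and sizes below are my own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, l = 4, 4, 3                      # small sizes so brute force is feasible
M = rng.standard_normal((m, n))

def sign_max(M):
    """Right-hand side: brute-force max over eps in {-1,1}^m; for fixed eps,
    the optimal delta_j is sign((eps @ M)_j), giving sum_j |(eps @ M)_j|."""
    m = M.shape[0]
    best = 0.0
    for a in range(2 ** m):
        eps = np.array([1.0 if (a >> i) & 1 else -1.0 for i in range(m)])
        best = max(best, np.abs(eps @ M).sum())
    return best

def vector_max(M, l, iters=200):
    """Lower bound on the left-hand side by alternating maximization:
    with the y_j fixed, the optimal unit vector x_i is the normalized
    row i of M @ Y, and symmetrically for the y_j."""
    n = M.shape[1]
    Y = rng.standard_normal((n, l))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(iters):
        X = M @ Y
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        Y = M.T @ X
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    return float(np.sum(M * (X @ Y.T)))  # sum_ij M_ij <x_i, y_j>

K = np.pi / (2 * np.log(1 + np.sqrt(2)))  # Krivine's bound, ~1.7822
lhs, rhs = vector_max(M, l), sign_max(M)
print(f"vector max (lower bound) = {lhs:.4f}, sign max = {rhs:.4f}, "
      f"K * sign max = {K * rhs:.4f}")
# The lower bound on the vector max must respect the inequality:
assert lhs <= K * rhs + 1e-9
```

Note that flipping all the \(x_i\) negates the sum, so maximizing the sum itself (as the alternating scheme does) gives the same value as maximizing its absolute value.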