Blog posts

2023

Why is Julia so fast?

7 minute read

Published:

I have always wanted to understand what makes one programming language faster than another. The problem is that, to know this, one must understand in detail how the language in question is implemented. This is often a complicated task: there are countless books explaining the basics of a language's syntax, but hardly any that explain the technical and theoretical details behind its design. To acquire this knowledge, one generally has to spend a lot of time programming in the language, get to know the project from within, read thousands of blog posts and Stack Overflow questions and answers, and so on.

Formulas of Brion, Lawrence and Varchenko on rational generating functions for cones.

8 minute read

Published:

We strive to present two remarkable discoveries in discrete geometry: the formulas established by Michel Brion [1], James Lawrence [2], and Alexander N. Varchenko [3]. Initially, these formulas may appear unbelievable, and even after dedicating considerable time to studying them, they continue to evoke a sense of intrigue and fascination.

Automatic Preconditioning by Limited Memory Quasi-Newton Updating

6 minute read

Published:

The paper presents a method to accelerate convergence in large-scale optimization and finite element problems by preconditioning conjugate gradient (CG) iterations with a limited memory quasi-Newton update, notably through the L-BFGS approach. It begins by reviewing the CG method for solving quadratic minimization problems and introduces preconditioning—replacing the system Ax = b with M⁻¹Ax = M⁻¹b—to improve the condition number and convergence rate. Building on this, the paper describes a Hessian-free Newton method that uses CG iterations on a Taylor expansion of the objective function, and then proposes an automatic preconditioning strategy that updates an approximation of the inverse Hessian using only a few stored vector pairs from recent iterations. This limited memory approach, especially effective when using around eight update pairs and a uniform sampling strategy, substantially reduces the number of CG iterations in non-linear optimization problems and, to a lesser extent, in finite element models without the need to compute the full Hessian.
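To make the idea of preconditioned CG concrete, here is a minimal sketch in NumPy. It is not the paper's L-BFGS preconditioner; it solves Ax = b for a symmetric positive definite A while applying a simple Jacobi (diagonal) preconditioner M⁻¹ at each iteration, which is the M⁻¹Ax = M⁻¹b transformation described above in its simplest form. All names and the test matrix are illustrative.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD A.

    M_inv is a callable applying the preconditioner inverse to a vector.
    With M_inv = identity this reduces to plain CG.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    z = M_inv(r)           # preconditioned residual
    p = z.copy()           # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # update search direction
        rz = rz_new
    return x

# Illustrative example: a well-conditioned SPD system
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)        # SPD by construction
b = rng.standard_normal(50)
M_inv = lambda v: v / np.diag(A)     # Jacobi preconditioner
x = pcg(A, b, M_inv)
```

A better-chosen M (such as the limited memory quasi-Newton approximation of the inverse Hessian discussed in the paper) plays the same role as the Jacobi diagonal here, but captures curvature information and so reduces the iteration count far more.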

2022