Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Why is Julia so fast?
Published:
I have always wanted to understand what makes one programming language faster than another. The problem is that to know this, one must understand in detail how the language in question is implemented. This is often a complicated task: there are countless books that explain the basics of a language's syntax, but hardly any that explain the technical and theoretical details behind its design. Generally, to acquire this knowledge one must have spent a lot of time programming in the language, gotten to know the project from within, read thousands of blog posts and Stack Overflow questions and answers, and so on.
Formulas of Brion, Lawrence and Varchenko on rational generating functions for cones
Published:
We present two remarkable results in discrete geometry: the formulas established by Michel Brion [1], James Lawrence [2], and Alexander N. Varchenko [3]. At first these formulas may appear implausible, and even after dedicating considerable time to studying them, they continue to evoke a sense of intrigue and fascination.
Automatic Preconditioning by Limited Memory Quasi-Newton Updating
Published:
The paper presents a method to accelerate convergence in large-scale optimization and finite element problems by preconditioning conjugate gradient (CG) iterations with a limited memory quasi-Newton update, notably through the L-BFGS approach. It begins by reviewing the CG method for solving quadratic minimization problems and introduces preconditioning—replacing the system Ax = b with M⁻¹Ax = M⁻¹b—to improve the condition number and convergence rate. Building on this, the paper describes a Hessian-free Newton method that uses CG iterations on a Taylor expansion of the objective function, and then proposes an automatic preconditioning strategy that updates an approximation of the inverse Hessian using only a few stored vector pairs from recent iterations. This limited memory approach, especially effective when using around eight update pairs and a uniform sampling strategy, substantially reduces the number of CG iterations in non-linear optimization problems and, to a lesser extent, in finite element models without the need to compute the full Hessian.
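The preconditioning idea summarized above (replacing Ax = b with M⁻¹Ax = M⁻¹b inside CG) can be sketched in a few lines of Julia. This is a minimal illustration only: a simple Jacobi (diagonal) preconditioner stands in for the paper's limited-memory quasi-Newton M, and the function name `pcg` is illustrative, not an API from the paper.

```julia
using LinearAlgebra

# Preconditioned conjugate gradient for Ax = b (A symmetric positive
# definite). Here M⁻¹ is the Jacobi preconditioner diag(A)⁻¹; the paper
# instead builds M from limited-memory quasi-Newton (L-BFGS) updates.
function pcg(A, b; tol=1e-10, maxiter=1000)
    x = zeros(length(b))
    r = b - A * x                 # initial residual
    Minv = 1.0 ./ diag(A)         # M⁻¹ for the Jacobi preconditioner
    z = Minv .* r                 # preconditioned residual
    p = copy(z)
    rz = dot(r, z)
    for _ in 1:maxiter
        Ap = A * p
        α = rz / dot(p, Ap)
        x .+= α .* p              # update iterate
        r .-= α .* Ap             # update residual
        norm(r) < tol && break
        z = Minv .* r
        rz_new = dot(r, z)
        p = z .+ (rz_new / rz) .* p   # new search direction
        rz = rz_new
    end
    return x
end

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x = pcg(A, b)                     # x ≈ A \ b
```

A better-conditioned M⁻¹A clusters the eigenvalues, which is what reduces the CG iteration count in the paper's experiments.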
Birth-Death process simulations and a Monte-Carlo dynamic algorithm (Gillespie)
Published:
BirthDeathProcess.jl is a Julia package that implements a series of utilities for simulating the birth-death process given by the differential equation \(n' = \beta - d\cdot n\).
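A stochastic realization of this process can be generated with Gillespie's algorithm, as in the following minimal sketch (the function name and interface are illustrative, not the BirthDeathProcess.jl API):

```julia
# Gillespie simulation of a birth-death process with constant birth
# rate β and per-capita death rate d, whose mean obeys n' = β - d*n.
function gillespie_birth_death(β, d, n0, tmax)
    t, n = 0.0, n0
    ts, ns = [t], [n]
    while t < tmax
        birth = β                  # propensity of a birth event
        death = d * n              # propensity of a death event
        total = birth + death
        total == 0 && break        # no events can occur
        t += -log(rand()) / total  # exponential waiting time
        n += rand() < birth / total ? 1 : -1
        push!(ts, t); push!(ns, n)
    end
    return ts, ns
end

ts, ns = gillespie_birth_death(2.0, 0.1, 0, 100.0)
```

The trajectory fluctuates around the deterministic steady state \(n^* = \beta/d\) of the differential equation.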
Publications
Training Implicit Generative Models via an Invariant Statistical Loss
Published in 27th International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Robust training of implicit generative models for multivariate and heavy-tailed distributions with an invariant statistical loss
Published in Pattern Recognition (under review), 2024
Talks
Semaine des jeunes talents scientifiques francophones / Week of Young Francophone Scientific Talents
Published:
II Andaluz.IA Forum 2024
Published:
Teaching
Machine Learning II (2023 - 2026)
Undergraduate course, University Carlos III of Madrid, Spain, 2023
- Bachelor in Data Science (English)
- Tutoring foreign students
- Lecture notes
Statistical Signal Processing (2024)
Undergraduate course, University Carlos III of Madrid, Spain, 2024
- Tutoring foreign students