01.01.2022 – 31.12.2023
In this project, we will investigate the properties of sparse solutions induced by learning algorithms that promote sparsity. In particular, we view sparsity as a consequence of both the model and the implicit sparsity bias of the chosen optimization method. A related question is sample efficiency: by implicitly restricting the degrees of freedom, these algorithms also tend to require far fewer samples. Another setting where sparsity can be exploited is PageRank: we will attempt to obtain state-of-the-art PageRank optimization algorithms whose running time scales with the sparsity of the solution rather than with the size of the whole graph. Riemannian optimization is a related field that exploits the geometry of manifold constraints to keep iterates intrinsically on these manifolds, which are lower-dimensional than the ambient space.
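As a minimal illustration of how an optimization method can promote sparsity, the sketch below runs ISTA (proximal gradient descent) on a lasso problem; this is a classical example of the phenomenon, not one of the project's own algorithms, and the problem sizes, regularization weight `lam`, and iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero,
    # setting small entries exactly to zero -- the source of sparsity.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=3000):
    # Proximal gradient (ISTA) for: min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Synthetic underdetermined system whose true solution is 3-sparse.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true

x_hat = ista(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 0.5))  # only a handful of large entries
```

The iterates themselves are sparse throughout, so in a large-scale implementation the per-step cost can scale with the support size rather than the ambient dimension.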
We will also study algorithms that quickly approximate the optimal fair resource allocation, given a notion of fairness for the problem at hand. For instance, under the fairness axioms of Pareto optimality, independence of irrelevant alternatives, affine invariance, and symmetry, the optimal fair allocation can be posed as an optimization problem. This line of research concerns finding allocations that efficiently approximate this or other fair allocations.
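The four axioms above characterize the Nash bargaining solution, which maximizes the product of the agents' utilities (equivalently, the sum of their logarithms). As a toy sketch, assume agents with weights `w` share one divisible unit of a resource and agent i's utility is its share x_i; the closed-form optimum is then x_i = w_i / sum(w), which the simple projected-gradient loop below (with illustrative step size and iteration count) recovers.

```python
import numpy as np

def nash_allocation(w, steps=2000, lr=0.01):
    # Maximize sum_i w_i * log(x_i) subject to sum(x_i) = 1, x_i > 0,
    # via gradient ascent kept on the simplex (illustrative, not tuned).
    n = len(w)
    x = np.full(n, 1.0 / n)            # start from the uniform allocation
    for _ in range(steps):
        g = w / x                      # gradient of the log-utility objective
        x = x + lr * (g - g.mean())    # move along the simplex's tangent plane
        x = np.clip(x, 1e-9, None)     # keep shares strictly positive
        x = x / x.sum()                # renormalize to stay feasible
    return x

w = np.array([1.0, 2.0, 1.0])
x = nash_allocation(w)
print(x)  # close to the closed form w / w.sum() = [0.25, 0.5, 0.25]
```

At the optimum the gradient w/x is constant across agents, so the tangent-plane step vanishes; this fixed-point condition is exactly the proportional-share formula.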
Produced by this project: