
Commit

update loca
svaiter committed Aug 5, 2024
1 parent 86c40d5 commit a56565c
Showing 2 changed files with 53 additions and 10 deletions.
Binary file added assets/img/unica.png
63 changes: 53 additions & 10 deletions content/loca24.md
@@ -33,24 +33,64 @@ The workshop will take place at the [I3S](https://www.i3s.unice.fr/en/) institut
### September 24: Conference

- 09:00-09:30: Welcome
- 09:30-10:30: Jérôme Bolte

**TBA**

Abstract: TBA

- 10:30-11:00: Coffee break
- 11:00-12:00: Aurélien Bellet

**Differentially Private Optimization with Coordinate Descent and Fixed-Point Iterations**

Abstract: Machine learning models are known to leak information about individual data points used to train them. Differentially private optimization aims to address this problem by training models with strong differential privacy guarantees. This is achieved by adding controlled noise to the optimization process, for instance during the gradient computation steps in the case of the popular DP-SGD algorithm. In this talk, I will discuss how to go beyond DP-SGD by (i) introducing private coordinate descent algorithms that can better exploit the problem structure, and (ii) leveraging the framework of fixed-point iterations to design and analyze new private optimization algorithms for centralized and federated settings.
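
For readers unfamiliar with DP-SGD, the noise-injection step mentioned above can be sketched roughly as follows. This is a minimal illustration, not code from the talk; the parameter names `clip_norm` and `noise_multiplier` are placeholders.

```python
# Rough sketch of one DP-SGD update: clip each per-sample gradient,
# aggregate, and add calibrated Gaussian noise before the parameter step.
# Illustrative only; hyperparameter names are placeholders.
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip per-sample gradients, average, add Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]                        # per-sample clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_grad                              # standard gradient step
```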

- 12:00-14:00: Lunch
- 14:00-15:00: Poster session
- 15:00-16:00: Julie Delon

**TBA**

Abstract: TBA

- 16:00-16:30: Coffee break
- 16:30-17:30: Claire Boyer

**A primer on physics-informed learning**

Abstract: Physics-informed machine learning combines the expressiveness of data-based approaches with the interpretability of physical models. In this context, we consider a general regression problem where the empirical risk is regularized by a partial differential equation that quantifies the physical inconsistency.
Practitioners often resort to physics-informed neural networks (PINNs) to solve this kind of problem. After discussing some strengths and limitations of PINNs, we prove that for linear differential priors, the problem can be formulated directly as a kernel regression task, giving a rigorous framework to analyze physics-informed ML. In particular, the physical prior can help boost the convergence of the estimator.
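
To fix ideas, the PDE-regularized objective described above can be written generically as (notation is illustrative: $\mathcal{D}$ a linear differential operator encoding the physics, $\lambda > 0$ a trade-off parameter, $\Omega$ the physical domain):

$$
\min_{f} \;\; \frac{1}{n} \sum_{i=1}^{n} \big( f(x_i) - y_i \big)^2 \;+\; \lambda \int_{\Omega} \big| \mathcal{D} f(x) \big|^2 \, dx ,
$$

where the second term quantifies the physical inconsistency; the kernel point of view mentioned in the abstract treats this as a kernel regression task with the PDE term acting as an additional regularizer.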


### September 25: Conference

- 09:30-10:30: Anna Korba

**Implicit Diffusion: Efficient Optimization through Stochastic Sampling**

Abstract: We present a new algorithm to optimize distributions defined implicitly by parameterized stochastic diffusions. Doing so allows us to modify the outcome distribution of sampling processes by optimizing over their parameters. We introduce a general framework for first-order optimization of these processes, that performs jointly, in a single loop, optimization and sampling steps. This approach is inspired by recent advances in bilevel optimization and automatic implicit differentiation, leveraging the point of view of sampling as optimization over the space of probability distributions. We provide theoretical guarantees on the performance of our method, as well as experimental results demonstrating its effectiveness. We apply it to training energy-based models and finetuning denoising diffusions.
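
As a very rough illustration of the single-loop idea (interleaving one sampling step with one parameter step), here is a toy example in which the parameter gradient is taken only through the latest Langevin step. The truncated gradient estimator and all names below (`theta`, `target`, the quadratic potential) are illustrative assumptions, not the algorithm presented in the talk.

```python
# Toy single-loop interleaving of sampling and optimization: the potential is
# V_theta(x) = ||x - theta||^2 / 2, samples follow unadjusted Langevin steps,
# and theta is updated so that the sampled population centers on `target`.
# The gradient is truncated to the latest sampling step (illustrative only).
import torch

torch.manual_seed(0)
n_samples, dim, tau, lr = 512, 2, 5e-2, 1e-1
target = torch.tensor([1.0, -2.0])
theta = torch.zeros(dim, requires_grad=True)   # parameter of the sampling process
x = torch.randn(n_samples, dim)                # current sample population
opt = torch.optim.SGD([theta], lr=lr)

for _ in range(2000):
    noise = torch.randn_like(x)
    x_new = x - tau * (x - theta) + (2 * tau) ** 0.5 * noise  # one Langevin step (differentiable in theta)
    loss = ((x_new.mean(dim=0) - target) ** 2).sum()          # outer objective on the sampled distribution
    opt.zero_grad()
    loss.backward()                                           # gradient through the single sampling step
    opt.step()
    x = x_new.detach()                                        # carry the population forward

print(theta.detach(), x.mean(dim=0))  # both end up close to `target`
```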

- 10:30-11:00: Coffee break
- 11:00-12:00: Gabriel Peyré

**Transformers are Universal In-context Learners**

Abstract: Transformer deep networks define “in-context mappings”, which enable them to predict new tokens based on a given set of tokens (such as a prompt in NLP applications or a set of patches for vision transformers). This work studies the ability of these architectures to handle an arbitrarily large number of context tokens. To address the expressivity of these architectures mathematically and uniformly, we consider that the mapping is conditioned on a context represented by a probability distribution of tokens (discrete for a finite number of tokens). The related notion of smoothness corresponds to continuity in terms of the Wasserstein distance between these contexts. We demonstrate that deep transformers are universal and can approximate continuous in-context mappings to arbitrary precision, uniformly over compact token domains. A key aspect of our results, compared to existing findings, is that for a fixed precision, a single transformer can operate on an arbitrary (even infinite) number of tokens. Additionally, it operates with a fixed embedding dimension of tokens (this dimension does not increase with precision) and a fixed number of heads (proportional to the dimension). The use of MLP layers between multi-head attention layers is also explicitly controlled. This is joint work with Takashi Furuya (Shimane Univ.) and Maarten de Hoop (Rice Univ.).
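
For context (a standard way of writing attention when the prompt is modeled as a probability distribution $\mu$ of tokens, not a formula taken from the talk), a single attention head acts on a query token $x$ and context $\mu$ as

$$
\mathrm{Att}(x, \mu) \;=\; \frac{\int e^{\langle Q x,\, K y \rangle}\, V y \, \mathrm{d}\mu(y)}{\int e^{\langle Q x,\, K y \rangle}\, \mathrm{d}\mu(y)},
$$

which reduces to the usual softmax attention when $\mu = \frac{1}{N}\sum_{j=1}^{N} \delta_{y_j}$ is the empirical measure of a finite prompt of $N$ tokens.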

- 12:00-14:00: Lunch
- 14:00-15:00: Jérôme Malick

**TBA**

Abstract: TBA

- 15:00-15:30: Coffee break
- 15:30-16:30: Gabriele Steidl

**TBA**

Abstract: TBA

## Scientific committee
- Laure Blanc-Féraud
@@ -61,6 +61,9 @@ The workshop will take place at the [I3S](https://www.i3s.unice.fr/en/) institut

## Sponsors

![Université Côte d'Azur](/img/unica.png)
![I3S](/img/i3s.png)
![LJAD](/img/ljad.png)
![Morpheme](/img/morpheme.png)
![Inria](/img/inria.png)
![RT MAIAGES](/img/maiages.png)
