<!DOCTYPE HTML>
<!--
Massively by HTML5 UP
html5up.net | @ajlkn
Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html>
<head>
<title>Pablo Arias-Sarah</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
<link rel="stylesheet" href="assets/css/main.css" />
<noscript><link rel="stylesheet" href="assets/css/noscript.css" /></noscript>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-B6CGCXLZWE"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-B6CGCXLZWE');
</script>
</head>
<body class="is-preload">
<!-- Wrapper -->
<div id="wrapper" class="fade-in">
<!-- Intro -->
<div id="intro">
<!--
<h2>Pablo Arias Sarah</h2>
Hacking social interaction mechanisms with voice/face transformations
-->
<ul class="actions">
<li><a href="#header" class="button icon solid solo fa-arrow-down scrolly">Deep dive</a></li>
</ul>
</div>
<!-- Header -->
<header id="header">
<a href="index.html" class="logo">P. A. S.</a>
</header>
<!-- Nav -->
<nav id="nav">
<ul class="links">
<li class="active"><a href="index.html">Home</a></li>
<li><a href="index.html#About">About</a></li>
<li><a href="index.html#News">News</a></li>
<li><a href="index.html#Highlights">Highlights</a></li>
<li><a href="publications.html">Publications</a></li>
<li><a href="talks.html">Talks</a></li>
<li><a href="ducksoup.html">DuckSoup</a></li>
<li><a href="ARIAS_CV.pdf">CV</a></li>
<li><a href="music.html">Music</a></li>
</ul>
<ul class="icons">
<li><a href="https://twitter.com/pablo_arias_sar" class="icon brands fa-twitter"><span class="label">Twitter</span></a></li>
<li><a href="https://www.linkedin.com/in/pablo-arias-sarah-08a20693/" class="icon brands fa-linkedin-in"><span class="label">Linked-in</span></a></li>
<li><a href="https://github.com/Pablo-Arias" class="icon brands fa-github"><span class="label">GitHub</span></a></li>
</ul>
</nav>
<!-- Main -->
<div id="main">
<!-- Intro -->
<article class="post featured">
<header class="major", id="About">
<h3>About <br /></h3>
</header>
<div style='font-size:80%'>
<p><span class="image left"><img src="images/profile_rando_test_filtered.jpg" alt="" /></span>Hi! I'm Pablo Arias-Sarah, a French/Colombian Lecturer working at the University of Glasgow, in the <a href="https://www.gla.ac.uk/schools/psychologyneuroscience/">School of Psychology and Neuroscience</a>. I study human social interactions using real time voice/face transformations. To do this, we developed a videoconference experimental platform called <a href="ducksoup.html"> DuckSoup</a>, which enables researchers to transform participants' voice and face (e.g. increase participants' smiles or their vocal intonations) in real time during free social interactions. I am interested in human social communication, social biases and human enhancement.</p>
<p> I hold a PhD in cognitive science from Sorbonne University (Paris, France), a Master of Engineering in digital technologies and multimedia from Polytech' Nantes (Nantes, France), and a Master of Science in acoustics, signal processing and computer science applied to sound, from IRCAM (Paris, France). You can find a complete list of my publications <a href="https://scholar.google.fr/citations?user=6jMFwJQAAAAJ&hl=en&oi=ao">here </a> or <a href="https://twitter.com/pablo_arias_sar"> follow me on twitter</a> to keep up to date with my latest work.
</p>
</div>
</article>
<!-- Featured Post -->
<article class="post featured">
<h2 id="News">Career News</h2>
<div class="table-wrapper" style='font-size:70%'>
<table>
<thead>
<tr>
<th>Date</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>October 2024</td>
<td text-align="justify"> I started a <strong>permanent position </strong>in the <a href="https://www.gla.ac.uk/schools/psychologyneuroscience/">School of Psychology and Neuroscience</a> in the University of Glasgow as part of the <a href="https://cscan.gla.ac.uk/">Center for Social Cognitive and Affective Neuroscience</a>! ๐</td>
</tr>
<tr>
<td>November 2022</td>
<td text-align="justify"> We were awarded a prestigious <strong>Vetenskapsrรฅdet grant</strong> from the Swedish Research Council to develop our new platform DuckSoup in collaboration with Petter Johanson and Lars Hall.
</td>
</tr>
<tr>
<td>October 2022</td>
<td text-align="justify"><strong>Moving to Scotland</strong> to start a new position as Marie Curie Fellow</strong> in the University of Glasgow in the <a href="https://www.gla.ac.uk/schools/psychologyneuroscience/">School of Psychology and Neuroscience</a> with <a href="https://scholar.google.co.uk/citations?user=cwgW51EAAAAJ&hl=en">Philippe Schyns</a> and <a href="https://scholar.google.co.uk/citations?user=gO2daQsAAAAJ&hl=en"> Rachael Jack</a>. In collaboration with Lund University Cognitive Science. Super psyched! ๐คฉ
</td>
</tr>
<tr>
<td>June 2022</td>
<td text-align="justify"> I won an <strong>Individual Marie Curie postdoctoral fellowship</strong> for my proposal SINA (Studying Social Interactions with Audiovisual Transformations). In collaboration with Rachael Jack, Philippe Schyns (Glasgow University) and Petter Johansson (Lund University)! ๐ฃ</td>
</tr>
<tr>
<td>June 2021</td>
<td text-align="justify">We were awarded the <strong>Sorbonne Univeristy Emergence</strong> grant for our project REVOLT (Revealing human bias with real time vocal deep fakes) proposal, in collaboration with Nicolas Obin (Sorbonne Univeristy) ๐ฅ. </td>
</tr>
<tr>
<td>Sept 2019</td>
<td text-align="justify">I'm starting a new postdoctoral position at <strong>Lund University Cognitive Science</strong> in Sweden to work with Petter Johannsson and Lars Hall in the Choice Blindness lab! We aim to create unprecedented methodological tools to study human social interaction mechanisms. </td>
</tr>
<tr>
<td>Dec 2018</td>
<td text-align="justify">Defended my <strong>PhD thesis</strong> entitled <a href="https://hal.archives-ouvertes.fr/tel-02010161/file/PhD%20Arias.pdf"> The cognition of auditory smiles: a computational approach"</a>, which was evaluated by an inspring jury composed of <a href="https://scholar.google.fr/citations?user=XWdplJkAAAAJ&hl=en&oi=ao">Tecumseh Fitch</a> (Univ. Viena), <a href="https://scholar.google.fr/citations?user=XWdplJkAAAAJ&hl=en&oi=ao">Rachael Jack</a> (Univ. Glasgow), <a href="https://scholar.google.fr/citations?user=n9ZNrsEAAAAJ&hl=en&oi=ao">Catherine Pelachaud</a> (Sorbonne University), <a href="https://scholar.google.fr/citations?user=qKynVZ0AAAAJ&hl=en&oi=ao">Martine Gavaret</a> (Paris Descartes), <a href="https://scholar.google.fr/citations?user=PjXi-vYAAAAJ&hl=en&oi=ao">Julie Grezes</a> and <a href="https://scholar.google.fr/citations?user=PjXi-vYAAAAJ&hl=en&oi=ao">Pascal Belin</a> (Univ. Aix Marseille), <a href="https://scholar.google.fr/citations?user=PjXi-vYAAAAJ&hl=en&oi=ao"> Patrick Susini</a> (IRCAM) and <a href="https://scholar.google.fr/citations?user=PjXi-vYAAAAJ&hl=en&oi=ao">Jean-Julien Aucouturier</a> (CNRS).</td>
</tr>
</tbody>
</table>
</div>
</article>
<!-- Posts -->
<section class="post">
<header class="major", id="Highlights">
<h2>Research Highlights </h2>
</header>
</section>
<section class="posts">
<!-- PNAS -->
<article>
<header>
<span class="date">November, 2024</span>
<h4>Aligning the smiles of dating dyads causally increases attraction ❤️<br /></h4>
</header>
<a href="ducksoup.html" class="image fit"><img src="images/trial_setup_website.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>We have a new article out in <i>PNAS</i>! We asked participants to take part in a speed-dating experiment while we aligned (🙂 vs 🙂) or misaligned (🙂 vs 🙁) their smiles in real time with our face transformation algorithms. While participants remained unaware of the manipulations, aligned smiles enhanced their romantic attraction compared to misaligned ones. We therefore causally manipulated the emergence of romantic attraction in free social interactions. This demonstrates the potential of our experimental platform <a href="ducksoup.html">DuckSoup</a>, supports alignment theories and raises important ethical questions about transformation filters! A titanesque effort that we are delighted to publish in PNAS! Check this <a href="https://x.com/pablo_arias_sar/status/1851225740294951139">Twitter thread</a> or the <a href="https://eprints.gla.ac.uk/334629/">manuscript</a> for more information.
</p>
</div>
</article>
<!-- Mozza -->
<article>
<header>
<span class="date">September, 2024</span>
<h4>Mozza is now open-source! 👨🏾‍💻<br /></h4>
</header>
<a href="ducksoup.html" class="image fit"><img src="images/mozza_example.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>We are open-sourcing our GStreamer plugin Mozza, which enables users to parametrically transform facial smiles in a video feed, either in real time or offline. The open-source code is <a href="https://github.com/ducksouplab/mozza">here</a>. It implements the smile transformation of <a href="https://hal.science/hal-01712834v1/file/Arias-2018-Realistic%20transformation%20of%20facial%20and%20vocal%20smiles.pdf">Arias et al. (2018), IEEE Transactions on Affective Computing</a>.</p>
</div>
</article>
<!-- DuckSoup -->
<article>
<header>
<span class="date">September, 2023</span>
<h4><a href="ducksoup.html">DuckSoup is in public beta! ๐ฅณ๐ฅณ<br /></a></h4>
</header>
<a href="ducksoup.html" class="image fit"><img src="images/ducksoup.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>We are <strong>releasing a public beta of our new experimental platform DuckSoup</strong> 🎉. <a href="ducksoup.html">DuckSoup</a> is an open-source videoconferencing platform enabling researchers to manipulate participants' facial and vocal attributes in real time during social interactions. If you are interested in collecting large, synchronised and multicultural human social interaction datasets, get in touch! Check out a project description <a href="ducksoup.html">here</a> and the open-source code <a href="https://github.com/ducksouplab/ducksoup">here</a> 🧑🏽‍💻.</p>
</div>
</article>
<!-- Pupil dilation reflects the dynamic integration of audiovisual emotional speech -->
<article>
<header>
<span class="date">April, 2023</span>
<h4><a href="https://www.nature.com/articles/s41598-023-32133-2.pdf">Pupil dilation reflects the dynamic integration of audiovisual emotional speech<br /></a></h4>
</header>
<a href="https://www.nature.com/articles/s41598-023-32133-2.pdf" class="image fit"><img src="images/website_et.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>New article out in <i>Scientific Reports</i>! 🎉 We investigated whether pupillary reactions 👁️ can index the processes underlying the audiovisual integration of emotional signals (😊😱😮). We used our audiovisual smile algorithms to create congruent/incongruent audiovisual smiles and studied pupillary reactions to the manipulated stimuli. We show that pupil dilation can reflect emotional information mismatch in audiovisual speech. We hope to replicate these findings in neurodivergent populations to probe their emotional processing.
Check the full article <a href="https://www.nature.com/articles/s41598-023-32133-2">here</a>, or check <a href="https://twitter.com/pablo_arias_sar/status/1643544942789230592">this</a> Twitter thread explaining the findings.
</p>
</div>
</article>
<!-- Production strategies of vocal attitudes -->
<article>
<header>
<span class="date">September, 2022</span>
<h4><a href="https://hal.science/hal-03881495/file/Production_Strategies_of_Vocal_Attitudes_IS.pdf">Production Strategies of Vocal Attitudes<br /></a></h4>
</header>
<a href="https://hal.science/hal-03881495/file/Production_Strategies_of_Vocal_Attitudes_IS.pdf" class="image fit"><img src="images/vocal_atttitudes.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>New article out in <i>Interspeech</i>! 🗣️ We analysed a large multi-speaker dataset of vocal utterances and characterised the acoustic strategies speakers use to communicate social attitudes, using deep alignment methods. We produced high-level representations of speakers' articulation (e.g. Vowel Space Density) and speech rhythm. We hope these measures will provide an objective validation method for deep voice conversion methods.
Check the full article <a href="https://hal.science/hal-03881495/file/Production_Strategies_of_Vocal_Attitudes_IS.pdf">here</a>.
</p>
</div>
</article>
<!-- Facial mimicry in the congenitally blind -->
<article>
<header>
<span class="date">December, 2021</span>
<h4><a href="https://neuro-team-femto.github.io/articles/2021/Arias_Current_Biology_2021.pdf">Facial mimicry in the congenitally blind<br /></a></h4>
</header>
<a href="https://neuro-team-femto.github.io/articles/2021/Arias_Current_Biology_2021.pdf" class="image fit"><img src="images/main_figure_CB_v1.jpg" alt="" /></a>
<div style='font-size:85%'>
<p> We have a new article out in <i>Current Biology</i>! We show that congenitally blind individuals facially imitate smiles heard in speech despite having never seen a facial expression. This demonstrates that the development of facial mimicry does not depend on visual learning and that imitation is not a mere visuo-motor process but a flexible mechanism deployed across sensory inputs.
Check the full article <a href="https://neuro-team-femto.github.io/articles/2021/Arias_Current_Biology_2021.pdf">here</a>, or check <a href="https://twitter.com/PabloAriasMusic/status/1453637734489284608">this</a> Twitter thread explaining the findings.
</p>
</div>
</article>
<!-- Beyond correlation: acoustic transformation methods for the experimental study of emotional voice and speech -->
<article>
<header>
<span class="date">January, 2021</span>
<h4><a href="https://hal.archives-ouvertes.fr/hal-02907502/document">Beyond correlation: acoustic transformation methods for the experimental study of emotional voice and speech<br /></a></h4>
</header>
<a href="https://hal.archives-ouvertes.fr/hal-02907502/document" class="image fit"><img src="images/emotion_review.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>We have a new article out in <i>Emotion Review</i>! In this article we present the methodological advantages of using stimulus manipulation techniques for the experimental study of emotions. We give several examples of using such computational models to uncover cognitive mechanisms, and argue that stimulus manipulation techniques allow researchers to make causal inferences between stimulus features and participants' behavioral, physiological and neural responses.
</p>
</div>
</article>
<!-- Auditory smiles trigger unconscious facial imitation -->
<article>
<header>
<span class="date">April, 2018</span>
<h4><a href="https://www.cell.com/current-biology/pdf/S0960-9822(18)30752-8.pdf">Auditory smiles trigger unconscious facial imitation<br /></a></h4>
</header>
<a href="https://www.cell.com/current-biology/pdf/S0960-9822(18)30752-8.pdf" class="image fit"><img src="images/auditory_smiles_trigger.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>
We have a new article out in <i>Current Biology</i> 🥳! In this article we modeled the auditory consequences of smiles in speech and showed that such auditory smiles can trigger facial imitation in listeners, even in the absence of visual cues. Interestingly, these reactions occur even when participants do not explicitly detect the smiles.
</p>
</div>
</article>
<article>
<header>
<span class="date">January, 2018</span>
<h4><a href="https://www.researchgate.net/profile/Emmanuel-Ponsot/publication/322609047_Uncovering_mental_representations_of_smiled_speech_using_reverse_correlation/links/5a659c69a6fdccb61c583953/Uncovering-mental-representations-of-smiled-speech-using-reverse-correlation.pdf">Uncovering mental representations of smiled speech using reverse correlation.</h4>
</header>
<a href="https://www.researchgate.net/profile/Emmanuel-Ponsot/publication/322609047_Uncovering_mental_representations_of_smiled_speech_using_reverse_correlation/links/5a659c69a6fdccb61c583953/Uncovering-mental-representations-of-smiled-speech-using-reverse-correlation.pdf" class="image fit"><img src="images/jasa_el.jpg" alt="" /></a>
<div style='font-size:85%'>
<p>
New article out in <i>JASA-EL</i>! We uncovered the spectral cues underlying the perceptual processing of smiles in speech using reverse correlation. The analyses revealed that listeners rely on robust spectral representations that specifically encode vowels' formants. These findings demonstrate the causal role played by formants in the perception of smiles and present a novel method to estimate the spectral bases of high-level (e.g. emotional, social, paralinguistic) speech representations.
</p>
</div>
</article>
</section>
</div>
<!-- Footer -->
<!-- First Column -->
<footer id="footer">
<section class="split contact">
<section>
<h3>Email</h3>
<p><a href="#">pablo[dot]arias[dot]sar(At]gmail.com</a></p>
</section>
</section>
<!-- Second Column -->
<section class="split contact">
<section>
<h3>Social</h3>
<ul class="icons alt">
<li><a href="https://twitter.com/pablo_arias_sar" class="icon brands alt fa-twitter"><span class="label">Twitter</span></a></li>
<li><a href="https://www.linkedin.com/in/pablo-arias-sarah-08a20693/" class="icon brands fa-linkedin-in"><span class="label">Linked-in</span></a></li>
<li><a href="https://github.com/Pablo-Arias/STIM" class="icon brands alt fa-github"><span class="label">GitHub</span></a></li>
</ul>
</section>
</section>
</footer>
<!-- Copyright -->
<div id="copyright">
<ul><li>© Pablo Arias-Sarah</li><li>Design: <a href="https://html5up.net">HTML5 UP</a></li></ul>
</div>
</div>
<!-- Scripts -->
<script src="assets/js/jquery.min.js"></script>
<script src="assets/js/jquery.scrollex.min.js"></script>
<script src="assets/js/jquery.scrolly.min.js"></script>
<script src="assets/js/browser.min.js"></script>
<script src="assets/js/breakpoints.min.js"></script>
<script src="assets/js/util.js"></script>
<script src="assets/js/main.js"></script>
</body>
</html>