Stanislas Dehaene at PSL/Collège de France and NeuroSpin, CEA. I’m interested in humans’ striking ability to manipulate highly abstract structures, be it language, mathematics or music. My work focuses on the perception of geometry, seeking traces of the ability for abstraction in a domain attested to be extremely old: Homo erectus already carved abstract geometrical patterns half a million years ago, while non-human primates seem unable to produce such shapes.
My work relies on experimental psychology to ask specific questions: are certain shapes processed faster, memorized longer, or confused less easily with others, even when matched for low-level perceptual features? What characterizes such shapes, and why? We’ve run large-scale comparative experiments with French adults, together with behavioral data from preschoolers, uneducated adults, and neural networks. This work is moving toward incorporating neuroscience methodologies to answer new questions: EEG in babies to reach even more naive participants, and fMRI & MEG in adults to look for perception-independent representations of geometrical shapes.
I also have a strong interest in, and have worked on, collaborated on, or wish to work on, the following topics:
“A bat and a ball together cost $1.10, and the bat costs $1 more than the ball. How much does the ball cost?” If your gut feeling told you 10 cents, you’re not alone. Is there a non-linguistic way to induce the same mistake? Or does it inherently come from the formulation of the problem?
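For reference, a one-line worked solution (my addition, just to make the correct answer explicit): writing b for the price of the ball,

    \[
      b + (b + 1) = 1.10 \quad\Longrightarrow\quad 2b = 0.10 \quad\Longrightarrow\quad b = 0.05,
    \]

so the ball costs 5 cents; the intuitive answer of 10 cents would make the bat cost $1.10 and the total $1.20.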
Human reasoning and its (in)dependence from natural language processing: to what extent do systematic fallacies in reasoning hinge on the fact that the meaning of sentences is pragmatically enriched, from literal to very rich? I argue that we can recast some fallacies as humans engaging in a different game: building on the theoretical work of the Erotetic Theory of Reasoning, and bridging the gap with the Bayesian confirmation literature, I argue that humans are not trying to “maximize what’s true” (either when speaking or reasoning) but rather to “maximize what’s informative and useful”. This work is carried out in collaboration with Salvador Mascarenhas at ENS, Institut Jean-Nicod, LINGUAE team.
Learning a generative model over LOGO/Turtle graphics programs. Shown are renders of randomly generated programs from the learned prior.
Human sequence processing: it is easy to find long sequences over few elements that are easy to remember (think a, b, c, b, a, b, c, b, …), but characterizing what makes such sequences easy proves to be very hard. This connects with geometry when such sequences unfold on a plane to form a geometrical shape, in which case the nature of the rules that humans are able to use is informative about the internal representation of the unfolding sequence.
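As a toy illustration of why the characterization is hard (my own sketch, not a model from the lab): one crude proxy for regularity is compressed length, which does separate the repetitive example above from a shuffled version of it, yet is known to be a poor predictor of what humans actually find easy to remember.

    import random
    import zlib

    def compressed_length(seq):
        # Crude regularity proxy: size of the zlib-compressed byte string.
        return len(zlib.compress("".join(seq).encode()))

    regular = list("abcb" * 8)                       # a, b, c, b, a, b, c, b, ...
    random.seed(0)
    shuffled = random.sample(regular, len(regular))  # same elements, structure destroyed

    print(compressed_length(regular))   # smaller: the repetition compresses well
    print(compressed_length(shuffled))  # larger: little structure to exploit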
Program induction: the structured nature of many of the high-level concepts mentioned above (language, music, maths, geometry) makes it tempting to model them as computer programs. Then one has to wonder: how does one go from world perception and stimuli to abstract, internal representations? Is it always possible, and how can it be done efficiently? The part of program induction I took an interest in tackles these very questions: starting from a certain idea of what the structure might look like, and a bunch of “real world tasks” (i.e. examples of the inputs and outputs produced by the programs, but not the programs themselves), can we build proof-of-concept algorithms that find accurate representations? I worked on this together with Kevin Ellis and many great people in MIT CBMM’s CoCoSci group, led by Josh Tenenbaum.
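To make that setup concrete, here is a deliberately naive sketch of program induction by enumeration (my own illustration; the DSL and its primitives are invented, and the actual DreamCoder system adds library learning and a neural recognition model on top of search): given input-output examples but not the program, search compositions of primitives until one explains every example.

    from itertools import product

    # A tiny, hypothetical DSL: a program is a composition of primitives on integers.
    PRIMITIVES = {
        "inc": lambda x: x + 1,
        "dec": lambda x: x - 1,
        "double": lambda x: 2 * x,
        "square": lambda x: x * x,
    }

    def run(program, x):
        for name in program:
            x = PRIMITIVES[name](x)
        return x

    def induce(examples, max_depth=3):
        # Return the shortest composition consistent with every (input, output) pair.
        for depth in range(1, max_depth + 1):
            for program in product(PRIMITIVES, repeat=depth):
                if all(run(program, i) == o for i, o in examples):
                    return program
        return None

    # We only observe the behavior of an unknown program, never its source:
    print(induce([(1, 4), (2, 9), (3, 16)]))  # -> ('inc', 'square')

The point of the real work is precisely that this brute-force search collapses on realistic tasks, hence the learned libraries and neural guidance.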
After studying math, physics and computer science in preparatory classes in France, I entered the École Normale Supérieure (ENS) de Cachan in Computer Science, where I completed my Licence (BA) and the first year of my Master’s. During my Master’s I spent six months in Oxford, UK, doing theoretical computer science under the supervision of Luke Ong, working on the semantics of various λ-calculi.
I then took a gap year sailing, and decided to focus on cognitive neuroscience: I applied for the CogMaster and worked with Stanislas Dehaene on geometrical sequences. Between that year and my PhD I spent six months at the École Normale Supérieure working under the supervision of Salvador Mascarenhas on the links between reasoning and language, as well as six months at MIT under the supervision of Josh Tenenbaum working on program induction, and more specifically applying it in the domain of geometry.
A dinner on the island of Ligia, Greece, during my gap year. Boat in the background.
Mathias Sablé-Meyer, Joel Fagot, Serge Caparos, Timo van Kerkoerle, Marie Amalric, Stanislas Dehaene. A signature of human uniqueness in the perception of geometric shapes. PsyArXiv
2018
Kevin Ellis, Lucas Morales, Mathias Sablé-Meyer, Armando Solar-Lezama, Joshua B. Tenenbaum. Library Learning for Neurally-Guided Bayesian Program Induction. NIPS 2018. Spotlight.
Talks & Seminars
Invited speaker at the LINGUAE Seminar: “The laws of mental geometry in human and non-human primates”, 2019
FYSSEN seminar “Pillars of cognitive development in mathematics”, 2019
Joint talk with Kevin Ellis: “Dream-Coder: Bootstrapping Domain-Specific Languages for Neurally-Guided Bayesian Program Learning”, at the CogSci 2018 workshop on program induction
I had fun with Jacquin’s algorithm for fractal compression of images in 2013; you can find some of that here
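For the curious, the gist of Jacquin’s scheme in a rough grayscale sketch (mine; it assumes numpy, image dimensions divisible by 2·r, and skips the rotations, flips and smarter search that real encoders use): approximate each small “range” block by a larger, downsampled “domain” block under an affine grayscale map, then decode by iterating that contractive map from an arbitrary starting image.

    import numpy as np

    def encode(img, r=4):
        # Domain pool: non-overlapping 2r x 2r blocks, averaged down to r x r.
        h, w = img.shape
        domains, positions = [], []
        for y in range(0, h - 2 * r + 1, 2 * r):
            for x in range(0, w - 2 * r + 1, 2 * r):
                block = img[y:y + 2 * r, x:x + 2 * r].astype(float)
                domains.append(block.reshape(r, 2, r, 2).mean(axis=(1, 3)))
                positions.append((y, x))
        code = []
        for y in range(0, h, r):
            for x in range(0, w, r):
                v = img[y:y + r, x:x + r].astype(float).flatten()
                best = None
                for k, dom in enumerate(domains):
                    d = dom.flatten()
                    if d.std() == 0:               # flat block: fit brightness only
                        s, o = 0.0, v.mean()
                    else:                          # least-squares contrast & brightness
                        s, o = np.polyfit(d, v, 1)
                        s = float(np.clip(s, -0.9, 0.9))  # keep the map contractive
                    err = np.sum((s * d + o - v) ** 2)
                    if best is None or err < best[0]:
                        best = (err, k, s, o)
                code.append((y, x) + best[1:])
        return code, positions

    def decode(code, positions, shape, r=4, iterations=8):
        # The attractor of the encoded map approximates the original image,
        # so decoding can start from anything, here a zero image.
        img = np.zeros(shape)
        for _ in range(iterations):
            out = np.empty(shape)
            for y, x, k, s, o in code:
                dy, dx = positions[k]
                dom = img[dy:dy + 2 * r, dx:dx + 2 * r]
                small = dom.reshape(r, 2, r, 2).mean(axis=(1, 3))
                out[y:y + r, x:x + r] = s * small + o
            img = out
        return img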
Misc
I’m an avid climber, a competent sailor, an adequate handyman but a terrible mechanic. Occasionally I get hooked into tinkering with various programming languages, neural networks, etc., and when in luck I write about it. Selected examples here:
The documentation for a small project I realised during my time as an intern at NeuroSpin: reverse-engineering a food distribution system to interface it with a computer through USB.
My name without accents is “Mathias Sable-Meyer”. I have been inconsistent in my use of handles in the past, and have used: “mathsm” at MIT, “mathias-sm” on GitHub, “@SableMeyer” on Twitter, “msm” whenever it’s free, and “msableme” when a university chooses for me.