From 14 June 2018 we welcome Professor Christoph Scheepers (University of Glasgow, UK) for a series of four seminars on the theme "Generalized mixed effects models for the analysis of experimental data".
The seminars will take place on 14, 18 and 25 June and 4 July, from 16:00 to 18:00, at Université Paris Diderot, bâtiment Olympe de Gouges, 8 rue Albert Einstein, 75013 Paris, room 204.
Abstract of the seminars:
This course will introduce and explain state-of-the-art techniques for the analysis of experimental data in psycholinguistics and the psychology of language. The focus is on confirmatory analysis and how to perform experimental hypothesis tests in the most accurate and generalizable manner. The course will be problem-oriented, in the sense of trying to provide best-practice solutions to common analysis problems.
Session 1 – Regression
Since linear mixed models can be seen as an extension of basic regression, the first session will give a refresher of the principles behind regression analysis, and its relationship to other commonly used methods such as t-tests and ANOVA. Various predictor-coding schemes for the specification of hypothesis-relevant contrasts will be illustrated by appropriate examples, and their implications for parameter interpretation will be discussed.
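As an illustration of the regression/t-test relationship mentioned above, here is a minimal sketch (not taken from the seminar materials; the data are simulated) showing that an equal-variance two-sample t-test is identical to a regression with one dummy-coded predictor:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.0, 1.0, 30)  # simulated outcomes, condition A
group_b = rng.normal(5.5, 1.0, 30)  # simulated outcomes, condition B

# Classical equal-variance two-sample t-test
t_classic, p_classic = stats.ttest_ind(group_a, group_b)

# Same test as regression: y = b0 + b1*x, with x dummy-coded (A = 0, B = 1).
# With this coding, b0 is the mean of A and b1 the A-to-B difference;
# sum coding (x = -0.5 / +0.5) would instead make b0 the grand mean.
y = np.concatenate([group_a, group_b])
x = np.concatenate([np.zeros(30), np.ones(30)])
X = np.column_stack([np.ones(60), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - X @ beta
sigma2 = residuals @ residuals / (60 - 2)          # residual variance
se_b1 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_regression = beta[1] / se_b1

print(abs(t_classic), abs(t_regression))  # identical up to sign
```

The choice of predictor coding does not change the test statistic here, only what the individual parameters mean, which is the point the first session elaborates on.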
Session 2 – Generalized Linear Models
The second session will introduce the concept of generalized linear models (GLMs), which extend ordinary regression to non-normally distributed outcomes such as binary accuracy data or counts.
Session 3 – Generalized Linear Mixed Models
The third session will focus on repeated-measures designs (probably the most commonly used type of design in psycholinguistics and cognitive psychology), leading to the introduction of generalized linear mixed models (GLMMs), which combine fixed effects with random effects for subjects and items.
Session 4 – Control Predictors in a (Maximal) GLMM
Since psycholinguistic experiments often require control of potential confound variables (e.g. lexical frequency in a word-recognition experiment), the last session will specifically focus on how to handle control predictors (sometimes referred to as covariates) in a maximal GLMM. I will present results from data simulations showing that simple ‘matching’ of confound variables between, say, different groups of items in the stimulus set is often not enough to avoid anticonservative inferences. Using appropriate examples, I will illustrate how to tackle this problem while at the same time keeping model complexity at a tolerable level to avoid convergence problems.
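In the same spirit as the data simulations mentioned above, the following sketch (my own illustration, not the speaker's code) simulates a repeated-measures data set with an item-level confound and fits a mixed model in which the confound enters as a fixed control predictor rather than being handled by matching alone:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_item = 20, 20
subj = np.repeat(np.arange(n_subj), n_item)
item = np.tile(np.arange(n_item), n_subj)
cond = item % 2                              # condition varies within subjects
freq = rng.normal(0, 1, n_item)[item]       # item-level confound, e.g. frequency
subj_int = rng.normal(0, 0.5, n_subj)[subj] # by-subject random intercepts
rt = (6.0 + 0.3 * cond - 0.2 * freq
      + subj_int + rng.normal(0, 0.3, n_subj * n_item))

df = pd.DataFrame(dict(rt=rt, cond=cond, freq=freq, subj=subj, item=item))
# Linear mixed model: condition effect of interest, frequency as a
# fixed control predictor, random intercepts by subject.
fit = smf.mixedlm("rt ~ cond + freq", df, groups="subj").fit()
print(fit.params[["cond", "freq"]])
```

Note that statsmodels' `MixedLM` handles linear mixed models only; a full (maximal) GLMM with crossed subject and item random effects, as discussed in the session, would typically be fit in R with lme4's `glmer`. The example is only meant to show where a control predictor sits in the model formula.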