Next session: Using Git and GitHub with R, on February 20th from 16:00 to 18:00 @IMAG, auditorium.

Attendance is completely free, but registration is mandatory and is on a first-come, first-served basis.

Register for the event here.

Don’t forget to subscribe to the r-in-grenoble newsletter.

Help us communicate

Please talk about our R group with your colleagues.

You can also print this poster and put it up somewhere at work.

Join our R working sessions in Grenoble

Each month, we will organize one two-hour working session (on Thursdays, from 16:00 to 18:00, at the IMAG building).

Please find the guidelines for these sessions:

Schedule of 2019-2020

Click on the title to see the session description

September 26, 2019
Reproducible research in R
P. Jedynak & M. Rolland
    Reproducibility is a hot topic in modern science. Interest in reproducibility was triggered by the “reproducibility crisis” experienced by many fields over the past decades. A reproducible analysis is an analysis that can be re-run from raw data to final result, identically, by anybody (including “future you”!), on any computer, and at any point in time. In this presentation we will give attendees an overview of the many R tools that help make your work more reproducible and show how they are integrated in RStudio: project management structure, workflow management with {drake}, version control with Git and GitHub, literate programming with R Markdown, research compendia, package development, and much more.
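To give a flavour of the workflow management part, here is a minimal {drake} sketch; the file path, column names, and cleaning step are made up for illustration.

```r
library(drake)

# A minimal {drake} plan: each target is rebuilt only when its inputs change,
# which keeps the analysis reproducible from raw data to final result.
plan <- drake_plan(
  raw     = read.csv(file_in("data/raw.csv")),        # hypothetical raw data file
  cleaned = subset(raw, !is.na(outcome)),             # hypothetical cleaning step
  model   = lm(outcome ~ exposure, data = cleaned),   # hypothetical variables
  report  = summary(model)
)

make(plan)     # run (or re-run) only the targets that are out of date
readd(report)  # retrieve a cached target from the drake store
```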
October 17, 2019
Spatial data with packages {sf} and {raster}
I. Hough
    R is a powerful tool for geospatial data processing, analysis, and visualization. This workshop will provide an introduction to manipulating geospatial data with a few core packages. {sf} provides classes and methods for vector data and is designed to work well with the [tidyverse](http://tidyverse.org/). {raster} provides classes and methods for raster data. We will also see how to create static, animated, and interactive maps with {tmap}, {rasterVis}, and {mapview}.
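As a small preview, here is a hedged sketch of reading and mapping data with these packages; all file paths and column names below are hypothetical.

```r
library(sf)
library(raster)
library(tmap)

# Read a vector layer (any format supported by GDAL); the path is hypothetical
communes <- st_read("data/communes.shp")

# Read a single-band raster (e.g. elevation); the path is hypothetical
elevation <- raster("data/elevation.tif")

# sf objects are data frames, so they work with the tidyverse
# ("AREA" is a hypothetical attribute column)
large_communes <- dplyr::filter(communes, AREA > 100)

# A quick static map with {tmap}: raster background plus vector boundaries
tm_shape(elevation) + tm_raster() +
  tm_shape(communes) + tm_borders()
```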
November 28, 2019
An introduction to Bayesian multilevel models using R, brms, and Stan
L. Nalborczyk
    Bayesian multilevel models are increasingly used to overcome the limitations of frequentist approaches in the analysis of complex structured data. During this session, I will briefly introduce the logic of Bayesian inference and motivate the use of multilevel modelling. I will then show how Bayesian multilevel models can be fitted using the probabilistic programming language Stan and the R package brms (Bürkner, 2016). The brms package allows fitting complex nonlinear multilevel (aka 'mixed-effects') models using an understandable high-level formula syntax. I will demonstrate the use of brms with some general examples and discuss model comparison tools available within the package. Prior experience with data manipulation and linear models in R will be helpful.
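For those who want a preview, a minimal sketch of a varying-intercept-and-slope model fitted with brms might look like the following; the data set and variable names are hypothetical.

```r
library(brms)

# A multilevel model using brms's high-level formula syntax:
# a population-level effect of days, plus varying intercepts and slopes by subject.
fit <- brm(
  reaction ~ days + (1 + days | subject),          # hypothetical variables
  data   = sleep_data,                             # hypothetical data frame
  family = gaussian(),
  prior  = set_prior("normal(0, 10)", class = "b"),
  chains = 4, iter = 2000
)

summary(fit)  # posterior summaries of all parameters
loo(fit)      # model comparison via approximate leave-one-out cross-validation
```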
December 19, 2019
Organizing a data challenge in R - moved to 30/01
    MOVED TO 30/01/20
January 30, 2020
Organizing a data challenge in R
M. Richard & F. Chuffart
    Data challenges are original tools to explore computational methods trough active learning and competition approaches. It can be used both (i) to improve pedagogic practices in education and (ii) to address scientific questions, such as benchmarking of computational methods. First, we will present you examples of R data challenges that we have run on the open-source challenge platform codalab (challenges in master classes, remote challenges during scientific conferences or winter-school-like scientific data challenges). Then, we will give a tutorial to teach you how to easily set up your own data challenge.
February 20, 2020
Using Git and GitHub with R
A. Arnaud
    As nicely put by R guru Hadley Wickham, 'Good coding style is like correct punctuation: you can manage without it, but it sure makes things easier to read.' In this session, we will review good practices for writing R code: 1) style advice, 2) versioning with Git. In the first part, we will present some ideas from the tidyverse style guide (following http://style.tidyverse.org/); the second part will be devoted to handling the integration between R and Git inside RStudio. As an exercise, we will provide you with an R Markdown file for creating an R package, in order to put the previous recommendations into practice and turn personal code into reusable code (following http://r-pkgs.had.co.nz). Make sure that the following software and packages are installed on your computer: Git, R, RStudio, {rmarkdown}, and {roxygen2}.
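As a preview of the styling and packaging part, here is a small sketch of a function written in tidyverse style with {roxygen2} comments; the function itself is just an illustrative example, not material from the session.

```r
# A small function following the tidyverse style guide, documented with
# {roxygen2} comments so it can live in a package.

#' Standardise a numeric vector
#'
#' @param x A numeric vector.
#' @param na.rm Should missing values be removed? Defaults to TRUE.
#' @return A numeric vector with mean 0 and standard deviation 1.
#' @export
standardise <- function(x, na.rm = TRUE) {
  (x - mean(x, na.rm = na.rm)) / sd(x, na.rm = na.rm)
}
```

From RStudio, `usethis::use_git()` is one convenient way to put such a project under version control before pushing it to GitHub.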
March 26, 2020
Joint presentation with the Python user group - more details soon
M. Rolland & ?
    abstract coming
April 16, 2020
R Markdown
P. Jedynak & M. Rolland
    Learn what R Markdown is and how to use this format. We will also present the many possibilities of R Markdown: 1/ Compile a single R Markdown document to a report in different formats, such as PDF, HTML, or Word. 2/ Create notebooks in which you can directly run code chunks interactively. 3/ Make slides for presentations (HTML5, LaTeX Beamer, or PowerPoint). 4/ Produce dashboards with flexible, interactive, and attractive layouts. 5/ Build interactive applications based on Shiny. 6/ Write journal articles. 7/ Author books of multiple chapters. 8/ Generate websites and blogs.
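For a preview, a minimal R Markdown source file could look like the sketch below; the title, chunk name, and output formats are arbitrary examples.

````markdown
---
title: "A minimal R Markdown report"
output: html_document   # switch to pdf_document or word_document for other outputs
---

Plain text and code live in the same source file.

```{r cars-summary}
# This chunk is executed when the document is knitted
summary(cars)  # `cars` is a built-in data set
```
````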
May 28, 2020
Data manipulation with packages {dplyr} and {data.table}
P. Jedynak & I. Hough
    Data manipulation is an essential step in any data analysis project. A core package of the tidyverse, {dplyr} provides a data manipulation framework built on an intuitive syntax and a small set of verbs. In this tutorial we will learn how to pick observations by their values (filter()), reorder the rows (arrange()), pick variables by their names (select()), create new variables as functions of existing variables (mutate()), collapse many values down to a single summary (summarise()), and combine all these operations using the '%>%' (pipe) operator. This will help you solve the most common data manipulation challenges. **ADD data.table**
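As a preview, here is a minimal sketch of these verbs chained with the pipe, using the starwars data set bundled with {dplyr}.

```r
library(dplyr)

# The core verbs combined with the pipe, on the built-in starwars data set
starwars %>%
  filter(!is.na(mass), !is.na(height)) %>%     # pick observations by their values
  select(name, species, height, mass) %>%      # pick variables by their names
  mutate(bmi = mass / (height / 100)^2) %>%    # create a new variable
  arrange(desc(bmi)) %>%                       # reorder the rows
  group_by(species) %>%
  summarise(mean_bmi = mean(bmi), n = n())     # collapse to one row per group
```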
June 25, 2020
Data visualisation with package {ggplot2}
S. Cadiou & M. Rolland
    'The simple graph has brought more information to the data analyst’s mind than any other device.' --- John Tukey. {ggplot2} is one of today's most powerful data visualisation suites. In this tutorial we will learn the underlying philosophy behind {ggplot2}: the Grammar of Graphics. We will see how all types of graphs, such as histograms, scatterplots, boxplots, probability densities, etc., can be specified by a small set of shared parameters, and how understanding what these parameters are and how to use them will allow you to seamlessly produce powerful and complex graphs to better understand your data.
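As a preview, the sketch below shows how the same grammar (data, aesthetic mappings, and geoms) produces two different plot types from the mpg data set bundled with {ggplot2}.

```r
library(ggplot2)

# A scatterplot: data + aesthetic mappings + a point geom
ggplot(mpg, aes(x = displ, y = hwy, colour = class)) +
  geom_point() +
  labs(x = "Engine displacement (l)", y = "Highway mileage (mpg)")

# A histogram of the same data: only the mapping and the geom change
ggplot(mpg, aes(x = hwy)) +
  geom_histogram(bins = 20)
```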


Materials from previous sessions