Next session: **An introduction to Bayesian multilevel models using R, brms, and Stan**, on **November 28th from 16:00 to 18:00 @IMAG**. Registration for the event will open soon.

Attendance is free, but registration is mandatory and on a first-come, first-served basis.

Don’t forget to subscribe to the r-in-grenoble newsletter.

Please talk about our R group with your colleagues.

You can also print this poster and put it up somewhere at work.

Each month, we organize one two-hour working session (on Thursdays, from 16:00 to 18:00, at the IMAG building).

Here are the guidelines for these sessions:

Everyone is welcome (beginners to advanced R users, just bring your laptop).

Presentations/tutorials last 30 to 60 minutes and cover R topics useful to many people, with practical examples.

After the main session, a lightning talk (5 min) lets people quickly present how they use R in their work (a specific package, etc.).

During the second part of the session, people can ask and answer questions about specific problems they encounter when coding in R.

During this time, food and sodas will be offered by the Grenoble Alpes Data Institute.

If you wish to share your R experience during a working session and/or to co-animate a working session, please contact us.

*Click on the title to see the session description*

- Reproducibility is a hot topic in modern science. Interest in reproducibility was triggered by the “reproducibility crisis” undergone by many fields in the past decades. A reproducible analysis is one that can be re-run identically, from raw data to final result, by anybody (including “future you”!), on any computer, at any point in time. In this presentation we will give attendees an overview of many tools implemented in R that help make your work more reproducible, and show how these integrate with RStudio: project management structure, workflow management with {drake}, version control with Git and GitHub, literate programming with Rmarkdown, work compendia, package development, and much more.
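As a taste of the workflow-management part, here is a minimal sketch of a {drake} pipeline; the target names and the use of the built-in `mtcars` dataset are our illustrative choices, not from the session abstract.

```r
# Minimal {drake} workflow sketch (illustrative targets, built-in data).
library(drake)

plan <- drake_plan(
  raw  = mtcars,                    # a "raw data" target
  tidy = raw[raw$cyl == 4, ],       # a cleaning step depending on `raw`
  fit  = lm(mpg ~ wt, data = tidy)  # a model step depending on `tidy`
)

make(plan)   # builds only the targets that are out of date
readd(fit)   # retrieve the cached `fit` target from the .drake/ cache
```

Because {drake} tracks dependencies between targets, editing only the model step would rebuild `fit` without re-running the earlier steps.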

- R is a powerful tool for geospatial data processing, analysis, and visualization. This workshop will provide an introduction to manipulating geospatial data with a few core packages. {sf} provides classes and methods for vector data and is designed to work well with the [tidyverse](http://tidyverse.org/). {raster} provides classes and methods for raster data. We will also see how to create static, animated, and interactive maps with {tmap}, {rasterVis}, and {mapview}.
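A minimal sketch of the {sf} vector-data workflow: turning a plain data frame of coordinates into a spatial object. The place names and coordinates below are illustrative, not from the workshop materials.

```r
# Convert a data frame of lon/lat points into an {sf} object.
library(sf)

pts <- data.frame(
  name = c("Grenoble", "Lyon"),   # illustrative points
  lon  = c(5.72, 4.84),
  lat  = c(45.19, 45.76)
)

# crs = 4326 declares the coordinates as WGS 84 longitude/latitude.
pts_sf <- st_as_sf(pts, coords = c("lon", "lat"), crs = 4326)

st_distance(pts_sf)        # pairwise great-circle distances
plot(st_geometry(pts_sf))  # quick base-graphics map of the geometries
```

An `sf` object is still a data frame, so it works with the usual tidyverse verbs.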

- Bayesian multilevel models are increasingly used to overcome the limitations of frequentist approaches in the analysis of complex structured data. During this session, I will briefly introduce the logic of Bayesian inference and motivate the use of multilevel modelling. I will then show how Bayesian multilevel models can be fitted using the probabilistic programming language Stan and the R package brms (Bürkner, 2016). The brms package allows fitting complex nonlinear multilevel (aka 'mixed-effects') models using an understandable high-level formula syntax. I will demonstrate the use of brms with some general examples and discuss model comparison tools available within the package. Prior experience with data manipulation and linear models in R will be helpful.
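To illustrate the formula syntax mentioned in the abstract, here is a sketch of fitting a varying-intercept, varying-slope model with `brm()`; the `sleepstudy` data from {lme4} and the short-run sampler settings are our illustrative choices. Note that brms compiles a Stan model on the first call, so this takes a few minutes.

```r
# Sketch of a Bayesian multilevel model with {brms} (illustrative data).
library(brms)

# Reaction time as a function of days of sleep deprivation, with
# intercept and slope varying by Subject: (Days | Subject).
fit <- brm(
  Reaction ~ Days + (Days | Subject),
  data   = lme4::sleepstudy,
  family = gaussian(),
  chains = 2, iter = 1000  # short run, for illustration only
)

summary(fit)  # population-level and group-level estimates
```

The `(Days | Subject)` term is the same syntax used by {lme4}, which is what makes brms approachable for users coming from frequentist mixed models.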

- abstract coming

- abstract coming

- abstract coming

- Learn what R Markdown is and how to use it. We will also present the many possibilities of R Markdown: 1/ Compile a single R Markdown document to a report in different formats, such as PDF, HTML, or Word. 2/ Create notebooks in which you can directly run code chunks interactively. 3/ Make slides for presentations (HTML5, LaTeX Beamer, or PowerPoint). 4/ Produce dashboards with flexible, interactive, and attractive layouts. 5/ Build interactive applications based on Shiny. 6/ Write journal articles. 7/ Author books of multiple chapters. 8/ Generate websites and blogs.
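For readers who have never seen one, here is a minimal R Markdown document skeleton (the title and chunk name are illustrative): a YAML header choosing the output format, narrative text, and an embedded R code chunk.

````markdown
---
title: "A minimal R Markdown report"
output: html_document
---

Some narrative text, then a code chunk whose output
is inserted into the rendered report:

```{r cars-summary}
summary(cars)
```
````

Rendering it (e.g. with `rmarkdown::render("report.Rmd")` or the Knit button in RStudio) produces the HTML report; changing `output:` to `pdf_document` or `word_document` switches formats without touching the content.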

- Data manipulation is an essential step in any data analysis project. A core package of the tidyverse, {dplyr} provides a data manipulation framework with an intuitive syntax built around a small set of verbs. In this tutorial we will learn how to pick observations by their values (filter()), reorder the rows (arrange()), pick variables by their names (select()), create new variables as functions of existing variables (mutate()), collapse many values down to a single summary (summarise()), and combine all these operations using the '%>%' (pipe) operator. This will help you solve the most common data manipulation challenges.
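The verbs listed above can be chained with the pipe into a single readable pipeline; a minimal sketch on the built-in `mtcars` dataset (our choice of example, not from the tutorial):

```r
# The five {dplyr} verbs combined with the %>% pipe (built-in mtcars data).
library(dplyr)

result <- mtcars %>%
  filter(cyl == 4) %>%             # pick observations by their values
  arrange(desc(mpg)) %>%           # reorder the rows
  select(mpg, wt, hp) %>%          # pick variables by their names
  mutate(kml = mpg * 0.4251) %>%   # new variable: miles/gallon -> km/litre
  summarise(mean_kml = mean(kml))  # collapse to a single summary

result  # a one-row data frame with the mean fuel economy in km/l
```

Each verb takes a data frame as its first argument and returns a data frame, which is exactly what makes them compose so naturally with `%>%`.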

- 'The simple graph has brought more information to the data analyst’s mind than any other device.' --- John Tukey. {ggplot2} is one of today's most powerful data visualisation packages. In this tutorial we will learn the philosophy underlying {ggplot2}: the Grammar of Graphics. We will see how all types of graphs (histograms, scatterplots, boxplots, probability densities, etc.) can be specified by a small set of shared parameters, and how understanding these parameters and how to use them will allow you to seamlessly produce powerful and complex graphs to better understand your data.
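A minimal sketch of that shared grammar: data, aesthetic mappings, and layers. The built-in `mtcars` dataset and the specific mappings are our illustrative choices.

```r
# Grammar of Graphics sketch with {ggplot2} (built-in mtcars data).
library(ggplot2)

p <- ggplot(mtcars, aes(x = wt, y = mpg)) +     # data + aesthetic mappings
  geom_point(aes(colour = factor(cyl))) +       # a geometric layer
  geom_smooth(method = "lm", se = FALSE) +      # a statistical layer
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon",
       colour = "Cylinders")

p  # printing the ggplot object renders the plot
```

Swapping `geom_point()` for `geom_boxplot()` or `geom_histogram()` (with the mappings adjusted accordingly) changes the graph type while the rest of the specification stays the same; that reuse is the point of the grammar.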

Spatial data with packages {sf} and {raster}: presentation; R script

Deep Learning with package {keras}: presentation and exercise (.Rmd)