What I do
I am a researcher crafting novel visual tools to support humans in solving real-world, data-intensive problems. This research sits at the intersection of the academic fields of Information Visualization, Human-Computer Interaction, and Design. My outputs are publications, prototypes, and toolkits, available online with open-source code. I am also interested in combining art, design, and visualization to explore untapped design spaces.
- » Oct. 2017 - Presented two papers at IEEE VIS 2017
- » Oct. 2017 - Traveled to the IEEE VIS 2017 conference in Phoenix, USA
- » Jul. 2017 - InfoVis '17 paper accepted! (w/ Jeremy Boy as co-author)
- » Jul. 2017 - Talked dataviz at RES Conference 02
- » Jul. 2017 - Talked dataviz + Python at JDEV 2017
- » Jul. 2017 - Talked dataviz at the École d'été cartographie at ENSSIB
- » Jun. 2017 - Organized the Journées Visu 2017
- » May 2017 - Series of talks at Tokyo Univ., NII, and the Tokyo Dataviz meetup
- » Apr. 2017 - Organized an advanced d3.js dataviz workshop at MixIT 2017
Funded Projects (Work with me!)
Visual Analytics (VizTics): We aim to push the envelope of visual analytics tools and methods by building a high-performance infrastructure for the visual exploration and steering of predictive models. The project is funded by an Impulsion 2017 research starting grant and will last one year, starting January 2017.
Parametric Spaces Visualization (ParamSpaceViz): This project aims to combine visual exploration techniques with image analysis to better understand input configuration values and their dependencies. It is a collaboration with Stéphane Derrode (image-analysis researcher) and is funded by the LIRIS Lab's transverse-projects initiative, which promotes collaboration between teams from different research areas.
3D Motion Capture Visualization & Analysis (Amigo): École Centrale de Lyon has recently acquired a state-of-the-art motion capture platform. We are interested in finding motion patterns and building predictive models using interactive visualizations. Details on the platform and preliminary results are available. We are also interested in building benchmark datasets of annotated motions.