Grigori Fursin is a British[2] computer scientist, president of the non-profit cTuning Foundation, a founding member of MLCommons,[3] and co-chair of the MLCommons Task Force on Automation and Reproducibility.[4] His research group created MILEPOST GCC, an open-source, machine-learning-based self-optimizing compiler considered to be the first of its kind.[5] At the end of the MILEPOST project he established the cTuning Foundation to crowdsource program optimisation and machine learning across diverse devices provided by volunteers. The foundation also developed the Collective Knowledge Framework and Collective Mind[6] to support open research. Since 2015, Fursin has led Artifact Evaluation at several ACM and IEEE computer systems conferences. He is also a founding member of the ACM task force on Data, Software, and Reproducibility in Publication.[7][8][9]
Education
Fursin completed his PhD in computer science at the University of Edinburgh in 2005. While in Edinburgh, he worked on the foundations of practical program autotuning and performance prediction.[10]
Notable projects
Collective Mind – collection of portable, extensible and ready-to-use automation recipes with a human-friendly interface to help the community compose, benchmark and optimize complex AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware.[11][6][12][13]
Collective Knowledge – open-source framework to help researchers and practitioners organize their software projects as a database of reusable components and portable workflows with common APIs based on FAIR principles,[14] and quickly prototype, crowdsource and reproduce research experiments.
MILEPOST GCC – open-source technology to build machine learning based compilers.
Interactive Compilation Interface – plugin framework to expose internal features and optimisation decisions of compilers for external autotuning and learning.
cTuning foundation – non-profit research organisation developing open-source tools and common methodology for collaborative and reproducible experimentation.
Artifact Evaluation – validation of experimental results from published papers at computer systems and machine learning conferences.[15][16][17]
References
^ "World's First Intelligent, Open Source Compiler Provides Automated Advice on Software Code Optimization", IBM press release, June 2009 (link)
^ a b Fursin, Grigori (June 2024). "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments". arXiv:2406.16791 [cs.LG].
^ Fursin, Grigori; Bruce Childers; Alex K. Jones; Daniel Mosse (June 2014). TRUST'14. Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering at PLDI'14. doi:10.1145/2618137. Archived from the original on 25 December 2022. Retrieved 26 December 2024.
^ Childers, Bruce R.; Grigori Fursin; Shriram Krishnamurthi; Andreas Zeller (March 2016). Artifact evaluation for publications. Dagstuhl Perspectives Workshop 15452. doi:10.4230/DagRep.5.11.29.