- title: "Automatic Differentiation in RooFit"
description: |
With the growing datasets of HEP experiments, statistical analysis becomes
more computationally demanding, requiring improvements in existing
statistical analysis software. One way forward is to use Automatic
Differentiation (AD) in likelihood fitting, which is often done with RooFit
(a toolkit that is part of ROOT). Recently, RooFit gained the ability to
generate the gradient code for a given likelihood function with Clad, a
compiler-based AD tool. At the CHEP 2023 and ICHEP 2024 conferences, we
showed how using this
analytical gradient significantly speeds up the minimization of simple
likelihoods. This talk will present the current state of AD in RooFit. One
highlight is that it now supports more complex models like template
histogram stacks ("HistFactory"). It also uses a new version of Clad that
contains several improvements tailored to the RooFit use case. This
contribution will furthermore demo complete RooFit workflows that benefit
from the improved performance with AD, such as CMS and ATLAS Higgs
measurements.
location: "[MODE 2024](https://indico.cern.ch/event/1380163/)"
date: 2024-09-25
speaker: Vassil Vassilev
id: "VVMODE2024"
artifacts: |
[Link to Slides](/assets/presentations/V_Vassilev-MODE2024_CladRooFit.pdf)
highlight: 1
- title: "Advanced optimizations for source transformation based
automatic differentiation"
description: |
Clad is a LLVM/Clang plugin designed to provide automatic differentiation (AD)
for C++ mathematical functions. It generates derivative-computing code by
transforming the abstract syntax tree using Clang/LLVM compiler features. Clad
supports forward- and reverse-mode differentiation, which can be effectively
applied to all kinds of functions. The typical AD approach in Machine Learning tools records and flattens the
compute graph at runtime, whereas Clad can perform more advanced optimizations at
compile time using a rich program representation provided by the Clang AST. These
optimizations investigate which parts of the computation graph are relevant to
the AD rules.
One such technique is the “To-Be-Recorded” optimization, which reduces
the memory pressure on the Clad tape data structure in adjoint mode. Another
optimization technique is activity analysis, which discards all derivative
statements that are not relevant to the generated code. In this talk we will
explain compiler-level optimizations specific to AD and show specific examples
of how these analyses have impacted Clad applications.
location: "[MODE 2024](https://indico.cern.ch/event/1380163/)"
date: 2024-09-25
speaker: Maksym Andriichuk
id: "MAMODE2024"
artifacts: |
[Link to Slides](/assets/presentations/Maksym_Andriichuk_MODE2024_Optimizations.pdf)
highlight: 1
- title: "Improving BioDynamo's Performance using ROOT C++ Modules"
description: |
Poster presented at the Fourth MODE Workshop on Differentiable Programming for Experiment Design.
location: "[Fourth MODE Workshop](https://indico.cern.ch/event/1380163/)"
date: 2024-09-24
speaker: Isaac Morales Santana
id: "FOURTHMODEBDM"
artifacts: |
[Link to poster](/assets/presentations/Fourth_MODE_Isaac_Morales.pdf)
highlight: 0
- title: "Automatic Differentiation of the Kokkos framework and the STL with Clad"
description: |
Kokkos is a high-performance library allowing scientists to develop performance-portable C++ code capable of running on CPUs, GPUs and exotic hardware. The Kokkos infrastructure enables researchers to write generic code for libraries, frameworks, and scientific simulations such as climate simulation tools like Albany and HOMMEXX that can later be run on a large scale on any supercomputing hardware without code rewrites.
Kokkos enables differentiable programming through the operator-overloading tool Sacado, which records and executes the linearised computation graph. On the other side of the tool spectrum is Clad, which uses the source transformation approach to AD, where more advanced optimisations can be investigated. For Kokkos, Clad brings reverse-mode support and increased scalability. The challenge with source transformation tools is incorporating framework-specific knowledge and expressing the analytical primitives specific to the framework.
In this talk, we discuss how Clad works and enables AD for large domain-specific frameworks such as Kokkos. We describe how Clad handles support for the C++ STL as another example of its flexibility. We explain extension points such as user-defined custom derivatives, which allow derivatives of Kokkos constructs to be expressed in terms of themselves, without falling back to precise hardware-dependent definitions. We delve into the specifics of the process and lessons learned while integrating Clad with Kokkos and show results demonstrating how Clad has facilitated efficient and scalable automatic differentiation with Kokkos.
location: "[Fourth MODE Workshop](https://indico.cern.ch/event/1380163/)"
date: 2024-09-25
speaker: Atell Yehor Krasnopolski
id: "AKMODE2024"
artifacts: |
[Link to Slides](/assets/presentations/Krasnopolsky-2024-MODE-clad-STL-kokkos.pdf)
highlight: 1
- title: "Accelerating Large Scientific Workflows Using Source Transformation Automatic Differentiation"
description: |
In this presentation, we will delve into the innovative world of Clad, an
automatic differentiation (AD) tool designed for C++. We introduce the Clad
programming model and highlight the advantages of employing
transformation-based automatic differentiation within high-performance
static languages like C++. Through practical examples, we demonstrate how
Clad can be leveraged at scale.
Our discussion will also focus on the integration of AD into RooFit, a
toolkit extensively utilized in high-energy physics and nuclear physics
experiments for statistical modeling. We showcase how the new AD backend
effectively extracts differentiable properties from decades-old
infrastructure, resulting in enhanced performance and numeric stability
during likelihood minimizations.
One of the key aspects of our approach is the development of a generic
methodology to transform the object-oriented compute graph within RooFit
into overhead-free C++ code, making it amenable to AD. This transformation
enables us to apply AD to large production workflows consisting of hundreds
of thousands of lines of scientific codes. Furthermore, we will illustrate
that AD emerges as the preferred choice for workflows involving numerous
parameters. It leads to reduced minimization times, faster overall fit
convergence by minimizing fit iterations, and improved accuracy in gradient
calculations.
location: "SNL"
date: 2023-10-16
speaker: Vassil Vassilev
id: "VVSNL2023"
artifacts: |
[Link to Slides](/assets/presentations/V_Vassilev-SNL_Accelerating_Large_Workflows_Clad.pdf)
highlight: 1
- title: "Unlocking the Power of C++ as a Service: Uniting Python's Usability with C++'s Performance"
description: |
In many ways Python and C++ represent the two ends in the spectrum of
programming languages. C++ has an important role in the field of computing
as the language design principles promote efficiency, reliability and
backward compatibility – a vital tripod for any long-lived codebase. Python
has prioritized better usability and safety while making some tradeoffs on
efficiency and backward compatibility. That has led developers to believe
that there is a binary choice between performance and usability.
Python has become the language of data science and machine learning in
particular, while C++ is still the language of choice for performance-critical
software. The C++ and Python ecosystems are vast, and achieving seamless
interoperability between them is essential to avoid risky software rewrites.
In this talk we leverage our decade-old experience in writing automatic
Python to C++ bindings. We demonstrate how we could connect the Python
interpreter to the new in-tree C++ interpreter called Clang-Repl. We show
how we can build a uniform execution environment between both languages
using the new compiler-as-a-service (CaaS) API available in Clang. The
execution environment enables advanced interoperability such as the ability
for Python to instantiate C++ templates on demand, inherit from C++ classes
or catch std::exception. We show how CaaS can be connected to external
services such as Jupyter and execute code written in both languages.
location: "[LLVM 2023](https://llvm.swoogo.com/2023devmtg/agenda)"
date: 2023-10-12
speaker: Vassil Vassilev
id: "VVLLVM2023"
artifacts: |
[Video](https://youtu.be/rdfBnGjyFrc),
[Link to Slides](/assets/presentations/V_Vassilev-LLVMDev23_CppPython.pdf)
highlight: 1
- title: "Code-Completion in Clang-Repl"
description: |
Built upon Clang and LLVM incremental compilation pipelines, Clang-Repl is
a C++ interpreter featuring a REPL that enables C++ users to develop
programs in an exploratory fashion. Autocompletion in Clang-Repl is a
significant leap forward in this direction. The feature empowers Clang-Repl
users to accelerate their input and prevent typos. Inspired by the counterpart
feature in Cling, a downstream project of Clang-Repl, our auto-completion
feature leverages existing components of Clang/LLVM, and provides
context-aware semantic completion suggestions. In this talk, we will present
how autocompletion works at REPL and how it interacts with other Clang/LLVM
infrastructure.
location: "[LLVM 2023](https://llvm.swoogo.com/2023devmtg/agenda)"
date: 2023-10-12
speaker: Yuquan (Fred) Fu
id: "FFLLVM2023"
artifacts: |
[Link to Slides](/assets/presentations/Y_Fu-LLVMDev23_ClangReplAutoComplete.pdf)
highlight: 1
- title: "Automatic program reoptimization support in LLVM ORC JIT"
description: |
One of the prominent applications of the JIT compiler is the ability to
compile “hot” functions utilizing various runtime profiling metrics gathered
by slow versions of functions. The ORC API can be generalized further to
make use of the profiling metrics and “reoptimize” the function hiding the
reoptimization latency. For instance, one of the many applications of this
technique is to compile functions at a lower optimization level for faster
compilation speed and then reoptimize them to a higher level when those
functions are frequently executed. In this talk we demonstrate how we can
express lazy JIT, speculative compilation, and re-optimization as "symbol
redirection" problems. We demonstrate the improved ORC API for redirecting
symbols. In addition, this technical talk will peek at the internal details of
how we implemented re-optimization support and showcase demos such as
real-time clang-repl re-optimization from -O0 to -O3 and real-time virtual
call optimization through de-virtualization.
location: "[LLVM 2023](https://llvm.swoogo.com/2023devmtg/agenda)"
date: 2023-10-11
speaker: Sunho Kim
id: "SKLLVM2023"
artifacts: |
[Video](https://youtu.be/2ST0Rz_pC58),
[Link to Slides](/assets/presentations/S_Kim-LLVMDev23_Automatic_Program_Reopt.pdf)
highlight: 1
- title: "C++ as a service - rapid software development and dynamic interoperability with python and beyond"
description: |
The C++ programming language is used for many numerically intensive
scientific applications. A combination of performance and solid backward
compatibility has led to its use for many research software codes over the
past 20 years. Despite its power, C++ is often seen as difficult to learn
and inconsistent with rapid application development. Exploration and
prototyping is slowed down by the long edit-compile-run cycles during
development.
In this talk we show how to leverage our experience in interactive C++,
just-in-time compilation technology (JIT), dynamic optimizations, and large
scale software development to greatly reduce the impedance mismatch between
C++ and Python. We show how clang-repl generalizes Cling in LLVM upstream to
offer a robust, sustainable and omnidisciplinary solution for C++ language
interoperability. We demonstrate how we have:
* advanced the interpretative technology to provide a state-of-the-art
C++ execution environment;
* enabled functionality which can provide native-like, dynamic runtime
interoperability between C++ and Python; and
* allowed utilization of heterogeneous hardware.
The presentation includes an interactive session where we demonstrate some of
the capabilities of our system via the Jupyter interactive environment.
location: "[Compiler-Research Monthly 2023](https://compiler-research.org/meetings/#caas_20Sep2023)"
date: 2023-09-20
speaker: Vassil Vassilev
id: "VVCRSep2023"
artifacts: |
[Video](https://youtu.be/be89sF0WLrc),
[Link to Slides](/assets/presentations/V_Vassilev-CaaS_ShowCase.pdf)
highlight: 1
- title: "Efficient C++ Derivatives Through Source Transformation AD With Clad"
description: |
Clad enables automatic differentiation (AD) for C++. It is built on the LLVM
compiler infrastructure as a plugin for the Clang compiler and relies on
source code transformation. Given the C++ source code of a mathematical
function, it can automatically generate C++ code for computing derivatives
of the function. Clad supports a large set of C++ features including control
flow statements and function calls. It supports reverse-mode AD
(a.k.a. backpropagation) as well as forward-mode AD. It also facilitates
computation of the Hessian and Jacobian matrices of arbitrary functions.
In this talk we describe the programming model that Clad enables. We explain
the benefits of using transformation-based automatic
differentiation in high-performance static languages such as C++. We show
examples of how to use the tool at scale.
location: "[MODE 2023](https://indico.cern.ch/event/1242538/)"
date: 2023-07-25
speaker: Vassil Vassilev
id: "VVMODE2023"
artifacts: |
[Link to Slides](/assets/presentations/V_Vassilev-MODE2023_Efficient_Cpp_Derivatives_Clad.pdf)
highlight: 1
- title: "Making Likelihood Calculations Fast Using Automatic Differentiation in RooFit"
description: |
In this talk, we present our efforts in supporting Automatic Differentiation
(AD) in RooFit, a toolkit for statistical modeling and fitting used by many
HEP/NP experiments that is part of ROOT. The new AD backend improves both
the performance and numeric stability of likelihood minimizations, for which
we will provide several examples in this contribution. Our approach is to
extend RooFit with a tool that generates overhead-free C++ code for a full
likelihood function built from RooFit functional models. Gradients are then
generated using Clad, a compiler-based source-code-transformation AD tool,
using this C++ code. After presenting promising results from a
proof-of-concept with this pipeline applied to a HistFactory model at the
ACAT 2022 conference, we showcased more general benchmarks on the full
minimization pipeline at CHEP 2023. In this workshop, we present how AD can
be applied to production workflows in the field of HEP/NP. We also
demonstrate that AD is the prime choice for workflows with many parameters,
yielding lower minimization times and faster overall fit convergence due to
fewer fit iterations and improved accuracy of the calculated gradients.
location: "[MODE 2023](https://indico.cern.ch/event/1242538/)"
date: 2023-07-25
speaker: Garima Singh
id: "GSMODE2023"
artifacts: |
[Link to Slides](/assets/presentations/G_Singh-MODE3_Fast_Likelyhood_Calculations_RooFit.pdf)
highlight: 1
- title: "Automatic Interoperability Between C++ and Python"
description: |
The simplicity of Python and the power of C++ force stark choices on a
scientific software stack. There have been multiple developments to mitigate
language boundaries by implementing language bindings, but the impedance
mismatch between the static nature of C++ and the dynamic one of Python
hinders their implementation; examples include the use of user-defined
Python types with templated C++ and advanced memory management.
The development of the C++ interpreter Cling has changed the way we can
think of language bindings as it provides an incremental compilation
infrastructure available at runtime. That is, Python can interrogate C++ on
demand, and bindings can be lazily constructed at runtime. This automatic
binding provision requires no direct support from library authors and offers
better performance than alternative solutions, such as PyBind11. ROOT
pioneered this approach with PyROOT, which was later enhanced with its
successor, cppyy. However, until now, cppyy relied on the reflection layer
of ROOT, which is limited in terms of provided features and performance.
The next step for language interoperability with cppyy is to enable research
into uniform cross-language execution environments and boost optimization
opportunities across language boundaries. We illustrate the use of advanced
C++ in Numba-accelerated Python through cppyy. We outline a path forward for
re-engineering parts of cppyy to use upstream LLVM components to improve
performance and sustainability. We demonstrate cppyy purely based on a C++
reflection library, InterOp, which offers interoperability primitives based
on Cling and Clang-Repl.
Based on our recent publication: https://arxiv.org/abs/2304.02712
We can share more details about the efforts within the
compiler-research project in the area of mixing Python and C++ in a single
Jupyter notebook via technologies such as xeus-clang-repl.
location: "[PyHEP.Dev 2023](https://indico.cern.ch/event/1234156/contributions/5504654/)"
date: 2023-07-25
speaker: Baidyanath Kundu
id: "BKPyHEPDev2023"
artifacts: |
[Link to Slides](/assets/presentations/B_Kundu-PyHEP23_Cppyy_CppInterOp.pdf)
highlight: 1
- title: "Adding Automatic Differentiation to RooFit"
description: |
In this talk, we report on the effort to support automatic differentiation
(AD) in RooFit, a toolkit for statistical modeling and fitting used by many
HEP/NP experiments that is part of ROOT. The new AD backend improves both
the performance and numeric stability of likelihood minimizations, for which
we will provide several examples in this contribution. Our approach is to
extend RooFit with a tool that generates overhead-free C++ code for a full
likelihood function built from RooFit functional models. Gradients are then
generated using Clad, a compiler-based source-code-transformation AD tool,
using this C++ code. After presenting promising results from a
proof-of-concept with this pipeline applied to a HistFactory model at the
ACAT 2022 conference, we reported on the integration inside ROOT and
showcased more general benchmarks at CHEP 2023. Following this last
milestone, work focused on evolving the Minuit 2 minimizer backend to make
better use of the automatic gradient and on extending the code generation with
support for more RooFit models. In this workshop, we will present updated
benchmarks where all numeric differentiation is avoided on the Minuit 2
side, as well as new results with the RooFit AD backend applied to
cutting-edge ATLAS Higgs analysis benchmarks for the first time.
These results show that the RooFit AD backend is the prime choice for
combined binned likelihoods with many parameters, yielding minimization
times one order of magnitude below those of RooFit's other backends and improving the
fit convergence rate.
location: "[The Road to Differentiable and Probabilistic Programming in Fundamental Physics 2023](https://indico.ph.tum.de/event/7113)"
date: 2023-06-27
speaker: Garima Singh
id: "GSMiapbTUM2023"
artifacts: |
[Link to Slides](/assets/presentations/G_Singh-MiapbTUM_AD_RooFit.pdf)
highlight: 1
- title: "Fast And Automatic Floating Point Error Analysis With CHEF-FP"
description: |
As we reach the limit of Moore's Law, researchers are exploring different
paradigms to achieve unprecedented performance. Approximate Computing (AC),
which relies on the ability of applications to tolerate some error in the
results to trade-off accuracy for performance, has shown significant promise.
Despite the success of AC in domains such as Machine Learning, its acceptance
in High-Performance Computing (HPC) is limited due to its stringent
requirement of accuracy. We need tools and techniques to identify regions of
the code that are amenable to approximations and their impact on the
application output quality so as to guide developers to employ selective
approximation. To this end, we propose CHEF-FP, a flexible, scalable, and
easy-to-use source-code transformation tool based on Automatic
Differentiation (AD) for analysing approximation errors in HPC applications.
CHEF-FP uses Clad, an efficient AD tool built as a plugin to the Clang
compiler and based on the LLVM compiler infrastructure, as a backend and
utilizes its AD abilities to evaluate approximation errors in C++ code.
CHEF-FP works at the source level by injecting error estimation code
into the generated adjoints. This enables the error-estimation code to
undergo compiler optimizations resulting in improved analysis time and
reduced memory usage. We also provide theoretical and architectural
augmentations to source code transformation-based AD tools to perform FP
error analysis.
In this talk, we primarily focus on analyzing errors introduced by
mixed-precision AC techniques, the most popular approximate technique in HPC.
We also show the applicability of our tool in estimating other kinds of
errors by evaluating our tool on codes that use approximate functions.
Moreover, we demonstrate the speedups achieved by CHEF-FP during analysis
time as compared to the existing state-of-the-art tool as a result of its
ability to generate and insert approximation error estimate code directly
into the derivative source. The generated code also becomes a candidate for
better compiler optimizations contributing to lesser runtime performance
overhead.
location: "[IPDPS 2023](https://www.ipdps.org/ipdps2023/)"
date: 2023-05-18
speaker: Baidyanath Kundu
id: "IPDPS2023"
artifacts: |
[Link to Slides](/assets/presentations/IPDPS23-Estimating_Floating_Point_Errors.pdf)
highlight: 1
- title: "Making Likelihood Calculations Fast: Automatic Differentiation Applied to RooFit"
description: |
With the growing datasets of current and next-generation High-Energy and
Nuclear Physics (HEP/NP) experiments, statistical analysis has become more
computationally demanding. These increasing demands elicit improvements and
modernizations in existing statistical analysis software. One way to address
these issues is to improve parameter estimation performance and numeric
stability using automatic differentiation (AD). AD's computational efficiency
and accuracy are superior to those of preexisting numerical differentiation
techniques and offers significant performance gains when calculating the
derivatives of functions with a large number of inputs, making it particularly
appealing for statistical models with many parameters. For such models,
many HEP/NP experiments use RooFit, a toolkit for statistical modeling and
fitting that is part of ROOT.
In this talk, we report on the effort to support the AD of RooFit likelihood
functions. Our approach is to extend RooFit with a tool that generates
overhead-free C++ code for a full likelihood function built from RooFit
functional models. Gradients are then generated using Clad, a compiler-based
source-code-transformation AD tool, using this C++ code. We present our results
from applying AD to the entire minimization pipeline and profile likelihood
calculations of several RooFit and HistFactory models at the LHC-experiment
scale. We show significant reductions in calculation time and memory usage
for the minimization of such likelihood functions. We also elaborate on
this approach's current limitations and explain our plans for the future.
location: "[CHEP 2023](https://indico.jlab.org/event/459/)"
date: 2023-05-08
speaker: Garima Singh
id: "GSCHEP2023"
artifacts: |
[Link to Slides](/assets/presentations/Garima_Singh_AD_RooFIt_CHEP_2023.pdf)
highlight: 1
- title: "Using C++ From Numba, Fast and Automatic"
description: |
The scientific community using Python has developed several ways to
accelerate Python codes. One popular technology is Numba, a Just-in-time
(JIT) compiler that translates a subset of Python and NumPy code into fast
machine code using LLVM. We have extended Numba's integration with LLVM's
intermediate representation (IR) to enable the use of C++ kernels and
connect them to Numba accelerated codes. Such a multilanguage setup is also
commonly used to achieve performance or to interface with external
bare-metal libraries. In addition, Numba users will be able to write the
performance-critical codes in C++ and use them easily at native speed.
This work relies on high-performance, dynamic bindings between Python and
C++ provided by Cppyy, which is the basis of PyROOT's interfaces to C++ libraries.
Cppyy uses Cling, an incremental C++ interpreter, to generate on-demand
bindings of required entities and connect them with the Python interpreter.
This environment is uniquely positioned to enable the use of C++ from Numba
in a fast and automatic way.
In this talk, we demonstrate using C++ from Numba through Cppyy. We show
our approach which extends Cppyy to match the object typing and lowering
models of Numba and the necessary additions to the reflection layers to
generate IR from Python objects. The uniform LLVM runtime allows
optimizations such as inlining which can in the future remove the C++
function call overhead. We discuss other optimizations such as lazily
instantiated C++ templates based on input data. The talk also briefly
outlines the non-negligible JIT overhead introduced by Numba and possible
ways to optimize it. Since this is built as a Cppyy extension, Numba
supports all bindings automatically without any user intervention.
location: "[PyHEP 2022](https://indico.cern.ch/event/1150631/)"
date: 2022-09-16
speaker: Baidyanath Kundu
id: "CppyyNumbaPyHEP2022"
artifacts: |
[Video](https://www.youtube.com/watch?v=RceFPtB4m1I),
[Link to notebook](/assets/presentations/B_Kundu-PyHEP22_Cppyy_Numba.pdf)
highlight: 1
- title: "Automatic Differentiation of Binned Likelihoods With RooFit and Clad"
description: |
RooFit is a toolkit for statistical modeling and fitting used by most
experiments in particle physics. Just as data sets from next-generation
experiments grow, processing requirements for physics analysis become more
computationally demanding, necessitating performance optimizations for
RooFit. One possibility to speed up minimization and add stability is the
use of automatic differentiation (AD). Unlike with numerical differentiation,
the computation cost scales linearly with the number of parameters, making AD
particularly appealing for statistical models with many parameters. In this
talk, we report on one possible way to implement AD in RooFit. Our approach
is to add a facility to generate C++ code for a full RooFit model automatically.
Unlike the original RooFit model, this generated code is free of virtual
function calls and other RooFit-specific overhead. In particular, this code
is then used to produce the gradient automatically with Clad. Clad is a source
transformation AD tool implemented as a plugin to the clang compiler, which
automatically generates the derivative code for input C++ functions. We show
results demonstrating the improvements observed when applying this code
generation strategy to HistFactory and other commonly used RooFit models.
HistFactory is the subcomponent of RooFit that implements binned likelihood
models with probability densities based on histogram templates. These models
frequently have a very large number of free parameters, and are thus an
interesting first target for AD support in RooFit.
location: "[ACAT 2022](https://indico.cern.ch/event/1106990/)"
date: 2022-10-26
speaker: Garima Singh
id: "GSACAT2022"
artifacts: |
[Link to slides](/assets/presentations/GS-ACAT2022-AutomaticDifferentiationofBinnedLikelihoodswithRooFitandClad.pdf)
highlight: 1
- title: "Adapting C++ for Data Science"
description: |
Over the last decade the C++ programming language has evolved significantly
into a safer, easier-to-learn, and better tool-supported general-purpose
programming language capable of extracting the last bit of performance from
bare metal. The emergence of technologies such as LLVM and Clang has
advanced tooling support for C++, and its ecosystem has grown qualitatively.
C++ has an important role in the field of scientific computing as the
language design principles promote efficiency, reliability and backward
compatibility - a vital tripod for any long-lived codebase. Other
ecosystems such as Python have prioritized better usability and safety
while making some tradeoffs on efficiency and backward compatibility. That
has led developers to believe that there is a binary choice between
performance and usability.
In this talk we would like to present the advancements in the C++ ecosystem;
its relevance for scientific computing and beyond; and foreseen challenges.
The talk introduces three major components for data science - interpreted
C++; automatic language bindings; and differentiable programming. We outline
how these components help the Python and C++ ecosystems interoperate with
little compromise on either performance or usability. We elaborate on a
future hybrid Python/C++ differentiable programming analysis framework which
might accelerate scientific discovery in HEP by amplifying the power and
physics sensitivity of data analyses into end-to-end differentiable
pipelines.
location: "[ACAT 2022](https://indico.cern.ch/event/1106990/)"
date: 2022-10-28
speaker: Vassil Vassilev
id: "VVACAT2022"
artifacts: |
[Link to slides](/assets/presentations/VV-ACAT2022-AdaptingCppforDataScience.pdf)
highlight: 1
- title: "Efficient and Accurate Automatic Python Bindings with Cppyy and Cling"
description: |
The simplicity of Python and the power of C++ pose a hard choice for a
scientific software stack. There have been multiple developments to mitigate
the hard language boundaries by implementing language bindings. However, the
static nature of C++ and the dynamic nature of Python make it hard for
library authors to provide bindings, in particular for features such as
template instantiation with user-defined types or more advanced memory
management.
The development of the C++ interpreter Cling has changed the way we can
think of language bindings as it provides an incremental compilation
infrastructure available at runtime. That is, Python can interrogate C++ on
demand and fetch only the necessary information. This way of automatic
binding provision requires no binding support by the library authors and
offers better performance than Pybind11. This approach was pioneered in ROOT
with PyROOT and later enhanced by its successor, Cppyy. However, until
now, Cppyy relied on the reflection layer of ROOT which is limited in terms
of provided features and performance.
In this talk we show how basing Cppyy purely on Cling yields better
correctness, performance and installation simplicity. We illustrate more
advanced language interoperability of Numba-accelerated Python code capable
of calling C++ functionality via Cppyy. We outline a path forward for
integrating the reflection layer in LLVM upstream which will contribute to
the project sustainability and will foster greater user adoption. We
demonstrate usage of Cppyy through Cling's LLVM mainline version, Clang-Repl.
location: "[ACAT 2022](https://indico.cern.ch/event/1106990/)"
date: 2022-10-25
speaker: Baidyanath Kundu
id: "BKACAT2022"
artifacts: |
[Link to slides](/assets/presentations/BK-ACAT2022-AutomaticPythonBindingswithCppyyandCling.pdf)
highlight: 1
- title: "Automatic Differentiation in ROOT"
description: |
Automatic Differentiation is a powerful technique to evaluate the derivative of
a function specified by a computer program. Thanks to the ROOT interpreter, Cling,
this technique is available in ROOT for computing gradients and Hessian matrices of
multi-dimensional functions. We will present the current integration of this tool
in the ROOT Mathematical libraries for computing gradients of functions that can
then be used in numerical algorithms. For example, we demonstrate the correctness
and performance improvements in ROOT’s fitting algorithms. We will also show
how gradient and Hessian computation via AD is integrated in Minuit, the main
ROOT minimization algorithm. We will also present current plans to integrate
automatic differentiation in the RooFit modelling package for obtaining gradients of the full
model that can be used for fitting and other statistical studies.
location: "[MODE AD Workshop 2022](https://indico.cern.ch/event/1145124/contributions/)"
date: 2022-09-14
speaker: Garima Singh
id: "GSModeAD2022"
artifacts: |
[Link to slides](/assets/presentations/GS-MODEAD2022-AutomaticDifferentiationinROOT.pdf)
highlight: 1
- title: "CSSI Element: C++ as a service - rapid software development and dynamic interoperability with Python and beyond"
description: |
Poster presented at the 2022 PI meeting for the CSSI program of the National Science Foundation.
location: "[2022 NSF CSSI PI meeting](https://cssi-pi-community.github.io/2022-meeting)"
date: 2022-07-26
speaker: David Lange
id: "CaaSNSFPI2022"
artifacts: |
[Link to slides](/assets/presentations/CSSI_lange_poster_20202_printed.pdf)
highlight: 0
- title: "Estimating Floating-Point Errors Using Automatic Differentiation"
description: |
Floating-point errors are a testament to the finite nature of computing
and if left uncontrolled they can have catastrophic results. As such, for
high-precision computing applications, quantifying these uncertainties
becomes imperative. There have been significant efforts to mitigate such
errors by either extending the underlying floating-point precision, using
alternate compensation algorithms or estimating them using a variety of
statistical and non-statistical methods. A prominent method of dynamic
floating-point error estimation is using Automatic Differentiation (AD).
However, most state-of-the-art AD-based estimation software requires
manually adapting or annotating the source code to some extent. Moreover,
error estimation tools based on operator-overloading AD call for multiple
gradient recomputations to report errors over a large variety of inputs
and suffer from all the shortcomings of the underlying
operator-overloading strategy, such as reduced efficiency. In this work, we propose
a customizable way to use AD to synthesize source code for estimating
uncertainties arising from floating-point arithmetic in C/C++ applications.
Our work presents an automatic error annotation framework that can be used
in conjunction with custom user-defined error models. We also present our
progress with error estimation on GPU applications.
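The kind of first-order error model described above can be sketched in a few
lines (a hand-written illustration of Taylor-style error propagation, not the
actual Clad-based framework):

```python
EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def f_with_error_bound(x):
    """Evaluate f(x) = x*x + 3*x together with a first-order bound on
    its floating-point error: each intermediate result v may be off by
    |v| * EPS, weighted by the sensitivity of the output to v (all
    sensitivities are 1 here; in general AD computes them)."""
    t1 = x * x
    t2 = 3.0 * x
    res = t1 + t2
    bound = (abs(t1) + abs(t2) + abs(res)) * EPS
    return res, bound

value, bound = f_with_error_bound(1.0e8)
assert 0.0 < bound < 10.0  # a few ulps of the ~1e16 result
```

Synthesizing such bookkeeping code automatically, with a pluggable error
model, is what lifts this idea from a toy to an analysis tool.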
location: "[SIAM UQ 2022](https://www.siam.org/conferences/cm/conference/uq22)"
date: 2022-04-14
speaker: Vassil Vassilev, Garima Singh
id: "FPErrorEstADSIAMUQ2022"
artifacts: |
[Video](https://www.youtube.com/watch?v=pndnawFPKHA&list=PLeZvkLnDkqbS8yQZ6VprODLKQVdL7vlTO&index=8),
[Link to slides](/assets/presentations/G_Singh-SIAMUQ22_FP_Error_Estimation.pdf)
highlight: 1
- title: "GPU Acceleration of Automatic Differentiation in C++ with Clad"
description: |
Automatic Differentiation (AD) is instrumental for science and industry. It
is a tool to evaluate the derivative of a function specified through a
computer program. AD's application domains span from Machine
Learning to Robotics to High Energy Physics. Computing gradients with the
help of AD is guaranteed to be more precise than the numerical alternative
and to require at most a constant factor (about 4) more arithmetic operations
than the original function. Moreover, AD applications to domain problems
are typically compute-bound. They are often limited by the
computational requirements of high-dimensional transformation parameters and
thus can greatly benefit from parallel implementations on graphics
processing units (GPUs).
Clad aims to enable differentiable analysis for C/C++ and CUDA and is a
compiler-assisted AD tool available both as a compiler extension and in
ROOT. Clad works as a compiler plugin extending the Clang
compiler; as a plugin extending the interactive interpreter Cling; and as
a Jupyter kernel extension based on xeus-cling.
In this talk, we demonstrate the advantages of parallel gradient
computations on graphics processing units (GPUs) with Clad. We explain how
to bring forth a new layer of optimisation and a proportional speed up by
extending the usage of Clad for CUDA. The gradients of well-behaved C++
functions can be automatically executed on a GPU. Thus, across the spectrum
of fields, researchers can reuse their existing models and have workloads
scheduled on parallel processors without the need to optimize their
computational kernels. The library can be easily integrated into existing
frameworks or used interactively, and provides optimal performance.
Furthermore, we demonstrate the achieved application performance
improvements, including a ~10x speedup in ROOT histogram fitting, and the
corresponding performance gains from offloading to GPUs.
location: "[ACAT 2021](https://indico.cern.ch/event/855454)"
date: 2021-11-30
speaker: Ioana Ifrim
id: "CppADCudaACAT21"
artifacts: |
[Video](https://videos.cern.ch/record/2295042),
[Link to slides](/assets/presentations/I_Ifrim-ACAT21_GPU_AD.pdf)
highlight: 1
- title: "Enabling Interactive C++ with Clang"
description: |
The design of LLVM and Clang enables them to be used as libraries, and has
led to the creation of an entire compiler-assisted ecosystem of tools. The
relatively friendly codebase of Clang and advancements in the JIT
infrastructure in LLVM further enable research into different methods for
processing C++ by blurring the boundary between compile time and runtime.
Challenges include incremental compilation and fitting compile/link time
optimizations into a more dynamic environment.
Incremental compilation pipelines process code chunk-by-chunk by building an
ever-growing translation unit. Code is then lowered into the LLVM IR and
subsequently run by the LLVM JIT. Such a pipeline allows creation of
efficient interpreters. The interpreter enables interactive exploration and
makes the C++ language more user friendly. The incremental compilation mode
is used by the interactive C++ interpreter, Cling, initially developed to
enable interactive high-energy physics analysis in a C++ environment. Cling
is now used for interactive development in Jupyter Notebooks
(via xeus-cling), dynamic python bindings (via cppyy) and interactive CUDA
development.
In this talk we dive into the details of implementing incremental
compilation with Clang. We outline a path forward for `Clang-Repl` which is
built with the experience gained in Cling and is now available in mainline
LLVM. We describe how the new ORC JIT infrastructure allows us to naturally
perform static optimizations at runtime, and enables linker voodoo to make
the compiler/interpreter boundaries disappear. We explain the potential of
having a compiler-as-a-service architecture in the context of automatic
language interoperability for Python and beyond.
location: "[LLVM Developers' Meeting 2021](https://llvm.swoogo.com/2021devmtg/)"
date: 2021-11-17
speaker: Vassil Vassilev
id: "InteractiveCppLLVMDev21"
artifacts: |
[Video](https://youtu.be/33ncbIQoa4c),
[Link to slides](/assets/presentations/V_Vassilev-LLVMDev21_InteractiveCpp.pdf)
highlight: 1
- title: "GPU Acceleration of Automatic Differentiation in C++ with Clad"
description: |
Clad enables automatic differentiation (AD) for C++ algorithms through
source-to-source transformation. It is built on the LLVM compiler
infrastructure and implemented as a Clang compiler plugin. Unlike other
tools, Clad manipulates
the high-level code representation (the AST) rather than implementing its
own C++ parser and does not require modifications to existing code bases.
This methodology is both easier to adopt and potentially more performant
than other approaches. Having full access to the Clang compiler's internals
means that Clad is able to follow the high-level semantics of algorithms and
can perform domain-specific optimisations; automatically generate code
(re-targeting C++) on accelerator hardware with appropriate scheduling;
and has a direct connection to the compiler's diagnostics engine, thus
producing precise and expressive diagnostics positioned at desired source
locations.
In this talk, we showcase the above mentioned advantages through examples
and outline Clad's features, applications and support extensions. We
describe the challenges coming from supporting automatic differentiation of
broader C++ and present how Clad can compute derivatives of functions,
member functions, functors and lambda expressions. We show the newly added
support of array differentiation which provides the basis utility for CUDA
support and parallelisation of gradient computation. Moreover, we will demo
different interactive use-cases of Clad, either within a Jupyter environment
as a kernel extension based on xeus-cling, or within a GPU-CPU environment
where the gradient computation can be accelerated through GPU code produced
by Clad and run with the help of the Cling interpreter.
location: "[24th EuroAD Workshop 2021](http://www.autodiff.org/?module=Workshops&submenu=EuroAD%2F24%2Fprogramme)"
date: 2021-11-04
speaker: Ioana Ifrim
id: "CppADCudaEuroAD21"
artifacts: "[Link to slides](/assets/presentations/I_Ifrim-EuroAD21_GPU_AD.pdf)"
highlight: 1
- title: "Interactive C++ for Data Science"
description: |
C++ is used for many numerically intensive applications. A combination of
performance and solid backward compatibility has led to its use for many
research software codes over the past 20 years. Despite its power, C++ is
often seen as difficult to learn and not well suited to rapid application
development. The long edit-compile-run cycle is a large impediment to
exploration and prototyping during development.
Cling has emerged as a recognized capability that enables interactivity,
dynamic interoperability and rapid prototyping capabilities for C++
developers. Cling is an interactive C++ interpreter, built on top of the
Clang and LLVM compiler infrastructure. The interpreter enables interactive
exploration and makes the C++ language more welcoming for research. Cling
supports the full C++ feature set including the use of templates, lambdas,
and virtual inheritance. Cling originated in the field of high-energy
physics, where it facilitates the processing of scientific data. The
interpreter was an essential part of the software tools of the LHC
experimental program and was part of the software used to detect the
gravitational waves of the LIGO experiment. Interactive C++ has proven to be
useful for other communities. The Cling ecosystem includes dynamic bindings
tools to languages including Python, D and Julia (cppyy); C++ in Jupyter
Notebooks (xeus-cling); interactive CUDA; and automatic differentiation on
the fly (clad).
This talk outlines key properties of interactive C++ such as execution
results, entity redefinition, error recovery and code undo. It demonstrates
the capability enabled by an interactive C++ platform in the context of data
science. For example, the use of eval-style programming, C++ in Jupyter
notebooks and CUDA C++. We talk about design and implementation challenges
and go beyond just interpreting C++. We showcase template instantiation on
demand, language interoperability on demand and bridging compiled and
interpreted code. We show how to easily build new capabilities using the
Cling infrastructure through developing an automatic differentiation plugin
for C++ and CUDA.
location: "[CppCon21](https://cppcon.org/program2021/)"
date: 2021-10-27
speaker: Vassil Vassilev
id: "InteractiveCppCppCon21"
artifacts: |
[Video](https://youtu.be/23E0S3miWB0),
[Link to slides](/assets/presentations/V_Vassilev-CppCon21_InteractiveCpp.pdf)
highlight: 1
- title: "Differentiable Programming in C++"
description: |
Mathematical derivatives are vital components of many computing algorithms
including: neural networks, numerical optimization, Bayesian inference,
nonlinear equation solvers, physics simulations, sensitivity analysis, and
nonlinear inverse problems. Derivatives track the rate of change of an
output parameter with respect to an input parameter, such as how much
reducing an individual’s carbon footprint will impact the Earth’s
temperature. Derivatives (and generalizations such as gradients, Jacobians,
Hessians, etc.) allow us to explore the properties of a function and better
describe the underlying process as a whole. In recent years, the use of
gradient-based optimizations such as training neural networks have become
widespread, leading to many languages making differentiation a first-class
citizen.
Derivatives can be computed numerically, but unfortunately the accumulation
of floating-point errors and high computational complexity present several
challenges. These problems become worse with higher-order derivatives and
more parameters to differentiate.
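The instability of numerical differentiation is easy to demonstrate (a
generic finite-difference example, not taken from the talk itself):

```python
def f(x):
    return x * x * x  # exact derivative: 3*x^2

def forward_diff(f, x, h):
    # One-sided finite-difference approximation of f'(x).
    return (f(x + h) - f(x)) / h

exact = 3.0  # f'(1)
# Too large a step suffers truncation error; too small a step amplifies
# floating-point cancellation; only a narrow middle range of step sizes
# gives a usable answer.
err_large = abs(forward_diff(f, 1.0, 1e-1) - exact)
err_tiny = abs(forward_diff(f, 1.0, 1e-13) - exact)
err_mid = abs(forward_diff(f, 1.0, 1e-8) - exact)
assert err_mid < err_large
assert err_mid < err_tiny
```

AD sidesteps this trade-off entirely: it evaluates exact derivative
expressions, so there is no step size to tune.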
Many derivative-based algorithms require gradients, or the computation of
the derivative of an output parameter with respect to many input parameters.
As such, developing techniques for computing gradients that are scalable in
the number of input parameters is crucial for the performance of such
algorithms. This talk describes a broad set of domains where scalable
derivative computations are essential. We give an overview of the major
techniques for computing derivatives, and finally, we introduce the flagship
of computational differential calculus -- algorithmic (also known as
automatic) differentiation (AD). AD makes clever use of the ‘nice’
mathematical properties of the chain rule and generative programming to
solve the scalability issue by making the cost depend on the number of
output variables instead of the number of input variables. AD provides mechanisms to
augment the regular function computation with instructions calculating its
derivatives.
Differentiable programming is a programming paradigm in which the programs
can be differentiated throughout, usually via automatic differentiation.
This talk introduces the differentiable programming paradigm in terms of
C++. It shows its applications in science as applicable for data science
and industry. The authors give an overview of the existing tools in the area
and the two common implementation approaches -- template metaprogramming and
custom parsers. We demonstrate implementation pros and cons and propose
compiler toolchain-based implementation either in clang AST or LLVM IR. We
would like to briefly outline our current efforts in standardization of that
feature.
location: "[CppCon21](https://cppcon.org/program2021/)"
date: 2021-10-26
speaker: "William Moses, Vassil Vassilev"
id: DifferentiableProgrammingInCppCppCon21
artifacts: |
[Video](https://youtu.be/1QQj1mAV-eY),
[Link to slides](/assets/presentations/W_Moses_V_Vassilev-CppCon21_DifferentiableProgrammingInCpp.pdf)
highlight: 1
- title: "RooFit-AD Plan of Work"
description:
location: "Weekly Compiler Research Meetings"
date: 2023-02-01
speaker: Garima Singh
artifacts: "[Link to slides](/assets/presentations/RooFitADPlanofWork_01_02_23.pdf)"
- title: "A numba extension for cppyy / PyROOT"
description:
location: "[Parallelism, Performance and Programming model meeting](https://indico.cern.ch/e/PPP140)"
date: 2022-09-01
speaker: Baidyanath Kundu
artifacts:
- "[Slides](https://indico.cern.ch/event/1196174/contributions/5028203/attachments/2501253/4296778/PPP.pdf), "
- "[Notebook](https://indico.cern.ch/event/1196174/contributions/5028203/attachments/2501253/4296735/PPP.ipynb)"
- title: "Add Numerical Differentiation Support To Clad"
description:
location: "[IRIS-HEP GSoC presentations meeting](https://indico.cern.ch/event/1066812/)"
date: 2021-09-01
speaker: Garima Singh
artifacts: "[Link to slides](https://indico.cern.ch/event/1066812/contributions/4495279/attachments/2301763/3915404/Numerical%20Differentiaition%20.pdf)"
- title: "Floating point error estimation using Clad -- Final Report"
description:
location: "[IRIS-HEP topical meeting](https://indico.cern.ch/event/1040761/)"
date: 2021-06-21
speaker: Garima Singh
artifacts: "[Link to slides](https://indico.cern.ch/event/1040761/contributions/4371613/attachments/2268248/3851583/floating_point_error_est.pdf)"
- title: "Utilise Second Order Derivatives from Clad in ROOT"
description:
location: "[IRIS-HEP GSoC presentations meeting](https://indico.cern.ch/event/1066812/)"
date: 2021-09-01
speaker: Baidyanath Kundu
artifacts: "[Link to slides](https://indico.cern.ch/event/1066812/contributions/4509414/attachments/2301766/3915408/Utilize%20second%20order%20derivatives%20from%20Clad%20in%20ROOT.pdf)"
- title: "Add support for differentiating functors"
description:
location: "[IRIS-HEP GSoC presentations meeting](https://indico.cern.ch/event/1066812/)"
date: 2021-09-01
speaker: Parth Arora
artifacts:
- "[presentation](https://indico.cern.ch/event/1066812/contributions/4485920/attachments/2301761/3915402/IRIS-HEP-Add-support-for-differentiating-functors-presentation.pdf), "
- "[Poster](/assets/presentations/add-support-for-differentiating-functors-poster.pdf)"
- title: "GPU Acceleration of Automatic Differentiation in C++ with Clad"
description:
location: "[IRIS-HEP topical meeting](https://indico.cern.ch/event/1040761/)"
date: 2021-06-21
speaker: Ioana Ifrim
artifacts: "[Link to slides](https://indico.cern.ch/event/1040761/contributions/4400258/attachments/2268253/3851595/Ioana%20Ifrim%20-%20GPU%20Acceleration%20of%20Automatic%20Differentiation%20in%20C%2B%2B%20with%20Clad.pdf)"
- title: "Floating point error estimation using Clad -- Project Roadmap"
location: "Onboarding meetup"
speaker: Garima Singh
date: 2020-12-15
description:
artifacts:
- "[pdf](/assets/presentations/ErrorEstimationWithClad_15_12_2020.pdf), "
- "[pptx](/assets/presentations/ErrorEstimationWithClad_15_12_2020.pptx)"
- title: "Adding CUDA® Support to Cling: JIT Compile to GPUs"
description:
location: "[2020 LLVM workshop](https://llvm.org/devmtg/2020-09/)"
speaker: Simeon Ehrig
date: 2020-10-08
artifacts: "[Link to slides and video](https://zenodo.org/record/4021877)"
- title: "Error estimates of floating-point numbers and Jacobian matrix computation in Clad"
location: "[2020 LLVM workshop](https://llvm.org/devmtg/2020-09/)"
speaker: Vassil Vassilev
date: 2020-10-07
artifacts: "[Link to poster](/assets/presentations/LLVM2020_Clad.pdf)"
- title: "Incremental Compilation Support in Clang"
description:
location: "[2020 LLVM workshop](https://llvm.org/devmtg/2020-09/)"
date: 2020-10-07
speaker: Vassil Vassilev
artifacts: "[Link to poster](/assets/presentations/LLVM2020_CaaS.pdf)"
- title: "Modernizing Boost in CMSSW"
description:
location: "[IRIS-HEP topical meeting](https://indico.cern.ch/event/945364)"
date: 2020-09-02
speaker: Lukas Camolezi
artifacts: "[Link to slides](https://indico.cern.ch/event/945364/contributions/3992254/attachments/2095731/3522488/cmssw-finalpresentation.pdf)"
- title: "Enabling C++ Modules for ROOT on Windows"
description:
location: "[IRIS-HEP topical meeting](https://indico.cern.ch/event/950229/)"
date: 2020-09-08
speaker: Vaibhav Garg
artifacts: "[Link to slides](/assets/presentations/WinCXXModules_31_08_2020.pdf)"
- title: "Clad -- Automatic Differentiation in C++ and Clang"
description:
location: "[23rd Euro AD workshop](http://www.autodiff.org/?module=Workshops&submenu=EuroAD/23/main)"
date: 2020-08-11
speaker: Vassil Vassilev
artifacts: "[Link to talk](http://www.autodiff.org/Docs/euroad/23rd%20EuroAd%20Workshop%20-%20Vassil%20Vassilev%20-%20Clad%20--%20Automatic%20Differentiation%20in%20C++%20and%20Clang.pdf)"
- title: "CaaS poster for 2020 NSF CSSI PI meeting"
description: Our compiler as a service project contribution to the 2020 NSF CSSI PI meeting.
location: "[2020 NSF CSSI meeting, Seattle, WA](https://cssi-pi-community.github.io/2020-meeting)"
date: 2020-02-13
speaker: David Lange
artifacts: "[Link to poster](/assets/presentations/CSSI2020_poster.pdf)"
- title: "CaaS slide for 2020 NSF CSSI PI meeting"
description: Our compiler as a service project contribution to the 2020 NSF CSSI PI meeting.
location: "[2020 NSF CSSI meeting, Seattle, WA](https://cssi-pi-community.github.io/2020-meeting)"
date: 2020-02-13
speaker: David Lange
artifacts: "[Link to slides](/assets/presentations/CSSI2020_slide.pdf)"
- title: "Automatic Differentiation in C++"
description:
location: Prague 2020 ISO C++ WG21 meeting
date: 2020-02-10
speaker: Vassil Vassilev and Marco Foco (NVIDIA)
artifacts: "[Link to slides](/assets/presentations/CladInROOT_15_02_2020.pdf)"
- title: "C++ Modules in ROOT and Beyond"
description:
location: "[2019 CHEP International Conference](https://chep2019.org)"
date: 2019-11-07
speaker: Oksana Shadura
artifacts: "[Link to slides](https://indico.cern.ch/event/773049/contributions/3473264/attachments/1937517/3215659/C_Modules_in_ROOT_and_Beyond4.pdf)"
- title: "Clad: the automatic differentiation plugin for Clang"
description:
location: "[IRIS-HEP topical meeting](https://indico.cern.ch/event/815976/)"
date: 2019-05-29
speaker: Aleksandr Efremov
artifacts: "[Link to slides](https://indico.cern.ch/event/815976/contributions/3405951/attachments/1853315/3043286/CladIRIS.pdf)"
- title: "Future of ROOT runtime C++ modules"
description:
location: "[ROOT Users Workshop (Sarajevo)](https://indico.cern.ch/event/697389/)"
date: 2018-09-12
speaker: Yuka Takahashi
artifacts: "[Link to slides](https://indico.cern.ch/event/697389/contributions/3062026/attachments/1714046/2764895/Future_of_ROOT_runtime_C_modules.pdf)"
- title: "Optimizing Frameworks Performance Using C++ Modules-Aware ROOT"
description: