Commit 5545212

aaronj0 authored and vgvassilev committed

Add latest projects from GSoC 25

1 parent 5ce22e2 commit 5545212

1 file changed: +214 -2 lines changed

Diff for: _data/openprojectlist.yml
@@ -1,3 +1,214 @@
+- name: "Agent-Based Simulation of CAR-T Cell Therapy Using BioDynaMo"
+  description: |
+    Chimeric Antigen Receptor T-cell (CAR-T) therapy has revolutionized
+    cancer treatment by harnessing the immune system to target and
+    destroy tumor cells. While CAR-T has demonstrated success in blood
+    cancers, its effectiveness in solid tumors remains limited due to
+    challenges such as poor tumor infiltration, immune suppression,
+    and T-cell exhaustion. To improve therapy outcomes, computational
+    modeling is essential for optimizing treatment parameters, predicting
+    failures, and testing novel interventions. However, existing models
+    of CAR-T behavior are often overly simplistic or computationally
+    expensive, making them impractical for large-scale simulations.
+
+    This project aims to develop a scalable agent-based simulation of CAR-T
+    therapy using BioDynaMo, an open-source high-performance biological
+    simulation platform. By modeling T-cell migration, tumor engagement,
+    and microenvironmental factors, we will investigate key treatment
+    variables such as dosage, administration timing, and combination
+    therapies. The simulation will allow researchers to explore how tumor
+    microenvironment suppression (e.g., regulatory T-cells, hypoxia,
+    immunosuppressive cytokines) affects CAR-T efficacy and whether
+    strategies such as checkpoint inhibitors or cytokine support can
+    improve outcomes.
+
+    The final deliverable will be a fully documented, reproducible BioDynaMo
+    simulation, along with analysis tools for visualizing treatment dynamics.
+    The model will provide insights into optimal CAR-T cell dosing, tumor
+    penetration efficiency, and factors influencing therapy resistance. This
+    project will serve as a foundation for in silico testing of immunotherapies,
+    reducing the need for costly and time-consuming laboratory experiments
+    while accelerating the development of more effective cancer treatments.
+
+  tasks: |
+    * Expected plan of work:
+
+      - Phase 1: Initial Setup & Simple T-cell Dynamics
+      - Phase 2: Advanced CAR-T Cell Behavior & Tumor Interaction
+      - Phase 3: Integration of Immunosuppressive Factors & Data Visualization
+
+    * Expected deliverables:
+
+      - A fully documented BioDynaMo simulation of CAR-T therapy.
+      - Analysis scripts for visualizing tumor reduction and CAR-T efficacy.
+      - Performance benchmarks comparing different treatment strategies.
+      - A research-style report summarizing findings.
+
+
+- name: "Enable GPU support and Python Interoperability via a Plugin System"
+  description: |
+    Xeus-Cpp integrates [Clang-Repl](https://clang.llvm.org/docs/ClangRepl.html)
+    with the Xeus protocol via CppInterOp, providing a powerful platform for
+    C++ development within Jupyter Notebooks.
+
+    This project aims to introduce a plugin system for magic commands
+    (cell, line, etc.), enabling a more modular and maintainable approach
+    to extending Xeus-Cpp. Traditionally, magic commands introduce additional
+    code and dependencies directly into the Xeus-Cpp kernel, increasing
+    its complexity and maintenance burden. By offloading this functionality
+    to a dedicated plugin library, we can keep the core kernel minimal
+    while ensuring extensibility. This approach allows new magic commands
+    to be developed, packaged, and deployed independently, eliminating the
+    need to rebuild and release Xeus-Cpp for each new addition.
+
+    Initial groundwork has already been laid with the Xplugin library,
+    and this project will build upon that foundation. The goal is to clearly
+    define magic command compatibility across different platforms while
+    ensuring seamless integration. A key objective is to reimplement
+    existing features, such as the LLM cell magic and the in-development
+    Python magic, as plugins. This will not only improve modularity within
+    Xeus-Cpp but also enable these features to be used in other Jupyter
+    kernels.
+
+    As an extended goal, we aim to develop a new plugin for GPU execution,
+    leveraging CUDA or OpenMP to support high-performance computing workflows
+    within Jupyter.
+
+  tasks: |
+    * Move the currently implemented magics and reframe them using xplugin
+    * Complete the ongoing work on the Python interoperability magic
+    * Implement a test suite for the plugins
+    * Extended: enable execution on GPU using CUDA or OpenMP
+    * Optional: extend the magics for the wasm use case (xeus-cpp-lite)
+    * Present the work at the relevant meetings and conferences
+
+- name: "Integrate Clad to PyTorch and compare the gradient execution times"
+  description: |
+    PyTorch is a popular machine learning framework that includes its own
+    automatic differentiation engine, while Clad is a Clang plugin for
+    automatic differentiation that performs source-to-source transformation
+    to generate functions capable of computing derivatives at compile time.
+
+    This project aims to integrate Clad-generated functions into PyTorch
+    using its C++ API and expose them to a Python workflow. The goal is
+    to compare the execution times of gradients computed by Clad with those
+    computed by PyTorch's native autograd system. Special attention will be
+    given to CUDA-enabled gradient computations, as PyTorch also offers GPU
+    acceleration capabilities.
+
+  tasks: |
+    * Incorporate Clad's API components (such as `clad::array` and `clad::tape`)
+      into PyTorch using its C++ API
+    * Pass Clad-generated derivative functions to PyTorch and expose them to Python
+    * Perform benchmarks comparing the execution times and performance of
+      Clad-derived gradients versus PyTorch's autograd
+    * Automate the integration process
+    * Thoroughly document the integration process and the benchmark results,
+      and identify potential bottlenecks in Clad's execution
+    * Present the work at the relevant meetings and conferences.
+
+- name: "Support usage of Thrust API in Clad"
+  description: |
+    The rise of ML has shed light on the power of GPUs, and researchers are
+    looking for ways to incorporate them in their projects as a lightweight
+    parallelization method. Consequently, general-purpose GPU programming is
+    becoming a very popular way to speed up execution time.
+
+    Clad is a Clang plugin for automatic differentiation that performs
+    source-to-source transformation and produces a function capable of
+    computing the derivatives of a given function at compile time. This
+    project aims to enhance Clad by adding support for Thrust, a parallel
+    algorithms library designed for GPUs and other accelerators. By
+    supporting Thrust, Clad will be able to differentiate algorithms that
+    rely on Thrust's parallel computing primitives, unlocking new
+    possibilities for GPU-based machine learning, scientific computing,
+    and numerical optimization.
+
+  tasks: |
+    * Research and decide on the most valuable Thrust functions to support in Clad
+    * Create pushforward and pullback functions for these Thrust functions
+    * Write tests that cover the additions
+    * Include demos of using Clad on open-source code examples that call Thrust functions
+    * Write documentation on which Thrust functions are supported in Clad
+    * Present the work at the relevant meetings and conferences.
+
+- name: "Enable automatic differentiation of C++ STL concurrency primitives in Clad"
+  description: |
+    Clad is an automatic differentiation (AD) Clang plugin for C++. Given the
+    C++ source code of a mathematical function, it can automatically generate
+    C++ code for computing derivatives of the function. This project focuses
+    on enabling automatic differentiation of code that utilises C++
+    concurrency features such as `std::thread`, `std::mutex`, atomic
+    operations, and more. This will allow users to fully utilize their CPU
+    resources.
+
+  tasks: |
+    * Explore C++ concurrency primitives and prepare a report detailing the
+      associated challenges and the features that can feasibly be supported
+      within the given timeframe.
+    * Add concurrency primitives support in Clad's forward-mode automatic differentiation.
+    * Add concurrency primitives support in Clad's reverse-mode automatic differentiation.
+    * Add proper tests and documentation.
+    * Present the work at the relevant meetings and conferences.
+
+- name: "Implementing Debugging Support in Xeus-Cpp"
+  description: |
+    xeus-cpp is an interactive execution environment for C++ in Jupyter
+    notebooks, built on the Clang-Repl C++ interpreter provided by
+    [CppInterOp](https://github.com/compiler-research/CppInterOp/). While
+    xeus-cpp enables a seamless workflow for running C++ code interactively,
+    the lack of an integrated debugging experience remains a gap, especially
+    when dealing with code that is dynamically compiled and executed through
+    LLVM's JIT (Just-In-Time) infrastructure.
+
+    Jupyter's debugging system follows the Debug Adapter Protocol (DAP),
+    enabling seamless integration of debuggers into interactive kernels.
+    Existing Jupyter kernels, such as the IPython and xeus-python kernels,
+    have successfully implemented debugging workflows that support
+    breakpoints, variable inspection, and execution control, even in
+    dynamically executed environments. These implementations address
+    challenges such as symbol resolution and source mapping for dynamically
+    generated code, ensuring that debugging within Jupyter remains intuitive
+    and user-friendly.
+
+    However, debugging C++ inside an interactive environment presents unique
+    challenges, particularly due to Clang-Repl's use of LLVM's ORC JIT to
+    compile and execute code dynamically. To integrate debugging into
+    xeus-cpp, the project will explore existing solutions for DAP
+    implementations, such as `lldb_dap`, and debuggers such as lldb that can
+    interface with Jupyter while effectively supporting the execution model
+    of Clang-Repl.
+
+  tasks: |
+    * Establish seamless debugging integration, with reliable interactions
+      between xeus-cpp, a Debug Adapter Protocol (DAP) implementation, and
+      a debugger.
+    * Implement a testing framework through `xeus-zmq` to thoroughly test
+      the debugger. This can be inspired by an existing implementation
+      in `xeus-python`.
+    * Present the work at the relevant meetings and conferences.
+
+
+- name: "Interactive Differential Debugging - Intelligent Auto-Stepping and Tab-Completion"
+  description: |
+    Differential debugging is a time-consuming task that is not well supported
+    by existing tools. Existing state-of-the-art tools do not consider a
+    baseline (working) version while debugging regressions in complex systems,
+    often forcing developers to carry out manually a task that could be
+    automated.
+
+    The differential debugging technique analyzes a regressed system and
+    identifies the cause of unexpected behaviors by comparing it to a previous
+    version of the same system. The IDD tool inspects two versions of the
+    executable: a baseline and a regressed version. The interactive debugging
+    session runs both executables side by side, allowing users to inspect and
+    compare various internal states.
+
+    This project aims to implement intelligent stepping (debugging) and tab
+    completion of commands. IDD should be able to execute until a stack frame
+    or variable diverges between the two versions of the system, then drop to
+    the debugger. This may be achieved by introducing new IDD-specific
+    commands. IDD should be able to tab-complete the underlying GDB/LLDB
+    commands. The contributor is also expected to set up the necessary CI
+    infrastructure to automate the testing process of IDD.
+
+
+  tasks: |
+    * Enable stream capture.
+    * Enable IDD-specific commands to execute until a stack frame or variable value diverges.
+    * Enable tab completion of commands.
+    * Set up CI infrastructure to automate testing of IDD.
+    * Present the work at the relevant meetings and conferences.
+
 - name: "Using ROOT in the field of genome sequencing"
   description: |
     [ROOT](https://root.cern/) is a framework for data processing,
@@ -18,8 +229,9 @@
     the requirements of the field.
 
   tasks: |
-    * Reproduce the results from previous comparisons against the ROOT master
-    * Investigate changing the compression strategies
+    * Reproduce the results based on previous comparisons against ROOT master
+    * Investigate and compare the latest compression strategies used by [Samtools](https://www.htslib.org/) for conversions to BAM with RAM (ROOT Alignment Maps).
+    * Explore ROOT's [RNTuple](https://root.cern/doc/v622/md_tree_ntuple_v7_doc_README.html) format to efficiently store RAM maps in place of the previously used `TTree`.
     * Investigate different ROOT file splitting techniques
     * Produce a comparison report
 