Add gsoc contributor info and intro blog #303

Merged: 9 commits, May 24, 2025
6 changes: 6 additions & 0 deletions .github/actions/spelling/allow/names.txt
@@ -2,6 +2,7 @@ Abdulrasool
Abdelrhman
Abhigyan
Abhinav
Aditi
Alexandru
Alja
Anandh
@@ -32,6 +33,7 @@ Ilieva
Isemann
JLange
Jomy
Joshi
Jurgaityt
Kyiv
LBNL
@@ -43,6 +45,7 @@ Mabille
Manipal
Matevz
Mihaly
Milind
Militaru
Mircho
Mozil
@@ -99,6 +102,7 @@ abhi
abhinav
acherjan
acherjee
aditi
aditya
adityapand
adityapandeycn
@@ -142,6 +146,7 @@ isaacmoralessantana
izvekov
jacklqiu
jeaye
joshi
junaire
kausik
kchristin
@@ -163,6 +168,7 @@ mfoco
mihail
mihailmihov
mihov
milind
mizvekov
mozil
mvassilev
3 changes: 3 additions & 0 deletions .github/actions/spelling/allow/terms.txt
@@ -1,4 +1,5 @@
AARCH
AIML
BGZF
Caa
CINT
@@ -16,9 +17,11 @@ JIT'd
Jacobians
LLMs
LLVM
LULESH
NVIDIA
NVMe
PTX
SBO
Slib
Softsusy
Superbuilds
25 changes: 25 additions & 0 deletions _data/contributors.yml
@@ -405,6 +405,31 @@
proposal: /assets/docs/Abhinav_Kumar_Proposal_GSoC_2025.pdf
mentors: Anutosh Bhat, Vipul Cariappa, Aaron Jomy, Vassil Vassilev


- name: Aditi Milind Joshi
photo: Aditi.jpeg
info: "Google Summer of Code 2025 Contributor"
email: [email protected]
github: "https://github.com/aditimjoshi"
linkedin: "https://www.linkedin.com/in/aditi-joshi-149280309/"
education: "B.Tech in Computer Science and Engineering (AIML), Manipal Institute of Technology, Manipal, India"
active: 1
projects:
- title: "Implement and improve an efficient, layered tape with prefetching capabilities"
status: Ongoing
description: |
Automatic Differentiation (AD) is a computational technique that enables
efficient and precise evaluation of derivatives for functions expressed in code.
Clad is a Clang-based automatic differentiation tool that transforms C++ source
code to compute derivatives efficiently. A crucial component for AD in Clad is the
tape, a stack-like data structure that stores intermediate values for reverse mode AD.
Benchmarking showed that tape operations in the current implementation were
significantly slowing down the program. This project aims to optimize and generalize
the Clad tape to improve its efficiency, introduce multilayer storage, enhance thread safety,
and enable CPU-GPU transfer.
proposal: /assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf
mentors: Aaron Jomy, David Lange, Vassil Vassilev

- name: "This could be you!"
photo: rock.jpg
info: See <a href="/careers">openings</a> for more info
10 changes: 10 additions & 0 deletions _pages/team/aditi-milind-joshi.md
@@ -0,0 +1,10 @@
---
title: "Compiler Research - Team - Aditi Milind Joshi"
layout: gridlay
excerpt: "Compiler Research: Team members"
sitemap: false
permalink: /team/AditiMilindJoshi
email: [email protected]
---

{% include team-profile.html %}
@@ -0,0 +1,67 @@
---
title: "Implement and improve an efficient, layered tape with prefetching capabilities"
layout: post
excerpt: "A GSoC 2025 project focusing on optimizing Clad's tape data structure for reverse-mode automatic differentiation, introducing slab-based memory, thread safety, multilayer storage, and future support for CPU-GPU transfers."
sitemap: true
author: Aditi Milind Joshi
permalink: blogs/gsoc25_aditi_introduction_blog/
banner_image: /images/blog/gsoc-clad-banner.png
date: 2025-05-22
tags: gsoc clad clang c++
---

### Introduction

I'm Aditi Joshi, a third-year B.Tech student in Computer Science and Engineering (AIML) at Manipal Institute of Technology, Manipal, India. This summer, I will be contributing to Clad as part of Google Summer of Code 2025, working on the project "Implement and improve an efficient, layered tape with prefetching capabilities."

**Mentors:** Aaron Jomy, David Lange, Vassil Vassilev

### Briefly about Automatic Differentiation and Clad

Automatic Differentiation (AD) is a computational technique that enables efficient and precise evaluation of derivatives for functions expressed in code. Unlike numerical differentiation, which suffers from approximation errors, or symbolic differentiation, which can be computationally expensive, AD systematically applies the chain rule to compute gradients with minimal overhead.

Clad is a Clang-based automatic differentiation tool that transforms C++ source code to compute derivatives efficiently. By leveraging Clang’s compiler infrastructure, Clad performs source code transformations to generate derivative code for given functions, enabling users to compute gradients without manually rewriting their implementations. It supports both forward-mode and reverse-mode differentiation, making it useful for a range of applications.

### Understanding the Problem

In reverse-mode automatic differentiation (AD), we compute gradients efficiently for functions with many inputs and a single output. To do this, we need to store intermediate results during the forward pass for use during the backward (gradient) pass. This is where the tape comes in — a stack-like data structure that records the order of operations and their intermediate values.

Currently, Clad uses a monolithic memory buffer as the tape. While this approach is lightweight for small problems, it becomes inefficient and non-scalable for larger applications or parallel workloads. Frequent memory reallocations, lack of thread safety, and the absence of support for offloading make it a limiting factor in Clad’s usability in complex scenarios.

### Project Goals

The aim of this project is to design a more efficient, scalable, and flexible tape. Some of the key enhancements include:

- Replacing dynamic reallocation with a slab-based memory structure to minimize copying overhead.
- Introducing Small Buffer Optimization (SBO) for short-lived tapes.
- Making the tape thread-safe by using locks or atomic operations.
- Implementing multi-layer storage, where parts of the tape are offloaded to disk to manage memory better.
- (Stretch Goal) Supporting CPU-GPU memory transfers for future heterogeneous computing use cases.
- (Stretch Goal) Introducing checkpointing for optimal memory-computation trade-offs.

### Implementation Plan

The first phase of the project will focus on redesigning Clad’s current tape structure to use a slab-based memory model instead of a single contiguous buffer. This change will reduce memory reallocation overhead by linking fixed-size slabs dynamically as the tape grows. To improve performance in smaller workloads, I’ll also implement Small Buffer Optimization (SBO) — a lightweight buffer embedded directly in the tape object that avoids heap allocation for short-lived tapes. These improvements are aimed at making the tape more scalable, efficient, and cache-friendly.

Once the core memory model is in place, the next step will be to add thread safety to enable parallel usage. The current tape assumes single-threaded execution, which limits its applicability in multi-threaded scientific workflows. I’ll introduce synchronization mechanisms such as std::mutex to guard access to tape operations and ensure correctness in concurrent scenarios. Following this, I will implement a multi-layered tape system that offloads older tape entries to disk when memory usage exceeds a certain threshold — similar to LRU-style paging — enabling Clad to handle much larger computation graphs.

As stretch goals, I plan to explore CPU-GPU memory transfer support for the slabbed tape and introduce basic checkpointing functionality to recompute intermediate values instead of storing them all, trading memory usage for computational efficiency. Throughout the project, I’ll use benchmark applications like LULESH to evaluate the performance impact of each feature and ensure that the redesigned tape integrates cleanly into Clad’s AD workflow. The final stages will focus on extensive testing, documentation, and contributing the changes back to the main repository.

### Why I Chose This Project

My interest in AD started when I was building a neural network from scratch using CUDA C++. That led me to Clad, where I saw the potential of compiler-assisted differentiation. I’ve since contributed to the Clad repo by investigating issues and raising pull requests, and I’m looking forward to pushing the limits of what Clad’s tape can do.

This project aligns perfectly with my interests in memory optimization, compiler design, and parallel computing. I believe the enhancements we’re building will make Clad significantly more powerful for real-world workloads.

### Looking Ahead

By the end of the summer, I hope to deliver a robust, feature-rich tape that enhances Clad’s reverse-mode AD performance across CPU and GPU environments. I’m excited to contribute to the scientific computing community and gain deeper insights into the world of compilers.

---

### Related Links

- [Clad Repository](https://github.com/vgvassilev/clad)
- [Project Description](https://hepsoftwarefoundation.org/gsoc/2025/proposal_Clad-ImproveTape.html)
- [GSoC Project Proposal](/assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf)
- [My GitHub Profile](https://github.com/aditimjoshi)
Binary file added assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf
Binary file not shown.
Binary file added images/blog/gsoc-clad-banner.png
Binary file added images/team/Aditi.jpeg