Commit 026e89f: Add support for Helium and Glm (#1156)

* Add support for Helium and Glm
* Add unit tests
* Remove duplicate definitions

Parent: 16ff98d

6 files changed: +125, -0 lines

README.md (+2)

```diff
@@ -328,6 +328,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **Florence2** (from Microsoft) released with the paper [Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks](https://arxiv.org/abs/2311.06242) by Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan.
 1. **[Gemma](https://huggingface.co/docs/transformers/main/model_doc/gemma)** (from Google) released with the paper [Gemma: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/gemma-open-models/) by the Gemma Google team.
 1. **[Gemma2](https://huggingface.co/docs/transformers/main/model_doc/gemma2)** (from Google) released with the paper [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by the Gemma Google team.
+1. **[GLM](https://huggingface.co/docs/transformers/main/model_doc/glm)** (from the GLM Team, THUDM & ZhipuAI) released with the paper [ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools](https://arxiv.org/abs/2406.12793v2) by Team GLM: Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Dan Zhang, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Jingyu Sun, Juanzi Li, Lei Zhao, Lindong Wu, Lucen Zhong, Mingdao Liu, Minlie Huang, Peng Zhang, Qinkai Zheng, Rui Lu, Shuaiqi Duan, Shudan Zhang, Shulin Cao, Shuxun Yang, Weng Lam Tam, Wenyi Zhao, Xiao Liu, Xiao Xia, Xiaohan Zhang, Xiaotao Gu, Xin Lv, Xinghan Liu, Xinyi Liu, Xinyue Yang, Xixuan Song, Xunkai Zhang, Yifan An, Yifan Xu, Yilin Niu, Yuantao Yang, Yueyan Li, Yushi Bai, Yuxiao Dong, Zehan Qi, Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, Zihan Wang.
 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
@@ -337,6 +338,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **[Granite](https://huggingface.co/docs/transformers/main/model_doc/granite)** (from IBM) released with the paper [Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler](https://arxiv.org/abs/2408.13359) by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox, Rameswar Panda.
 1. **[Grounding DINO](https://huggingface.co/docs/transformers/model_doc/grounding-dino)** (from IDEA-Research) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
+1. **[Helium](https://huggingface.co/docs/transformers/main/model_doc/helium)** (from the Kyutai Team) released with the blog post [Announcing Helium-1 Preview](https://kyutai.org/2025/01/13/helium.html) by the Kyutai Team.
 1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
 1. **[Hiera](https://huggingface.co/docs/transformers/model_doc/hiera)** (from Meta) released with the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/pdf/2306.00989) by Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer.
 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
```

docs/snippets/6_supported-models.snippet (+2)

```diff
@@ -43,6 +43,7 @@
 1. **Florence2** (from Microsoft) released with the paper [Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks](https://arxiv.org/abs/2311.06242) by Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan.
 1. **[Gemma](https://huggingface.co/docs/transformers/main/model_doc/gemma)** (from Google) released with the paper [Gemma: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/gemma-open-models/) by the Gemma Google team.
 1. **[Gemma2](https://huggingface.co/docs/transformers/main/model_doc/gemma2)** (from Google) released with the paper [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by the Gemma Google team.
+1. **[GLM](https://huggingface.co/docs/transformers/main/model_doc/glm)** (from the GLM Team, THUDM & ZhipuAI) released with the paper [ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools](https://arxiv.org/abs/2406.12793v2) by Team GLM: Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Dan Zhang, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Jingyu Sun, Juanzi Li, Lei Zhao, Lindong Wu, Lucen Zhong, Mingdao Liu, Minlie Huang, Peng Zhang, Qinkai Zheng, Rui Lu, Shuaiqi Duan, Shudan Zhang, Shulin Cao, Shuxun Yang, Weng Lam Tam, Wenyi Zhao, Xiao Liu, Xiao Xia, Xiaohan Zhang, Xiaotao Gu, Xin Lv, Xinghan Liu, Xinyi Liu, Xinyue Yang, Xixuan Song, Xunkai Zhang, Yifan An, Yifan Xu, Yilin Niu, Yuantao Yang, Yueyan Li, Yushi Bai, Yuxiao Dong, Zehan Qi, Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, Zihan Wang.
 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
@@ -52,6 +53,7 @@
 1. **[Granite](https://huggingface.co/docs/transformers/main/model_doc/granite)** (from IBM) released with the paper [Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler](https://arxiv.org/abs/2408.13359) by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox, Rameswar Panda.
 1. **[Grounding DINO](https://huggingface.co/docs/transformers/model_doc/grounding-dino)** (from IDEA-Research) released with the paper [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang.
 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
+1. **[Helium](https://huggingface.co/docs/transformers/main/model_doc/helium)** (from the Kyutai Team) released with the blog post [Announcing Helium-1 Preview](https://kyutai.org/2025/01/13/helium.html) by the Kyutai Team.
 1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
 1. **[Hiera](https://huggingface.co/docs/transformers/model_doc/hiera)** (from Meta) released with the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/pdf/2306.00989) by Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer.
 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
```

src/configs.js (+2)

```diff
@@ -124,6 +124,8 @@ function getNormalizedConfig(config) {
             break;
         case 'gemma':
         case 'gemma2':
+        case 'glm':
+        case 'helium':
             mapping['num_heads'] = 'num_key_value_heads';
             mapping['num_layers'] = 'num_hidden_layers';
             mapping['dim_kv'] = 'head_dim';
```
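
Both new architectures reuse the Gemma-style key mapping, so the normalized config exposes the same attention-geometry fields the generation code needs to size the past key/value cache. A minimal sketch of the effect, not the library's actual implementation (the config values below are illustrative, not taken from real checkpoints):

```js
// Sketch of what the 'glm'/'helium' cases above accomplish: map
// model-specific config keys onto the generic names used internally.
function normalizeDecoderConfig(config) {
  return {
    num_heads: config.num_key_value_heads, // KV heads (grouped-query attention)
    num_layers: config.num_hidden_layers,  // number of transformer blocks
    dim_kv: config.head_dim,               // per-head key/value dimension
  };
}

// Toy GLM-style config (illustrative values only).
const normalized = normalizeDecoderConfig({
  num_key_value_heads: 2,
  num_hidden_layers: 6,
  head_dim: 64,
});
console.log(normalized); // { num_heads: 2, num_layers: 6, dim_kv: 64 }
```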

src/models.js (+17)

```diff
@@ -4285,6 +4285,19 @@ export class LlamaModel extends LlamaPreTrainedModel { }
 export class LlamaForCausalLM extends LlamaPreTrainedModel { }
 //////////////////////////////////////////////////
 
+//////////////////////////////////////////////////
+// Helium models
+export class HeliumPreTrainedModel extends PreTrainedModel { }
+export class HeliumModel extends HeliumPreTrainedModel { }
+export class HeliumForCausalLM extends HeliumPreTrainedModel { }
+//////////////////////////////////////////////////
+
+//////////////////////////////////////////////////
+// Glm models
+export class GlmPreTrainedModel extends PreTrainedModel { }
+export class GlmModel extends GlmPreTrainedModel { }
+export class GlmForCausalLM extends GlmPreTrainedModel { }
+//////////////////////////////////////////////////
 
 //////////////////////////////////////////////////
 // EXAONE models
@@ -7139,6 +7152,8 @@ const MODEL_MAPPING_NAMES_DECODER_ONLY = new Map([
     ['cohere', ['CohereModel', CohereModel]],
     ['gemma', ['GemmaModel', GemmaModel]],
     ['gemma2', ['Gemma2Model', Gemma2Model]],
+    ['helium', ['HeliumModel', HeliumModel]],
+    ['glm', ['GlmModel', GlmModel]],
     ['openelm', ['OpenELMModel', OpenELMModel]],
     ['qwen2', ['Qwen2Model', Qwen2Model]],
     ['phi', ['PhiModel', PhiModel]],
@@ -7235,6 +7250,8 @@ const MODEL_FOR_CAUSAL_LM_MAPPING_NAMES = new Map([
     ['cohere', ['CohereForCausalLM', CohereForCausalLM]],
     ['gemma', ['GemmaForCausalLM', GemmaForCausalLM]],
     ['gemma2', ['Gemma2ForCausalLM', Gemma2ForCausalLM]],
+    ['helium', ['HeliumForCausalLM', HeliumForCausalLM]],
+    ['glm', ['GlmForCausalLM', GlmForCausalLM]],
     ['openelm', ['OpenELMForCausalLM', OpenELMForCausalLM]],
     ['qwen2', ['Qwen2ForCausalLM', Qwen2ForCausalLM]],
     ['phi', ['PhiForCausalLM', PhiForCausalLM]],
```
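
Registering the classes in both maps makes the new architectures reachable through the same loading and generation API used for the other decoder-only models. A minimal usage sketch, assuming an ES-module context with top-level await (the checkpoint id is the tiny-random test model from the unit tests below; a real GLM ONNX export would be substituted in practice):

```js
import { PreTrainedTokenizer, GlmForCausalLM } from "@huggingface/transformers";

// Tiny test checkpoint from the suites below; swap in a real GLM export in practice.
const model_id = "hf-internal-testing/tiny-random-GlmForCausalLM";

const tokenizer = await PreTrainedTokenizer.from_pretrained(model_id);
const model = await GlmForCausalLM.from_pretrained(model_id);

// Tokenize a prompt and generate up to 10 tokens.
const inputs = tokenizer("hello");
const outputs = await model.generate({ ...inputs, max_length: 10 });
console.log(tokenizer.batch_decode(outputs, { skip_special_tokens: true }));

await model.dispose(); // release the underlying inference session
```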

tests/models/glm/test_modeling_glm.js (+51, new file)

```js
import { PreTrainedTokenizer, GlmForCausalLM } from "../../../src/transformers.js";

import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js";

export default () => {
  describe("GlmForCausalLM", () => {
    const model_id = "hf-internal-testing/tiny-random-GlmForCausalLM";
    /** @type {GlmForCausalLM} */
    let model;
    /** @type {PreTrainedTokenizer} */
    let tokenizer;
    beforeAll(async () => {
      model = await GlmForCausalLM.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS);
      tokenizer = await PreTrainedTokenizer.from_pretrained(model_id);
      tokenizer.padding_side = "left";
    }, MAX_MODEL_LOAD_TIME);

    it(
      "batch_size=1",
      async () => {
        const inputs = tokenizer("hello");
        const outputs = await model.generate({
          ...inputs,
          max_length: 10,
        });
        expect(outputs.tolist()).toEqual([[23582n, 5797n, 38238n, 24486n, 36539n, 34489n, 6948n, 34489n, 6948n, 16014n]]);
      },
      MAX_TEST_EXECUTION_TIME,
    );

    it(
      "batch_size>1",
      async () => {
        const inputs = tokenizer(["hello", "hello world"], { padding: true });
        const outputs = await model.generate({
          ...inputs,
          max_length: 10,
        });
        expect(outputs.tolist()).toEqual([
          [59246n, 23582n, 5797n, 38238n, 24486n, 36539n, 34489n, 6948n, 34489n, 6948n],
          [23582n, 2901n, 39936n, 25036n, 55411n, 10337n, 3424n, 39183n, 30430n, 37285n],
        ]);
      },
      MAX_TEST_EXECUTION_TIME,
    );

    afterAll(async () => {
      await model?.dispose();
    }, MAX_MODEL_DISPOSE_TIME);
  });
};
```
tests/models/helium/test_modeling_helium.js (+51, new file)

```js
import { PreTrainedTokenizer, HeliumForCausalLM } from "../../../src/transformers.js";

import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js";

export default () => {
  describe("HeliumForCausalLM", () => {
    const model_id = "hf-internal-testing/tiny-random-HeliumForCausalLM";
    /** @type {HeliumForCausalLM} */
    let model;
    /** @type {PreTrainedTokenizer} */
    let tokenizer;
    beforeAll(async () => {
      model = await HeliumForCausalLM.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS);
      tokenizer = await PreTrainedTokenizer.from_pretrained(model_id);
      tokenizer.padding_side = "left";
    }, MAX_MODEL_LOAD_TIME);

    it(
      "batch_size=1",
      async () => {
        const inputs = tokenizer("hello");
        const outputs = await model.generate({
          ...inputs,
          max_length: 10,
        });
        expect(outputs.tolist()).toEqual([[1n, 456n, 5660n, 1700n, 1486n, 37744n, 35669n, 39396n, 12024n, 32253n]]);
      },
      MAX_TEST_EXECUTION_TIME,
    );

    it(
      "batch_size>1",
      async () => {
        const inputs = tokenizer(["hello", "hello world"], { padding: true });
        const outputs = await model.generate({
          ...inputs,
          max_length: 10,
        });
        expect(outputs.tolist()).toEqual([
          [3n, 1n, 456n, 5660n, 1700n, 1486n, 37744n, 35669n, 39396n, 12024n],
          [1n, 456n, 5660n, 998n, 6136n, 2080n, 172n, 8843n, 40579n, 23953n],
        ]);
      },
      MAX_TEST_EXECUTION_TIME,
    );

    afterAll(async () => {
      await model?.dispose();
    }, MAX_MODEL_DISPOSE_TIME);
  });
};
```
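
The same checkpoints are also reachable through the high-level text-generation pipeline, which dispatches via the causal-LM mapping extended above. A hedged sketch, reusing the tiny Helium test model (the options shown are standard pipeline arguments, not specific to this commit):

```js
import { pipeline } from "@huggingface/transformers";

// Build a text-generation pipeline over the tiny Helium test checkpoint.
const generator = await pipeline(
  "text-generation",
  "hf-internal-testing/tiny-random-HeliumForCausalLM",
);

// Generate a short continuation; the result is an array of { generated_text } objects.
const output = await generator("hello", { max_new_tokens: 10 });
console.log(output[0].generated_text);

await generator.dispose(); // free the model when done
```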
