
Commit d2260b9

add_Jittor: Passing model tests, Parameter and Module Container tests

add_Jittor: Passing model tests, Parameter and Module Container tests. Additional functionality:

1. TrainOneStep integration.
2. Updated core/train_jt so that accuracy can be measured during training.
3. Updated the Jittor optimizer: replaced the gradient and apply_gradient functions with Jittor's default zero_grad() and step(), and added a new Set() function to register the trainable_weights parameters with the optimizer.
4. Updated the Jittor metrics: Accuracy, Recall, Precision and AUC.
5. Created the Jittor model tutorial file jittor_module_tutorial.py.
6. Module and Parameter containers: updated the core_jittor ModuleList and ParameterDict to allow initialization from an OrderedDict, which was previously blocked because the parent class (the Jittor Module) initializes a plain dict by default and caused integration issues. Both functions were updated and the parent Module is excluded for them.

Area to optimize in the integration: enabling the Jittor backend to run large-model training, as it is currently limited in the complexity of NN layers it can handle.
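For orientation, below is a minimal sketch of the high-level training flow these changes target, written against the public TensorLayerX API (tlx.model.Model, tlx.optimizers.Adam, tlx.metrics.Accuracy). The network, hyperparameters and the commented-out dataset are illustrative placeholders rather than code from this commit; on the Jittor backend the reworked optimizer is created with the learning rate only and receives the trainable weights later (the Set() hook described above).

# Minimal sketch of the high-level flow targeted by this commit (placeholder network and data).
import os
os.environ['TL_BACKEND'] = 'jittor'   # must be set before importing tensorlayerx

import tensorlayerx as tlx
from tensorlayerx.nn import Module, Linear

class MLP(Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = Linear(out_features=128, in_features=784, act=tlx.nn.ReLU)
        self.fc2 = Linear(out_features=10, in_features=128)

    def forward(self, x):
        return self.fc2(self.fc1(x))

net = MLP()
loss_fn = tlx.losses.softmax_cross_entropy_with_logits
optimizer = tlx.optimizers.Adam(lr=0.001)   # learning rate only; weights are bound later
metric = tlx.metrics.Accuracy()             # updated Jittor metrics: Accuracy, Recall, Precision, AUC

model = tlx.model.Model(network=net, loss_fn=loss_fn, optimizer=optimizer, metrics=metric)
# model.train(n_epoch=5, train_dataset=train_loader, print_freq=1)   # train_loader is a placeholder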
1 parent 772af7a commit d2260b9

24 files changed, +937 −1093 lines

examples/basic_tutorials/cifar10_cnn.py (+190 −187)

Large diffs are not rendered by default.

examples/basic_tutorials/cifar10_cnn_dist.py (+3 −2)

@@ -2,10 +2,11 @@
 # -*- coding: utf-8 -*-

 import os
-os.environ['TL_BACKEND'] = 'paddle'
+# os.environ['TL_BACKEND'] = 'paddle'
+# os.environ['TL_BACKEND'] = 'jittor'
 # os.environ['TL_BACKEND'] = 'tensorflow'
 # os.environ['TL_BACKEND'] = 'mindspore'
-# os.environ['TL_BACKEND'] = 'torch'
+os.environ['TL_BACKEND'] = 'torch'

 import paddle
 from paddle.distributed import fleet

examples/basic_tutorials/cifar10_cnn_train.py (+3 −4)

@@ -5,12 +5,11 @@

 import os
 # os.environ['TL_BACKEND'] = 'paddle'
-
-os.environ['TL_BACKEND'] = 'jittor'
 # os.environ['TL_BACKEND'] = 'tensorflow'
 # os.environ['TL_BACKEND'] = 'mindspore'
+# os.environ['TL_BACKEND'] = 'jittor'

-# os.environ['TL_BACKEND'] = 'torch'
+os.environ['TL_BACKEND'] = 'torch'


@@ -76,7 +75,7 @@ def forward(self, x):

 # Define the loss function, optimizer, etc.
 loss_fn=tlx.losses.softmax_cross_entropy_with_logits
-optimizer = tlx.optimizers.Adam(net.trainable_weights, lr=learning_rate)
+optimizer = tlx.optimizers.Adam(learning_rate)
 metrics = tlx.metrics.Accuracy()
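The optimizer change in this diff (Adam now takes only the learning rate) pairs with the TrainOneStep integration from the commit message: the trainable weights are handed to the training step instead of the optimizer constructor. Below is a hedged sketch of that lower-level path using the WithLoss and TrainOneStep helpers from tensorlayerx.model; the network and the commented-out data loop are placeholders, and on the Jittor backend each step maps to the optimizer's zero_grad() and step() as described above.

# Sketch of the lower-level TrainOneStep path (placeholder network and data loop).
import os
os.environ['TL_BACKEND'] = 'jittor'   # or 'torch'; set before importing tensorlayerx

import tensorlayerx as tlx
from tensorlayerx.nn import Module, Linear
from tensorlayerx.model import WithLoss, TrainOneStep

class TinyNet(Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc = Linear(out_features=10, in_features=784)

    def forward(self, x):
        return self.fc(x)

net = TinyNet()
loss_fn = tlx.losses.softmax_cross_entropy_with_logits
optimizer = tlx.optimizers.Adam(lr=0.001)                 # learning rate only, as in the diff above

net_with_loss = WithLoss(net, loss_fn)                    # wraps the forward pass and loss computation
train_one_step = TrainOneStep(net_with_loss, optimizer, net.trainable_weights)

# for x_batch, y_batch in train_loader:                   # train_loader is a placeholder
#     loss = train_one_step(x_batch, y_batch)             # zero_grad() + step() on the Jittor backend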

examples/basic_tutorials/gradient_clip_mixed_tensorflow.py (+3 −2)

@@ -2,9 +2,10 @@
 # -*- coding: utf-8 -*-
 # The tensorlayerx and tensorflow operators can be mixed
 import os
-os.environ['TL_BACKEND'] = 'tensorflow'
+# os.environ['TL_BACKEND'] = 'tensorflow'
 # os.environ['TL_BACKEND'] = 'paddle'
-# os.environ['TL_BACKEND'] = 'torch'
+os.environ['TL_BACKEND'] = 'torch'
+# os.environ['TL_BACKEND'] = 'jittor'


 import time
