add headings to weights table. #6139


Closed · wants to merge 19 commits
46 changes: 38 additions & 8 deletions docs/source/conf.py
@@ -375,7 +375,16 @@ def inject_weight_metadata(app, what, name, obj, options, lines):
lines.append("")


def generate_weights_table(module, table_name, metrics, dataset, include_patterns=None, exclude_patterns=None):
def generate_weights_table(
module,
table_name,
metrics,
dataset,
include_patterns=None,
exclude_patterns=None,
table_description="",
title_character="-",
):
weights_endswith = "_QuantizedWeights" if module.__name__.split(".")[-1] == "quantization" else "_Weights"
weight_enums = [getattr(module, name) for name in dir(module) if name.endswith(weights_endswith)]
weights = [w for weight_enum in weight_enums for w in weight_enum]
@@ -403,50 +412,71 @@ def generate_weights_table(module, table_name, metrics, dataset, include_pattern
generated_dir = Path("generated")
generated_dir.mkdir(exist_ok=True)
with open(generated_dir / f"{table_name}_table.rst", "w+") as table_file:
table_file.write(
f"Table of all available {table_name.replace('_',' ').title()} Weights \n{(32 + len(table_name))*title_character}\n"
)
table_file.write(f"{table_description}\n\n")
Comment on lines +415 to +418

Member

IIUC this is the same as before but instead of writing these lines in the .rst file (as preferred), we're now generating it here and writing it in the table files.

Could you explain what the difference is, and why it "works"?

Contributor Author (@abhi-glitchhg), Jun 16, 2022
> IIUC this is the same as before, but instead of writing these lines in the .rst file (as preferred), we're now generating it here and writing it in the table files.

Yes, you are right!

There were no titles/headings in the generated table .rst files, which is why we were getting results like the one below (notice the <no title>):

[screenshot of search results showing <no title>]

So to solve this, we needed to add the titles in the generated table files rather than writing them manually in models.rst. And since the descriptions of the tables should come right after the headings, there was no choice but to add the descriptions to the generated table .rst files as well.

So, I have moved the titles and descriptions of the tables from models.rst to the generated table .rst files. Otherwise, there is no difference.
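The heading logic being discussed can be sketched stand-alone (a minimal illustrative variant; `write_table_header` is a hypothetical helper, and it sizes the underline from the full title rather than using the PR's `32 + len(table_name)` constant):

```python
def write_table_header(table_name: str, description: str, title_character: str = "-") -> str:
    """Build the heading block that a generated <table_name>_table.rst would start with."""
    title = f"Table of all available {table_name.replace('_', ' ').title()} Weights"
    # reStructuredText requires the underline to be at least as long as the title,
    # otherwise Sphinx emits a "title underline too short" warning.
    underline = title_character * len(title)
    return f"{title}\n{underline}\n\n{description}\n\n"


print(write_table_header("semantic_segmentation",
                         "All models are evaluated on a subset of COCO val2017:"))
```

Because the generated file now begins with its own section title, Sphinx's search index has a real heading to attribute matches to instead of `<no title>`.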

Contributor Author

@NicolasHug, any updates on this?

Member

Hi @abhi-glitchhg, sorry for the late reply.

I'm a little uncomfortable with this solution because it makes our solution slightly more complex and somewhat hides the structure of the models.rst file, which now also depends on the auto-generation code in conf.py. On top of that, it's not really clear why this works while our original solution doesn't.

It feels like we're patching a limitation of sphinx's search by working around it, without addressing the actual core of the issue. Did we figure out why writing the title within the file makes the search render better?

BTW, the search still looks like this:

[screenshot of search results]

which is better because we have the title, but it still looks broken. Considering how much time we have spent on this already (especially you!), I wonder if it's worth continuing to try to fix this. It seems to me like a benign issue to begin with.

Contributor Author (@abhi-glitchhg), Aug 11, 2022

> It feels like we're patching a limitation of sphinx's search by working around it without addressing the actual core of the issue. Did we figure out why writing the title within the file makes the search render better?

Yeah maybe!

> which is better because we have the title, but it still looks broken.

Agree! Closing this PR as it doesn't properly solve the issue.

Contributor Author

Maybe someone with a good understanding of the Sphinx theme could have a look at this!

Maybe @ain-soph (sorry for the shameless tagging, I really liked how you modified the theme for your project), if you have spare time ;-;

table_file.write(".. rst-class:: table-weights\n") # Custom CSS class, see custom_torchvision.css
table_file.write(".. table::\n")
table_file.write(f".. table:: {table_name}\n")
table_file.write(f" :widths: 100 {'20 ' * len(metrics_names)} 20 10\n\n")
table_file.write(f"{textwrap.indent(table, ' ' * 4)}\n\n")


generate_weights_table(
module=M, table_name="classification", metrics=[("acc@1", "Acc@1"), ("acc@5", "Acc@5")], dataset="ImageNet-1K"
module=M,
table_name="classification",
metrics=[("acc@1", "Acc@1"), ("acc@5", "Acc@5")],
dataset="ImageNet-1K",
table_description="Accuracies are reported on ImageNet-1K using single crops:",
)
generate_weights_table(
module=M.quantization,
table_name="classification_quant",
table_name="quantized_classification",
metrics=[("acc@1", "Acc@1"), ("acc@5", "Acc@5")],
dataset="ImageNet-1K",
table_description="Accuracies are reported on ImageNet-1K using single crops:",
title_character="^",
)
generate_weights_table(
module=M.detection,
table_name="detection",
table_name="object_detection",
metrics=[("box_map", "Box MAP")],
exclude_patterns=["Mask", "Keypoint"],
dataset="COCO-val2017",
table_description="Box MAPs are reported on COCO val2017:",
title_character="^",
)
generate_weights_table(
module=M.detection,
table_name="instance_segmentation",
metrics=[("box_map", "Box MAP"), ("mask_map", "Mask MAP")],
dataset="COCO-val2017",
include_patterns=["Mask"],
table_description="Box and Mask MAPs are reported on COCO val2017:",
title_character="^",
)
generate_weights_table(
module=M.detection,
table_name="detection_keypoint",
table_name="keypoint_detection",
metrics=[("box_map", "Box MAP"), ("kp_map", "Keypoint MAP")],
dataset="COCO-val2017",
include_patterns=["Keypoint"],
table_description="Box and Keypoint MAPs are reported on COCO val2017:",
title_character="^",
)
generate_weights_table(
module=M.segmentation,
table_name="segmentation",
table_name="semantic_segmentation",
metrics=[("miou", "Mean IoU"), ("pixel_acc", "pixelwise Acc")],
dataset="COCO-val2017-VOC-labels",
table_description="All models are evaluated a subset of COCO val2017, on the 20 categories that are present in the Pascal VOC dataset:",
)
generate_weights_table(
module=M.video, table_name="video", metrics=[("acc@1", "Acc@1"), ("acc@5", "Acc@5")], dataset="Kinetics-400"
module=M.video,
table_name="video_classification",
metrics=[("acc@1", "Acc@1"), ("acc@5", "Acc@5")],
dataset="Kinetics-400",
table_description="Accuracies are reported on Kinetics-400 using single crops for clip length 16:",
)
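The calls above all feed the same writer; the directive wrapper it emits after the heading can be sketched roughly as follows (`table_directive` is a hypothetical helper and the grid-table body is placeholder data, not real torchvision output):

```python
import textwrap


def table_directive(table_name: str, metrics_names: list, table: str) -> str:
    """Wrap a pre-rendered RST grid table in the directives the generated file uses."""
    # One 20-unit column per metric, plus fixed-width columns for the
    # weight name, params, and recipe (mirroring the ``:widths:`` line in the diff).
    widths = f"    :widths: 100 {'20 ' * len(metrics_names)}20 10\n\n"
    return (
        ".. rst-class:: table-weights\n"  # custom CSS class, see custom_torchvision.css
        + f".. table:: {table_name}\n"
        + widths
        + textwrap.indent(table, " " * 4)
    )


print(table_directive("video_classification", ["Acc@1", "Acc@5"], "(grid table here)"))
```

The body must be indented under the `.. table::` directive for Docutils to treat it as the directive's content, hence the `textwrap.indent` call.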


38 changes: 5 additions & 33 deletions docs/source/models.rst
@@ -251,10 +251,6 @@ Here is an example of how to use the pre-trained image classification models:

The classes of the pre-trained model outputs can be found at ``weights.meta["categories"]``.

Table of all available classification weights
---------------------------------------------

Accuracies are reported on ImageNet-1K using single crops:

.. include:: generated/classification_table.rst

@@ -309,12 +305,8 @@ Here is an example of how to use the pre-trained quantized image classification
The classes of the pre-trained model outputs can be found at ``weights.meta["categories"]``.


Table of all available quantized classification weights
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Accuracies are reported on ImageNet-1K using single crops:

.. include:: generated/classification_quant_table.rst
.. include:: generated/quantized_classification_table.rst

Semantic Segmentation
=====================
@@ -367,12 +359,8 @@ The classes of the pre-trained model outputs can be found at ``weights.meta["cat
The output format of the models is illustrated in :ref:`semantic_seg_output`.


Table of all available semantic segmentation weights
----------------------------------------------------

All models are evaluated a subset of COCO val2017, on the 20 categories that are present in the Pascal VOC dataset:

.. include:: generated/segmentation_table.rst
.. include:: generated/semantic_segmentation_table.rst


.. _object_det_inst_seg_pers_keypoint_det:
@@ -442,12 +430,8 @@ Here is an example of how to use the pre-trained object detection models:
The classes of the pre-trained model outputs can be found at ``weights.meta["categories"]``.
For details on how to plot the bounding boxes of the models, you may refer to :ref:`instance_seg_output`.

Table of all available Object detection weights
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Box MAPs are reported on COCO val2017:

.. include:: generated/detection_table.rst
.. include:: generated/object_detection_table.rst


Instance Segmentation
@@ -468,10 +452,6 @@ weights:

For details on how to plot the masks of the models, you may refer to :ref:`instance_seg_output`.

Table of all available Instance segmentation weights
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Box and Mask MAPs are reported on COCO val2017:

.. include:: generated/instance_segmentation_table.rst

@@ -493,12 +473,8 @@ pre-trained weights:
The classes of the pre-trained model outputs can be found at ``weights.meta["keypoint_names"]``.
For details on how to plot the bounding boxes of the models, you may refer to :ref:`keypoint_output`.

Table of all available Keypoint detection weights
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Box and Keypoint MAPs are reported on COCO val2017:

.. include:: generated/detection_keypoint_table.rst
.. include:: generated/keypoint_detection_table.rst


Video Classification
@@ -551,12 +527,8 @@ Here is an example of how to use the pre-trained video classification models:
The classes of the pre-trained model outputs can be found at ``weights.meta["categories"]``.


Table of all available video classification weights
---------------------------------------------------

Accuracies are reported on Kinetics-400 using single crops for clip length 16:

.. include:: generated/video_table.rst
.. include:: generated/video_classification_table.rst

Optical Flow
============