diff --git a/README.md b/README.md
index b5ceb10d..e01cfcec 100644
--- a/README.md
+++ b/README.md
@@ -18,15 +18,12 @@
> `plotly_resampler`: visualize large sequential data by **adding resampling functionality to Plotly figures**
-[Plotly](https://github.com/plotly/plotly.py) is an awesome interactive visualization library, however it can get pretty slow when a lot of data points are visualized (100 000+ datapoints). This library solves this by downsampling (aggregating) the data respective to the view and then plotting the aggregated points. When you interact with the plot (panning, zooming, ...), callbacks are used to aggregate data and update the figure.
+[Plotly](https://github.com/plotly/plotly.py) is an awesome interactive visualization library; however, it can get pretty slow when a lot of data points are visualized (100,000+ datapoints). This library solves this by downsampling (aggregating) the data with respect to the current view and then plotting the aggregated points. When you interact with the plot (panning, zooming, ...), callbacks are used to aggregate the data and update the figure.
-
-
-
-
-
+
-In [this Plotly-Resampler demo](https://github.com/predict-idlab/plotly-resampler/blob/main/examples/basic_example.ipynb) over `110,000,000` data points are visualized!
+
+In [this Plotly-Resampler demo](https://github.com/predict-idlab/plotly-resampler/blob/main/examples/basic_example.ipynb) over `110,000,000` data points are visualized!
@@ -39,79 +36,144 @@ In [this Plotly-Resampler demo](https://github.com/predict-idlab/plotly-resample
### Installation
-| [**pip**](https://pypi.org/project/plotly_resampler/) | `pip install plotly-resampler` |
+| [**pip**](https://pypi.org/project/plotly_resampler/) | `pip install plotly-resampler` |
| ---| ----|
+
## Usage
-To **add dynamic resampling** to your plotly Figure
-* using a web application with *Dash* callbacks, you should;
- 1. wrap the plotly Figure with `FigureResampler`
- 2. call `.show_dash()` on the Figure
-* within a *jupyter* environment and *without creating a web application*, you should:
- 1. wrap the plotly Figure with `FigureWidgetResampler`
- 2. output the `FigureWidgetResampler` instance in a cell
+**Add dynamic aggregation** to your plotly Figure _(unfold the use case that fits your needs)_
+* 🤖 Automatically _(minimal code overhead)_:
+  Use the `register_plotly_resampler` function
+
-> **Note**:
-> Any plotly Figure can be wrapped with `FigureResampler` and `FigureWidgetResampler`! 🎉
-> But, (obviously) only the scatter traces will be resampled.
+ 1. Import and call the `register_plotly_resampler` method
+ 2. Just use your regular graph construction code
+
+ * **code example**:
+ ```python
+ import plotly.graph_objects as go; import numpy as np
+ from plotly_resampler import register_plotly_resampler
+
+    # Call the register function once; all Figures/FigureWidgets created afterwards
+    # will be wrapped according to the `mode` argument of `register_plotly_resampler`
+ register_plotly_resampler(mode='auto')
+
+ x = np.arange(1_000_000)
+ noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
+
+
+    # auto mode: when working in an IPython environment, this will automatically be a
+    # FigureWidgetResampler; otherwise, this will be a FigureResampler
+ f = go.Figure()
+ f.add_trace({"y": noisy_sin + 2, "name": "yp2"})
+ f
+ ```
+
+ > **Note**: This wraps **all** plotly graph object figures with a
+ > `FigureResampler` | `FigureWidgetResampler`. This can thus also be
+ > used for the `plotly.express` interface. 🎉
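+
+  For instance, a minimal sketch using the `plotly.express` interface (this reuses
+  the `noisy_sin` array from the code example above; the chosen `px` function is
+  just illustrative):
+
+    ```python
+    import plotly.express as px
+
+    # since register_plotly_resampler(mode='auto') was already called above,
+    # the px figure below is automatically wrapped with dynamic aggregation
+    px_fig = px.line(y=noisy_sin)
+    px_fig
+    ```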
-> **Tip** 💡:
-> For significant faster initial loading of the Figure, we advise to wrap the constructor of the plotly Figure and add the trace data as `hf_x` and `hf_y`
+
-### Minimal example
+* 👷 Manually _(higher data aggregation configurability, more speedup possibilities)_:
+
+ Within a jupyter environment without creating a web application
+
-```python
-import plotly.graph_objects as go; import numpy as np
-from plotly_resampler import FigureResampler, FigureWidgetResampler
+ 1. wrap the plotly Figure with `FigureWidgetResampler`
+ 2. output the `FigureWidgetResampler` instance in a cell
-x = np.arange(1_000_000)
-noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
+ * **code example**:
+ ```python
+ import plotly.graph_objects as go; import numpy as np
+ from plotly_resampler import FigureResampler, FigureWidgetResampler
-# OPTION 1 - FigureResampler: dynamic aggregation via a Dash web-app
-fig = FigureResampler(go.Figure())
-fig.add_trace(go.Scattergl(name='noisy sine', showlegend=True), hf_x=x, hf_y=noisy_sin)
+ x = np.arange(1_000_000)
+ noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
-fig.show_dash(mode='inline')
-```
+ # OPTION 1 - FigureWidgetResampler: dynamic aggregation via `FigureWidget.layout.on_change`
+ fig = FigureWidgetResampler(go.Figure())
+ fig.add_trace(go.Scattergl(name='noisy sine', showlegend=True), hf_x=x, hf_y=noisy_sin)
-#### FigureWidgetResampler: dynamic aggregation via `FigureWidget.layout.on_change`
-```python
-...
-# OPTION 2 - FigureWidgetResampler: dynamic aggregation via `FigureWidget.layout.on_change`
-fig = FigureWidgetResampler(go.Figure())
-fig.add_trace(go.Scattergl(name='noisy sine', showlegend=True), hf_x=x, hf_y=noisy_sin)
+ fig
+ ```
+
+
+  Using a web application with Dash callbacks
+
-fig
-```
+ 1. wrap the plotly Figure with `FigureResampler`
+ 2. call `.show_dash()` on the `Figure`
-### Features
+ * **code example**:
+ ```python
+ import plotly.graph_objects as go; import numpy as np
+ from plotly_resampler import FigureResampler, FigureWidgetResampler
-* **Convenient** to use:
- * just add either
- * `FigureResampler` decorator around a plotly Figure and call `.show_dash()`
- * `FigureWidgetResampler` decorator around a plotly Figure and output the instance in a cell
- * allows all other plotly figure construction flexibility to be used!
-* **Environment-independent**
- * can be used in Jupyter, vscode-notebooks, Pycharm-notebooks, Google Colab, and even as application (on a server)
-* Interface for **various aggregation algorithms**:
- * ability to develop or select your preferred sequence aggregation method
+ x = np.arange(1_000_000)
+ noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
+ # OPTION 2 - FigureResampler: dynamic aggregation via a Dash web-app
+ fig = FigureResampler(go.Figure())
+ fig.add_trace(go.Scattergl(name='noisy sine', showlegend=True), hf_x=x, hf_y=noisy_sin)
+
+ fig.show_dash(mode='inline')
+ ```
+
+
+
+
+ > **Tip** 💡:
+  > For significantly faster initial loading of the Figure, we advise wrapping the
+  > constructor of the plotly Figure and adding the trace data as `hf_x` and `hf_y`
+
+
+
+> **Note**:
+> Any plotly Figure can be wrapped with `FigureResampler` and `FigureWidgetResampler`! 🎉
+> But, (obviously) only the scatter traces will be resampled.
+
+
+
+
+
+### Features
+
+ * **Convenient** to use:
+    * just use one of the following:
+      * the `register_plotly_resampler` function in your notebook with the best-suited `mode` argument
+      * the `FigureResampler` decorator around a plotly Figure, followed by a call to `.show_dash()`
+      * the `FigureWidgetResampler` decorator around a plotly Figure, outputting the instance in a cell
+ * allows all other plotly figure construction flexibility to be used!
+ * **Environment-independent**
+ * can be used in Jupyter, vscode-notebooks, Pycharm-notebooks, Google Colab, and even as application (on a server)
+ * Interface for **various aggregation algorithms**:
+ * ability to develop or select your preferred sequence aggregation method
+
### Important considerations & tips
* When running the code on a server, you should forward the port of the `FigureResampler.show_dash()` method to your local machine (a minimal sketch of this workflow is shown below this list).
**Note** that you can add dynamic aggregation to plotly figures with the `FigureWidgetResampler` wrapper without needing to forward a port!
-* In general, when using downsampling one should be aware of (possible) [aliasing](https://en.wikipedia.org/wiki/Aliasing) effects.
- The [R] in the legend indicates when the corresponding trace is being resampled (and thus possibly distorted) or not. Additionally, the `~` suffix represent the mean aggregation bin size in terms of the sequence index.
+* In general, when using downsampling one should be aware of (possible) [aliasing](https://en.wikipedia.org/wiki/Aliasing) effects.
+  The [R] in the legend indicates whether the corresponding trace is being resampled (and thus possibly distorted). Additionally, the `~` suffix represents the mean aggregation bin size in terms of the sequence index.
* The plotly **autoscale** event (triggered by the autoscale button or a double-click within the graph) **does not reset the axes, but autoscales the current graph-view** of plotly-resampler figures. This design choice was made as it seemed more intuitive for the developers to support this behavior with double-click than the default axes-reset behavior. The graph axes can of course be reset by using the `reset_axis` button. If you want to give feedback and discuss this further with the developers, see issue [#49](https://github.com/predict-idlab/plotly-resampler/issues/49).
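+
+A minimal sketch of the server workflow from the first tip above (the port number and
+the `ssh` command are illustrative, and it is assumed that extra `show_dash` keyword
+arguments such as `port` are forwarded to Dash's `run_server`):
+
+```python
+import plotly.graph_objects as go; import numpy as np
+from plotly_resampler import FigureResampler
+
+x = np.arange(1_000_000)
+noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
+
+fig = FigureResampler(go.Figure())
+fig.add_trace(go.Scattergl(name='noisy sine', showlegend=True), hf_x=x, hf_y=noisy_sin)
+
+# on the server: serve the Dash app on a fixed port
+fig.show_dash(mode='external', port=8050)
+
+# on your local machine: forward that port before opening http://localhost:8050, e.g.
+#   ssh -L 8050:localhost:8050 <user>@<server>
+```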
+
## Future work 🔨
-* Support `.add_traces()` (currently only `.add_trace` is supported)
+- [x] Support `.add_traces()` (currently only `.add_trace` is supported)
+- [ ] Support `hf_color` and `hf_markersize`, see [#50](https://github.com/predict-idlab/plotly-resampler/pull/50)
+- [ ] Create C bindings for our EfficientLTTB algorithm.
diff --git a/docs/sphinx/api_reference.rst b/docs/sphinx/api_reference.rst
index 088d36f5..1d72b39a 100644
--- a/docs/sphinx/api_reference.rst
+++ b/docs/sphinx/api_reference.rst
@@ -1,5 +1,5 @@
-API reference 📖
-================
+API 📖
+======
.. autosummary::
:toctree: _autosummary
@@ -9,4 +9,7 @@ API reference 📖
plotly_resampler.figure_resampler
.. _aggregation
- plotly_resampler.aggregation
\ No newline at end of file
+ plotly_resampler.aggregation
+
+ .. _registering
+ plotly_resampler.registering
\ No newline at end of file
diff --git a/docs/sphinx/conf.py b/docs/sphinx/conf.py
index 8712aa23..73beb5fc 100644
--- a/docs/sphinx/conf.py
+++ b/docs/sphinx/conf.py
@@ -13,7 +13,8 @@
import os
import sys
-sys.path.insert(0, os.path.abspath("../plotly_resampler"))
+sys.path.append(os.path.abspath("../../"))
+sys.path.append(os.path.abspath("../../plotly_resampler"))
# -- Project information -----------------------------------------------------
@@ -44,7 +45,7 @@
"sphinx.ext.autosummary",
"sphinx_autodoc_typehints",
"sphinx.ext.todo",
- 'sphinx.ext.autosectionlabel',
+ "sphinx.ext.autosectionlabel",
"sphinx.ext.viewcode",
# 'sphinx.ext.githubpages',
]
@@ -88,6 +89,7 @@
html_theme = "pydata_sphinx_theme"
html_logo = "_static/logo.png"
html_favicon = "_static/icon.png"
+language = "en"
html_theme_options = {
# "show_nav_level": 2,
@@ -104,12 +106,22 @@
"type": "fontawesome", # Default is fontawesome
}
],
+ "pygment_light_style": "tango", # tango
+ "pygment_dark_style": "native",
+ "navbar_end": [
+ "theme-switcher.html",
+ "navbar-icon-links.html",
+ "search-field.html",
+ ],
}
html_sidebars = {
- 'figure_resampler*': [],
- 'aggregation*': []
+ "figure_resampler*": [],
+ "aggregation*": [],
+ "_autosummary*": [],
+ "*": [],
}
+# html_sidebars = {"figure_resampler*": [], "aggregation*": []}
# Add any paths that contain custom static files (such as style sheets) here,
diff --git a/docs/sphinx/dash_app_integration.rst b/docs/sphinx/dash_app_integration.rst
index bf70255e..2a71c88c 100644
--- a/docs/sphinx/dash_app_integration.rst
+++ b/docs/sphinx/dash_app_integration.rst
@@ -7,8 +7,8 @@
-Integration with a dash app 🤝
-==============================
+Dash integration 🤝
+===================
This documentation page describes how you can integrate ``plotly-resampler`` in a `dash `_ application.
diff --git a/docs/sphinx/figure_resampler.rst b/docs/sphinx/figure_resampler.rst
index 489da970..01baa8d6 100644
--- a/docs/sphinx/figure_resampler.rst
+++ b/docs/sphinx/figure_resampler.rst
@@ -27,3 +27,11 @@ FigureWidgetResampler
:undoc-members:
:show-inheritance:
+^^^^^^^^^^^^^^^^^
+utility functions
+^^^^^^^^^^^^^^^^^
+
+.. automodule:: plotly_resampler.figure_resampler.utils
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sphinx/getting_started.rst b/docs/sphinx/getting_started.rst
index b08a3d44..1bc94462 100644
--- a/docs/sphinx/getting_started.rst
+++ b/docs/sphinx/getting_started.rst
@@ -21,40 +21,79 @@ Install via `pip `_:
How to use 📈
-------------
-Dynamic resampling callbacks are realized with either:
+Dynamic resampling callbacks are realized:
-* `Dash `_ callbacks, when a ``go.Figure`` object is wrapped with dynamic aggregation functionality.
+* **Automatically** (low code overhead):
- .. note::
+ * using the :func:`register_plotly_resampler ` function
- This is especially useful when working with **dash functionality** or when you do **not want to solely operate in jupyter environments**.
+  **To add dynamic resampling automatically, you should**:
+ 1. Import and call the :func:`register_plotly_resampler ` method
+ 2. Just use your regular graph construction code
- To **add dynamic resampling**, you should:
- 1. wrap the plotly Figure with :class:`FigureResampler `
- 2. call :func:`.show_dash() ` on the Figure
+   Once this method is called, it will automatically convert all newly defined plotly
+   graph objects into a :class:`FigureResampler ` or :class:`FigureWidgetResampler ` object.
+   The ``mode`` parameter of this method allows you to define which of the aforementioned resampling object types is used.
-* `FigureWidget.layout.on_change `_ , when a ``go.FigureWidget`` is used within a ``.ipynb`` environment.
+* **Manually** (data aggregation configurability, graph construction speedups):
- .. note::
+ * `Dash `_ callbacks, when a ``go.Figure`` object is wrapped with dynamic aggregation functionality.
- This is especially useful when developing in ``jupyter`` environments and when **you cannot open/forward a network-port**.
+ .. note::
+ This is especially useful when working with **dash functionality** or when you do **not want to solely operate in jupyter environments**.
- To **add dynamic resampling** using a **FigureWidget**, you should:
- 1. wrap your plotly Figure (can be a ``go.Figure``) with :class:`FigureWidgetResampler `
- 2. output the ```FigureWidgetResampler`` instance in a cell
+ **To add dynamic resampling, you should**:
+ 1. wrap the plotly Figure with :class:`FigureResampler `
+ 2. call :func:`.show_dash() ` on the Figure
-.. tip::
+ * `FigureWidget.layout.on_change `_ , when a ``go.FigureWidget`` is used within a ``.ipynb`` environment.
+
+ .. note::
+
+ This is especially useful when developing in ``jupyter`` environments and when **you cannot open/forward a network-port**.
- For **significant faster initial loading** of the Figure, we advise to wrap the constructor of the plotly Figure with either :class:`FigureResampler ` or :class:`FigureWidgetResampler ` and add the trace data as ``hf_x`` and ``hf_y``
-.. note::
+ **To add dynamic resampling using a FigureWidget, you should**:
+ 1. wrap your plotly Figure (can be a ``go.Figure``) with :class:`FigureWidgetResampler `
+      2. output the ``FigureWidgetResampler`` instance in a cell
- Any plotly Figure can be wrapped with dynamic aggregation functionality! 🎉 :raw-html:`
`
- But, (obviously) only the scatter traces will be resampled.
+ .. tip::
+
+      For **significantly faster initial loading** of the Figure, we advise wrapping the constructor of the plotly Figure with either :class:`FigureResampler ` or :class:`FigureWidgetResampler ` and adding the trace data as ``hf_x`` and ``hf_y``
+
+ .. note::
+
+ Any plotly Figure can be wrapped with dynamic aggregation functionality! 🎉 :raw-html:`
`
+ But, (obviously) only the scatter traces will be resampled.
Working examples ✅
-------------------
+register_plotly_resampler
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: py
+
+ import plotly.graph_objects as go; import numpy as np
+ from plotly_resampler import register_plotly_resampler
+
+    # Call the register function once; all Figures/FigureWidgets created afterwards
+    # will be wrapped according to the `mode` argument of `register_plotly_resampler`
+ register_plotly_resampler(mode='auto')
+
+ x = np.arange(1_000_000)
+ noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
+
+
+    # when working in an IPython environment, this will automatically be a
+    # FigureWidgetResampler; otherwise, this will be a FigureResampler
+ f = go.Figure()
+ f.add_trace({"y": noisy_sin + 2, "name": "yp2"})
+ f
+
+
+FigureResampler
+^^^^^^^^^^^^^^^
.. code:: py
@@ -69,6 +108,8 @@ Working examples ✅
fig.show_dash(mode='inline')
+FigureWidget
+^^^^^^^^^^^^
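+
+A minimal sketch of this workflow (mirroring the ``FigureResampler`` example above,
+but outputting the wrapped figure in a notebook cell instead of calling ``show_dash``):
+
+.. code:: py
+
+    import plotly.graph_objects as go; import numpy as np
+    from plotly_resampler import FigureWidgetResampler
+
+    x = np.arange(1_000_000)
+    noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_000
+
+    fig = FigureWidgetResampler(go.Figure())
+    fig.add_trace(go.Scattergl(name='noisy sine', showlegend=True), hf_x=x, hf_y=noisy_sin)
+
+    # outputting the FigureWidgetResampler instance in a cell renders the widget
+    fig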
The gif below demonstrates the example usage of :class:`FigureWidgetResampler `, where ``JupyterLab`` is used as the environment and the output of the ``FigureWidgetResampler`` instance is redirected into a new view. Also note how you are able to dynamically add traces!
.. image:: https://raw.githubusercontent.com/predict-idlab/plotly-resampler/main/docs/sphinx/_static/figurewidget.gif
diff --git a/docs/sphinx/registering.rst b/docs/sphinx/registering.rst
new file mode 100644
index 00000000..82295009
--- /dev/null
+++ b/docs/sphinx/registering.rst
@@ -0,0 +1,8 @@
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+(un)wrapping plotly figures
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. automodule:: plotly_resampler.registering
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/plotly_resampler/__init__.py b/plotly_resampler/__init__.py
index 49490c19..3e82dcca 100644
--- a/plotly_resampler/__init__.py
+++ b/plotly_resampler/__init__.py
@@ -1,25 +1,31 @@
"""**plotly\_resampler**: visualizing large sequences."""
-from .aggregation import (
- LTTB,
- EfficientLTTB,
- EveryNthPoint,
- FuncAggregator,
- MinMaxOverlapAggregator,
-)
+from .aggregation import LTTB, EfficientLTTB, EveryNthPoint
from .figure_resampler import FigureResampler, FigureWidgetResampler
+from .registering import register_plotly_resampler, unregister_plotly_resampler
__docformat__ = "numpy"
__author__ = "Jonas Van Der Donckt, Jeroen Van Der Donckt, Emiel Deprost"
-__version__ = "0.6.4.2"
+__version__ = "0.7.0"
__all__ = [
"__version__",
"FigureResampler",
"FigureWidgetResampler",
"EfficientLTTB",
- "MinMaxOverlapAggregator",
"LTTB",
"EveryNthPoint",
- "FuncAggregator",
+ "register_plotly_resampler",
+ "unregister_plotly_resampler",
]
+
+
+try: # Enable ipywidgets on google colab!
+ import sys
+
+ if "google.colab" in sys.modules:
+ from google.colab import output
+
+ output.enable_custom_widget_manager()
+except ImportError:
+ pass
diff --git a/plotly_resampler/figure_resampler/figure_resampler.py b/plotly_resampler/figure_resampler/figure_resampler.py
index f26f0822..efccca25 100644
--- a/plotly_resampler/figure_resampler/figure_resampler.py
+++ b/plotly_resampler/figure_resampler/figure_resampler.py
@@ -17,10 +17,12 @@
import plotly.graph_objects as go
from dash import Dash
from jupyter_dash import JupyterDash
+from plotly.basedatatypes import BaseFigure
from trace_updater import TraceUpdater
-from .figure_resampler_interface import AbstractFigureAggregator
from ..aggregation import AbstractSeriesAggregator, EfficientLTTB
+from .figure_resampler_interface import AbstractFigureAggregator
+from .utils import is_figure, is_fr
class FigureResampler(AbstractFigureAggregator, go.Figure):
@@ -28,7 +30,7 @@ class FigureResampler(AbstractFigureAggregator, go.Figure):
def __init__(
self,
- figure: go.Figure = None,
+ figure: BaseFigure | dict = None,
convert_existing_traces: bool = True,
default_n_shown_samples: int = 1000,
default_downsampler: AbstractSeriesAggregator = EfficientLTTB(),
@@ -39,12 +41,27 @@ def __init__(
show_mean_aggregation_size: bool = True,
verbose: bool = False,
):
- if figure is None:
- figure = go.Figure()
+ # Parse the figure input before calling `super`
+ if is_figure(figure) and not is_fr(figure): # go.Figure
+ # Base case, the figure does not need to be adjusted
+ f = figure
+ else:
+ # Create a new figure object and make sure that the trace uid will not get
+ # adjusted when they are added.
+ f = self._get_figure_class(go.Figure)()
+ f._data_validator.set_uid = False
+
+ if isinstance(figure, BaseFigure): # go.FigureWidget or AbstractFigureAggregator
+ # A base figure object, we first copy the layout and grid ref
+ f.layout = figure.layout
+ f._grid_ref = figure._grid_ref
+ f.add_traces(figure.data)
+ elif isinstance(figure, (dict, list)):
+ # A single trace dict or a list of traces
+ f.add_traces(figure)
- assert isinstance(figure, go.Figure)
super().__init__(
- figure,
+ f,
convert_existing_traces,
default_n_shown_samples,
default_downsampler,
@@ -53,6 +70,23 @@ def __init__(
verbose,
)
+ if isinstance(figure, AbstractFigureAggregator):
+ # Copy the `_hf_data` if the previous figure was an AbstractFigureAggregator
+ # and adjust the default `max_n_samples` and `downsampler`
+ self._hf_data.update(
+ self._copy_hf_data(figure._hf_data, adjust_default_values=True)
+ )
+
+        # Note: This hack ensures that this figure object initially uses
+ # data of the whole view. More concretely; we create a dict
+ # serialization figure and adjust the hf-traces to the whole view
+ # with the check-update method (by passing no range / filter args)
+ with self.batch_update():
+ graph_dict: dict = self._get_current_graph()
+ update_indices = self._check_update_figure_dict(graph_dict)
+ for idx in update_indices:
+ self.data[idx].update(graph_dict["data"][idx])
+
# The FigureResampler needs a dash app
self._app: JupyterDash | Dash | None = None
self._port: int | None = None
diff --git a/plotly_resampler/figure_resampler/figure_resampler_interface.py b/plotly_resampler/figure_resampler/figure_resampler_interface.py
index 72895cca..27e98d29 100644
--- a/plotly_resampler/figure_resampler/figure_resampler_interface.py
+++ b/plotly_resampler/figure_resampler/figure_resampler_interface.py
@@ -16,6 +16,7 @@
from copy import copy
from typing import Dict, Iterable, List, Optional, Tuple, Union
from uuid import uuid4
+from collections import namedtuple
import dash
import numpy as np
@@ -24,14 +25,18 @@
from plotly.basedatatypes import BaseTraceType, BaseFigure
from ..aggregation import AbstractSeriesAggregator, EfficientLTTB
-from ..utils import round_td_str, round_number_str
+from .utils import round_td_str, round_number_str
from abc import ABC
+_hf_data_container = namedtuple("DataContainer", ["x", "y", "text", "hovertext"])
+
class AbstractFigureAggregator(BaseFigure, ABC):
"""Abstract interface for data aggregation functionality for plotly figures."""
+ _high_frequency_traces = ["scatter", "scattergl"]
+
def __init__(
self,
figure: BaseFigure,
@@ -52,7 +57,7 @@ def __init__(
figure: BaseFigure
The figure that will be decorated. Can be either an empty figure
(e.g., ``go.Figure()``, ``make_subplots()``, ``go.FigureWidget``) or an
- existing figure, by default a go.Figure().
+ existing figure.
convert_existing_traces: bool
A bool indicating whether the high-frequency traces of the passed ``figure``
should be resampled, by default True. Hence, when set to False, the
@@ -91,6 +96,10 @@ def __init__(
self._global_downsampler = default_downsampler
+ # Given figure should always be a BaseFigure that is not wrapped by
+ # a plotly-resampler class
+ assert isinstance(figure, BaseFigure)
+ assert not issubclass(type(figure), AbstractFigureAggregator)
self._figure_class = figure.__class__
if convert_existing_traces:
@@ -100,10 +109,29 @@ def __init__(
f_._grid_ref = figure._grid_ref
super().__init__(f_)
- for trace in figure.data:
- self.add_trace(trace)
+ # make sure that the UIDs of these traces do not get adjusted
+ self._data_validator.set_uid = False
+ self.add_traces(figure.data)
else:
super().__init__(figure)
+ self._data_validator.set_uid = False
+
+        # A list of all xaxis and yaxis string names
+ # e.g., "xaxis", "xaxis2", "xaxis3", .... for _xaxis_list
+        self._xaxis_list = self._re_matches(re.compile(r"xaxis\d*"), self._layout.keys())
+        self._yaxis_list = self._re_matches(re.compile(r"yaxis\d*"), self._layout.keys())
+ # edge case: an empty `go.Figure()` does not yet contain axes keys
+ if not len(self._xaxis_list):
+ self._xaxis_list = ["xaxis"]
+ self._yaxis_list = ["yaxis"]
+
+        # Make sure to reset the layout's range
+ self.update_layout(
+ {
+ axis: {"autorange": True, "range": None}
+ for axis in self._xaxis_list + self._yaxis_list
+ }
+ )
def _print(self, *values):
"""Helper method for printing if ``verbose`` is set to True."""
@@ -398,6 +426,29 @@ def _check_update_figure_dict(
updated_trace_indices.append(idx)
return updated_trace_indices
+ @staticmethod
+ def _get_figure_class(constr: type) -> type:
+ """Get the plotly figure class (constructor) for the given class (constructor).
+
+ .. Note::
+ This method will always return a plotly constructor, even when the given
+ `constr` is decorated (after executing the ``register_plotly_resampler``
+ function).
+
+ Parameters
+ ----------
+ constr: type
+ The constructor class for which we want to retrieve the plotly constructor.
+
+ Returns
+ -------
+ type:
+ The plotly figure class (constructor) of the given `constr`.
+
+ """
+        from ..registering import _get_plotly_constr  # avoids a circular import
+ return _get_plotly_constr(constr)
+
@staticmethod
def _slice_time(
hf_series: pd.Series,
@@ -454,7 +505,7 @@ def hf_data(self):
"""Property to adjust the `data` component of the current graph
.. note::
- The user has full responisbility to adjust ``hf_data`` properly.
+ The user has full responsibility to adjust ``hf_data`` properly.
Example:
@@ -512,6 +563,208 @@ def _to_hf_series(x: np.ndarray, y: np.ndarray) -> pd.Series:
dtype="category" if y.dtype.type == np.str_ else y.dtype,
)
+ def _parse_get_trace_props(
+ self,
+ trace: BaseTraceType,
+ hf_x: Iterable = None,
+ hf_y: Iterable = None,
+ hf_text: Iterable = None,
+ hf_hovertext: Iterable = None,
+ ) -> _hf_data_container:
+ """Parse and capture the possibly high-frequency trace-props in a datacontainer.
+
+ Parameters
+ ----------
+ trace : BaseTraceType
+ The trace which will be parsed.
+        hf_x : Iterable, optional
+            high-frequency trace "x" data, overrides the trace's x-data.
+        hf_y : Iterable, optional
+            high-frequency trace "y" data, overrides the trace's y-data.
+        hf_text : Iterable, optional
+            high-frequency trace "text" data, overrides the trace's text-data.
+        hf_hovertext : Iterable, optional
+            high-frequency trace "hovertext" data, overrides the trace's
+            hovertext data.
+
+ Returns
+ -------
+ _hf_data_container
+ A namedtuple which serves as a datacontainer.
+
+ """
+ hf_x = (
+ trace["x"]
+ if hasattr(trace, "x") and hf_x is None
+ else hf_x.values
+ if isinstance(hf_x, pd.Series)
+ else hf_x
+ if isinstance(hf_x, pd.Index)
+ else np.asarray(hf_x)
+ )
+
+ hf_y = (
+ trace["y"]
+ if hasattr(trace, "y") and hf_y is None
+ else hf_y.values
+ if isinstance(hf_y, (pd.Series, pd.Index))
+ else hf_y
+ )
+ hf_y = np.asarray(hf_y)
+
+ hf_text = (
+ hf_text
+ if hf_text is not None
+ else trace["text"]
+ if hasattr(trace, "text") and trace["text"] is not None
+ else None
+ )
+
+ hf_hovertext = (
+ hf_hovertext
+ if hf_hovertext is not None
+ else trace["hovertext"]
+ if hasattr(trace, "hovertext") and trace["hovertext"] is not None
+ else None
+ )
+
+ if trace["type"].lower() in self._high_frequency_traces:
+ if hf_x is None: # if no data as x or hf_x is passed
+ if hf_y.ndim != 0: # if hf_y is an array
+ hf_x = pd.RangeIndex(0, len(hf_y)) # np.arange(len(hf_y))
+ else: # if no data as y or hf_y is passed
+ hf_x = np.asarray(None)
+
+ assert hf_y.ndim == np.ndim(hf_x), (
+ "plotly-resampler requires scatter data "
+ "(i.e., x and y, or hf_x and hf_y) to have the same dimensionality!"
+ )
+ # When the x or y of a trace has more than 1 dimension, it is not at all
+ # straightforward how it should be resampled.
+ assert hf_y.ndim <= 1 and np.ndim(hf_x) <= 1, (
+ "plotly-resampler requires scatter data "
+ "(i.e., x and y, or hf_x and hf_y) to be <= 1 dimensional!"
+ )
+
+ # Note: this also converts hf_text and hf_hovertext to a np.ndarray
+ if isinstance(hf_text, (list, np.ndarray, pd.Series)):
+ hf_text = np.asarray(hf_text)
+ if isinstance(hf_hovertext, (list, np.ndarray, pd.Series)):
+ hf_hovertext = np.asarray(hf_hovertext)
+
+ # Remove NaNs for efficiency (storing less meaningless data)
+ # NaNs introduce gaps between enclosing non-NaN data points & might distort
+ # the resampling algorithms
+ if pd.isna(hf_y).any():
+ not_nan_mask = ~pd.isna(hf_y)
+ hf_x = hf_x[not_nan_mask]
+ hf_y = hf_y[not_nan_mask]
+ if isinstance(hf_text, np.ndarray):
+ hf_text = hf_text[not_nan_mask]
+ if isinstance(hf_hovertext, np.ndarray):
+ hf_hovertext = hf_hovertext[not_nan_mask]
+
+ # If the categorical or string-like hf_y data is of type object (happens
+ # when y argument is used for the trace constructor instead of hf_y), we
+ # transform it to type string as such it will be sent as categorical data
+ # to the downsampling algorithm
+ if hf_y.dtype == "object":
+ hf_y = hf_y.astype("str")
+
+ # orjson encoding doesn't like to encode with uint8 & uint16 dtype
+ if str(hf_y.dtype) in ["uint8", "uint16"]:
+ hf_y = hf_y.astype("uint32")
+
+ assert len(hf_x) == len(hf_y), "x and y have different length!"
+ else:
+ self._print(f"trace {trace['type']} is not a high-frequency trace")
+
+ # hf_x and hf_y have priority over the traces' data
+ if hasattr(trace, "x"):
+ trace["x"] = hf_x
+
+ if hasattr(trace, "y"):
+ trace["y"] = hf_y
+
+ if hasattr(trace, "text"):
+ trace["text"] = hf_text
+
+ if hasattr(trace, "hovertext"):
+ trace["hovertext"] = hf_hovertext
+
+ return _hf_data_container(hf_x, hf_y, hf_text, hf_hovertext)
+
+ def _construct_hf_data_dict(
+ self,
+ dc: _hf_data_container,
+ trace: BaseTraceType,
+ downsampler: AbstractSeriesAggregator | None,
+ max_n_samples: int | None,
+ offset=0,
+ ) -> dict:
+ """Create the `hf_data` dict which will be put in the `_hf_data` property.
+
+ Parameters
+ ----------
+ dc : _hf_data_container
+            The hf_data container, holding the parsed hf-data.
+ trace : BaseTraceType
+ The trace.
+ downsampler : AbstractSeriesAggregator | None
+ The downsampler which will be used.
+ max_n_samples : int | None
+ The max number of output samples.
+
+ Returns
+ -------
+ dict
+ The hf_data dict.
+ """
+        # We will re-create this each time, as hf_x and hf_y hold the
+ # high-frequency data and can be adjusted on the fly with the public hf_data
+ # property.
+ hf_series = self._to_hf_series(x=dc.x, y=dc.y)
+
+        # Checking this now avoids a less interpretable `KeyError` when resampling
+ assert hf_series.index.is_monotonic_increasing
+
+        # As we support prefix-suffixing of downsampled data, we ensure that
+        # each trace has a name
+ # https://github.com/plotly/plotly.py/blob/ce0ed07d872c487698bde9d52e1f1aadf17aa65f/packages/python/plotly/plotly/basedatatypes.py#L539
+ # The link above indicates that the trace index is derived from `data`
+ if trace.name is None:
+ trace.name = f"trace {len(self.data) + offset}"
+
+ # Determine (1) the axis type and (2) the downsampler instance
+ # & (3) store a hf_data entry for the corresponding trace,
+ # identified by its UUID
+ axis_type = "date" if isinstance(dc.x, pd.DatetimeIndex) else "linear"
+
+ default_n_samples = False
+ if max_n_samples is None:
+ default_n_samples = True
+ max_n_samples = self._global_n_shown_samples
+
+ default_downsampler = False
+ if downsampler is None:
+ default_downsampler = True
+ downsampler = self._global_downsampler
+
+        # TODO -> can't we just store the dc here? (might result in less duplication
+        # of knowledge, because now you need to know all the eligible hf_keys in dc)
+ return {
+ "max_n_samples": max_n_samples,
+ "default_n_samples": default_n_samples,
+ "x": dc.x,
+ "y": dc.y,
+ "axis_type": axis_type,
+ "downsampler": downsampler,
+ "default_downsampler": default_downsampler,
+ "text": dc.text,
+ "hovertext": dc.hovertext,
+ }
+
def add_trace(
self,
trace: Union[BaseTraceType, dict],
@@ -620,149 +873,51 @@ def add_trace(
also storing the low-frequency series in the back-end.
"""
- if max_n_samples is None:
- max_n_samples = self._global_n_shown_samples
+ # to comply with the plotly data input acceptance behavior
+ if isinstance(trace, (list, tuple)):
+ raise ValueError("Trace must be either a dict or a BaseTraceType")
- # First add the trace, as each (even the non-hf_data traces), must contain this
- # key for comparison
- uuid = str(uuid4())
- trace.uid = uuid
-
- hf_x = (
- trace["x"]
- if hasattr(trace, "x") and hf_x is None
- else hf_x.values
- if isinstance(hf_x, pd.Series)
- else hf_x
- if isinstance(hf_x, pd.Index)
- else np.asarray(hf_x)
+ max_out_s = (
+ self._global_n_shown_samples if max_n_samples is None else max_n_samples
)
- hf_y = (
- trace["y"]
- if hasattr(trace, "y") and hf_y is None
- else hf_y.values
- if isinstance(hf_y, (pd.Series, pd.Index))
- else hf_y
- )
- hf_y = np.asarray(hf_y)
+ # Validate the trace and convert to a trace object
+ if not isinstance(trace, BaseTraceType):
+ trace = self._data_validator.validate_coerce(trace)[0]
- hf_text = (
- hf_text
- if hf_text is not None
- else trace["text"]
- if hasattr(trace, "text") and trace["text"] is not None
- else None
- )
+        # First add a UUID, as each trace (even the non-hf_data traces) must contain
+        # this key for comparison. If the trace already has a UUID, we will keep it.
+ uuid_str = str(uuid4()) if trace.uid is None else trace.uid
+ trace.uid = uuid_str
- hf_hovertext = (
- hf_hovertext
- if hf_hovertext is not None
- else trace["hovertext"]
- if hasattr(trace, "hovertext") and trace["hovertext"] is not None
- else None
- )
-
- high_frequency_traces = ["scatter", "scattergl"]
- if trace["type"].lower() in high_frequency_traces:
- if hf_x is None: # if no data as x or hf_x is passed
- if hf_y.ndim != 0: # if hf_y is an array
- hf_x = np.arange(len(hf_y))
- else: # if no data as y or hf_y is passed
- hf_x = np.asarray(None)
-
- assert hf_y.ndim == np.ndim(hf_x), (
- "plotly-resampler requires scatter data "
- "(i.e., x and y, or hf_x and hf_y) to have the same dimensionality!"
- )
- # When the x or y of a trace has more than 1 dimension, it is not at all
- # straightforward how it should be resampled.
- assert hf_y.ndim <= 1 and np.ndim(hf_x) <= 1, (
- "plotly-resampler requires scatter data "
- "(i.e., x and y, or hf_x and hf_y) to be <= 1 dimensional!"
- )
+ # construct the hf_data_container
+ # TODO in future version -> maybe regex on kwargs which start with `hf_`
+ dc = self._parse_get_trace_props(trace, hf_x, hf_y, hf_text, hf_hovertext)
- # Note: this also converts hf_text and hf_hovertext to a np.ndarray
- if isinstance(hf_text, (list, np.ndarray, pd.Series)):
- hf_text = np.asarray(hf_text)
- if isinstance(hf_hovertext, (list, np.ndarray, pd.Series)):
- hf_hovertext = np.asarray(hf_hovertext)
-
- # Remove NaNs for efficiency (storing less meaningless data)
- # NaNs introduce gaps between enclosing non-NaN data points & might distort
- # the resampling algorithms
- if pd.isna(hf_y).any():
- not_nan_mask = ~pd.isna(hf_y)
- hf_x = hf_x[not_nan_mask]
- hf_y = hf_y[not_nan_mask]
- if isinstance(hf_text, np.ndarray):
- hf_text = hf_text[not_nan_mask]
- if isinstance(hf_hovertext, np.ndarray):
- hf_hovertext = hf_hovertext[not_nan_mask]
-
- # If the categorical or string-like hf_y data is of type object (happens
- # when y argument is used for the trace constructor instead of hf_y), we
- # transform it to type string as such it will be sent as categorical data
- # to the downsampling algorithm
- if hf_y.dtype == "object":
- hf_y = hf_y.astype("str")
-
- # orjson encoding doesn't like to encode with uint8 & uint16 dtype
- if str(hf_y.dtype) in ["uint8", "uint16"]:
- hf_y = hf_y.astype("uint32")
-
- assert len(hf_x) == len(hf_y), "x and y have different length!"
-
- n_samples = len(hf_x)
- # These traces will determine the autoscale RANGE!
- # -> so also store when `limit_to_view` is set.
- if n_samples > max_n_samples or limit_to_view:
+ n_samples = len(dc.x)
+ # These traces will determine the autoscale RANGE!
+ # -> so also store when `limit_to_view` is set.
+ if trace["type"].lower() in self._high_frequency_traces:
+ if n_samples > max_out_s or limit_to_view:
self._print(
- f"\t[i] DOWNSAMPLE {trace['name']}\t{n_samples}->{max_n_samples}"
+ f"\t[i] DOWNSAMPLE {trace['name']}\t{n_samples}->{max_out_s}"
)
- # We will re-create this each time as hf_x and hf_y withholds
- # high-frequency data
- # index = pd.Index(hf_x, copy=False, name="timestamp")
- hf_series = self._to_hf_series(x=hf_x, y=hf_y)
-
- # Checking this now avoids less interpretable `KeyError` when resampling
- assert hf_series.index.is_monotonic_increasing
-
- # As we support prefix-suffixing of downsampled data, we assure that
- # each trace has a name
- # https://github.com/plotly/plotly.py/blob/ce0ed07d872c487698bde9d52e1f1aadf17aa65f/packages/python/plotly/plotly/basedatatypes.py#L539
- # The link above indicates that the trace index is derived from `data`
- if trace.name is None:
- trace.name = f"trace {len(self.data)}"
-
- # Determine (1) the axis type and (2) the downsampler instance
- # & (3) store a hf_data entry for the corresponding trace,
- # identified by its UUID
- axis_type = "date" if isinstance(hf_x, pd.DatetimeIndex) else "linear"
- d = self._global_downsampler if downsampler is None else downsampler
- self._hf_data[uuid] = {
- "max_n_samples": max_n_samples,
- "x": hf_x,
- "y": hf_y,
- "axis_type": axis_type,
- "downsampler": d,
- "text": hf_text,
- "hovertext": hf_hovertext,
- }
+ self._hf_data[uuid_str] = self._construct_hf_data_dict(
+ dc,
+ trace=trace,
+ downsampler=downsampler,
+ max_n_samples=max_n_samples,
+ )
# Before we update the trace, we create a new pointer to that trace in
# which the downsampled data will be stored. This way, the original
# data of the trace to this `add_trace` method will not be altered.
# We copy (by reference) all the non-data properties of the trace in
# the new trace.
- if not isinstance(trace, dict):
- trace = trace._props
+ trace = trace._props # convert the trace into a dict
trace = {
- k: trace[k]
- for k in set(trace.keys()).difference(
- {"text", "hovertext", "x", "y"}
- )
+ k: trace[k] for k in set(trace.keys()).difference(set(dc._fields))
}
# NOTE:
@@ -771,47 +926,179 @@ def add_trace(
# Hence, you first downsample the trace.
trace = self._check_update_trace_data(trace)
assert trace is not None
- super(self._figure_class, self).add_trace(trace=trace, **trace_kwargs)
- self.data[-1].uid = uuid
- return
+ return super(self._figure_class, self).add_trace(trace, **trace_kwargs)
else:
self._print(f"[i] NOT resampling {trace['name']} - len={n_samples}")
- trace.x = hf_x
- trace.y = hf_y
- trace.text = hf_text
- trace.hovertext = hf_hovertext
- return super(self._figure_class, self).add_trace(
- trace=trace, **trace_kwargs
- )
- else:
- self._print(f"trace {trace['type']} is not a high-frequency trace")
+ # TODO: can be made more generic
+ trace.x = dc.x
+ trace.y = dc.y
+ trace.text = dc.text
+ trace.hovertext = dc.hovertext
+ return super(self._figure_class, self).add_trace(trace, **trace_kwargs)
- # hf_x and hf_y have priority over the traces' data
- if hasattr(trace, "x"):
- trace["x"] = hf_x
+ return super(self._figure_class, self).add_trace(trace, **trace_kwargs)
- if hasattr(trace, "y"):
- trace["y"] = hf_y
+ def add_traces(
+ self,
+        data: List[BaseTraceType | dict] | BaseTraceType | dict,
+ max_n_samples: None | List[int] | int = None,
+        downsamplers: None
+        | List[AbstractSeriesAggregator]
+        | AbstractSeriesAggregator = None,
+ limit_to_views: List[bool] | bool = False,
+ **traces_kwargs,
+ ):
+ """Add traces to the figure.
- if hasattr(trace, "text"):
- trace["text"] = hf_text
+ .. note::
+            Make sure to look at the :func:`add_trace` function for more info about
+            **speed optimization**, and about how to deal with data that is not
+            ``high-frequency`` but that you still want to resample / limit to the
+            front-end view.
- if hasattr(trace, "hovertext"):
- trace["hovertext"] = hf_hovertext
+ Parameters
+ ----------
+ data : List[BaseTraceType | dict]
+ A list of trace specifications to be added.
+ Trace specifications may be either:
- return super(self._figure_class, self).add_trace(
- trace=trace, **trace_kwargs
+ - Instances of trace classes from the plotly.graph_objs
+ package (e.g plotly.graph_objs.Scatter, plotly.graph_objs.Bar).
+ - Dicts where:
+
+ - The 'type' property specifies the trace type (e.g.
+ 'scatter', 'bar', 'area', etc.). If the dict has no 'type'
+ property then 'scatter' is assumed.
+ - All remaining properties are passed to the constructor
+ of the specified trace type.
+
+ max_n_samples : None | List[int] | int, optional
+ The maximum number of samples that will be shown for each trace.
+ If a single integer is passed, all traces will use this number. If this
+            variable is not set, ``_global_n_shown_samples`` will be used.
+        downsamplers : None | List[AbstractSeriesAggregator] | AbstractSeriesAggregator, optional
+ The downsampler that will be used to aggregate the traces. If a single
+ aggregator is passed, all traces will use this aggregator.
+ If this variable is not set, ``_global_downsampler`` will be used.
+ limit_to_views : None | List[bool] | bool, optional
+            List of limit_to_view booleans for the added traces. If set to True,
+            the trace's datapoints will be cut to the corresponding front-end view,
+            even if the total number of samples is lower than ``max_n_samples``. If a
+            single boolean is passed, all traces to be added will use this value,
+            by default False.\n
+            Note that setting this parameter to True ensures that low-frequency traces
+            are added to the ``hf_data`` property.
+ **traces_kwargs: dict
+ Additional trace related keyword arguments.
+ e.g.: rows=.., cols=..., secondary_ys=...
+
+ .. seealso::
+ `Figure.add_traces `_ docs.
+
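+        Examples
+        --------
+        A minimal sketch (assuming ``fig`` is a ``FigureResampler`` or
+        ``FigureWidgetResampler`` instance and ``noisy_sin`` is a large 1D array;
+        the trace names and the ``max_n_samples`` value are illustrative)::
+
+            fig.add_traces(
+                [dict(name="sig1", y=noisy_sin), dict(name="sig2", y=noisy_sin ** 2)],
+                max_n_samples=2_000,
+            )
+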
+ """
+        # note: Plotly's add_traces also allows non-list-like input, e.g. a scatter
+        # object; the code below is an exact copy of their internally applied parsing
+ if not isinstance(data, (list, tuple)):
+ data = [data]
+
+ # Convert each trace into a BaseTraceType object
+ data = [
+ self._data_validator.validate_coerce(trace)[0]
+ if not isinstance(trace, BaseTraceType)
+ else trace
+ for trace in data
+ ]
+
+        # First add a UUID, as each trace (even the non-hf_data traces) must contain
+        # this key for comparison. If the trace already has a UUID, we will keep it.
+ for trace in data:
+ uuid_str = str(uuid4()) if trace.uid is None else trace.uid
+ trace.uid = uuid_str
+
+ # Convert the data properties
+ if isinstance(max_n_samples, (int, np.integer)) or max_n_samples is None:
+ max_n_samples = [max_n_samples] * len(data)
+ if isinstance(downsamplers, AbstractSeriesAggregator) or downsamplers is None:
+ downsamplers = [downsamplers] * len(data)
+ if isinstance(limit_to_views, bool):
+ limit_to_views = [limit_to_views] * len(data)
+
+ for i, (trace, max_out, downsampler, limit_to_view) in enumerate(
+ zip(data, max_n_samples, downsamplers, limit_to_views)
+ ):
+ if (
+ trace.type.lower() not in self._high_frequency_traces
+ or self._hf_data.get(trace.uid) is not None
+ ):
+ continue
+
+ max_out_s = self._global_n_shown_samples if max_out is None else max_out
+ if not limit_to_view and (trace.y is None or len(trace.y) <= max_out_s):
+ continue
+
+ dc = self._parse_get_trace_props(trace)
+ self._hf_data[trace.uid] = self._construct_hf_data_dict(
+ dc,
+ trace=trace,
+ downsampler=downsampler,
+ max_n_samples=max_out,
+ offset=i,
)
- # def add_traces(*args, **kwargs):
- # raise NotImplementedError("This functionality is not (yet) supported")
+            # convert the trace into a dict, and keep only the non-hf props
+ trace = trace._props
+ trace = {k: trace[k] for k in set(trace.keys()).difference(set(dc._fields))}
+
+ # update the trace data with the HF props
+ trace = self._check_update_trace_data(trace)
+ assert trace is not None
+ data[i] = trace
+
+ super(self._figure_class, self).add_traces(data, **traces_kwargs)
def _clear_figure(self):
"""Clear the current figure object it's data and layout."""
self._hf_data = {}
self.data = []
+ self._data = []
+ self._layout = {}
self.layout = {}
+ def _copy_hf_data(self, hf_data: dict, adjust_default_values: bool = False) -> dict:
+ """Copy (i.e. create a new key reference, not a deep copy) of a hf_data dict.
+
+ Parameters
+ ----------
+ hf_data : dict
+ The hf_data dict, having the trace 'uid' as key and the
+ hf-data, together with its aggregation properties as dict-values
+ adjust_default_values: bool
+ Whether the default values (of the downsampler, max # shown samples) will
+ be adjusted according to the values of this object, by default False
+
+ Returns
+ -------
+ dict
+ The copied (& default values adjusted) output dict.
+
+ """
+ hf_data_cp = {
+ uid: {
+ k: hf_dict[k]
+ for k in set(hf_dict.keys())
+ }
+ for uid, hf_dict in hf_data.items()
+ }
+
+ # Adjust the default arguments to the current argument values
+ if adjust_default_values:
+ for hf_props in hf_data_cp.values():
+ if hf_props.get("default_downsampler", False):
+ hf_props["downsampler"] = self._global_downsampler
+ if hf_props.get("default_n_samples", False):
+ hf_props["max_n_samples"] = self._global_n_shown_samples
+
+ return hf_data_cp
+
def replace(self, figure: go.Figure, convert_existing_traces: bool = True):
"""Replace the current figure layout with the passed figure object.
diff --git a/plotly_resampler/figure_resampler/figurewidget_resampler.py b/plotly_resampler/figure_resampler/figurewidget_resampler.py
index 136be9d7..6e12aa40 100644
--- a/plotly_resampler/figure_resampler/figurewidget_resampler.py
+++ b/plotly_resampler/figure_resampler/figurewidget_resampler.py
@@ -10,13 +10,13 @@
__author__ = "Jonas Van Der Donckt, Jeroen Van Der Donckt, Emiel Deprost"
-import re
from typing import Tuple
import plotly.graph_objects as go
+from plotly.basedatatypes import BaseFigure
-from .figure_resampler import AbstractFigureAggregator
from ..aggregation import AbstractSeriesAggregator, EfficientLTTB
+from .figure_resampler_interface import AbstractFigureAggregator
class _FigureWidgetResamplerM(type(AbstractFigureAggregator), type(go.FigureWidget)):
@@ -40,7 +40,7 @@ class FigureWidgetResampler(
def __init__(
self,
- figure: go.FigureWidget | go.Figure = None,
+ figure: BaseFigure | dict = None,
convert_existing_traces: bool = True,
default_n_shown_samples: int = 1000,
default_downsampler: AbstractSeriesAggregator = EfficientLTTB(),
@@ -51,14 +51,21 @@ def __init__(
show_mean_aggregation_size: bool = True,
verbose: bool = False,
):
- if figure is None:
- figure = go.FigureWidget()
-
- if not isinstance(figure, go.FigureWidget):
- figure = go.FigureWidget(figure)
+ # Parse the figure input before calling `super`
+ f = self._get_figure_class(go.FigureWidget)()
+ f._data_validator.set_uid = False
+
+ if isinstance(figure, BaseFigure): # go.Figure or go.FigureWidget or AbstractFigureAggregator
+ # A base figure object, we first copy the layout and grid ref
+ f.layout = figure.layout
+ f._grid_ref = figure._grid_ref
+ f.add_traces(figure.data)
+ elif isinstance(figure, (dict, list)):
+ # A single trace dict or a list of traces
+ f.add_traces(figure)
super().__init__(
- figure,
+ f,
convert_existing_traces,
default_n_shown_samples,
default_downsampler,
@@ -67,20 +74,28 @@ def __init__(
verbose,
)
+ if isinstance(figure, AbstractFigureAggregator):
+            # Copy the `_hf_data` if the previous figure was an AbstractFigureAggregator
+            # and adjust the default `max_n_samples` and `downsampler`
+ self._hf_data.update(
+ self._copy_hf_data(figure._hf_data, adjust_default_values=True)
+ )
+
+        # Note: This hack ensures that this figure object initially uses
+ # data of the whole view. More concretely; we create a dict
+ # serialization figure and adjust the hf-traces to the whole view
+ # with the check-update method (by passing no range / filter args)
+ with self.batch_update():
+ graph_dict: dict = self._get_current_graph()
+ update_indices = self._check_update_figure_dict(graph_dict)
+ for idx in update_indices:
+ self.data[idx].update(graph_dict["data"][idx])
+
self._prev_layout = None # Contains the previous xaxis layout configuration
# used for logging purposes to save a history of layout changes
self._relayout_hist = []
- # A list of al xaxis and yaxis string names
- # e.g., "xaxis", "xaxis2", "xaxis3", .... for _xaxis_list
- self._xaxis_list = self._re_matches(re.compile("xaxis\d*"), self._layout.keys())
- self._yaxis_list = self._re_matches(re.compile("yaxis\d*"), self._layout.keys())
- # edge case: an empty `go.Figure()` does not yet contain axes keys
- if not len(self._xaxis_list):
- self._xaxis_list = ["xaxis"]
- self._yaxis_list = ["yaxis"]
-
# Assign the update-methods to the corresponding classes
showspike_keys = [f"{xaxis}.showspikes" for xaxis in self._xaxis_list]
self.layout.on_change(self._update_spike_ranges, *showspike_keys)
diff --git a/plotly_resampler/figure_resampler/utils.py b/plotly_resampler/figure_resampler/utils.py
new file mode 100644
index 00000000..8913daac
--- /dev/null
+++ b/plotly_resampler/figure_resampler/utils.py
@@ -0,0 +1,168 @@
+"""Utility functions for the figure_resampler submodule."""
+
+import math
+import pandas as pd
+
+from plotly.basedatatypes import BaseFigure
+from plotly.basewidget import BaseFigureWidget
+
+from typing import Any
+
+### Checks for the figure type
+
+
+def is_figure(figure: Any) -> bool:
+ """Check if the figure is a plotly go.Figure or a FigureResampler.
+
+ .. Note::
+ This method does not use isinstance(figure, go.Figure) as this will not work
+ when go.Figure is decorated (after executing the
+ ``register_plotly_resampler`` function).
+
+ Parameters
+ ----------
+ figure : Any
+ The figure to check.
+
+ Returns
+ -------
+ bool
+ True if the figure is a plotly go.Figure or a FigureResampler.
+ """
+
+ return isinstance(figure, BaseFigure) and (not isinstance(figure, BaseFigureWidget))
+
+
+def is_figurewidget(figure: Any):
+ """Check if the figure is a plotly go.FigureWidget or a FigureWidgetResampler.
+
+ .. Note::
+ This method does not use isinstance(figure, go.FigureWidget) as this will not
+ work when go.FigureWidget is decorated (after executing the
+ ``register_plotly_resampler`` function).
+
+ Parameters
+ ----------
+ figure : Any
+ The figure to check.
+
+ Returns
+ -------
+ bool
+ True if the figure is a plotly go.FigureWidget or a FigureWidgetResampler.
+ """
+ return isinstance(figure, BaseFigureWidget)
+
+
+def is_fr(figure: Any) -> bool:
+ """Check if the figure is a FigureResampler.
+
+ .. Note::
+ This method will not return True if the figure is a plotly go.Figure.
+
+ Parameters
+ ----------
+ figure : Any
+ The figure to check.
+
+ Returns
+ -------
+ bool
+ True if the figure is a FigureResampler.
+ """
+ from plotly_resampler import FigureResampler
+
+ return isinstance(figure, FigureResampler)
+
+
+def is_fwr(figure: Any) -> bool:
+ """Check if the figure is a FigureWidgetResampler.
+
+ .. Note::
+ This method will not return True if the figure is a plotly go.FigureWidget.
+
+ Parameters
+ ----------
+ figure : Any
+ The figure to check.
+
+ Returns
+ -------
+ bool
+ True if the figure is a FigureWidgetResampler.
+ """
+ from plotly_resampler import FigureWidgetResampler
+
+ return isinstance(figure, FigureWidgetResampler)
+
+
+### Rounding functions for bin size
+
+
+def timedelta_to_str(td: pd.Timedelta) -> str:
+ """Construct a tight string representation for the given timedelta arg.
+
+ Parameters
+ ----------
+ td: pd.Timedelta
+ The timedelta for which the string representation is constructed
+
+ Returns
+ -------
+ str:
+ The tight string bounds of format '$d-$h$m$s.$ms'.
+
+ """
+ out_str = ""
+
+ # Edge case if we deal with negative
+ if td < pd.Timedelta(seconds=0):
+ td *= -1
+ out_str += "NEG"
+
+ # Note: this must happen after the *= -1
+ c = td.components
+ if c.days > 0:
+ out_str += f"{c.days}D"
+ if c.hours > 0 or c.minutes > 0 or c.seconds > 0 or c.milliseconds > 0:
+ out_str += "_" if len(out_str) else ""
+
+ if c.hours > 0:
+ out_str += f"{c.hours}h"
+ if c.minutes > 0:
+ out_str += f"{c.minutes}m"
+ if c.seconds > 0:
+ if c.milliseconds:
+ out_str += (
+ f"{c.seconds}.{str(c.milliseconds / 1000).split('.')[-1].rstrip('0')}s"
+ )
+ else:
+ out_str += f"{c.seconds}s"
+ elif c.milliseconds > 0:
+ out_str += f"{str(c.milliseconds)}ms"
+ if c.microseconds > 0:
+ out_str += f"{str(c.microseconds)}us"
+ if c.nanoseconds > 0:
+ out_str += f"{str(c.nanoseconds)}ns"
+ return out_str
+
+
+def round_td_str(td: pd.Timedelta) -> str:
+ """Round a timedelta to the nearest unit and convert to a string.
+
+ .. seealso::
+ :func:`timedelta_to_str`
+ """
+ for t_s in ["D", "H", "min", "s", "ms", "us", "ns"]:
+ if td > 0.95 * pd.Timedelta(f"1{t_s}"):
+ return timedelta_to_str(td.round(t_s))
+
+
+def round_number_str(number: float) -> str:
+    """Round a number to the nearest unit (k, M) and convert it to a string.
+
+    e.g., ``1_234_567`` -> "1M" and ``0.0032`` -> "0.003"
+    """
+    if number > 0.95:
+ for unit, scaling in [("M", int(1e6)), ("k", int(1e3))]:
+ if number / scaling > 0.95:
+ return f"{round(number / scaling)}{unit}"
+ return str(round(number))
+ # we have a number < 1 --> round till nearest non-zero digit
+ return str(round(number, 1 + abs(int(math.log10(number)))))
diff --git a/plotly_resampler/registering.py b/plotly_resampler/registering.py
new file mode 100644
index 00000000..254cb8ba
--- /dev/null
+++ b/plotly_resampler/registering.py
@@ -0,0 +1,139 @@
+"""Register plotly-resampler to (un)wrap plotly-graph-objects."""
+
+__author__ = "Jeroen Van Der Donckt, Jonas Van Der Donckt, Emiel Deprost"
+
+from plotly_resampler import FigureResampler, FigureWidgetResampler
+from plotly_resampler.figure_resampler.figure_resampler_interface import (
+ AbstractFigureAggregator,
+)
+from functools import wraps
+
+import plotly
+
+WRAPPED_PREFIX = "[Plotly-Resampler]__"
+PLOTLY_MODULES = [
+ plotly.graph_objs,
+ plotly.graph_objects,
+] # wait for this PR https://github.com/plotly/plotly.py/pull/3779
+PLOTLY_CONSTRUCTOR_WRAPPER = {
+ "Figure": FigureResampler,
+ "FigureWidget": FigureWidgetResampler,
+}
+
+
+def _already_wrapped(constr):
+ return constr.__name__.startswith(WRAPPED_PREFIX)
+
+
+def _get_plotly_constr(constr):
+ """Return the constructor of the underlying plotly graph object and thus omit the
+ possibly wrapped :class:`AbstractFigureAggregator `
+ instance.
+
+ Parameters
+ ----------
+ constr : callable
+        The constructor of an instantiated plotly object.
+
+ Returns
+ -------
+ callable
+ The constructor of a ``go.FigureWidget`` or a ``go.Figure``.
+ """
+ if _already_wrapped(constr):
+ return constr.__wrapped__ # get the original constructor
+ return constr
+
+
+### Registering the wrappers
+
+
+def _is_ipython_env():
+ """Check if we are in an IPython environment (with a kernel)."""
+ try:
+ from IPython import get_ipython
+
+ return "IPKernelApp" in get_ipython().config
+ except Exception:
+ return False
+
+
+def _register_wrapper(
+ module: type,
+ constr_name: str,
+ pr_class: AbstractFigureAggregator,
+ **aggregator_kwargs,
+):
+ constr = getattr(module, constr_name)
+ constr = _get_plotly_constr(constr) # get the original plotly constructor
+
+ # print(f"Wrapping {constr_name} with {pr_class}")
+
+ @wraps(constr)
+ def wrapped_constr(*args, **kwargs):
+ # print(f"Executing constructor wrapper for {constr_name}", constr)
+ return pr_class(constr(*args, **kwargs), **aggregator_kwargs)
+
+ wrapped_constr.__name__ = WRAPPED_PREFIX + constr_name
+ setattr(module, constr_name, wrapped_constr)
+
+
+def register_plotly_resampler(mode="auto", **aggregator_kwargs):
+ """Register plotly-resampler to plotly.graph_objects.
+
+ This function results in the use of plotly-resampler under the hood.
+
+ .. Note::
+        We advise using mode= ``widget`` when working in an IPython-based environment,
+        as the wrapped figure will just behave as a ``go.FigureWidget``, but with
+        dynamic aggregation. When using mode= ``auto`` or ``figure``, most figures
+        will be wrapped as :class:`FigureResampler `,
+        on which :func:`show_dash `
+        needs to be called.
+
+ Parameters
+ ----------
+ mode : str, optional
+ The mode of the plotly-resampler.
+ Possible values are: 'auto', 'figure', 'widget', None.
+ If 'auto' is used, the mode is determined based on the environment; if it is in
+ an ipython environment, the mode is 'widget', otherwise it is 'figure'.
+ If 'figure' is used, all plotly figures are wrapped as FigureResampler objects.
+ If 'widget' is used, all plotly figure widgets are wrapped as
+        FigureWidgetResampler objects (we advise using this mode in an IPython
+        environment with a kernel).
+ If None is used, wrapping is done as expected (go.Figure -> FigureResampler,
+ go.FigureWidget -> FigureWidgetResampler).
+ aggregator_kwargs : dict, optional
+        The keyword arguments to pass to the constructor of the plotly-resampler decorator.
+ See more details in :class:`FigureResampler ` and
+ :class:`FigureWidgetResampler `.
+
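+    Examples
+    --------
+    A minimal sketch; the chosen ``mode`` below is merely illustrative.
+
+    >>> from plotly_resampler import register_plotly_resampler
+    >>> from plotly_resampler import unregister_plotly_resampler
+    >>> register_plotly_resampler(mode="widget")
+    >>> # every go.Figure / go.FigureWidget created from now on is wrapped
+    >>> unregister_plotly_resampler()  # restore the plain plotly constructors
+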
+ """
+ for constr_name, pr_class in PLOTLY_CONSTRUCTOR_WRAPPER.items():
+ if (mode == "auto" and _is_ipython_env()) or mode == "widget":
+ pr_class = FigureWidgetResampler
+ elif mode == "figure":
+ pr_class = FigureResampler
+ # else: default mode -> wrap according to PLOTLY_CONSTRUCTOR_WRAPPER
+
+ for module in PLOTLY_MODULES:
+ _register_wrapper(module, constr_name, pr_class, **aggregator_kwargs)
+
+
+### Unregistering the wrappers
+
+
+def _unregister_wrapper(module: type, constr_name: str):
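+    """Restore the original plotly constructor if it is currently wrapped."""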
+ constr = getattr(module, constr_name)
+ if _already_wrapped(constr):
+ constr = constr.__wrapped__
+ setattr(module, constr_name, constr)
+
+
+def unregister_plotly_resampler():
+ """Unregister plotly-resampler from plotly.graph_objects."""
+    for constr_name in PLOTLY_CONSTRUCTOR_WRAPPER.keys():
+        for module in PLOTLY_MODULES:
+            _unregister_wrapper(module, constr_name)
diff --git a/plotly_resampler/utils.py b/plotly_resampler/utils.py
deleted file mode 100644
index 02147f44..00000000
--- a/plotly_resampler/utils.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import math
-
-import pandas as pd
-
-
-def timedelta_to_str(td: pd.Timedelta) -> str:
- """Construct a tight string representation for the given timedelta arg.
-
- Parameters
- ----------
- td: pd.Timedelta
- The timedelta for which the string representation is constructed
-
- Returns
- -------
- str:
- The tight string bounds of format '$d-$h$m$s.$ms'.
-
- """
- out_str = ""
-
- # Edge case if we deal with negative
- if td < pd.Timedelta(seconds=0):
- td *= -1
- out_str += "NEG"
-
- # Note: this must happen after the *= -1
- c = td.components
- if c.days > 0:
- out_str += f"{c.days}D"
- if c.hours > 0 or c.minutes > 0 or c.seconds > 0 or c.milliseconds > 0:
- out_str += "_" if len(out_str) else ""
-
- if c.hours > 0:
- out_str += f"{c.hours}h"
- if c.minutes > 0:
- out_str += f"{c.minutes}m"
- if c.seconds > 0:
- if c.milliseconds:
- out_str += (
- f"{c.seconds}.{str(c.milliseconds / 1000).split('.')[-1].rstrip('0')}s"
- )
- else:
- out_str += f"{c.seconds}s"
- elif c.milliseconds > 0:
- out_str += f"{str(c.milliseconds)}ms"
- if c.microseconds > 0:
- out_str += f"{str(c.microseconds)}us"
- if c.nanoseconds > 0:
- out_str += f"{str(c.nanoseconds)}ns"
- return out_str
-
-
-def round_td_str(td: pd.Timedelta) -> str:
- for t_s in ["D", "H", "min", "s", "ms", "us", "ns"]:
- if td > 0.95 * pd.Timedelta(f"1{t_s}"):
- return timedelta_to_str(td.round(t_s))
-
-
-def round_number_str(number: float) -> str:
- if number > 0.95:
- for unit, scaling in [("M", int(1e6)), ("k", int(1e3))]:
- if number / scaling > 0.95:
- return f"{round(number / scaling)}{unit}"
- return str(round(number))
- # we have a number < 1 --> round till nearest non-zero digit
- return str(round(number, 1 + abs(int(math.log10(number)))))
diff --git a/poetry.lock b/poetry.lock
index aa082c91..7f8f1111 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -250,14 +250,14 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "coverage"
-version = "6.3.2"
+version = "6.4.1"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
-tomli = {version = "*", optional = true, markers = "extra == \"toml\""}
+tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
@@ -1264,7 +1264,7 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydata-sphinx-theme"
-version = "0.8.1"
+version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
@@ -1274,10 +1274,10 @@ python-versions = ">=3.7"
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
-sphinx = ">=3.5.4,<5"
+sphinx = ">=4.0.2"
[package.extras]
-doc = ["numpydoc", "myst-parser", "pandas", "pytest", "pytest-regressions", "sphinxext-rediraffe", "sphinx-sitemap", "jupyter-sphinx", "plotly", "numpy", "xarray"]
+doc = ["numpydoc", "myst-parser", "pandas", "pytest", "pytest-regressions", "sphinxext-rediraffe", "sphinx-sitemap", "jupyter-sphinx", "plotly", "numpy", "xarray", "sphinx-design"]
test = ["pytest", "pydata-sphinx-theme"]
coverage = ["pytest-cov", "codecov", "pydata-sphinx-theme"]
dev = ["pyyaml", "pre-commit", "nox", "pydata-sphinx-theme"]
@@ -2021,7 +2021,7 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1,<3.11"
-content-hash = "f5abe15b64d1eab7fac29c5e7599833783536c8ddb557759f126602ca69fa5b5"
+content-hash = "41efc43af494df2b9aa1e4851bd96ba8f667802dc12d046451decb063b5daf8a"
[metadata.files]
alabaster = [
@@ -2241,47 +2241,47 @@ colorama = [
{file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"},
]
coverage = [
- {file = "coverage-6.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9b27d894748475fa858f9597c0ee1d4829f44683f3813633aaf94b19cb5453cf"},
- {file = "coverage-6.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:37d1141ad6b2466a7b53a22e08fe76994c2d35a5b6b469590424a9953155afac"},
- {file = "coverage-6.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9987b0354b06d4df0f4d3e0ec1ae76d7ce7cbca9a2f98c25041eb79eec766f1"},
- {file = "coverage-6.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:26e2deacd414fc2f97dd9f7676ee3eaecd299ca751412d89f40bc01557a6b1b4"},
- {file = "coverage-6.3.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4dd8bafa458b5c7d061540f1ee9f18025a68e2d8471b3e858a9dad47c8d41903"},
- {file = "coverage-6.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:46191097ebc381fbf89bdce207a6c107ac4ec0890d8d20f3360345ff5976155c"},
- {file = "coverage-6.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6f89d05e028d274ce4fa1a86887b071ae1755082ef94a6740238cd7a8178804f"},
- {file = "coverage-6.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:58303469e9a272b4abdb9e302a780072c0633cdcc0165db7eec0f9e32f901e05"},
- {file = "coverage-6.3.2-cp310-cp310-win32.whl", hash = "sha256:2fea046bfb455510e05be95e879f0e768d45c10c11509e20e06d8fcaa31d9e39"},
- {file = "coverage-6.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:a2a8b8bcc399edb4347a5ca8b9b87e7524c0967b335fbb08a83c8421489ddee1"},
- {file = "coverage-6.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:f1555ea6d6da108e1999b2463ea1003fe03f29213e459145e70edbaf3e004aaa"},
- {file = "coverage-6.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5f4e1edcf57ce94e5475fe09e5afa3e3145081318e5fd1a43a6b4539a97e518"},
- {file = "coverage-6.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7a15dc0a14008f1da3d1ebd44bdda3e357dbabdf5a0b5034d38fcde0b5c234b7"},
- {file = "coverage-6.3.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21b7745788866028adeb1e0eca3bf1101109e2dc58456cb49d2d9b99a8c516e6"},
- {file = "coverage-6.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:8ce257cac556cb03be4a248d92ed36904a59a4a5ff55a994e92214cde15c5bad"},
- {file = "coverage-6.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:b0be84e5a6209858a1d3e8d1806c46214e867ce1b0fd32e4ea03f4bd8b2e3359"},
- {file = "coverage-6.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:acf53bc2cf7282ab9b8ba346746afe703474004d9e566ad164c91a7a59f188a4"},
- {file = "coverage-6.3.2-cp37-cp37m-win32.whl", hash = "sha256:8bdde1177f2311ee552f47ae6e5aa7750c0e3291ca6b75f71f7ffe1f1dab3dca"},
- {file = "coverage-6.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:b31651d018b23ec463e95cf10070d0b2c548aa950a03d0b559eaa11c7e5a6fa3"},
- {file = "coverage-6.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:07e6db90cd9686c767dcc593dff16c8c09f9814f5e9c51034066cad3373b914d"},
- {file = "coverage-6.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2c6dbb42f3ad25760010c45191e9757e7dce981cbfb90e42feef301d71540059"},
- {file = "coverage-6.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c76aeef1b95aff3905fb2ae2d96e319caca5b76fa41d3470b19d4e4a3a313512"},
- {file = "coverage-6.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8cf5cfcb1521dc3255d845d9dca3ff204b3229401994ef8d1984b32746bb45ca"},
- {file = "coverage-6.3.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8fbbdc8d55990eac1b0919ca69eb5a988a802b854488c34b8f37f3e2025fa90d"},
- {file = "coverage-6.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:ec6bc7fe73a938933d4178c9b23c4e0568e43e220aef9472c4f6044bfc6dd0f0"},
- {file = "coverage-6.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:9baff2a45ae1f17c8078452e9e5962e518eab705e50a0aa8083733ea7d45f3a6"},
- {file = "coverage-6.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:fd9e830e9d8d89b20ab1e5af09b32d33e1a08ef4c4e14411e559556fd788e6b2"},
- {file = "coverage-6.3.2-cp38-cp38-win32.whl", hash = "sha256:f7331dbf301b7289013175087636bbaf5b2405e57259dd2c42fdcc9fcc47325e"},
- {file = "coverage-6.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:68353fe7cdf91f109fc7d474461b46e7f1f14e533e911a2a2cbb8b0fc8613cf1"},
- {file = "coverage-6.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b78e5afb39941572209f71866aa0b206c12f0109835aa0d601e41552f9b3e620"},
- {file = "coverage-6.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4e21876082ed887baed0146fe222f861b5815455ada3b33b890f4105d806128d"},
- {file = "coverage-6.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34626a7eee2a3da12af0507780bb51eb52dca0e1751fd1471d0810539cefb536"},
- {file = "coverage-6.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1ebf730d2381158ecf3dfd4453fbca0613e16eaa547b4170e2450c9707665ce7"},
- {file = "coverage-6.3.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd6fe30bd519694b356cbfcaca9bd5c1737cddd20778c6a581ae20dc8c04def2"},
- {file = "coverage-6.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:96f8a1cb43ca1422f36492bebe63312d396491a9165ed3b9231e778d43a7fca4"},
- {file = "coverage-6.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:dd035edafefee4d573140a76fdc785dc38829fe5a455c4bb12bac8c20cfc3d69"},
- {file = "coverage-6.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5ca5aeb4344b30d0bec47481536b8ba1181d50dbe783b0e4ad03c95dc1296684"},
- {file = "coverage-6.3.2-cp39-cp39-win32.whl", hash = "sha256:f5fa5803f47e095d7ad8443d28b01d48c0359484fec1b9d8606d0e3282084bc4"},
- {file = "coverage-6.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:9548f10d8be799551eb3a9c74bbf2b4934ddb330e08a73320123c07f95cc2d92"},
- {file = "coverage-6.3.2-pp36.pp37.pp38-none-any.whl", hash = "sha256:18d520c6860515a771708937d2f78f63cc47ab3b80cb78e86573b0a760161faf"},
- {file = "coverage-6.3.2.tar.gz", hash = "sha256:03e2a7826086b91ef345ff18742ee9fc47a6839ccd517061ef8fa1976e652ce9"},
+ {file = "coverage-6.4.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f1d5aa2703e1dab4ae6cf416eb0095304f49d004c39e9db1d86f57924f43006b"},
+ {file = "coverage-6.4.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4ce1b258493cbf8aec43e9b50d89982346b98e9ffdfaae8ae5793bc112fb0068"},
+ {file = "coverage-6.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c4e737f60c6936460c5be330d296dd5b48b3963f48634c53b3f7deb0f34ec4"},
+ {file = "coverage-6.4.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:84e65ef149028516c6d64461b95a8dbcfce95cfd5b9eb634320596173332ea84"},
+ {file = "coverage-6.4.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f69718750eaae75efe506406c490d6fc5a6161d047206cc63ce25527e8a3adad"},
+ {file = "coverage-6.4.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e57816f8ffe46b1df8f12e1b348f06d164fd5219beba7d9433ba79608ef011cc"},
+ {file = "coverage-6.4.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:01c5615d13f3dd3aa8543afc069e5319cfa0c7d712f6e04b920431e5c564a749"},
+ {file = "coverage-6.4.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:75ab269400706fab15981fd4bd5080c56bd5cc07c3bccb86aab5e1d5a88dc8f4"},
+ {file = "coverage-6.4.1-cp310-cp310-win32.whl", hash = "sha256:a7f3049243783df2e6cc6deafc49ea123522b59f464831476d3d1448e30d72df"},
+ {file = "coverage-6.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:ee2ddcac99b2d2aec413e36d7a429ae9ebcadf912946b13ffa88e7d4c9b712d6"},
+ {file = "coverage-6.4.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:fb73e0011b8793c053bfa85e53129ba5f0250fdc0392c1591fd35d915ec75c46"},
+ {file = "coverage-6.4.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:106c16dfe494de3193ec55cac9640dd039b66e196e4641fa8ac396181578b982"},
+ {file = "coverage-6.4.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:87f4f3df85aa39da00fd3ec4b5abeb7407e82b68c7c5ad181308b0e2526da5d4"},
+ {file = "coverage-6.4.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:961e2fb0680b4f5ad63234e0bf55dfb90d302740ae9c7ed0120677a94a1590cb"},
+ {file = "coverage-6.4.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:cec3a0f75c8f1031825e19cd86ee787e87cf03e4fd2865c79c057092e69e3a3b"},
+ {file = "coverage-6.4.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:129cd05ba6f0d08a766d942a9ed4b29283aff7b2cccf5b7ce279d50796860bb3"},
+ {file = "coverage-6.4.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:bf5601c33213d3cb19d17a796f8a14a9eaa5e87629a53979a5981e3e3ae166f6"},
+ {file = "coverage-6.4.1-cp37-cp37m-win32.whl", hash = "sha256:269eaa2c20a13a5bf17558d4dc91a8d078c4fa1872f25303dddcbba3a813085e"},
+ {file = "coverage-6.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f02cbbf8119db68455b9d763f2f8737bb7db7e43720afa07d8eb1604e5c5ae28"},
+ {file = "coverage-6.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ffa9297c3a453fba4717d06df579af42ab9a28022444cae7fa605af4df612d54"},
+ {file = "coverage-6.4.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:145f296d00441ca703a659e8f3eb48ae39fb083baba2d7ce4482fb2723e050d9"},
+ {file = "coverage-6.4.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d44996140af8b84284e5e7d398e589574b376fb4de8ccd28d82ad8e3bea13"},
+ {file = "coverage-6.4.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2bd9a6fc18aab8d2e18f89b7ff91c0f34ff4d5e0ba0b33e989b3cd4194c81fd9"},
+ {file = "coverage-6.4.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3384f2a3652cef289e38100f2d037956194a837221edd520a7ee5b42d00cc605"},
+ {file = "coverage-6.4.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:9b3e07152b4563722be523e8cd0b209e0d1a373022cfbde395ebb6575bf6790d"},
+ {file = "coverage-6.4.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:1480ff858b4113db2718848d7b2d1b75bc79895a9c22e76a221b9d8d62496428"},
+ {file = "coverage-6.4.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:865d69ae811a392f4d06bde506d531f6a28a00af36f5c8649684a9e5e4a85c83"},
+ {file = "coverage-6.4.1-cp38-cp38-win32.whl", hash = "sha256:664a47ce62fe4bef9e2d2c430306e1428ecea207ffd68649e3b942fa8ea83b0b"},
+ {file = "coverage-6.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:26dff09fb0d82693ba9e6231248641d60ba606150d02ed45110f9ec26404ed1c"},
+ {file = "coverage-6.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d9c80df769f5ec05ad21ea34be7458d1dc51ff1fb4b2219e77fe24edf462d6df"},
+ {file = "coverage-6.4.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:39ee53946bf009788108b4dd2894bf1349b4e0ca18c2016ffa7d26ce46b8f10d"},
+ {file = "coverage-6.4.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5b66caa62922531059bc5ac04f836860412f7f88d38a476eda0a6f11d4724f4"},
+ {file = "coverage-6.4.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fd180ed867e289964404051a958f7cccabdeed423f91a899829264bb7974d3d3"},
+ {file = "coverage-6.4.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84631e81dd053e8a0d4967cedab6db94345f1c36107c71698f746cb2636c63e3"},
+ {file = "coverage-6.4.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8c08da0bd238f2970230c2a0d28ff0e99961598cb2e810245d7fc5afcf1254e8"},
+ {file = "coverage-6.4.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d42c549a8f41dc103a8004b9f0c433e2086add8a719da00e246e17cbe4056f72"},
+ {file = "coverage-6.4.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:309ce4a522ed5fca432af4ebe0f32b21d6d7ccbb0f5fcc99290e71feba67c264"},
+ {file = "coverage-6.4.1-cp39-cp39-win32.whl", hash = "sha256:fdb6f7bd51c2d1714cea40718f6149ad9be6a2ee7d93b19e9f00934c0f2a74d9"},
+ {file = "coverage-6.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:342d4aefd1c3e7f620a13f4fe563154d808b69cccef415415aece4c786665397"},
+ {file = "coverage-6.4.1-pp36.pp37.pp38-none-any.whl", hash = "sha256:4803e7ccf93230accb928f3a68f00ffa80a88213af98ed338a57ad021ef06815"},
+ {file = "coverage-6.4.1.tar.gz", hash = "sha256:4321f075095a096e70aff1d002030ee612b65a205a0a0f5b815280d5dc58100c"},
]
cryptography = [
{file = "cryptography-36.0.2-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:4e2dddd38a5ba733be6a025a1475a9f45e4e41139d1321f412c6b360b19070b6"},
@@ -2864,8 +2864,8 @@ pycparser = [
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydata-sphinx-theme = [
- {file = "pydata_sphinx_theme-0.8.1-py3-none-any.whl", hash = "sha256:af2c99cb0b43d95247b1563860942ba75d7f1596360594fce510caaf8c4fcc16"},
- {file = "pydata_sphinx_theme-0.8.1.tar.gz", hash = "sha256:96165702253917ece13dd895e23b96ee6dce422dcc144d560806067852fe1fed"},
+ {file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
+ {file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydivert = [
{file = "pydivert-2.1.0-py2.py3-none-any.whl", hash = "sha256:382db488e3c37c03ec9ec94e061a0b24334d78dbaeebb7d4e4d32ce4355d9da1"},
diff --git a/pyproject.toml b/pyproject.toml
index f4406875..89a4f578 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -32,7 +32,7 @@ selenium-wire = "^4.5.6"
pyfunctional = "^1.4.3"
dash-bootstrap-components = "^1.0.3"
Sphinx = "^4.4.0"
-pydata-sphinx-theme = "^0.8.0"
+pydata-sphinx-theme = "^0.9.0"
sphinx-autodoc-typehints = "^1.17.0"
ipywidgets = "^7.7.0"
memory-profiler = "^0.60.0"
diff --git a/tests/conftest.py b/tests/conftest.py
index b74e5d01..ad262480 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -9,7 +9,7 @@
import pytest
from plotly.subplots import make_subplots
-from plotly_resampler import FigureResampler, LTTB, EveryNthPoint
+from plotly_resampler import (
+    FigureResampler, LTTB, EveryNthPoint,
+    register_plotly_resampler, unregister_plotly_resampler,
+)
# hyperparameters
_nb_samples = 10_000
@@ -18,6 +18,14 @@
TESTING_LOCAL = False # SET THIS TO TRUE IF YOU ARE TESTING LOCALLY
+@pytest.fixture
+def registering_cleanup():
+    # Clean up the plotly-resampler registration before and after each test
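+    # Usage sketch (hypothetical test): request this fixture by name, e.g.
+    #   def test_something(registering_cleanup):
+    #       register_plotly_resampler()
+    #       ...  # any registered wrappers are removed again after the test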
+ unregister_plotly_resampler()
+ yield
+ unregister_plotly_resampler()
+
+
@pytest.fixture
def driver():
from seleniumwire import webdriver
@@ -47,20 +55,28 @@ def driver():
@pytest.fixture
def float_series() -> pd.Series:
x = np.arange(_nb_samples).astype(np.uint32)
- y = np.sin(x / 300).astype(np.float32) + np.random.randn(_nb_samples) / 5
+ y = np.sin(x / 50).astype(np.float32) + np.random.randn(_nb_samples) / 5
return pd.Series(index=x, data=y)
@pytest.fixture
def cat_series() -> pd.Series:
- cats_list = ["a", "b", "b", "b", "c", "c", "a", "d", "a"]
- return pd.Series(cats_list * (_nb_samples // len(cats_list)), dtype="category")
+ cats_list = ["a", "a", "a", "a"] * 2000
+ for i in np.random.randint(0, len(cats_list), 3):
+ cats_list[i] = "b"
+ for i in np.random.randint(0, len(cats_list), 3):
+ cats_list[i] = "c"
+ return pd.Series(cats_list * (_nb_samples // len(cats_list) + 1), dtype="category")[
+ :_nb_samples
+ ]
@pytest.fixture
def bool_series() -> pd.Series:
- bool_list = [True, False, True, True, True, True] + [True] * 50
- return pd.Series(bool_list * (_nb_samples // len(bool_list)), dtype="bool")
+ bool_list = [True, False, True, True, True, True] + [True] * 1000
+ return pd.Series(bool_list * (_nb_samples // len(bool_list) + 1), dtype="bool")[
+ :_nb_samples
+ ]
@pytest.fixture
@@ -285,7 +301,7 @@ def groupby_consecutive(
' [R]',
),
verbose=False,
- show_mean_aggregation_size=True
+ show_mean_aggregation_size=True,
)
fig.update_layout(height=700)
@@ -410,7 +426,7 @@ def cat_series_box_hist_figure() -> FigureResampler:
fig.add_trace(go.Box(x=float_series.values, name="float_series"), row=1, col=2)
fig.add_trace(
- go.Box(x=float_series.values ** 2, name="float_series**2"), row=1, col=2
+ go.Box(x=float_series.values**2, name="float_series**2"), row=1, col=2
)
# add a not hf-trace
diff --git a/tests/test_aggregators.py b/tests/test_aggregators.py
index ea161ec6..a397554c 100644
--- a/tests/test_aggregators.py
+++ b/tests/test_aggregators.py
@@ -88,7 +88,7 @@ def test_every_nth_point_bool_sequence_data(bool_series):
def test_every_nth_point_empty_series():
- empty_series = pd.Series(name="empty")
+ empty_series = pd.Series(name="empty", dtype='float32')
out = EveryNthPoint(interleave_gaps=True).aggregate(empty_series, n_out=1_000)
assert out.equals(empty_series)
@@ -230,7 +230,7 @@ def test_mmo_bool_sequence_data(bool_series):
def test_mmo_empty_series():
- empty_series = pd.Series(name="empty")
+ empty_series = pd.Series(name="empty", dtype='float32')
out = MinMaxOverlapAggregator(interleave_gaps=True).aggregate(
empty_series, n_out=1_000
)
diff --git a/tests/test_composability.py b/tests/test_composability.py
new file mode 100644
index 00000000..49230e50
--- /dev/null
+++ b/tests/test_composability.py
@@ -0,0 +1,1194 @@
+import plotly.graph_objects as go
+from plotly.subplots import make_subplots
+from plotly_resampler import FigureResampler, FigureWidgetResampler
+
+
+# ----------------------- Figure as Base -----------------------
+if True:
+ # -------- All scatters
+ def test_fr_f_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ # Create FigureResampler object from a go.Figure
+ # 1. All scatters are aggregated
+ fr_f = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_f.data) == 3
+ assert len(fr_f.hf_data) == 3
+ for trace in fr_f.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_f._hf_data
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fr_f = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_f.data) == 3
+ assert len(fr_f.hf_data) == 0
+ for trace in fr_f.data:
+ assert trace.uid not in fr_f._hf_data
+ assert len(trace["y"]) == 10_000
+
+ def test_fwr_f_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(go.Scatter(y=cat_series), row=1, col=1)
+ base_fig.add_trace(dict(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ # Create FigureWidgetResampler object from a go.Figure
+ # 1. All scatters are aggregated
+ fwr_f = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_f.data) == 3
+ assert len(fwr_f.hf_data) == 3
+ for trace in fwr_f.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_f._hf_data
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fwr_f = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fwr_f.data) == 3
+ assert len(fwr_f.hf_data) == 0
+ for trace in fwr_f.data:
+ assert trace.uid not in fwr_f._hf_data
+ assert len(trace["y"]) == 10_000
+
+ # ---- Must not be aggregated
+ def test_fr_f_scatter_not_all_agg(float_series, bool_series, cat_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series[:1500]), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series[:800]), row=2, col=1)
+
+ # Create FigureResampler object from a go.Figure
+ fr_f = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_f.data) == 3
+ assert len(fr_f.hf_data) == 1
+        # Only the first trace will be aggregated
+ for trace in fr_f.data[:1]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_f._hf_data
+ assert len(trace["y"]) == 2_000
+
+ for trace in fr_f.data[1:]:
+ assert trace.uid not in fr_f._hf_data
+ assert len(trace["y"]) != 2_000
+
+ def test_fwr_f_scatter_not_all_agg(float_series, bool_series, cat_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(go.Scatter(y=cat_series), row=1, col=1)
+ base_fig.add_trace(dict(y=bool_series[:1500]), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series[:800]), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a go.Figure
+ fwr_f = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_f.data) == 3
+ assert len(fwr_f.hf_data) == 1
+        # Only the first trace will be aggregated
+ for trace in fwr_f.data[:1]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_f._hf_data
+ assert len(trace["y"]) == 2_000
+
+ for trace in fwr_f.data[1:]:
+ assert trace.uid not in fwr_f._hf_data
+ assert len(trace["y"]) != 2_000
+
+ # ------- Mixed
+ def test_fr_f_mixed_agg(float_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_f = FigureResampler(base_fig, default_n_shown_samples=1_000)
+ assert len(fr_f.data) == 3
+ assert len(fr_f.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fr_f.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_f._hf_data
+ assert len(trace["y"]) == 1_000
+
+ for trace in fr_f.data[:1] + fr_f.data[2:]:
+ assert trace.uid not in fr_f._hf_data
+ assert trace.y is None # these traces don't even have a y value
+
+ def test_fwr_f_mixed_agg(float_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fwr_f = FigureWidgetResampler(base_fig, default_n_shown_samples=1_000)
+ assert len(fwr_f.data) == 3
+ assert len(fwr_f.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fwr_f.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_f._hf_data
+ assert len(trace["y"]) == 1_000
+
+ for trace in fwr_f.data[:1] + fwr_f.data[2:]:
+ assert trace.uid not in fwr_f._hf_data
+ assert trace.y is None # these traces don't even have a y value
+
+ # ---- Must not (all) be aggregated
+ def test_fr_f_mixed_no_agg(float_series):
+ base_fig = make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_f = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_f.data) == 3
+ assert len(fr_f.hf_data) == 0
+ assert len(fr_f.data[1]["y"]) == 10_000
+
+        fwr_f = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fwr_f.data) == 3
+ assert len(fwr_f.hf_data) == 0
+ assert len(fwr_f.data[1]["y"]) == 10_000
+
+
+# ----------------------- FigureWidget as Base -----------------------
+if True:
+ # -------- All scatters
+ def test_fr_fw_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureResampler object from a go.FigureWidget
+ # 1. All scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 3
+ for trace in fr_fw.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw._hf_data
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 0
+ for trace in fr_fw.data:
+ assert trace.uid not in fr_fw._hf_data
+ assert len(trace["y"]) == 10_000
+
+ def test_fwr_fw_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(go.Scatter(y=cat_series), row=1, col=1)
+ base_fig.add_trace(dict(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a go.FigureWidget
+ # 1. All scatters are aggregated
+ fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 3
+ for trace in fwr_fw.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fw._hf_data
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 0
+ for trace in fwr_fw.data:
+ assert trace.uid not in fwr_fw._hf_data
+ assert len(trace["y"]) == 10_000
+
+ # ---- Must not be aggregated
+ def test_fr_fw_scatter_not_all_agg(float_series, bool_series, cat_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series[:1500]), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series[:800]), row=2, col=1)
+
+        # Create FigureResampler object from a go.FigureWidget
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 1
+        # Only the first trace will be aggregated
+ for trace in fr_fw.data[:1]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw._hf_data
+ assert len(trace["y"]) == 2_000
+
+ for trace in fr_fw.data[1:]:
+ assert trace.uid not in fr_fw._hf_data
+ assert len(trace["y"]) != 2_000
+
+ def test_fwr_fw_scatter_not_all_agg(float_series, bool_series, cat_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(go.Scatter(y=cat_series), row=1, col=1)
+ base_fig.add_trace(dict(y=bool_series[:1500]), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series[:800]), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a go.FigureWidget
+ fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 1
+        # Only the first trace will be aggregated
+ for trace in fwr_fw.data[:1]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fw._hf_data
+ assert len(trace["y"]) == 2_000
+
+ for trace in fwr_fw.data[1:]:
+ assert trace.uid not in fwr_fw._hf_data
+ assert len(trace["y"]) != 2_000
+
+ # ------- Mixed
+ def test_fr_fw_mixed_agg(float_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=1_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fr_fw.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw._hf_data
+ assert len(trace["y"]) == 1_000
+
+ for trace in fr_fw.data[:1] + fr_fw.data[2:]:
+ assert trace.uid not in fr_fw._hf_data
+ assert trace.y is None # these traces don't even have a y value
+
+ def test_fwr_fw_mixed_agg(float_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=1_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fwr_fw.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fw._hf_data
+ assert len(trace["y"]) == 1_000
+
+ for trace in fwr_fw.data[:1] + fwr_fw.data[2:]:
+ assert trace.uid not in fwr_fw._hf_data
+ assert trace.y is None # these traces don't even have a y value
+
+ # ---- Must not (all) be aggregated
+ def test_fr_fw_mixed_no_agg(float_series):
+ base_fig = go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 0
+ assert len(fr_fw.data[1]["y"]) == 10_000
+
+        fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 0
+ assert len(fwr_fw.data[1]["y"]) == 10_000
+
+
+# ----------------------- FigureResampler As base -----------------------
+if True:
+ # -------- All scatters
+ def test_fr_fr_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 3
+ for trace in fr_fr.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 3
+ for trace in fr_fr.data:
+ assert len(trace["y"]) == 10_000
+
+ def test_fr_fr_scatter_no_agg_agg(float_series, bool_series, cat_series):
+ # This initial figure object does not contain any aggregated data as
+        # default_n_shown_samples >= the size of the input data
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=10_000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ assert len(base_fig.hf_data) == 0
+
+        # Create FigureResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 3
+ for trace in fr_fr.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 0
+ for trace in fr_fr.data:
+ assert len(trace["y"]) == 10_000
+
+ def test_fr_fr_scatter_agg_limit_to_view(float_series, bool_series, cat_series):
+        # we test whether the view-limited (limit_to_view) LF series also gets copied.
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(
+ go.Scatter(y=bool_series[:800]), limit_to_view=True, row=1, col=2
+ )
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 3
+ for trace in fr_fr.data[:1] + fr_fr.data[2:]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+ assert len(fr_fr.data[1]["y"]) == 800
+
+ # 2. No scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 3
+ for trace in fr_fr.data[:1] + fr_fr.data[2:]:
+ assert len(trace["y"]) == 10_000
+ assert len(fr_fr.data[1]["y"]) == 800
+
+ def test_fwr_fr_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fw_fr = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fw_fr.data) == 3
+ assert len(fw_fr.hf_data) == 3
+ for trace in fw_fr.data:
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fw_fr = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fw_fr.data) == 3
+        # NOTE: the hf_data gets copied, so its length will be the same as that of
+        # the original figure
+ assert len(fw_fr.hf_data) == 3
+ for trace in fw_fr.data:
+ assert len(trace["y"]) == 10_000
+
+ def test_fwr_fr_scatter_no_agg_agg(float_series, bool_series, cat_series):
+ # This initial figure object does not contain any aggregated data as
+        # default_n_shown_samples >= the size of the input data
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=10_000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ assert len(base_fig.hf_data) == 0
+
+        # Create FigureWidgetResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fwr_fr = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fr.data) == 3
+ assert len(fwr_fr.hf_data) == 3
+ for trace in fwr_fr.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 0
+ for trace in fr_fr.data:
+ assert len(trace["y"]) == 10_000
+
+ def test_fwr_fr_scatter_agg_limit_to_view(float_series, bool_series, cat_series):
+        # we test whether the view-limited (limit_to_view) LF series also gets copied.
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(
+ go.Scatter(y=bool_series[:800]), limit_to_view=True, row=1, col=2
+ )
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fwr_fr = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fr.data) == 3
+ assert len(fwr_fr.hf_data) == 3
+ for trace in fwr_fr.data[:1] + fwr_fr.data[2:]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+ assert len(fwr_fr.data[1]["y"]) == 800
+
+ # 2. No scatters are aggregated
+ fwr_fr = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fwr_fr.data) == 3
+ assert len(fwr_fr.hf_data) == 3
+ for trace in fwr_fr.data[:1] + fwr_fr.data[2:]:
+ assert len(trace["y"]) == 10_000
+ assert len(fwr_fr.data[1]["y"]) == 800
+
+ def test_fr_fr_scatter_agg_no_default(float_series, bool_series, cat_series):
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2, max_n_samples=1000)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fr_fr = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fr.data) == 3
+ assert len(fr_fr.hf_data) == 3
+ for trace in fr_fr.data[:1] + fr_fr.data[2:]:
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+        # this was not a default value, so it retains its original value, i.e. 1000
+ assert len(fr_fr.data[1]["y"]) == 1000
+
+ def test_fwr_fr_scatter_agg_no_default(float_series, bool_series, cat_series):
+ base_fig = FigureResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2, max_n_samples=1000)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a FigureResampler
+ # 1. All scatters are aggregated
+ fwr_fr = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fr.data) == 3
+ assert len(fwr_fr.hf_data) == 3
+ for trace in fwr_fr.data[:1] + fwr_fr.data[2:]:
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+        # this was not a default value, so it retains its original value, i.e. 1000
+ assert len(fwr_fr.data[1]["y"]) == 1000
+
+ # -------- Mixed
+ def test_fr_fr_mixed_agg(float_series):
+ base_fig = FigureResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ ),
+ default_n_shown_samples=999,
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_fr_mixed = FigureResampler(base_fig, default_n_shown_samples=1_020)
+ assert len(fr_fr_mixed.data) == 3
+ assert len(fr_fr_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fr_fr_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fr_mixed._hf_data
+ assert len(trace["y"]) == 1_020
+
+ for trace in fr_fr_mixed.data[:1] + fr_fr_mixed.data[2:]:
+ assert trace.uid not in fr_fr_mixed._hf_data
+
+ def test_fr_fr_mixed_no_default_agg(float_series):
+ base_fig = FigureResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2, max_n_samples=1054)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_fr_mixed = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fr_mixed.data) == 3
+ assert len(fr_fr_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fr_fr_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fr_mixed._hf_data
+ assert len(trace["y"]) == 1054
+
+ for trace in fr_fr_mixed.data[:1] + fr_fr_mixed.data[2:]:
+ assert trace.uid not in fr_fr_mixed._hf_data
+
+ def test_fw_fr_mixed_agg(float_series):
+ base_fig = FigureResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ ),
+ default_n_shown_samples=999,
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fw_fr_mixed = FigureWidgetResampler(base_fig, default_n_shown_samples=1_020)
+ assert len(fw_fr_mixed.data) == 3
+ assert len(fw_fr_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fw_fr_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fw_fr_mixed._hf_data
+ assert len(trace["y"]) == 1_020
+
+ for trace in fw_fr_mixed.data[:1] + fw_fr_mixed.data[2:]:
+ assert trace.uid not in fw_fr_mixed._hf_data
+
+ def test_fw_fr_mixed_no_default_agg(float_series):
+ base_fig = FigureResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2, max_n_samples=1054)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fw_fr_mixed = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fw_fr_mixed.data) == 3
+ assert len(fw_fr_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fw_fr_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fw_fr_mixed._hf_data
+ assert len(trace["y"]) == 1054
+
+ for trace in fw_fr_mixed.data[:1] + fw_fr_mixed.data[2:]:
+ assert trace.uid not in fw_fr_mixed._hf_data
+
+
+# ----------------------- FigureWidgetResampler As base -----------------------
+if True:
+ # -------- All scatters
+ def test_fr_fwr_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 3
+ for trace in fr_fw.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 3
+ for trace in fr_fw.data:
+ assert len(trace["y"]) == 10_000
+
+ def test_fr_fwr_scatter_no_agg_agg(float_series, bool_series, cat_series):
+        # This initial figure object does not contain any aggregated data as
+        # default_n_shown_samples >= the size of the input data
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=10_000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ assert len(base_fig.hf_data) == 0
+
+        # Create FigureResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fr_fwr = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fwr.data) == 3
+ assert len(fr_fwr.hf_data) == 3
+ for trace in fr_fwr.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fwr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ def test_fr_fwr_scatter_agg_limit_to_view(float_series, bool_series, cat_series):
+        # we test whether the view-limited (limit_to_view) LF series also gets copied.
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(
+ go.Scatter(y=bool_series[:800]), limit_to_view=True, row=1, col=2
+ )
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 3
+ for trace in fr_fw.data[:1] + fr_fw.data[2:]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+ assert len(fr_fw.data[1]["y"]) == 800
+
+ # 2. No scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 3
+ for trace in fr_fw.data[:1] + fr_fw.data[2:]:
+ assert len(trace["y"]) == 10_000
+ assert len(fr_fw.data[1]["y"]) == 800
+
+ def test_fw_fwr_scatter_agg(float_series, bool_series, cat_series):
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fw_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fw_fw.data) == 3
+ assert len(fw_fw.hf_data) == 3
+ for trace in fw_fw.data:
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # 2. No scatters are aggregated
+ fw_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fw_fw.data) == 3
+        # NOTE: the hf_data gets copied, so its length will be the same as that of
+        # the original figure
+ assert len(fw_fw.hf_data) == 3
+ for trace in fw_fw.data:
+ assert len(trace["y"]) == 10_000
+
+ def test_fwr_fwr_scatter_no_agg_agg(float_series, bool_series, cat_series):
+        # This initial figure object does not contain any aggregated data as
+        # default_n_shown_samples >= the size of the input data
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=10_000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ assert len(base_fig.hf_data) == 0
+
+        # Create FigureWidgetResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fwr_fwr = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fwr.data) == 3
+ assert len(fwr_fwr.hf_data) == 3
+ for trace in fwr_fwr.data:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fwr._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ def test_fwr_fwr_scatter_agg_limit_to_view(float_series, bool_series, cat_series):
+        # we test whether the view-limited (limit_to_view) LF series also gets copied.
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(
+ go.Scatter(y=bool_series[:800]), limit_to_view=True, row=1, col=2
+ )
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+        # Create FigureWidgetResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 3
+ for trace in fwr_fw.data[:1] + fwr_fw.data[2:]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fwr_fw._hf_data
+            # NOTE: the default arguments are taken over, so the default number of
+            # samples of the wrapped figure's traces is overridden by the default
+            # number of samples of this class
+ assert len(trace["y"]) == 2_000
+ assert len(fwr_fw.data[1]["y"]) == 800
+
+ # 2. No scatters are aggregated
+ fwr_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=10_000)
+ assert len(fwr_fw.data) == 3
+ assert len(fwr_fw.hf_data) == 3
+ for trace in fwr_fw.data[:1] + fwr_fw.data[2:]:
+ assert len(trace["y"]) == 10_000
+ assert len(fwr_fw.data[1]["y"]) == 800
+
+ def test_fr_fwr_scatter_agg_no_default(float_series, bool_series, cat_series):
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2, max_n_samples=1000)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ # Create a FigureResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fr_fw = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fw.data) == 3
+ assert len(fr_fw.hf_data) == 3
+ for trace in fr_fw.data[:1] + fr_fw.data[2:]:
+ # NOTE: default arguments are not preserved; the default number of samples
+ # of the wrapped `FigureResampler` traces is overridden by the default
+ # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # this was not a default value, so it retains its original value, i.e. 1000
+ assert len(fr_fw.data[1]["y"]) == 1000
+
+ def test_fw_fwr_scatter_agg_no_default(float_series, bool_series, cat_series):
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2, max_n_samples=1000)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ # Create a FigureWidgetResampler object from a FigureWidgetResampler
+ # 1. All scatters are aggregated
+ fw_fw = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fw_fw.data) == 3
+ assert len(fw_fw.hf_data) == 3
+ for trace in fw_fw.data[:1] + fw_fw.data[2:]:
+ # NOTE: default arguments are not preserved; the default number of samples
+ # of the wrapped `FigureResampler` traces is overridden by the default
+ # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # this was not a default value, so it retains its original value, i.e. 1000
+ assert len(fw_fw.data[1]["y"]) == 1000
+
+ # -------- Mixed
+ def test_fr_fwr_mixed_agg(float_series):
+ base_fig = FigureWidgetResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ ),
+ default_n_shown_samples=999,
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_fw_mixed = FigureResampler(base_fig, default_n_shown_samples=1_020)
+ assert len(fr_fw_mixed.data) == 3
+ assert len(fr_fw_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fr_fw_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw_mixed._hf_data
+ assert len(trace["y"]) == 1_020
+
+ for trace in fr_fw_mixed.data[:1] + fr_fw_mixed.data[2:]:
+ assert trace.uid not in fr_fw_mixed._hf_data
+
+ def test_fr_fwr_mixed_no_default_agg(float_series):
+ base_fig = FigureWidgetResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2, max_n_samples=1054)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fr_fw_mixed = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fw_mixed.data) == 3
+ assert len(fr_fw_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fr_fw_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fr_fw_mixed._hf_data
+ assert len(trace["y"]) == 1054
+
+ for trace in fr_fw_mixed.data[:1] + fr_fw_mixed.data[2:]:
+ assert trace.uid not in fr_fw_mixed._hf_data
+
+ def test_fw_fwr_mixed_agg(float_series):
+ base_fig = FigureWidgetResampler(
+ go.FigureWidget(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ ),
+ default_n_shown_samples=999,
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fw_fw_mixed = FigureWidgetResampler(base_fig, default_n_shown_samples=1_020)
+ assert len(fw_fw_mixed.data) == 3
+ assert len(fw_fw_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fw_fw_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fw_fw_mixed._hf_data
+ assert len(trace["y"]) == 1_020
+
+ for trace in fw_fw_mixed.data[:1] + fw_fw_mixed.data[2:]:
+ assert trace.uid not in fw_fw_mixed._hf_data
+
+ def test_fw_fwr_mixed_no_default_agg(float_series):
+ base_fig = FigureWidgetResampler(
+ go.Figure(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ )
+ )
+ )
+ base_fig.add_trace(go.Box(x=float_series), row=1, col=1)
+ base_fig.add_trace(dict(y=float_series), row=1, col=2, max_n_samples=1054)
+ base_fig.add_trace(go.Histogram(x=float_series), row=2, col=1)
+
+ fw_fw_mixed = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fw_fw_mixed.data) == 3
+ assert len(fw_fw_mixed.hf_data) == 1 # Only the second trace will be aggregated
+ for trace in fw_fw_mixed.data[1:2]:
+ # ensure that all uids are in the `_hf_data` property
+ assert trace.uid in fw_fw_mixed._hf_data
+ assert len(trace["y"]) == 1054
+
+ for trace in fw_fw_mixed.data[:1] + fw_fw_mixed.data[2:]:
+ assert trace.uid not in fw_fw_mixed._hf_data
+
+
+# =========================================================
+# Performing zoom events on widgets
+if True:
+
+ def test_fr_fwr_scatter_agg_zoom(cat_series, bool_series, float_series):
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2, max_n_samples=1000)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ base_fig.layout.update(
+ {
+ "xaxis": {"range": [10_000, 20_000]},
+ "yaxis": {"range": [-20, 3]},
+ "xaxis2": {"range": [40_000, 60_000]},
+ "yaxis2": {"range": [-10, 3]},
+ },
+ overwrite=False,
+ )
+
+ fr_fwr = FigureResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fr_fwr.data) == 3
+ assert len(fr_fwr.hf_data) == 3
+ for trace in fr_fwr.data[:1] + fr_fwr.data[2:]:
+ # NOTE: default arguments are not preserved; the default number of samples
+ # of the wrapped `FigureResampler` traces is overridden by the default
+ # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # Verify that the zoom did not affect anything
+ assert trace["x"][0] == 0
+ assert trace["x"][-1] == 9999
+
+ # this was not a default value, so it retains its original value, i.e. 1000
+ assert len(fr_fwr.data[1]["y"]) == 1000
+ assert fr_fwr.data[1]["x"][0] == 0
+ assert fr_fwr.data[1]["x"][-1] == 9999
+
+ def test_fwr_fwr_scatter_agg_zoom(cat_series, bool_series, float_series):
+ base_fig = FigureWidgetResampler(
+ make_subplots(
+ rows=2,
+ cols=2,
+ specs=[[{}, {}], [{"colspan": 2}, None]],
+ ),
+ default_n_shown_samples=1000,
+ )
+ base_fig.add_trace(dict(y=cat_series), row=1, col=1)
+ base_fig.add_trace(go.Scatter(y=bool_series), row=1, col=2, max_n_samples=1000)
+ base_fig.add_trace(go.Scattergl(y=float_series), row=2, col=1)
+
+ base_fig.layout.update(
+ {
+ "xaxis": {"range": [10_000, 20_000]},
+ "yaxis": {"range": [-20, 3]},
+ "xaxis2": {"range": [40_000, 60_000]},
+ "yaxis2": {"range": [-10, 3]},
+ },
+ overwrite=False,
+ )
+
+ fwr_fwr = FigureWidgetResampler(base_fig, default_n_shown_samples=2_000)
+ assert len(fwr_fwr.data) == 3
+ assert len(fwr_fwr.hf_data) == 3
+ for trace in fwr_fwr.data[:1] + fwr_fwr.data[2:]:
+ # NOTE: default arguments are not preserved; the default number of samples
+ # of the wrapped `FigureResampler` traces is overridden by the default
+ # number of samples of this class
+ assert len(trace["y"]) == 2_000
+
+ # Verify that the zoom did not affect anything
+ assert trace["x"][0] == 0
+ assert trace["x"][-1] == 9999
+
+ # this was not a default value, so it retains its original value, i.e. 1000
+ assert len(fwr_fwr.data[1]["y"]) == 1000
+ assert fwr_fwr.data[1]["x"][0] == 0
+ assert fwr_fwr.data[1]["x"][-1] == 9999
diff --git a/tests/test_figure_resampler.py b/tests/test_figure_resampler.py
index 751a9de6..501a9e28 100644
--- a/tests/test_figure_resampler.py
+++ b/tests/test_figure_resampler.py
@@ -10,6 +10,7 @@
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from plotly_resampler import FigureResampler, LTTB, EveryNthPoint
+from typing import List
def test_add_trace_kwarg_space(float_series, bool_series, cat_series):
@@ -36,7 +37,7 @@ def test_add_trace_kwarg_space(float_series, bool_series, cat_series):
row=1,
col=1,
limit_to_view=False,
- hf_text = "text",
+ hf_text="text",
hf_hovertext="hovertext",
)
@@ -155,7 +156,6 @@ def test_box_histogram(float_series):
hf_hovertext="hovertext",
)
-
fig.add_trace(go.Box(x=float_series.values, name="float_series"), row=1, col=2)
fig.add_trace(
go.Box(x=float_series.values**2, name="float_series**2"), row=1, col=2
@@ -307,13 +307,8 @@ def test_hf_text():
assert np.all(fig.data[0].text == fig.data[0].y.astype(int).astype(str))
assert fig.data[0].hovertext is None
-
fig = FigureResampler()
- fig.add_trace(
- go.Scatter(name="blabla"),
- hf_y=y,
- hf_text=y.astype(str)
- )
+ fig.add_trace(go.Scatter(name="blabla"), hf_y=y, hf_text=y.astype(str))
assert np.all(fig.hf_data[0]["text"] == y.astype(str))
assert fig.hf_data[0]["hovertext"] is None
@@ -340,11 +335,7 @@ def test_hf_hovertext():
assert fig.data[0].text is None
fig = FigureResampler()
- fig.add_trace(
- go.Scatter(name="blabla"),
- hf_y=y,
- hf_hovertext=y.astype(str)
- )
+ fig.add_trace(go.Scatter(name="blabla"), hf_y=y, hf_hovertext=y.astype(str))
assert np.all(fig.hf_data[0]["hovertext"] == y.astype(str))
assert fig.hf_data[0]["text"] is None
@@ -368,14 +359,16 @@ def test_hf_text_and_hf_hovertext():
assert len(fig.data[0].y) < 5_000
assert np.all(fig.data[0].text == fig.data[0].y.astype(int).astype(str))
- assert np.all(fig.data[0].hovertext == (9_999 - fig.data[0].y).astype(int).astype(str))
+ assert np.all(
+ fig.data[0].hovertext == (9_999 - fig.data[0].y).astype(int).astype(str)
+ )
fig = FigureResampler()
fig.add_trace(
go.Scatter(name="blabla"),
hf_y=y,
hf_text=y.astype(str),
- hf_hovertext=y.astype(str)[::-1]
+ hf_hovertext=y.astype(str)[::-1],
)
assert np.all(fig.hf_data[0]["text"] == y.astype(str))
@@ -383,7 +376,9 @@ def test_hf_text_and_hf_hovertext():
assert len(fig.data[0].y) < 5_000
assert np.all(fig.data[0].text == fig.data[0].y.astype(int).astype(str))
- assert np.all(fig.data[0].hovertext == (9_999 - fig.data[0].y).astype(int).astype(str))
+ assert np.all(
+ fig.data[0].hovertext == (9_999 - fig.data[0].y).astype(int).astype(str)
+ )
def test_multiple_timezones():
@@ -579,8 +574,8 @@ def test_multiple_tz_no_tz_series_slicing():
t_start = t_start.tz_localize(cs[(i + 1) % len(cs)].index.tz)
t_stop = t_stop.tz_localize(cs[(i + 2) % len(cs)].index.tz)
- # Now the assumpton cannot be made that s ahd the same time-zone as the
- # timestamps -> Assertionerror will be raised.
+ # Now the assumption cannot be made that s has the same time-zone as the
+ # timestamps -> AssertionError will be raised.
with pytest.raises(AssertionError):
fig._slice_time(s.tz_localize(None), t_start, t_stop)
@@ -651,3 +646,216 @@ def test_fr_add_empty_trace():
assert len(fig.hf_data) == 1
assert len(fig.hf_data[0]["x"]) == 0
assert len(fig.hf_data[0]["y"]) == 0
+
+
+def test_fr_from_dict():
+ y = np.array([1] * 10_000)
+ base_fig = {
+ "type": "scatter",
+ "y": y,
+ }
+
+ fr_fig = FigureResampler(base_fig, default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 1
+ assert (fr_fig.hf_data[0]["y"] == y).all()
+ assert len(fr_fig.data) == 1
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[0]["y"] == [1] * 1_000).all()
+
+ # assert that all the uuids of data and hf_data match
+ # this is a proxy for ensuring that the dynamic aggregation will work
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+
+
+def test_fr_empty_list():
+ # an empty list -> so no concrete traces were added
+ fr_fig = FigureResampler([], default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 0
+ assert len(fr_fig.data) == 0
+
+
+def test_fr_empty_dict():
+ # a dict is a concrete trace so 1 trace should be added
+ fr_fig = FigureResampler({}, default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 0
+ assert len(fr_fig.data) == 1
+
+
+def test_fr_wrong_keys(float_series):
+ base_fig = [
+ {"ydata": float_series.values + 2, "name": "sp2"},
+ ]
+ with pytest.raises(ValueError):
+ FigureResampler(base_fig, default_n_shown_samples=1000)
+
+
+def test_fr_from_list_dict(float_series):
+ base_fig: List[dict] = [
+ {"y": float_series.values + 2, "name": "sp2"},
+ {"y": float_series.values, "name": "s"},
+ ]
+
+ fr_fig = FigureResampler(base_fig, default_n_shown_samples=1000)
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ base_fig.append({"y": float_series[:1000], "name": "s_no_agg"})
+ fr_fig = FigureResampler(base_fig, default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+
+def test_fr_list_dict_add_traces(float_series):
+ fr_fig = FigureResampler(default_n_shown_samples=1000)
+
+ traces: List[dict] = [
+ {"y": float_series.values + 2, "name": "sp2"},
+ {"y": float_series.values, "name": "s"},
+ ]
+ fr_fig.add_traces(traces)
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ fr_fig.add_traces({"y": float_series[:1000], "name": "s_no_agg"})
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+ # add low-freq trace but set limit_to_views to True
+ fr_fig.add_traces([{"y": float_series[:100], "name": "s_agg"}], limit_to_views=True)
+ assert len(fr_fig.hf_data) == 3
+ assert len(fr_fig.data) == 4
+
+ # add a low-freq trace but adjust max_n_samples
+ # note that we use a tuple as input here
+ fr_fig.add_traces(({"y": float_series[:1000], "name": "s_agg"},), max_n_samples=999)
+ assert len(fr_fig.hf_data) == 4
+ assert len(fr_fig.data) == 5
+
+
+def test_fr_list_dict_add_trace(float_series):
+ fr_fig = FigureResampler(default_n_shown_samples=1000)
+
+ traces: List[dict] = [
+ {"y": float_series.values + 2, "name": "sp2"},
+ {"y": float_series.values, "name": "s"},
+ ]
+ for trace in traces:
+ fr_fig.add_trace(trace)
+
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ fr_fig.add_trace({"y": float_series[:1000], "name": "s_no_agg"})
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+ # add low-freq trace but set limit_to_view to True
+ fr_fig.add_trace({"y": float_series[:100], "name": "s_agg"}, limit_to_view=True)
+ assert len(fr_fig.hf_data) == 3
+ assert len(fr_fig.data) == 4
+
+ # add a low-freq trace but adjust max_n_samples
+ lf_series = {"y": float_series[:1000], "name": "s_agg"}
+ # plotly's default behavior raises a ValueError when a list or tuple is passed
+ # to add_trace
+ with pytest.raises(ValueError):
+ fr_fig.add_trace([lf_series], max_n_samples=999)
+ with pytest.raises(ValueError):
+ fr_fig.add_trace((lf_series,), max_n_samples=999)
+
+ fr_fig.add_trace(lf_series, max_n_samples=999)
+ assert len(fr_fig.hf_data) == 4
+ assert len(fr_fig.data) == 5
+
+
+def test_fr_list_scatter_add_traces(float_series):
+ fr_fig = FigureResampler(default_n_shown_samples=1000)
+
+ traces: List[dict] = [
+ go.Scattergl({"y": float_series.values + 2, "name": "sp2"}),
+ go.Scatter({"y": float_series.values, "name": "s"}),
+ ]
+ fr_fig.add_traces(tuple(traces))
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ fr_fig.add_traces([go.Scattergl({"y": float_series[:1000], "name": "s_no_agg"})])
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+ # add low-freq trace but set limit_to_views to True
+ fr_fig.add_traces(go.Scattergl(), limit_to_views=True)
+ assert len(fr_fig.hf_data) == 3
+ assert len(fr_fig.data) == 4
+
+ # add a low-freq trace but adjust max_n_samples
+ fr_fig.add_traces(
+ go.Scatter({"y": float_series[:1000], "name": "s_agg"}), max_n_samples=999
+ )
+ assert len(fr_fig.hf_data) == 4
+ assert len(fr_fig.data) == 5
+
+
+def test_fr_copy_hf_data(float_series):
+ fr_fig = FigureResampler(default_n_shown_samples=2000)
+ traces: List[dict] = [
+ go.Scattergl({"y": float_series.values + 2, "name": "sp2"}),
+ go.Scatter({"y": float_series.values, "name": "s"}),
+ ]
+ fr_fig.add_traces(tuple(traces))
+
+ hf_data_cp = FigureResampler()._copy_hf_data(fr_fig._hf_data)
+ uid = list(hf_data_cp.keys())[0]
+
+ hf_data_cp[uid]["x"] = np.arange(1000)
+ hf_data_cp[uid]["y"] = float_series[:1000]
+
+ assert len(fr_fig.hf_data[0]["x"]) == 10_000
+ assert len(fr_fig.hf_data[0]["y"]) == 10_000
+ assert len(fr_fig.hf_data[1]["x"]) == 10_000
+ assert len(fr_fig.hf_data[1]["y"]) == 10_000
\ No newline at end of file
diff --git a/tests/test_figurewidget_resampler.py b/tests/test_figurewidget_resampler.py
index 8959a3a6..fd3d0628 100644
--- a/tests/test_figurewidget_resampler.py
+++ b/tests/test_figurewidget_resampler.py
@@ -5,6 +5,7 @@
from copy import copy
from datetime import datetime
+from typing import List
import numpy as np
import pandas as pd
@@ -577,8 +578,8 @@ def test_multiple_tz_no_tz_series_slicing():
t_start = t_start.tz_localize(cs[(i + 1) % len(cs)].index.tz)
t_stop = t_stop.tz_localize(cs[(i + 2) % len(cs)].index.tz)
- # Now the assumpton cannot be made that s ahd the same time-zone as the
- # timestamps -> Assertionerror will be raised.
+ # Now the assumption cannot be made that s has the same time-zone as the
+ # timestamps -> AssertionError will be raised.
with pytest.raises(AssertionError):
fig._slice_time(s.tz_localize(None), t_start, t_stop)
@@ -1052,7 +1053,7 @@ def test_bare_update_methods():
== 0
)
- # Perform an autorange udpate -> assert that the range i
+ # Perform an autorange update -> assert that the range i
fw_fig._relayout_hist.clear()
fw_fig.layout.update({"xaxis2": {"autorange": True}, "yaxis2": {"autorange": True}})
assert len(fw_fig._relayout_hist) == 0
@@ -1095,7 +1096,7 @@ def test_fwr_add_empty_trace():
assert len(fig.hf_data[0]["y"]) == 0
-def test_fwr_updata_trace_data_zoom():
+def test_fwr_update_trace_data_zoom():
k = 50_000
fig = FigureWidgetResampler(
go.FigureWidget(make_subplots(rows=2, cols=1)), verbose=True
@@ -1334,7 +1335,7 @@ def test_fwr_adjust_series_input():
x = fig.data[0]["x"]
y = fig.data[0]["y"]
- # asser that hf x and y its values are used and not its index
+ # assert that hf x and y its values are used and not its index
assert x[0] == -2000
assert y[0] >= 5
@@ -1365,7 +1366,7 @@ def test_fwr_adjust_series_text_input():
x = fig.data[0]["x"]
y = fig.data[0]["y"]
- # asser that hf x and y its values are used and not its index
+ # assert that hf x and y its values are used and not its index
assert x[0] == -2000
assert y[0] >= 10
@@ -1531,3 +1532,195 @@ def test_fwr_time_based_data_s():
# text === -hovertext -> so the sum should their length
assert (text == -hovertext).sum() == 1000
+
+
+def test_fwr_from_dict():
+ y = np.array([1] * 10_000)
+ base_fig = {
+ "type": "scatter",
+ "y": y,
+ }
+
+ fr_fig = FigureWidgetResampler(base_fig, default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 1
+ assert (fr_fig.hf_data[0]["y"] == y).all()
+ assert len(fr_fig.data) == 1
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[0]["y"] == [1] * 1_000).all()
+
+ # assert that all the uuids of data and hf_data match
+ # this is a proxy for ensuring that the dynamic aggregation will work
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+
+
+def test_fwr_empty_list():
+ # an empty list -> so no concrete traces were added
+ fr_fig = FigureWidgetResampler([], default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 0
+ assert len(fr_fig.data) == 0
+
+
+def test_fwr_empty_dict():
+ # a dict is a concrete trace so 1 trace should be added
+ fr_fig = FigureWidgetResampler({}, default_n_shown_samples=1000)
+ assert len(fr_fig._hf_data) == 0
+ assert len(fr_fig.data) == 1
+
+
+def test_fwr_wrong_keys(float_series):
+ base_fig = [
+ {"ydata": float_series.values + 2, "name": "sp2"},
+ ]
+ with pytest.raises(ValueError):
+ FigureWidgetResampler(base_fig, default_n_shown_samples=1000)
+
+
+def test_fwr_from_list_dict(float_series):
+ base_fig: List[dict] = [
+ {"y": float_series.values + 2, "name": "sp2"},
+ {"y": float_series.values, "name": "s"},
+ ]
+
+ fr_fig = FigureWidgetResampler(base_fig, default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ base_fig.append({'y': float_series[:1000], 'name': "s_no_agg"})
+ fr_fig = FigureWidgetResampler(base_fig, default_n_shown_samples=1000)
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+
+def test_fwr_list_dict_add_trace(float_series):
+ fr_fig = FigureWidgetResampler(default_n_shown_samples=1000)
+
+ traces: List[dict] = [
+ {"y": float_series.values + 2, "name": "sp2"},
+ {"y": float_series.values, "name": "s"},
+ ]
+ for trace in traces:
+ fr_fig.add_trace(trace)
+
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ fr_fig.add_trace({'y': float_series[:1000], 'name': "s_no_agg"})
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+ # add low-freq trace but set limit_to_view to True
+ fr_fig.add_trace({'y': float_series[:100], 'name': "s_agg"}, limit_to_view=True)
+ assert len(fr_fig.hf_data) == 3
+ assert len(fr_fig.data) == 4
+
+ # add a low-freq trace but adjust max_n_samples
+ lf_series = {'y': float_series[:1000], 'name': "s_agg"}
+ # plotly's default behavior raises a ValueError when a list or tuple is passed
+ # to add_trace
+ with pytest.raises(ValueError):
+ fr_fig.add_trace([lf_series], max_n_samples=999)
+ with pytest.raises(ValueError):
+ fr_fig.add_trace((lf_series,), max_n_samples=999)
+
+ fr_fig.add_trace(lf_series, max_n_samples=999)
+ assert len(fr_fig.hf_data) == 4
+ assert len(fr_fig.data) == 5
+
+
+def test_fwr_list_dict_add_traces(float_series):
+ fr_fig = FigureWidgetResampler(default_n_shown_samples=1000)
+
+ traces: List[dict] = [
+ {"y": float_series.values + 2, "name": "sp2"},
+ {"y": float_series.values, "name": "s"},
+ ]
+ fr_fig.add_traces(traces)
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ # plotly also allows a dict or a scatter object as input
+ fr_fig.add_traces({'y': float_series[:1000], 'name': "s_no_agg"})
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+ # add low-freq trace but set limit_to_views to True
+ fr_fig.add_traces([{'y': float_series[:100], 'name': "s_agg"}], limit_to_views=True)
+ assert len(fr_fig.hf_data) == 3
+ assert len(fr_fig.data) == 4
+
+ # add a low-freq trace but adjust max_n_samples
+ # note that we use a tuple as input
+ fr_fig.add_traces(({'y': float_series[:1000], 'name': "s_agg"}, ), max_n_samples=999)
+ assert len(fr_fig.hf_data) == 4
+ assert len(fr_fig.data) == 5
+
+
+def test_fwr_list_scatter_add_traces(float_series):
+ fr_fig = FigureWidgetResampler(default_n_shown_samples=1000)
+
+ traces: List[dict] = [
+ go.Scattergl({"y": float_series.values + 2, "name": "sp2"}),
+ go.Scatter({"y": float_series.values, "name": "s"}),
+ ]
+ fr_fig.add_traces(tuple(traces))
+ # both traces are HF traces so should be aggregated
+ assert len(fr_fig.hf_data) == 2
+ assert (fr_fig.hf_data[0]["y"] == float_series + 2).all()
+ assert (fr_fig.hf_data[1]["y"] == float_series).all()
+ assert len(fr_fig.data) == 2
+ assert len(fr_fig.data[0]["x"]) == 1_000
+ assert (fr_fig.data[0]["x"][0] >= 0) & (fr_fig.data[0]["x"][-1] < 10_000)
+ assert (fr_fig.data[1]["x"][0] >= 0) & (fr_fig.data[1]["x"][-1] < 10_000)
+
+ # assert that all the uuids of data and hf_data match
+ assert fr_fig.data[0].uid in fr_fig._hf_data
+ assert fr_fig.data[1].uid in fr_fig._hf_data
+
+ # redo the exercise with a new low-freq trace
+ fr_fig.add_traces([go.Scattergl({'y': float_series[:1000], 'name': "s_no_agg"})])
+ assert len(fr_fig.hf_data) == 2
+ assert len(fr_fig.data) == 3
+
+ # add low-freq trace but set limit_to_views to True
+ # note how the scatter object is not encapsulated within a list
+ fr_fig.add_traces(go.Scattergl(), limit_to_views=True)
+ assert len(fr_fig.hf_data) == 3
+ assert len(fr_fig.data) == 4
+
+ # add a low-freq trace but adjust max_n_samples
+ fr_fig.add_traces(go.Scatter({'y': float_series[:1000], 'name': "s_agg"}), max_n_samples=999)
+ assert len(fr_fig.hf_data) == 4
+ assert len(fr_fig.data) == 5
diff --git a/tests/test_registering.py b/tests/test_registering.py
new file mode 100644
index 00000000..c4a1db4d
--- /dev/null
+++ b/tests/test_registering.py
@@ -0,0 +1,204 @@
+import plotly.graph_objects as go
+import plotly.express as px
+import numpy as np
+
+from plotly_resampler import FigureResampler, FigureWidgetResampler
+from plotly_resampler.figure_resampler.figure_resampler_interface import (
+ AbstractFigureAggregator,
+)
+from plotly_resampler.registering import (
+ register_plotly_resampler,
+ unregister_plotly_resampler,
+ _get_plotly_constr,
+)
+
+from .conftest import registering_cleanup
+from inspect import isfunction
+
+
+def test_get_plotly_const(registering_cleanup):
+ # Check the basi(c)s
+ assert issubclass(FigureResampler, AbstractFigureAggregator)
+ assert issubclass(FigureWidgetResampler, AbstractFigureAggregator)
+
+ # Is unregistered now
+ assert not (isfunction(go.Figure) or isfunction(go.FigureWidget))
+ assert not issubclass(go.Figure, AbstractFigureAggregator)
+ assert not issubclass(go.FigureWidget, AbstractFigureAggregator)
+ assert not issubclass(_get_plotly_constr(go.Figure), AbstractFigureAggregator)
+ assert not issubclass(_get_plotly_constr(go.FigureWidget), AbstractFigureAggregator)
+
+ register_plotly_resampler()
+ assert isfunction(go.Figure) and isfunction(go.FigureWidget)
+ assert isinstance(go.Figure(), AbstractFigureAggregator)
+ assert isinstance(go.FigureWidget(), AbstractFigureAggregator)
+ assert issubclass(FigureResampler, AbstractFigureAggregator)
+ assert issubclass(FigureWidgetResampler, AbstractFigureAggregator)
+ assert not issubclass(_get_plotly_constr(go.Figure), AbstractFigureAggregator)
+ assert not issubclass(_get_plotly_constr(go.FigureWidget), AbstractFigureAggregator)
+
+ unregister_plotly_resampler()
+ assert not (isfunction(go.Figure) or isfunction(go.FigureWidget))
+ assert not issubclass(go.Figure, AbstractFigureAggregator)
+ assert not issubclass(go.FigureWidget, AbstractFigureAggregator)
+ assert not issubclass(_get_plotly_constr(go.Figure), AbstractFigureAggregator)
+ assert not issubclass(_get_plotly_constr(go.FigureWidget), AbstractFigureAggregator)
+
+
+def test_register_and_unregister_graph_objects(registering_cleanup):
+ import plotly.graph_objects as go_
+
+ # Is unregistered now
+ assert not (isfunction(go_.Figure) or isfunction(go_.FigureWidget))
+ fig = go_.Figure()
+ assert not isinstance(fig, AbstractFigureAggregator)
+ fig = go_.FigureWidget()
+ assert not isinstance(fig, AbstractFigureAggregator)
+
+ register_plotly_resampler()
+ assert isfunction(go_.Figure) and isfunction(go_.FigureWidget)
+ fig = go_.Figure()
+ assert isinstance(fig, AbstractFigureAggregator)
+ assert isinstance(fig, FigureResampler)
+ assert not isinstance(fig, FigureWidgetResampler)
+ fig = go_.FigureWidget()
+ assert isinstance(fig, AbstractFigureAggregator)
+ assert isinstance(fig, FigureWidgetResampler)
+ assert not isinstance(fig, FigureResampler)
+
+ unregister_plotly_resampler()
+ assert not (isfunction(go_.Figure) or isfunction(go_.FigureWidget))
+ fig = go_.Figure()
+ assert not isinstance(fig, AbstractFigureAggregator)
+ fig = go_.FigureWidget()
+ assert not isinstance(fig, AbstractFigureAggregator)
+
+
+def test_register_and_unregister_graph_objs(registering_cleanup):
+ import plotly.graph_objs as go_
+
+ # Is unregistered now
+ assert not (isfunction(go_.Figure) or isfunction(go_.FigureWidget))
+ fig = go_.Figure()
+ assert not isinstance(fig, AbstractFigureAggregator)
+ fig = go_.FigureWidget()
+ assert not isinstance(fig, AbstractFigureAggregator)
+
+ register_plotly_resampler()
+ assert isfunction(go_.Figure) and isfunction(go_.FigureWidget)
+ fig = go_.Figure()
+ assert isinstance(fig, AbstractFigureAggregator)
+ assert isinstance(fig, FigureResampler)
+ assert not isinstance(fig, FigureWidgetResampler)
+ fig = go_.FigureWidget()
+ assert isinstance(fig, AbstractFigureAggregator)
+ assert isinstance(fig, FigureWidgetResampler)
+ assert not isinstance(fig, FigureResampler)
+
+ unregister_plotly_resampler()
+ assert not (isfunction(go_.Figure) or isfunction(go_.FigureWidget))
+ fig = go_.Figure()
+ assert not isinstance(fig, AbstractFigureAggregator)
+ fig = go_.FigureWidget()
+ assert not isinstance(fig, AbstractFigureAggregator)
+
+
+def test_registering_modes(registering_cleanup):
+ register_plotly_resampler(mode="auto")
+ # Should be default
+ assert isinstance(go.Figure(), FigureResampler)
+ assert isinstance(go.FigureWidget(), FigureWidgetResampler)
+
+ register_plotly_resampler(mode="figure")
+ # Should be all FigureResampler
+ assert isinstance(go.Figure(), FigureResampler)
+ assert isinstance(go.FigureWidget(), FigureResampler)
+
+ register_plotly_resampler(mode="widget")
+ # Should be all FigureWidgetResampler
+ assert isinstance(go.Figure(), FigureWidgetResampler)
+ assert isinstance(go.FigureWidget(), FigureWidgetResampler)
+
+
+def test_registering_plotly_express_and_kwargs(registering_cleanup):
+ # Is unregistered now
+ fig = px.scatter(y=np.arange(500))
+ assert not isinstance(fig, AbstractFigureAggregator)
+ assert len(fig.data) == 1
+ assert len(fig.data[0].y) == 500
+
+ register_plotly_resampler(default_n_shown_samples=50)
+ fig = px.scatter(y=np.arange(500))
+ assert isinstance(fig, FigureResampler)
+ assert len(fig.data) == 1
+ assert len(fig.data[0].y) == 50
+ assert len(fig.hf_data) == 1
+ assert len(fig.hf_data[0]["y"]) == 500
+
+ register_plotly_resampler()
+ fig = px.scatter(y=np.arange(5000))
+ assert isinstance(fig, FigureResampler)
+ assert len(fig.data) == 1
+ assert len(fig.data[0].y) == 1000
+ assert len(fig.hf_data) == 1
+ assert len(fig.hf_data[0]["y"]) == 5000
+
+ unregister_plotly_resampler()
+ fig = px.scatter(y=np.arange(500))
+ assert not isinstance(fig, AbstractFigureAggregator)
+ assert len(fig.data) == 1
+ assert len(fig.data[0].y) == 500
+
+
+def test_compatibility_when_registered(registering_cleanup):
+ fr = FigureResampler
+ fwr = FigureWidgetResampler
+
+ fig_orig_1 = px.scatter(y=np.arange(1_005))
+ fig_orig_2 = go.FigureWidget({"type": "scatter", "y": np.arange(1_005)})
+ for fig in [fig_orig_1, fig_orig_2]:
+ fig1 = fr(fig)
+ fig2 = fr(fwr(fig))
+ fig3 = fr(fr(fr(fr(fwr(fwr(fr(fwr(fr(fig)))))))))
+ for f in [fig1, fig2, fig3]:
+ assert isinstance(f, FigureResampler)
+ assert len(f.data) == 1
+ assert len(f.data[0].y) == 1000
+ assert len(f.hf_data) == 1
+ assert len(f.hf_data[0]["y"]) == 1005
+
+ fig1 = fwr(fig)
+ fig2 = fwr(fr(fig))
+ fig3 = fwr(fwr(fwr(fwr(fr(fr(fwr(fr(fwr(fig)))))))))
+ for f in [fig1, fig2, fig3]:
+ assert isinstance(f, FigureWidgetResampler)
+ assert len(f.data) == 1
+ assert len(f.data[0].y) == 1000
+ assert len(f.hf_data) == 1
+ assert len(f.hf_data[0]["y"]) == 1005
+
+ register_plotly_resampler()
+
+ fig_orig_1 = px.scatter(y=np.arange(1_005))
+ fig_orig_2 = go.FigureWidget({"type": "scatter", "y": np.arange(1_005)})
+ for fig in [fig_orig_1, fig_orig_2]:
+ fig1 = fr(fig)
+ fig2 = fr(fwr(fig))
+ fig3 = fr(fr(fr(fr(fwr(fwr(fr(fwr(fr(fig)))))))))
+ for f in [fig1, fig2, fig3]:
+ assert isinstance(f, FigureResampler)
+ assert len(f.data) == 1
+ assert len(f.data[0].y) == 1000
+ assert len(f.hf_data) == 1
+ assert len(f.hf_data[0]["y"]) == 1005
+
+ fig1 = fwr(fig)
+ fig2 = fwr(fr(fig))
+ fig3 = fwr(fwr(fwr(fwr(fr(fr(fwr(fr(fwr(fig)))))))))
+ for f in [fig1, fig2, fig3]:
+ assert isinstance(f, FigureWidgetResampler)
+ assert len(f.data) == 1
+ assert len(f.data[0].y) == 1000
+ assert len(f.hf_data) == 1
+ assert len(f.hf_data[0]["y"]) == 1005
+
\ No newline at end of file
diff --git a/tests/test_utils.py b/tests/test_utils.py
index 2fc8c2bc..2b5f0c96 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -1,45 +1,110 @@
-import pandas
-from plotly_resampler.utils import timedelta_to_str, round_td_str, round_number_str
import pandas as pd
+import plotly.graph_objects as go
+from plotly_resampler.figure_resampler.utils import (
+ is_figure,
+ is_figurewidget,
+ is_fr,
+ is_fwr,
+ timedelta_to_str,
+ round_td_str,
+ round_number_str,
+)
+from plotly_resampler import FigureResampler, FigureWidgetResampler
+
+
+def test_is_figure():
+ fig_dict = {"type": "scatter", "y": [1, 2, 3]}
+ assert is_figure(go.Figure())
+ assert is_figure(go.Figure(fig_dict))
+ assert is_figure(FigureResampler())
+ assert is_figure(FigureResampler(fig_dict))
+ assert not is_figure(go.FigureWidget())
+ assert not is_figure(None)
+ assert not is_figure(fig_dict)
+ assert not is_figure(go.Scatter(y=[1, 2, 3]))
+ assert not is_figure(FigureWidgetResampler())
+ assert not is_figure(FigureWidgetResampler(fig_dict))
+
+
+def test_is_fr():
+ fig_dict = {"type": "scatter", "y": [1, 2, 3]}
+ assert is_fr(FigureResampler())
+ assert is_fr(FigureResampler(fig_dict))
+ assert not is_fr(go.Figure())
+ assert not is_fr(go.Figure(fig_dict))
+ assert not is_fr(go.FigureWidget())
+ assert not is_fr(None)
+ assert not is_fr(fig_dict)
+ assert not is_fr(go.Scatter(y=[1, 2, 3]))
+ assert not is_fr(FigureWidgetResampler())
+ assert not is_fr(FigureWidgetResampler(fig_dict))
+
+
+def test_is_figurewidget():
+ fig_dict = {"type": "scatter", "y": [1, 2, 3]}
+ assert is_figurewidget(go.FigureWidget())
+ assert is_figurewidget(go.FigureWidget(fig_dict))
+ assert is_figurewidget(FigureWidgetResampler())
+ assert is_figurewidget(FigureWidgetResampler(fig_dict))
+ assert not is_figurewidget(go.Figure())
+ assert not is_figurewidget(None)
+ assert not is_figurewidget(fig_dict)
+ assert not is_figurewidget(go.Scatter(y=[1, 2, 3]))
+ assert not is_figurewidget(FigureResampler())
+ assert not is_figurewidget(FigureResampler(fig_dict))
+
+
+def test_is_fwr():
+ fig_dict = {"type": "scatter", "y": [1, 2, 3]}
+ assert is_fwr(FigureWidgetResampler())
+ assert is_fwr(FigureWidgetResampler(fig_dict))
+ assert not is_fwr(go.FigureWidget())
+ assert not is_fwr(go.FigureWidget(fig_dict))
+ assert not is_fwr(go.Figure())
+ assert not is_fwr(None)
+ assert not is_fwr(fig_dict)
+ assert not is_fwr(go.Scatter(y=[1, 2, 3]))
+ assert not is_fwr(FigureResampler())
+ assert not is_fwr(FigureResampler(fig_dict))
def test_timedelta_to_str():
- assert (round_td_str(pd.Timedelta('1W'))) == '7D'
- assert (timedelta_to_str(pd.Timedelta('1W'))) == '7D'
- assert (timedelta_to_str(pd.Timedelta('1W') * -1)) == 'NEG7D'
- assert timedelta_to_str(pd.Timedelta('1s 114ms')) == '1.114s'
- assert round_td_str(pd.Timedelta('14.4ms')) == '14ms'
- assert round_td_str(pd.Timedelta('501ms')) == '501ms'
- assert round_td_str(pd.Timedelta('951ms')) == '1s'
- assert round_td_str(pd.Timedelta('950ms')) == '950ms'
- assert round_td_str(pd.Timedelta('949ms')) == '949ms'
- assert round_td_str(pd.Timedelta('500ms')) == '500ms'
- assert round_td_str(pd.Timedelta('14.4ms')) == '14ms'
- assert round_td_str(pd.Timedelta('14.6ms')) == '15ms'
- assert round_td_str(pd.Timedelta('1h 14.4us')) == '1h'
- assert round_td_str(pd.Timedelta('1128.9us')) == '1ms'
- assert round_td_str(pd.Timedelta('128.9us')) == '129us'
- assert round_td_str((pd.Timedelta('14ns'))) == '14ns'
+ assert (round_td_str(pd.Timedelta("1W"))) == "7D"
+ assert (timedelta_to_str(pd.Timedelta("1W"))) == "7D"
+ assert (timedelta_to_str(pd.Timedelta("1W") * -1)) == "NEG7D"
+ assert timedelta_to_str(pd.Timedelta("1s 114ms")) == "1.114s"
+ assert round_td_str(pd.Timedelta("14.4ms")) == "14ms"
+ assert round_td_str(pd.Timedelta("501ms")) == "501ms"
+ assert round_td_str(pd.Timedelta("951ms")) == "1s"
+ assert round_td_str(pd.Timedelta("950ms")) == "950ms"
+ assert round_td_str(pd.Timedelta("949ms")) == "949ms"
+ assert round_td_str(pd.Timedelta("500ms")) == "500ms"
+ assert round_td_str(pd.Timedelta("14.4ms")) == "14ms"
+ assert round_td_str(pd.Timedelta("14.6ms")) == "15ms"
+ assert round_td_str(pd.Timedelta("1h 14.4us")) == "1h"
+ assert round_td_str(pd.Timedelta("1128.9us")) == "1ms"
+ assert round_td_str(pd.Timedelta("128.9us")) == "129us"
+ assert round_td_str((pd.Timedelta("14ns"))) == "14ns"
def test_round_int_str():
- assert round_number_str(0.951) == '1'
- assert round_number_str(0.95) == '0.9'
- assert round_number_str(0.949) == '0.9'
- assert round_number_str(0.00949) == '0.009'
- assert round_number_str(0.00950) == '0.009'
- assert round_number_str(0.00951) == '0.01'
- assert round_number_str(0.0044) == '0.004'
- assert round_number_str(0.00451) == '0.005'
- assert round_number_str(0.0001) == '0.0001'
- assert round_number_str(0.00001) == '1e-05'
- assert round_number_str(0.000000321) == '3e-07'
- assert round_number_str(12_000) == '12k'
- assert round_number_str(13_340) == '13k'
- assert round_number_str(13_540) == '14k'
- assert round_number_str(559_540) == '560k'
- assert round_number_str(949_000) == '949k'
- assert round_number_str(950_000) == '950k'
- assert round_number_str(950_001) == '1M'
- assert round_number_str(1_950_001) == '2M'
- assert round_number_str(111_950_001) == '112M'
\ No newline at end of file
+ assert round_number_str(0.951) == "1"
+ assert round_number_str(0.95) == "0.9"
+ assert round_number_str(0.949) == "0.9"
+ assert round_number_str(0.00949) == "0.009"
+ assert round_number_str(0.00950) == "0.009"
+ assert round_number_str(0.00951) == "0.01"
+ assert round_number_str(0.0044) == "0.004"
+ assert round_number_str(0.00451) == "0.005"
+ assert round_number_str(0.0001) == "0.0001"
+ assert round_number_str(0.00001) == "1e-05"
+ assert round_number_str(0.000000321) == "3e-07"
+ assert round_number_str(12_000) == "12k"
+ assert round_number_str(13_340) == "13k"
+ assert round_number_str(13_540) == "14k"
+ assert round_number_str(559_540) == "560k"
+ assert round_number_str(949_000) == "949k"
+ assert round_number_str(950_000) == "950k"
+ assert round_number_str(950_001) == "1M"
+ assert round_number_str(1_950_001) == "2M"
+ assert round_number_str(111_950_001) == "112M"