🕸️ Core¤

The core module to contain logic & functions used in controllers.

This module is intended to contain sub-modules and functions that are not directly utilized from the package, but rather used in building the package itself. This means that the core module should not contain any code that is specific to the package's use case, but rather should be generic and reusable in other contexts.

humbldata.core.standard_models ¤

Models to represent core data structures of the Standardization Framework.

humbldata.core.standard_models.abstract ¤

Abstract core DATA MODELS to be inherited by other models.

humbldata.core.standard_models.abstract.errors ¤

An ABSTRACT DATA MODEL to be inherited by custom errors.

humbldata.core.standard_models.abstract.errors.HumblDataError ¤

Bases: BaseException

Base Error for HumblData logic.

Source code in src/humbldata/core/standard_models/abstract/errors.py
class HumblDataError(BaseException):
    """Base Error for HumblData logic."""

    def __init__(self, original: str | Exception | None = None):
        self.original = original
        super().__init__(str(original))
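
A minimal usage sketch (the `load_results` helper below is hypothetical, not part of humbldata): the error stores the original message or exception on `.original` and behaves like any other exception when raised and caught.

```py
# Hypothetical call site; only HumblDataError itself comes from humbldata.
from humbldata.core.standard_models.abstract.errors import HumblDataError

def load_results(data):
    if data is None:
        raise HumblDataError("No data found.")
    return data

try:
    load_results(None)
except HumblDataError as e:
    print(e)           # "No data found."
    print(e.original)  # the original message/exception that was wrapped
```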

humbldata.core.standard_models.abstract.query_params ¤

A wrapper around OpenBB QueryParams Standardized Model to use with humbldata.

humbldata.core.standard_models.abstract.query_params.QueryParams ¤

Bases: QueryParams

An abstract standard_model to represent a base QueryParams Data.

QueryParams model should be used to define the query parameters for a context.category.command call.

This QueryParams model is meant to be inherited and built upon by other standard_models for a specific context.

Examples:

class EquityHistoricalQueryParams(QueryParams):

    symbol: str = Field(description=QUERY_DESCRIPTIONS.get("symbol", ""))
    interval: Optional[str] = Field(
        default="1d",
        description=QUERY_DESCRIPTIONS.get("interval", ""),
    )
    start_date: Optional[dateType] = Field(
        default=None,
        description=QUERY_DESCRIPTIONS.get("start_date", ""),
    )
    end_date: Optional[dateType] = Field(
        default=None,
        description=QUERY_DESCRIPTIONS.get("end_date", ""),
    )

    @field_validator("symbol", mode="before", check_fields=False)
    @classmethod
    def upper_symbol(cls, v: Union[str, List[str], Set[str]]):
        if isinstance(v, str):
            return v.upper()
        return ",".join([symbol.upper() for symbol in list(v)])

This would create a class that would be used to query historical price data for equities from any given command.

This could then be used to create a MandelbrotChannelEquityHistoricalQueryParams that would define what query parameters are needed for the Mandelbrot Channel command.
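
As a brief usage sketch (reusing the `EquityHistoricalQueryParams` example above; the values are illustrative, not a guaranteed API), the `upper_symbol` validator normalizes the symbol input before Pydantic's type validation runs:

```py
# Hypothetical usage of the docstring example above.
params = EquityHistoricalQueryParams(symbol=["aapl", "msft"], interval="1d")
print(params.symbol)      # "AAPL,MSFT" -- lists/sets are uppercased and comma-joined
print(params.start_date)  # None (default)
```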

Source code in src/humbldata/core/standard_models/abstract/query_params.py
class QueryParams(OpenBBQueryParams):
    """
    An abstract standard_model to represent a base QueryParams Data.

    QueryParams model should be used to define the query parameters for a
    `context.category.command` call.

    This QueryParams model is meant to be inherited and built upon by other
    standard_models for a specific context.

    Examples
    --------
    ```py
    class EquityHistoricalQueryParams(QueryParams):

        symbol: str = Field(description=QUERY_DESCRIPTIONS.get("symbol", ""))
        interval: Optional[str] = Field(
            default="1d",
            description=QUERY_DESCRIPTIONS.get("interval", ""),
        )
        start_date: Optional[dateType] = Field(
            default=None,
            description=QUERY_DESCRIPTIONS.get("start_date", ""),
        )
        end_date: Optional[dateType] = Field(
            default=None,
            description=QUERY_DESCRIPTIONS.get("end_date", ""),
        )

        @field_validator("symbol", mode="before", check_fields=False)
        @classmethod
        def upper_symbol(cls, v: Union[str, List[str], Set[str]]):
            if isinstance(v, str):
                return v.upper()
            return ",".join([symbol.upper() for symbol in list(v)])
    ```

    This would create a class that would be used to query historical price data
    for equities from any given command.

    This could then be used to create a
    `MandelbrotChannelEquityHistoricalQueryParams` that would define what query
    parameters are needed for the Mandelbrot Channel command.
    """

humbldata.core.standard_models.abstract.singleton ¤

An ABSTRACT DATA MODEL, Singleton, to represent a class that should only have one instance.

humbldata.core.standard_models.abstract.singleton.SingletonMeta ¤

Bases: type, Generic[T]

SingletonMeta is a metaclass that creates a Singleton instance of a class.

Singleton design pattern restricts the instantiation of a class to a single instance. This is useful when exactly one object is needed to coordinate actions across the system.
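
A short sketch of how a class might opt into this behaviour (the `AppSettings` class is hypothetical):

```py
# Hypothetical consumer of SingletonMeta; every call returns the same instance.
class AppSettings(metaclass=SingletonMeta):
    def __init__(self, env: str = "prod"):
        self.env = env

first = AppSettings(env="dev")
second = AppSettings(env="prod")  # constructor args are ignored on later calls

assert first is second
assert second.env == "dev"  # the first-created instance is what everyone shares
```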

Source code in src/humbldata/core/standard_models/abstract/singleton.py
class SingletonMeta(type, Generic[T]):
    """
    SingletonMeta is a metaclass that creates a Singleton instance of a class.

    Singleton design pattern restricts the instantiation of a class to a single
    instance. This is useful when exactly one object is needed to coordinate
    actions across the system.
    """

    _instances: ClassVar[dict[T, T]] = {}  # type: ignore  # noqa: PGH003

    def __call__(cls, *args, **kwargs) -> T:
        """
        Override the __call__ method.

        If the class exists, otherwise creates a new instance and stores it in
        the _instances dictionary.
        """
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance  # type: ignore  # noqa: PGH003

        return cls._instances[cls]  # type: ignore  # noqa: PGH003
humbldata.core.standard_models.abstract.singleton.SingletonMeta.__call__ ¤
__call__(*args, **kwargs) -> T

Override the `__call__` method.

Return the existing instance if the class has already been instantiated; otherwise create a new instance and store it in the _instances dictionary.

Source code in src/humbldata/core/standard_models/abstract/singleton.py
def __call__(cls, *args, **kwargs) -> T:
    """
    Override the __call__ method.

    If the class exists, otherwise creates a new instance and stores it in
    the _instances dictionary.
    """
    if cls not in cls._instances:
        instance = super().__call__(*args, **kwargs)
        cls._instances[cls] = instance  # type: ignore  # noqa: PGH003

    return cls._instances[cls]  # type: ignore  # noqa: PGH003

humbldata.core.standard_models.abstract.chart ¤

humbldata.core.standard_models.abstract.chart.ChartTemplate ¤

Bases: str, Enum

Chart format.

Available options: plotly, humbl_light, humbl_dark, plotly_light, plotly_dark, ggplot2, seaborn, simple_white, presentation, xgridoff, ygridoff, gridon, none.

Source code in src/humbldata/core/standard_models/abstract/chart.py
class ChartTemplate(str, Enum):
    """
    Chart format.

    Available options:
    - plotly
    - humbl_light
    - humbl_dark
    - plotly_light
    - plotly_dark
    - ggplot2
    - seaborn
    - simple_white
    - presentation
    - xgridoff
    - ygridoff
    - gridon
    - none
    """

    plotly = "plotly"
    humbl_light = "humbl_light"
    humbl_dark = "humbl_dark"
    plotly_light = "plotly_light"
    plotly_dark = "plotly_dark"
    ggplot2 = "ggplot2"
    seaborn = "seaborn"
    simple_white = "simple_white"
    presentation = "presentation"
    xgridoff = "xgridoff"
    ygridoff = "ygridoff"
    gridon = "gridon"
    none = "none"
humbldata.core.standard_models.abstract.chart.Chart ¤

Bases: BaseModel

A Chart object that is returned from a View.
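
A brief construction sketch (assumes Plotly is installed; `Chart` itself only requires that `fig` be some object and `content` a string):

```py
# Hedged example: the Plotly figure is an assumption, not required by Chart.
import plotly.graph_objects as go

fig = go.Figure(data=go.Scatter(x=[1, 2, 3], y=[2, 4, 8]))
chart = Chart(
    content=fig.to_json(),            # raw textual representation of the chart
    theme=ChartTemplate.plotly_dark,  # any ChartTemplate member
    fig=fig,
)
print(chart)  # __repr__ prints each field, truncated to ~83 characters
```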

Source code in src/humbldata/core/standard_models/abstract/chart.py
class Chart(BaseModel):
    """a Chart Object that is returned from a View."""

    content: str | None = Field(
        default=None,
        description="Raw textual representation of the chart.",
    )
    theme: ChartTemplate | None = Field(
        default=ChartTemplate.plotly,
        description="Complementary attribute to the `content` attribute. It specifies the format of the chart.",
    )
    fig: Any | None = Field(
        default=None,
        description="The figure object.",
        # json_schema_extra={"exclude_from_api": True},
    )
    model_config = ConfigDict(validate_assignment=True)

    def __repr__(self) -> str:
        """Human readable representation of the object."""
        items = [
            f"{k}: {v}"[:83] + ("..." if len(f"{k}: {v}") > 83 else "")
            for k, v in self.model_dump().items()
        ]

        return f"{self.__class__.__name__}\n\n" + "\n".join(items)
humbldata.core.standard_models.abstract.chart.Chart.__repr__ ¤
__repr__() -> str

Human readable representation of the object.

Source code in src/humbldata/core/standard_models/abstract/chart.py
def __repr__(self) -> str:
    """Human readable representation of the object."""
    items = [
        f"{k}: {v}"[:83] + ("..." if len(f"{k}: {v}") > 83 else "")
        for k, v in self.model_dump().items()
    ]

    return f"{self.__class__.__name__}\n\n" + "\n".join(items)

humbldata.core.standard_models.abstract.data ¤

A wrapper around OpenBB Data Standardized Model to use with humbldata.

humbldata.core.standard_models.abstract.data.Data ¤

Bases: DataFrameModel

An abstract standard_model to represent a base Data Model.

The Data Model should be used to define the data that is being collected and analyzed in a context.category.command call.

This Data model is meant to be inherited and built upon by other standard_models for a specific context.

Example

class EquityHistoricalData(Data):

    date: Union[dateType, datetime] = Field(
        description=DATA_DESCRIPTIONS.get("date", "")
    )
    open: float = Field(description=DATA_DESCRIPTIONS.get("open", ""))
    high: float = Field(description=DATA_DESCRIPTIONS.get("high", ""))
    low: float = Field(description=DATA_DESCRIPTIONS.get("low", ""))
    close: float = Field(description=DATA_DESCRIPTIONS.get("close", ""))
    volume: Optional[Union[float, int]] = Field(
        default=None, description=DATA_DESCRIPTIONS.get("volume", "")
    )

    @field_validator("date", mode="before", check_fields=False)
    def date_validate(cls, v):  # pylint: disable=E0213
        v = parser.isoparse(str(v))
        if v.hour == 0 and v.minute == 0:
            return v.date()
        return v
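
A hedged validation sketch (assumes pandera's Polars integration, `pandera.polars`, which the `pl.Utf8`/`pl.Float64` field types used elsewhere in this package suggest; the `PriceData` model is hypothetical):

```py
import pandera.polars as pa
import polars as pl

from humbldata.core.standard_models.abstract.data import Data

class PriceData(Data):
    symbol: pl.Utf8 = pa.Field()
    close: pl.Float64 = pa.Field()

frame = pl.DataFrame({"symbol": ["AAPL"], "close": [189.37]})
validated = PriceData.validate(frame)  # raises a SchemaError if the frame does not conform
```
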
Source code in src/humbldata/core/standard_models/abstract/data.py
class Data(pa.DataFrameModel):
    """
    An abstract standard_model to represent a base Data Model.

    The Data Model should be used to define the data that is being
    collected and analyzed in a `context.category.command` call.

    This Data model is meant to be inherited and built upon by other
    standard_models for a specific context.

    Example
    -------
    ```py
    class EquityHistoricalData(Data):

    date: Union[dateType, datetime] = Field(
        description=DATA_DESCRIPTIONS.get("date", "")
    )
    open: float = Field(description=DATA_DESCRIPTIONS.get("open", ""))
    high: float = Field(description=DATA_DESCRIPTIONS.get("high", ""))
    low: float = Field(description=DATA_DESCRIPTIONS.get("low", ""))
    close: float = Field(description=DATA_DESCRIPTIONS.get("close", ""))
    volume: Optional[Union[float, int]] = Field(
        default=None, description=DATA_DESCRIPTIONS.get("volume", "")
    )

    @field_validator("date", mode="before", check_fields=False)
    def date_validate(cls, v):  # pylint: disable=E0213
        v = parser.isoparse(str(v))
        if v.hour == 0 and v.minute == 0:
            return v.date()
        return v

    ```
    """

humbldata.core.standard_models.abstract.humblobject ¤

humbldata.core.standard_models.abstract.humblobject.extract_subclass_dict ¤
extract_subclass_dict(self, attribute_name: str, items: list)

Extract the dictionary representation of the specified attribute.

Parameters:

- attribute_name (str): The name of the attribute to update in the items list. Required.
Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def extract_subclass_dict(self, attribute_name: str, items: list):
    """
    Extract the dictionary representation of the specified attribute.

    Parameters
    ----------
    attribute_name : str
        The name of the attribute to update in the items list.
    """
    # Check if the attribute exists and has a value
    attribute_value = getattr(self, attribute_name, None)
    if attribute_value:
        # Assuming the attribute has a method called 'model_dump' to get its dictionary representation
        add_item = attribute_value.model_dump()
        add_item_str = str(add_item)
        if len(add_item_str) > 80:
            add_item_str = add_item_str[:80] + "..."
        for i, item in enumerate(items):
            if item.startswith(f"{attribute_name}:"):
                items[i] = f"{attribute_name}: {add_item_str}"
                break

    return items
humbldata.core.standard_models.abstract.humblobject.HumblObject ¤

Bases: Tagged, Generic[T]

HumblObject is the base class for all data returned from the Toolbox.
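
A hedged conversion sketch (`obj` stands for a HumblObject returned by some Toolbox command; it is not constructed here):

```py
# All of these methods are defined on HumblObject below.
lazy = obj.to_polars(collect=False)  # pl.LazyFrame of the results
df = obj.to_polars()                 # collected pl.DataFrame
pdf = obj.to_pandas()                # pandas DataFrame
arr = obj.to_numpy()                 # NumPy array
raw = obj.to_df(equity_data=True)    # raw equity data used in the calculation
payload = obj.to_json()              # JSON string of the results
```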

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
class HumblObject(Tagged, Generic[T]):
    """HumblObject is the base class for all dta returned from the Toolbox."""

    _user_settings: ClassVar[BaseModel | None] = None
    _system_settings: ClassVar[BaseModel | None] = None

    model_config = ConfigDict(arbitrary_types_allowed=True)

    results: T | None = Field(
        default=None,
        description="Serializable Logical Plan of the pl.LazyFrame results.",
    )
    equity_data: T | None = Field(
        default=None,
        description="Serialized raw data used in the command calculations.",
    )
    provider: str | None = Field(
        default=None,
        description="Provider name.",
    )
    warnings: list[Warning_] | None = Field(
        default=None,
        description="List of warnings.",
    )
    chart: Chart | list[Chart] | None = Field(
        default=None,
        description="Chart object.",
    )
    extra: dict[str, Any] = Field(
        default_factory=dict,
        description="Extra info.",
    )
    context_params: ToolboxQueryParams | PortfolioQueryParams | None = Field(
        default_factory=ToolboxQueryParams,
        title="Context Parameters",
        description="Context parameters.",
    )
    command_params: SerializeAsAny[QueryParams] | None = Field(
        default=QueryParams,
        title="Command Parameters",
        description="Command-specific parameters.",
    )

    # @field_validator("command_params")
    # def validate_command_params(cls, v):
    #     class_name = v.__class__.__name__
    #     if "QueryParams" in class_name:
    #         return v
    #     msg = "Wrong type for 'command_params', must be subclass of QueryParams"
    #     raise TypeError(msg)

    def __repr__(self) -> str:
        """Human readable representation of the object."""
        items = [
            f"{k}: {v}"[:83] + ("..." if len(f"{k}: {v}") > 83 else "")
            for k, v in self.model_dump().items()
        ]

        # Needed to extract subclass dict correctly
        # items = extract_subclass_dict(self, "command_params", items)

        return f"{self.__class__.__name__}\n\n" + "\n".join(items)

    def to_polars(
        self, collect: bool = True, equity_data: bool = False
    ) -> pl.LazyFrame | pl.DataFrame:
        """
        Deserialize the stored results or return the LazyFrame, and optionally collect them into a Polars DataFrame.

        Parameters
        ----------
        collect : bool, optional
            If True, collects the deserialized LazyFrame into a DataFrame.
            Default is True.
        equity_data : bool, optional
            If True, processes equity_data instead of results.
            Default is False.

        Returns
        -------
        pl.LazyFrame | pl.DataFrame
            The results as a Polars LazyFrame or DataFrame,
            depending on the collect parameter.

        Raises
        ------
        HumblDataError
            If no results or equity data are found to process
        """
        data = self.equity_data if equity_data else self.results

        if data is None:
            raise HumblDataError("No data found.")

        if isinstance(data, pl.LazyFrame):
            out = data
        elif isinstance(data, str):
            with io.StringIO(data) as data_io:
                out = pl.LazyFrame.deserialize(data_io, format="json")
        elif isinstance(data, bytes):
            with io.BytesIO(data) as data_io:
                out = pl.LazyFrame.deserialize(data_io, format="binary")
        else:
            raise HumblDataError(
                "Invalid data type. Expected LazyFrame or serialized string."
            )

        if collect:
            out = out.collect()

        return out

    def to_df(
        self, collect: bool = True, equity_data: bool = False
    ) -> pl.LazyFrame | pl.DataFrame:
        """
        Alias for the `to_polars` method.

        Parameters
        ----------
        collect : bool, optional
            If True, collects the deserialized LazyFrame into a DataFrame.
            Default is True.

        Returns
        -------
        pl.LazyFrame | pl.DataFrame
            The deserialized results as a Polars LazyFrame or DataFrame,
            depending on the collect parameter.
        """
        return self.to_polars(collect=collect, equity_data=equity_data)

    def to_pandas(self, equity_data: bool = False) -> pd.DataFrame:
        """
        Convert the results to a Pandas DataFrame.

        Returns
        -------
        pd.DataFrame
            The results as a Pandas DataFrame.
        """
        return self.to_polars(collect=True, equity_data=equity_data).to_pandas()

    def to_numpy(self, equity_data: bool = False) -> np.ndarray:
        """
        Convert the results to a NumPy array.

        Returns
        -------
        np.ndarray
            The results as a NumPy array.
        """
        return self.to_polars(collect=True, equity_data=equity_data).to_numpy()

    def to_dict(
        self,
        row_wise: bool = False,
        equity_data: bool = False,
        as_series: bool = True,
    ) -> dict | list[dict]:
        """
        Transform the stored data into a dictionary or a list of dictionaries.

        This method allows for the conversion of the internal data
        representation into a more universally accessible format, either
        aggregating the entire dataset into a single dictionary (column-wise)
        or breaking it down into a list of dictionaries, each representing a
        row in the dataset.

        Parameters
        ----------
        row_wise : bool, optional
            Determines the format of the output. If set to True, the method
            returns a list of dictionaries, with each dictionary representing a
            row and its corresponding data as key-value pairs. If set to False,
            the method returns a single dictionary, with column names as keys
            and lists of column data as values. Default is False.

        equity_data : bool, optional
            A flag to specify whether to use equity-specific data for the
            conversion. This parameter allows for flexibility in handling
            different types of data stored within the object. Default is
            False.
        as_series : bool, optional
            If True, the method returns a pl.Series with values as Series. If
            False, the method returns a dict with values as List[Any].
            Default is True.

        Returns
        -------
        dict | list[dict]
            Depending on the `row_wise` parameter, either a dictionary mapping column names to lists of values (if `row_wise` is False) or a list of dictionaries, each representing a row in the dataset (if `row_wise` is True).
        """
        if row_wise:
            return self.to_polars(
                collect=True, equity_data=equity_data
            ).to_dicts()
        return self.to_polars(collect=True, equity_data=equity_data).to_dict(
            as_series=as_series
        )

    def to_arrow(self, equity_data: bool = False) -> pa.Table:
        """
        Convert the results to an Arrow Table.

        Returns
        -------
        pa.Table
            The results as an Arrow Table.
        """
        return self.to_polars(collect=True, equity_data=equity_data).to_arrow()

    def to_struct(
        self, name: str = "results", equity_data: bool = False
    ) -> pl.Series:
        """
        Convert the results to a struct.

        Parameters
        ----------
        name : str, optional
            The name of the struct. Default is "results".

        Returns
        -------
        pl.Struct
            The results as a struct.
        """
        return self.to_polars(collect=True, equity_data=equity_data).to_struct(
            name=name
        )

    def to_json(
        self, equity_data: bool = False, chart: bool = False
    ) -> str | list[str]:
        """
        Convert the results to a JSON string.

        Parameters
        ----------
        equity_data : bool, optional
            A flag to specify whether to use equity-specific data for the
            conversion. Default is False.
        chart : bool, optional
            If True, return all generated charts as a JSON string instead of
            returning the results. Default is False.

        Returns
        -------
        str
            The results or charts as a JSON string.

        Raises
        ------
        HumblDataError
            If chart is True but no charts are available.
        """
        import json
        from datetime import date, datetime

        from humbldata.core.standard_models.abstract.errors import (
            HumblDataError,
        )

        def json_serial(obj):
            """JSON serializer for objects not serializable by default json code."""
            if isinstance(obj, (datetime, date)):
                return obj.isoformat()
            msg = f"Type {type(obj)} not serializable"
            raise TypeError(msg)

        if chart:
            if self.chart is None:
                msg = f"You set `.to_json(chart=True)` but there were no charts. Make sure `chart=True` in {self.command_params.__class__.__name__}"
                raise HumblDataError(msg)

            if isinstance(self.chart, list):
                return [
                    chart.content
                    for chart in self.chart
                    if chart and chart.content
                ]
            else:
                return self.chart.content
        else:
            data = self.to_polars(
                collect=True, equity_data=equity_data
            ).to_dict(as_series=False)
            return json.dumps(data, default=json_serial)

    def is_empty(self, equity_data: bool = False) -> bool:
        """
        Check if the results are empty.

        Returns
        -------
        bool
            True if the results are empty, False otherwise.
        """
        return self.to_polars(collect=True, equity_data=equity_data).is_empty()

    def show(self) -> None:
        """Show the chart."""
        if isinstance(self.chart, list):
            for chart in self.chart:
                if chart and chart.fig:
                    chart.fig.show()
                else:
                    msg = "Chart object is missing or incomplete."
                    raise HumblDataError(msg)
        elif not self.chart or not self.chart.fig:
            msg = "Chart not found."
            raise HumblDataError(msg)
humbldata.core.standard_models.abstract.humblobject.HumblObject.__repr__ ¤
__repr__() -> str

Human readable representation of the object.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def __repr__(self) -> str:
    """Human readable representation of the object."""
    items = [
        f"{k}: {v}"[:83] + ("..." if len(f"{k}: {v}") > 83 else "")
        for k, v in self.model_dump().items()
    ]

    # Needed to extract subclass dict correctly
    # items = extract_subclass_dict(self, "command_params", items)

    return f"{self.__class__.__name__}\n\n" + "\n".join(items)
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_polars ¤
to_polars(collect: bool = True, equity_data: bool = False) -> LazyFrame | DataFrame

Deserialize the stored results or return the LazyFrame, and optionally collect them into a Polars DataFrame.

Parameters:

- collect (bool): If True, collects the deserialized LazyFrame into a DataFrame. Default is True.
- equity_data (bool): If True, processes equity_data instead of results. Default is False.

Returns:

- pl.LazyFrame | pl.DataFrame: The results as a Polars LazyFrame or DataFrame, depending on the collect parameter.

Raises:

- HumblDataError: If no results or equity data are found to process.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_polars(
    self, collect: bool = True, equity_data: bool = False
) -> pl.LazyFrame | pl.DataFrame:
    """
    Deserialize the stored results or return the LazyFrame, and optionally collect them into a Polars DataFrame.

    Parameters
    ----------
    collect : bool, optional
        If True, collects the deserialized LazyFrame into a DataFrame.
        Default is True.
    equity_data : bool, optional
        If True, processes equity_data instead of results.
        Default is False.

    Returns
    -------
    pl.LazyFrame | pl.DataFrame
        The results as a Polars LazyFrame or DataFrame,
        depending on the collect parameter.

    Raises
    ------
    HumblDataError
        If no results or equity data are found to process
    """
    data = self.equity_data if equity_data else self.results

    if data is None:
        raise HumblDataError("No data found.")

    if isinstance(data, pl.LazyFrame):
        out = data
    elif isinstance(data, str):
        with io.StringIO(data) as data_io:
            out = pl.LazyFrame.deserialize(data_io, format="json")
    elif isinstance(data, bytes):
        with io.BytesIO(data) as data_io:
            out = pl.LazyFrame.deserialize(data_io, format="binary")
    else:
        raise HumblDataError(
            "Invalid data type. Expected LazyFrame or serialized string."
        )

    if collect:
        out = out.collect()

    return out
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_df ¤
to_df(collect: bool = True, equity_data: bool = False) -> LazyFrame | DataFrame

Alias for the to_polars method.

Parameters:

- collect (bool): If True, collects the deserialized LazyFrame into a DataFrame. Default is True.

Returns:

- pl.LazyFrame | pl.DataFrame: The deserialized results as a Polars LazyFrame or DataFrame, depending on the collect parameter.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_df(
    self, collect: bool = True, equity_data: bool = False
) -> pl.LazyFrame | pl.DataFrame:
    """
    Alias for the `to_polars` method.

    Parameters
    ----------
    collect : bool, optional
        If True, collects the deserialized LazyFrame into a DataFrame.
        Default is True.

    Returns
    -------
    pl.LazyFrame | pl.DataFrame
        The deserialized results as a Polars LazyFrame or DataFrame,
        depending on the collect parameter.
    """
    return self.to_polars(collect=collect, equity_data=equity_data)
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_pandas ¤
to_pandas(equity_data: bool = False) -> DataFrame

Convert the results to a Pandas DataFrame.

Returns:

- pd.DataFrame: The results as a Pandas DataFrame.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_pandas(self, equity_data: bool = False) -> pd.DataFrame:
    """
    Convert the results to a Pandas DataFrame.

    Returns
    -------
    pd.DataFrame
        The results as a Pandas DataFrame.
    """
    return self.to_polars(collect=True, equity_data=equity_data).to_pandas()
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_numpy ¤
to_numpy(equity_data: bool = False) -> ndarray

Convert the results to a NumPy array.

Returns:

- np.ndarray: The results as a NumPy array.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_numpy(self, equity_data: bool = False) -> np.ndarray:
    """
    Convert the results to a NumPy array.

    Returns
    -------
    np.ndarray
        The results as a NumPy array.
    """
    return self.to_polars(collect=True, equity_data=equity_data).to_numpy()
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_dict ¤
to_dict(row_wise: bool = False, equity_data: bool = False, as_series: bool = True) -> dict | list[dict]

Transform the stored data into a dictionary or a list of dictionaries.

This method allows for the conversion of the internal data representation into a more universally accessible format, either aggregating the entire dataset into a single dictionary (column-wise) or breaking it down into a list of dictionaries, each representing a row in the dataset.

Parameters:

- row_wise (bool): Determines the format of the output. If set to True, the method returns a list of dictionaries, with each dictionary representing a row and its corresponding data as key-value pairs. If set to False, the method returns a single dictionary, with column names as keys and lists of column data as values. Default is False.
- equity_data (bool): A flag to specify whether to use equity-specific data for the conversion. This parameter allows for flexibility in handling different types of data stored within the object. Default is False.
- as_series (bool): If True, the returned dictionary's values are pl.Series; if False, they are plain lists (List[Any]). Default is True.

Returns:

- dict | list[dict]: Depending on the `row_wise` parameter, either a dictionary mapping column names to lists of values (if `row_wise` is False) or a list of dictionaries, each representing a row in the dataset (if `row_wise` is True).
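
A small shape sketch (column names are illustrative and depend on the command that produced the object):

```py
obj.to_dict()                 # {"symbol": <pl.Series>, "close": <pl.Series>, ...}
obj.to_dict(as_series=False)  # {"symbol": [...], "close": [...], ...}
obj.to_dict(row_wise=True)    # [{"symbol": ..., "close": ...}, {...}, ...]
```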

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_dict(
    self,
    row_wise: bool = False,
    equity_data: bool = False,
    as_series: bool = True,
) -> dict | list[dict]:
    """
    Transform the stored data into a dictionary or a list of dictionaries.

    This method allows for the conversion of the internal data
    representation into a more universally accessible format, either
    aggregating the entire dataset into a single dictionary (column-wise)
    or breaking it down into a list of dictionaries, each representing a
    row in the dataset.

    Parameters
    ----------
    row_wise : bool, optional
        Determines the format of the output. If set to True, the method
        returns a list of dictionaries, with each dictionary representing a
        row and its corresponding data as key-value pairs. If set to False,
        the method returns a single dictionary, with column names as keys
        and lists of column data as values. Default is False.

    equity_data : bool, optional
        A flag to specify whether to use equity-specific data for the
        conversion. This parameter allows for flexibility in handling
        different types of data stored within the object. Default is
        False.
    as_series : bool, optional
        If True, the method returns a pl.Series with values as Series. If
        False, the method returns a dict with values as List[Any].
        Default is True.

    Returns
    -------
    dict | list[dict]
        Depending on the `row_wise` parameter, either a dictionary mapping column names to lists of values (if `row_wise` is False) or a list of dictionaries, each representing a row in the dataset (if `row_wise` is True).
    """
    if row_wise:
        return self.to_polars(
            collect=True, equity_data=equity_data
        ).to_dicts()
    return self.to_polars(collect=True, equity_data=equity_data).to_dict(
        as_series=as_series
    )
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_arrow ¤
to_arrow(equity_data: bool = False) -> Table

Convert the results to an Arrow Table.

Returns:

- pa.Table: The results as an Arrow Table.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_arrow(self, equity_data: bool = False) -> pa.Table:
    """
    Convert the results to an Arrow Table.

    Returns
    -------
    pa.Table
        The results as an Arrow Table.
    """
    return self.to_polars(collect=True, equity_data=equity_data).to_arrow()
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_struct ¤
to_struct(name: str = 'results', equity_data: bool = False) -> Series

Convert the results to a struct.

Parameters:

- name (str): The name of the struct. Default is "results".

Returns:

- pl.Series: The results as a Series of struct dtype.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_struct(
    self, name: str = "results", equity_data: bool = False
) -> pl.Series:
    """
    Convert the results to a struct.

    Parameters
    ----------
    name : str, optional
        The name of the struct. Default is "results".

    Returns
    -------
    pl.Struct
        The results as a struct.
    """
    return self.to_polars(collect=True, equity_data=equity_data).to_struct(
        name=name
    )
humbldata.core.standard_models.abstract.humblobject.HumblObject.to_json ¤
to_json(equity_data: bool = False, chart: bool = False) -> str | list[str]

Convert the results to a JSON string.

Parameters:

- equity_data (bool): A flag to specify whether to use equity-specific data for the conversion. Default is False.
- chart (bool): If True, return all generated charts as a JSON string instead of returning the results. Default is False.

Returns:

- str | list[str]: The results or charts as a JSON string (or a list of JSON strings when multiple charts are returned).

Raises:

- HumblDataError: If chart is True but no charts are available.
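
Usage sketch (hedged: charts exist only if the command was run with charting enabled):

```py
data_json = obj.to_json()             # results as JSON; dates become ISO strings
chart_json = obj.to_json(chart=True)  # chart JSON (or a list of them); raises
                                      # HumblDataError if no charts were generated
```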

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def to_json(
    self, equity_data: bool = False, chart: bool = False
) -> str | list[str]:
    """
    Convert the results to a JSON string.

    Parameters
    ----------
    equity_data : bool, optional
        A flag to specify whether to use equity-specific data for the
        conversion. Default is False.
    chart : bool, optional
        If True, return all generated charts as a JSON string instead of
        returning the results. Default is False.

    Returns
    -------
    str
        The results or charts as a JSON string.

    Raises
    ------
    HumblDataError
        If chart is True but no charts are available.
    """
    import json
    from datetime import date, datetime

    from humbldata.core.standard_models.abstract.errors import (
        HumblDataError,
    )

    def json_serial(obj):
        """JSON serializer for objects not serializable by default json code."""
        if isinstance(obj, (datetime, date)):
            return obj.isoformat()
        msg = f"Type {type(obj)} not serializable"
        raise TypeError(msg)

    if chart:
        if self.chart is None:
            msg = f"You set `.to_json(chart=True)` but there were no charts. Make sure `chart=True` in {self.command_params.__class__.__name__}"
            raise HumblDataError(msg)

        if isinstance(self.chart, list):
            return [
                chart.content
                for chart in self.chart
                if chart and chart.content
            ]
        else:
            return self.chart.content
    else:
        data = self.to_polars(
            collect=True, equity_data=equity_data
        ).to_dict(as_series=False)
        return json.dumps(data, default=json_serial)
humbldata.core.standard_models.abstract.humblobject.HumblObject.is_empty ¤
is_empty(equity_data: bool = False) -> bool

Check if the results are empty.

Returns:

- bool: True if the results are empty, False otherwise.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def is_empty(self, equity_data: bool = False) -> bool:
    """
    Check if the results are empty.

    Returns
    -------
    bool
        True if the results are empty, False otherwise.
    """
    return self.to_polars(collect=True, equity_data=equity_data).is_empty()
humbldata.core.standard_models.abstract.humblobject.HumblObject.show ¤
show() -> None

Show the chart.

Source code in src/humbldata/core/standard_models/abstract/humblobject.py
def show(self) -> None:
    """Show the chart."""
    if isinstance(self.chart, list):
        for chart in self.chart:
            if chart and chart.fig:
                chart.fig.show()
            else:
                msg = "Chart object is missing or incomplete."
                raise HumblDataError(msg)
    elif not self.chart or not self.chart.fig:
        msg = "Chart not found."
        raise HumblDataError(msg)

humbldata.core.standard_models.abstract.tagged ¤

An ABSTRACT DATA MODEL, Tagged, to be inherited by other models as identifier.

humbldata.core.standard_models.abstract.tagged.Tagged ¤

Bases: BaseModel

A class to represent an object tagged with a uuid7.

Source code in src/humbldata/core/standard_models/abstract/tagged.py
class Tagged(BaseModel):
    """A class to represent an object tagged with a uuid7."""

    id: str = Field(default_factory=uuid7str, alias="_id")

humbldata.core.standard_models.portfolio ¤

Context: Portfolio || Category: Analytics.

This module defines the QueryParams and Data classes for the Portfolio context.

humbldata.core.standard_models.portfolio.analytics ¤

humbldata.core.standard_models.portfolio.analytics.etf_category ¤

UserTable Standard Model.

Context: Portfolio || Category: Analytics || Command: user_table.

This module is used to define the QueryParams and Data model for the UserTable command.

humbldata.core.standard_models.portfolio.analytics.etf_category.ETFCategoryData ¤

Bases: Data

Data model for the etf_category command, a Pandera.Polars Model.

Used for simple validation of ETF category data for the UserTableFetcher internal logic aggregate_user_table_data()

Source code in src/humbldata/core/standard_models/portfolio/analytics/etf_category.py
class ETFCategoryData(Data):
    """
    Data model for the etf_category command, a Pandera.Polars Model.

    Used for simple validation of ETF category data for the UserTableFetcher
    internal logic `aggregate_user_table_data()`
    """

    symbol: str = pa.Field(
        default=None,
        title="Symbol",
        description=QUERY_DESCRIPTIONS.get("symbol", ""),
    )
    category: pl.Utf8 | None = pa.Field(
        default=None,
        title="Category/Sector",
        description=QUERY_DESCRIPTIONS.get("category", ""),
        nullable=True,
    )
humbldata.core.standard_models.portfolio.analytics.user_table ¤

UserTable Standard Model.

Context: Portfolio || Category: Analytics || Command: user_table.

This module is used to define the QueryParams and Data model for the UserTable command.

humbldata.core.standard_models.portfolio.analytics.user_table.UserTableQueryParams ¤

Bases: QueryParams

QueryParams model for the UserTable command, a Pydantic v2 model.

Parameters:

- symbols (str | list[str] | set[str]): The symbol or ticker of the stock(s). Can be a single symbol, a comma-separated string, or a list/set of symbols. Default is "AAPL". Examples: "AAPL", "AAPL,MSFT", ["AAPL", "MSFT"]. All inputs will be converted to uppercase.

Notes

The symbols input will be processed to ensure all symbols are uppercase and properly formatted, regardless of the input format.
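
A brief sketch of that normalization (grounded in the `upper_symbol` validator shown below; note that a set input has no guaranteed ordering):

```py
UserTableQueryParams(symbols="aapl, msft").symbols      # ["AAPL", "MSFT"]
UserTableQueryParams(symbols=["aapl", "msft"]).symbols  # ["AAPL", "MSFT"]
UserTableQueryParams(symbols={"msft", "aapl"}).symbols  # same symbols, arbitrary order
```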

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
class UserTableQueryParams(QueryParams):
    """
    QueryParams model for the UserTable command, a Pydantic v2 model.

    Parameters
    ----------
    symbols : str | list[str] | set[str]
        The symbol or ticker of the stock(s). Can be a single symbol, a comma-separated string,
        or a list/set of symbols. Default is "AAPL".
        Examples: "AAPL", "AAPL,MSFT", ["AAPL", "MSFT"]
        All inputs will be converted to uppercase.

    Notes
    -----
    The `symbols` input will be processed to ensure all symbols are uppercase
    and properly formatted, regardless of the input format.
    """

    symbols: str | list[str] | set[str] = pa.Field(
        default="AAPL",
        title="Symbol",
        description=QUERY_DESCRIPTIONS.get("symbol", ""),
    )

    @field_validator("symbols", mode="before", check_fields=False)
    @classmethod
    def upper_symbol(cls, v: str | list[str] | set[str]) -> str | list[str]:
        """
        Convert the stock symbol to uppercase.

        Parameters
        ----------
        v : Union[str, List[str], Set[str]]
            The stock symbol or collection of symbols to be converted.

        Returns
        -------
        Union[str, List[str]]
            The uppercase stock symbol or a comma-separated string of uppercase
            symbols.
        """
        # Handle empty inputs
        if not v:
            return []
        # If v is a string, split it by commas into a list. Otherwise, ensure it's a list.
        v = v.split(",") if isinstance(v, str) else v

        # Trim whitespace and check if all elements in the list are strings
        if not all(isinstance(item.strip(), str) for item in v):
            msg = "Every element in `symbol` list must be a `str`"
            raise ValueError(msg)

        # Convert all elements to uppercase and trim whitespace
        return [symbol.strip().upper() for symbol in v]
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableQueryParams.upper_symbol classmethod ¤
upper_symbol(v: str | list[str] | set[str]) -> str | list[str]

Convert the stock symbol to uppercase.

Parameters:

- v (Union[str, List[str], Set[str]]): The stock symbol or collection of symbols to be converted. Required.

Returns:

- Union[str, List[str]]: The uppercase stock symbol or a list of uppercase symbols.

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
@field_validator("symbols", mode="before", check_fields=False)
@classmethod
def upper_symbol(cls, v: str | list[str] | set[str]) -> str | list[str]:
    """
    Convert the stock symbol to uppercase.

    Parameters
    ----------
    v : Union[str, List[str], Set[str]]
        The stock symbol or collection of symbols to be converted.

    Returns
    -------
    Union[str, List[str]]
        The uppercase stock symbol or a comma-separated string of uppercase
        symbols.
    """
    # Handle empty inputs
    if not v:
        return []
    # If v is a string, split it by commas into a list. Otherwise, ensure it's a list.
    v = v.split(",") if isinstance(v, str) else v

    # Trim whitespace and check if all elements in the list are strings
    if not all(isinstance(item.strip(), str) for item in v):
        msg = "Every element in `symbol` list must be a `str`"
        raise ValueError(msg)

    # Convert all elements to uppercase and trim whitespace
    return [symbol.strip().upper() for symbol in v]
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableData ¤

Bases: Data

Data model for the user_table command, a Pandera.Polars Model.

This Data model is used to validate data in the .transform_data() method of the UserTableFetcher class.

Attributes:

- symbol (pl.Utf8): The stock symbol.
- last_price (pl.Float64): The last known price of the stock.
- buy_price (pl.Float64): The recommended buy price for the stock.
- sell_price (pl.Float64): The recommended sell price for the stock.
- ud_pct (pl.Utf8): The upside/downside percentage.
- ud_ratio (pl.Float64): The upside/downside ratio.
- asset_class (pl.Utf8): The asset class of the stock.
- sector (pl.Utf8): The sector of the stock.
- humbl_suggestion (pl.Utf8 | None): The suggestion provided by HUMBL.

Methods:

- None
Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
class UserTableData(Data):
    """
    Data model for the user_table command, a Pandera.Polars Model.

    This Data model is used to validate data in the `.transform_data()` method of the `UserTableFetcher` class.

    Attributes
    ----------
    symbol : pl.Utf8
        The stock symbol.
    last_price : pl.Float64
        The last known price of the stock.
    buy_price : pl.Float64
        The recommended buy price for the stock.
    sell_price : pl.Float64
        The recommended sell price for the stock.
    ud_pct : pl.Utf8
        The upside/downside percentage.
    ud_ratio : pl.Float64
        The upside/downside ratio.
    asset_class : pl.Utf8
        The asset class of the stock.
    sector : pl.Utf8
        The sector of the stock.
    humbl_suggestion : pl.Utf8 | None
        The suggestion provided by HUMBL.

    Methods
    -------
    None

    """

    symbol: pl.Utf8 = pa.Field(
        default=None,
        title="Symbol",
        description=DATA_DESCRIPTIONS.get("symbol", ""),
        alias="(symbols|symbol)",
        regex=True,
    )
    last_price: pl.Float64 = pa.Field(
        default=None,
        title="Last Price",
        description=DATA_DESCRIPTIONS.get("last_price", ""),
    )
    buy_price: pl.Float64 = pa.Field(
        default=None,
        title="Buy Price",
        description=DATA_DESCRIPTIONS.get("buy_price", ""),
    )
    sell_price: pl.Float64 = pa.Field(
        default=None,
        title="Sell Price",
        description=DATA_DESCRIPTIONS.get("sell_price", ""),
    )
    ud_pct: pl.Utf8 = pa.Field(
        default=None,
        title="Upside/Downside Percentage",
        description=DATA_DESCRIPTIONS.get("ud_pct", ""),
    )
    ud_ratio: pl.Float64 = pa.Field(
        default=None,
        title="Upside/Downside Ratio",
        description=DATA_DESCRIPTIONS.get("ud_ratio", ""),
    )
    asset_class: pl.Utf8 = pa.Field(
        default=None,
        title="Asset Class",
        description=DATA_DESCRIPTIONS.get("asset_class", ""),
    )
    sector: pl.Utf8 = pa.Field(
        default=None,
        title="Sector",
        description=DATA_DESCRIPTIONS.get("sector", ""),
        nullable=True,
    )
    humbl_suggestion: pl.Utf8 | None = pa.Field(
        default=None,
        title="humblSuggestion",
        description=QUERY_DESCRIPTIONS.get("humbl_suggestion", ""),
    )
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableFetcher ¤

Fetcher for the UserTable command.

Parameters:

- context_params (PortfolioQueryParams): The context parameters for the Portfolio query. Required.
- command_params (UserTableQueryParams): The command-specific parameters for the UserTable query. Required.

Attributes:

- context_params (PortfolioQueryParams): Stores the context parameters passed during initialization.
- command_params (UserTableQueryParams): Stores the command-specific parameters passed during initialization.
- data (pl.DataFrame): The raw data extracted from the data provider, before transformation.

Methods:

- transform_query(): Transform the command-specific parameters into a query.
- extract_data(): Extracts the data from the provider and returns it as a Polars DataFrame.
- transform_data(): Transforms the command-specific data according to the UserTable logic.
- fetch_data(): Execute TET Pattern. A usage sketch follows the returns list below.

Returns:

- HumblObject:
  - results (UserTableData): Serializable results.
  - provider (Literal['fmp', 'intrinio', 'polygon', 'tiingo', 'yfinance']): Provider name.
  - warnings (Optional[List[Warning_]]): List of warnings.
  - chart (Optional[Chart]): Chart object.
  - context_params (PortfolioQueryParams): Context-specific parameters.
  - command_params (UserTableQueryParams): Command-specific parameters.
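
An end-to-end usage sketch (hedged: the PortfolioQueryParams arguments are assumptions based on the attributes accessed in `extract_data`, and `transform_query` unpacks `command_params` with `**`, so a mapping is passed here):

```py
import asyncio

async def main():
    context = PortfolioQueryParams(symbols=["AAPL", "XLE"])  # hypothetical arguments
    command = {"symbols": ["AAPL", "XLE"]}                   # unpacked by transform_query
    fetcher = UserTableFetcher(context_params=context, command_params=command)
    obj = await fetcher.fetch_data()  # transform_query -> extract_data -> transform_data
    print(obj.to_polars())

asyncio.run(main())
```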

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
class UserTableFetcher:
    """
    Fetcher for the UserTable command.

    Parameters
    ----------
    context_params : PortfolioQueryParams
        The context parameters for the Portfolio query.
    command_params : UserTableQueryParams
        The command-specific parameters for the UserTable query.

    Attributes
    ----------
    context_params : PortfolioQueryParams
        Stores the context parameters passed during initialization.
    command_params : UserTableQueryParams
        Stores the command-specific parameters passed during initialization.
    data : pl.DataFrame
        The raw data extracted from the data provider, before transformation.

    Methods
    -------
    transform_query()
        Transform the command-specific parameters into a query.
    extract_data()
        Extracts the data from the provider and returns it as a Polars DataFrame.
    transform_data()
        Transforms the command-specific data according to the UserTable logic.
    fetch_data()
        Execute TET Pattern.

    Returns
    -------
    HumblObject
        results : UserTableData
            Serializable results.
        provider : Literal['fmp', 'intrinio', 'polygon', 'tiingo', 'yfinance']
            Provider name.
        warnings : Optional[List[Warning_]]
            List of warnings.
        chart : Optional[Chart]
            Chart object.
        context_params : PortfolioQueryParams
            Context-specific parameters.
        command_params : UserTableQueryParams
            Command-specific parameters.
    """

    def __init__(
        self,
        context_params: PortfolioQueryParams,
        command_params: UserTableQueryParams,
    ):
        """
        Initialize the UserTableFetcher with context and command parameters.

        Parameters
        ----------
        context_params : PortfolioQueryParams
            The context parameters for the Portfolio query.
        command_params : UserTableQueryParams
            The command-specific parameters for the UserTable query.
        """
        self.context_params = context_params
        self.command_params = command_params

    def transform_query(self):
        """
        Transform the command-specific parameters into a query.

        If command_params is not provided, it initializes a default UserTableQueryParams object.
        """
        if not self.command_params:
            self.command_params = None
            # Set Default Arguments
            self.command_params: UserTableQueryParams = UserTableQueryParams()
        else:
            self.command_params: UserTableQueryParams = UserTableQueryParams(
                **self.command_params
            )

    async def extract_data(self):
        """
        Extract the data from the provider and returns it as a Polars DataFrame.

        Returns
        -------
        pl.DataFrame
            The extracted data as a Polars DataFrame.

        """
        self.etf_data = await aget_etf_category(self.context_params.symbols)

        # Dates are automatically selected based on membership
        self.toolbox = Toolbox(
            symbols=self.context_params.symbols,
            membership=self.context_params.membership,
            interval="1d",
        )
        self.mandelbrot = self.toolbox.technical.mandelbrot_channel().to_polars(
            collect=False
        )
        return self

    async def transform_data(self):
        """
        Transform the command-specific data according to the user_table logic.

        Returns
        -------
        pl.DataFrame
            The transformed data as a Polars DataFrame
        """
        # Implement data transformation logic here
        transformed_data: pl.LazyFrame = await user_table_engine(
            symbols=self.context_params.symbols,
            etf_data=self.etf_data,
            mandelbrot_data=self.mandelbrot,
            toolbox=self.toolbox,
        )
        self.transformed_data = UserTableData(transformed_data.collect()).lazy()
        self.transformed_data = self.transformed_data.with_columns(
            pl.col(pl.Float64).round(2)
        )
        return self

    @log_start_end(logger=logger)
    async def fetch_data(self):
        """
        Execute TET Pattern.

        This method executes the query transformation, data fetching and
        transformation process by first calling `transform_query` to prepare the query parameters, then
        extracting the raw data using `extract_data` method, and finally
        transforming the raw data using `transform_data` method.

        Returns
        -------
        HumblObject
            The HumblObject containing the transformed data and metadata.
        """
        logger.debug("Running .transform_query()")
        self.transform_query()
        logger.debug("Running .extract_data()")
        await self.extract_data()
        logger.debug("Running .transform_data()")
        await self.transform_data()

        return HumblObject(
            results=self.transformed_data,
            provider=self.context_params.provider,
            warnings=None,
            chart=None,
            context_params=self.context_params,
            command_params=self.command_params,
        )
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableFetcher.__init__ ¤
__init__(context_params: PortfolioQueryParams, command_params: UserTableQueryParams)

Initialize the UserTableFetcher with context and command parameters.

Parameters:

context_params : PortfolioQueryParams
    The context parameters for the Portfolio query. (required)
command_params : UserTableQueryParams
    The command-specific parameters for the UserTable query. (required)
Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
def __init__(
    self,
    context_params: PortfolioQueryParams,
    command_params: UserTableQueryParams,
):
    """
    Initialize the UserTableFetcher with context and command parameters.

    Parameters
    ----------
    context_params : PortfolioQueryParams
        The context parameters for the Portfolio query.
    command_params : UserTableQueryParams
        The command-specific parameters for the UserTable query.
    """
    self.context_params = context_params
    self.command_params = command_params
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableFetcher.transform_query ¤
transform_query()

Transform the command-specific parameters into a query.

If command_params is not provided, it initializes a default UserTableQueryParams object.

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
def transform_query(self):
    """
    Transform the command-specific parameters into a query.

    If command_params is not provided, it initializes a default UserTableQueryParams object.
    """
    if not self.command_params:
        self.command_params = None
        # Set Default Arguments
        self.command_params: UserTableQueryParams = UserTableQueryParams()
    else:
        self.command_params: UserTableQueryParams = UserTableQueryParams(
            **self.command_params
        )
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableFetcher.extract_data async ¤
extract_data()

Extract the data from the provider and return it as a Polars DataFrame.

Returns:

pl.DataFrame
    The extracted data as a Polars DataFrame.

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
async def extract_data(self):
    """
    Extract the data from the provider and return it as a Polars DataFrame.

    Returns
    -------
    pl.DataFrame
        The extracted data as a Polars DataFrame.

    """
    self.etf_data = await aget_etf_category(self.context_params.symbols)

    # Dates are automatically selected based on membership
    self.toolbox = Toolbox(
        symbols=self.context_params.symbols,
        membership=self.context_params.membership,
        interval="1d",
    )
    self.mandelbrot = self.toolbox.technical.mandelbrot_channel().to_polars(
        collect=False
    )
    return self
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableFetcher.transform_data async ¤
transform_data()

Transform the command-specific data according to the user_table logic.

Returns:

pl.DataFrame
    The transformed data as a Polars DataFrame.

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
async def transform_data(self):
    """
    Transform the command-specific data according to the user_table logic.

    Returns
    -------
    pl.DataFrame
        The transformed data as a Polars DataFrame
    """
    # Implement data transformation logic here
    transformed_data: pl.LazyFrame = await user_table_engine(
        symbols=self.context_params.symbols,
        etf_data=self.etf_data,
        mandelbrot_data=self.mandelbrot,
        toolbox=self.toolbox,
    )
    self.transformed_data = UserTableData(transformed_data.collect()).lazy()
    self.transformed_data = self.transformed_data.with_columns(
        pl.col(pl.Float64).round(2)
    )
    return self
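The final `with_columns` call above relies on a Polars dtype selector: `pl.col(pl.Float64)` matches every Float64 column, so a single expression rounds all float columns at once. A minimal, self-contained sketch of that pattern (the frame below is illustrative, not the real UserTable schema):

```py
import polars as pl

# Illustrative frame only; the real UserTable schema has more columns
lf = pl.LazyFrame({"symbol": ["AAPL"], "close": [187.4567], "ret_pct": [1.2345]})

# pl.col(pl.Float64) selects every Float64 column, so .round(2) applies to all of them
rounded = lf.with_columns(pl.col(pl.Float64).round(2))
print(rounded.collect())
```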
humbldata.core.standard_models.portfolio.analytics.user_table.UserTableFetcher.fetch_data async ¤
fetch_data()

Execute TET Pattern.

This method executes the query transformation, data extraction, and data transformation process: it first calls transform_query to prepare the query parameters, then extracts the raw data with the extract_data method, and finally transforms the raw data with the transform_data method.

Returns:

HumblObject
    The HumblObject containing the transformed data and metadata.

Source code in src/humbldata/core/standard_models/portfolio/analytics/user_table.py
@log_start_end(logger=logger)
async def fetch_data(self):
    """
    Execute TET Pattern.

    This method executes the query transformation, data fetching and
    transformation process by first calling `transform_query` to prepare the query parameters, then
    extracting the raw data using `extract_data` method, and finally
    transforming the raw data using `transform_data` method.

    Returns
    -------
    HumblObject
        The HumblObject containing the transformed data and metadata.
    """
    logger.debug("Running .transform_query()")
    self.transform_query()
    logger.debug("Running .extract_data()")
    await self.extract_data()
    logger.debug("Running .transform_data()")
    await self.transform_data()

    return HumblObject(
        results=self.transformed_data,
        provider=self.context_params.provider,
        warnings=None,
        chart=None,
        context_params=self.context_params,
        command_params=self.command_params,
    )

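For orientation, here is a minimal end-to-end usage sketch of the fetcher. It assumes the package and provider credentials are configured in your environment; `fetch_data` is a coroutine, so it is awaited or run through `asyncio.run`. Note that an empty mapping for `command_params` falls back to the default `UserTableQueryParams`, per `transform_query` above.

```py
import asyncio

from humbldata.core.standard_models.portfolio import PortfolioQueryParams
from humbldata.core.standard_models.portfolio.analytics.user_table import UserTableFetcher

# Context params drive symbol and membership selection for the whole Portfolio query
context = PortfolioQueryParams(symbols=["AAPL", "MSFT"], membership="humblPREMIUM")

# An empty mapping falls back to a default UserTableQueryParams in transform_query()
fetcher = UserTableFetcher(context_params=context, command_params={})

humbl_object = asyncio.run(fetcher.fetch_data())
print(humbl_object.results)  # UserTableData results, per the Returns section above
```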
humbldata.core.standard_models.portfolio.PortfolioQueryParams ¤

Bases: QueryParams

Query parameters for the PortfolioController.

This class defines the query parameters used by the PortfolioController.

Parameters:

symbols : str or list of str
    The stock symbol(s) to query. Default is ["AAPL"].
provider : OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS
    The data provider for historical price data. Default is "yfinance".
membership : Literal['anonymous', 'humblPEON', 'humblPREMIUM', 'humblPOWER', 'humblPERMANENT', 'admin']
    The membership level of the user accessing the data. Default is "anonymous".

Attributes:

symbols : str or list of str
    The stock symbol(s) to query.
provider : OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS
    The data provider for historical price data.
membership : Literal['anonymous', 'humblPEON', 'humblPREMIUM', 'humblPOWER', 'humblPERMANENT', 'admin']
    The membership level of the user.

Source code in src/humbldata/core/standard_models/portfolio/__init__.py
class PortfolioQueryParams(QueryParams):
    """
    Query parameters for the PortfolioController.

    This class defines the query parameters used by the PortfolioController.

    Parameters
    ----------
    symbols : str or list of str
        The stock symbol(s) to query. Default is ["AAPL"].
    provider : OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS
        The data provider for historical price data. Default is "yfinance".
    membership : Literal["anonymous", "humblPEON", "humblPREMIUM", "humblPOWER", "humblPERMANENT", "admin"]
        The membership level of the user accessing the data. Default is "anonymous".

    Attributes
    ----------
    symbols : str or list of str
        The stock symbol(s) to query.
    provider : OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS
        The data provider for historical price data.
    membership : Literal["anonymous", "humblPEON", "humblPREMIUM", "humblPOWER", "humblPERMANENT", "admin"]
        The membership level of the user.
    """

    symbols: str | list[str] = Field(
        default=["AAPL"],
        title="Symbols",
        description=QUERY_DESCRIPTIONS.get("symbols", ""),
    )
    provider: OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS = Field(
        default="yfinance",
        title="Provider",
        description=QUERY_DESCRIPTIONS.get("provider", ""),
    )
    membership: Literal[
        "anonymous",
        "humblPEON",
        "humblPREMIUM",
        "humblPOWER",
        "humblPERMANENT",
        "admin",
    ] = Field(
        default="anonymous",
        title="Membership",
        description=QUERY_DESCRIPTIONS.get("membership", ""),
    )

    @field_validator("symbols", mode="before", check_fields=False)
    @classmethod
    def upper_symbol(cls, v: str | list[str] | set[str]) -> list[str]:
        """
        Convert the stock symbols to uppercase and remove empty strings.

        Parameters
        ----------
        v : Union[str, List[str], Set[str]]
            The stock symbol or collection of symbols to be converted.

        Returns
        -------
        List[str]
            A list of uppercase stock symbols with empty strings removed.
        """
        # Handle empty inputs
        if not v:
            return []

        # If v is a string, split it by commas into a list. Otherwise, ensure it's a list.
        v = v.split(",") if isinstance(v, str) else list(v)

        # Convert all elements to uppercase, trim whitespace, and remove empty strings
        valid_symbols = [
            symbol.strip().upper() for symbol in v if symbol.strip()
        ]

        if not valid_symbols:
            msg = "At least one valid symbol (str) must be provided"
            raise ValueError(msg)

        return valid_symbols
humbldata.core.standard_models.portfolio.PortfolioQueryParams.upper_symbol classmethod ¤
upper_symbol(v: str | list[str] | set[str]) -> list[str]

Convert the stock symbols to uppercase and remove empty strings.

Parameters:

v : Union[str, List[str], Set[str]]
    The stock symbol or collection of symbols to be converted. (required)

Returns:

List[str]
    A list of uppercase stock symbols with empty strings removed.

Source code in src/humbldata/core/standard_models/portfolio/__init__.py
@field_validator("symbols", mode="before", check_fields=False)
@classmethod
def upper_symbol(cls, v: str | list[str] | set[str]) -> list[str]:
    """
    Convert the stock symbols to uppercase and remove empty strings.

    Parameters
    ----------
    v : Union[str, List[str], Set[str]]
        The stock symbol or collection of symbols to be converted.

    Returns
    -------
    List[str]
        A list of uppercase stock symbols with empty strings removed.
    """
    # Handle empty inputs
    if not v:
        return []

    # If v is a string, split it by commas into a list. Otherwise, ensure it's a list.
    v = v.split(",") if isinstance(v, str) else list(v)

    # Convert all elements to uppercase, trim whitespace, and remove empty strings
    valid_symbols = [
        symbol.strip().upper() for symbol in v if symbol.strip()
    ]

    if not valid_symbols:
        msg = "At least one valid symbol (str) must be provided"
        raise ValueError(msg)

    return valid_symbols

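The validator's behavior is easiest to see with a couple of inputs. A small sketch (values are illustrative):

```py
from humbldata.core.standard_models.portfolio import PortfolioQueryParams

# Comma-separated strings are split, trimmed, upper-cased, and empty entries dropped
params = PortfolioQueryParams(symbols="aapl, msft, ")
print(params.symbols)  # ['AAPL', 'MSFT']

# Lists and sets are normalized the same way
params = PortfolioQueryParams(symbols={"nvda", "amd"})
print(sorted(params.symbols))  # ['AMD', 'NVDA']
```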
humbldata.core.standard_models.portfolio.PortfolioData ¤

Bases: Data

The Data for the PortfolioController.

Source code in src/humbldata/core/standard_models/portfolio/__init__.py
class PortfolioData(Data):
    """
    The Data for the PortfolioController.
    """

    # Add your data model fields here
    pass

humbldata.core.standard_models.toolbox ¤

Context: Toolbox || Category: Standardized Framework Model.

This module defines the QueryParams and Data classes for the Toolbox context. This is where all of the contexts of your project go. The STANDARD MODELS for categories and subsequent commands are nested here.

Classes:

ToolboxQueryParams
    Query parameters for the ToolboxController.
ToolboxData
    A Pydantic model that defines the data returned by the ToolboxController.

Attributes:

symbol : str
    The symbol/ticker of the stock.
interval : Optional[str]
    The interval of the data. Defaults to '1d'.
start_date : str
    The start date of the data.
end_date : str
    The end date of the data.

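A small construction sketch based on the attribute summary above. The field names follow this summary and are not verified against the ToolboxQueryParams source, so treat them as an assumption and check the model's signature before relying on them:

```py
from humbldata.core.standard_models.toolbox import ToolboxQueryParams

# Field names follow the attribute summary above; defaults may differ in the source
params = ToolboxQueryParams(
    symbol="AAPL",
    interval="1d",
    start_date="2023-01-01",
    end_date="2023-12-31",
)
print(params)
```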
humbldata.core.standard_models.toolbox.fundamental ¤

humbldata.core.standard_models.toolbox.fundamental.humbl_compass ¤

HumblCompass Standard Model.

Context: Toolbox || Category: Fundamental || Command: humbl_compass.

This module is used to define the QueryParams and Data model for the HumblCompass command.

humbldata.core.standard_models.toolbox.fundamental.humbl_compass.AssetRecommendation ¤

Bases: str, Enum

Asset recommendation categories.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
class AssetRecommendation(str, Enum):
    """Asset recommendation categories."""

    EQUITIES = "Equities"
    CREDIT = "Credit"
    COMMODITIES = "Commodities"
    FX = "FX"
    FIXED_INCOME = "Fixed Income"
    USD = "USD"
    GOLD = "Gold"
    TECHNOLOGY = "Technology"
    CONSUMER_DISCRETIONARY = "Consumer Discretionary"
    MATERIALS = "Materials"
    INDUSTRIALS = "Industrials"
    UTILITIES = "Utilities"
    REITS = "REITs"
    CONSUMER_STAPLES = "Consumer Staples"
    FINANCIALS = "Financials"
    ENERGY = "Energy"
    HEALTH_CARE = "Health Care"
    TELECOM = "Telecom"
    HIGH_BETA = "High Beta"
    MOMENTUM = "Momentum"
    CYCLICALS = "Cyclicals"
    SECULAR_GROWTH = "Secular Growth"
    LOW_BETA = "Low Beta"
    DEFENSIVES = "Defensives"
    VALUE = "Value"
    DIVIDEND_YIELD = "Dividend Yield"
    QUALITY = "Quality"
    CYCLICAL_GROWTH = "Cyclical Growth"
    SMALL_CAPS = "Small Caps"
    MID_CAPS = "Mid Caps"
    BDCS = "BDCs"
    CONVERTIBLES = "Convertibles"
    HY_CREDIT = "HY Credit"
    EM_DEBT = "EM Debt"
    TIPS = "TIPS"
    SHORT_DURATION_TREASURIES = "Short Duration Treasuries"
    MORTGAGE_BACKED_SECURITIES = "Mortgage Backed Securities"
    MEDIUM_DURATION_TREASURIES = "Medium Duration Treasuries"
    LONG_DURATION_TREASURIES = "Long Duration Treasuries"
    IG_CREDIT = "Investment Grade Credit"
    MUNIS = "Municipal Bonds"
    PREFERREDS = "Preferreds"
    EM_LOCAL_CURRENCY = "Emerging Market Local Currency"
    LEVERAGED_LOANS = "Leveraged Loans"
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.RecommendationCategory ¤

Bases: BaseModel

Category-specific recommendations with rationale.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
class RecommendationCategory(BaseModel):
    """Category-specific recommendations with rationale."""

    best: list[AssetRecommendation]
    worst: list[AssetRecommendation]
    rationale: str
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.RegimeRecommendations ¤

Bases: BaseModel

Complete set of recommendations for a specific regime.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
class RegimeRecommendations(BaseModel):
    """Complete set of recommendations for a specific regime."""

    asset_classes: RecommendationCategory
    equity_sectors: RecommendationCategory
    equity_factors: RecommendationCategory
    fixed_income: RecommendationCategory
    regime_description: str
    key_risks: list[str]
    last_updated: datetime = Field(default_factory=datetime.utcnow)
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassQueryParams ¤

Bases: QueryParams

QueryParams model for the HumblCompass command, a Pydantic v2 model.

Parameters:

country : Literal
    The country or group of countries to collect humblCOMPASS data for.
cli_start_date : str
    The adjusted start date for CLI data collection.
cpi_start_date : str
    The adjusted start date for CPI data collection.
z_score : Optional[str]
    The time window for z-score calculation (e.g., "1 year", "18 months").
chart : bool
    Whether to return a chart object.
template : Literal
    The template/theme to use for the plotly figure.
Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
class HumblCompassQueryParams(QueryParams):
    """
    QueryParams model for the HumblCompass command, a Pydantic v2 model.

    Parameters
    ----------
    country : Literal
        The country or group of countries to collect humblCOMPASS data for.
    cli_start_date : str
        The adjusted start date for CLI data collection.
    cpi_start_date : str
        The adjusted start date for CPI data collection.
    z_score : Optional[str]
        The time window for z-score calculation (e.g., "1 year", "18 months").
    chart : bool
        Whether to return a chart object.
    template : Literal
        The template/theme to use for the plotly figure.
    """

    country: Literal[
        "g20",
        "g7",
        "asia5",
        "north_america",
        "europe4",
        "australia",
        "brazil",
        "canada",
        "china",
        "france",
        "germany",
        "india",
        "indonesia",
        "italy",
        "japan",
        "mexico",
        "south_africa",
        "south_korea",
        "spain",
        "turkey",
        "united_kingdom",
        "united_states",
        "all",
    ] = Field(
        default="united_states",
        title="Country for humblCOMPASS data",
        description=HUMBLCOMPASS_QUERY_DESCRIPTIONS.get("country", ""),
    )
    cli_start_date: str = Field(
        default=None,
        title="Adjusted start date for CLI data",
        description="The adjusted start date for CLI data collection.",
    )
    cpi_start_date: str = Field(
        default=None,
        title="Adjusted start date for CPI data",
        description="The adjusted start date for CPI data collection.",
    )
    z_score: str | None = Field(
        default=None,
        title="Z-score calculation window",
        description="The time window for z-score calculation (e.g., '1 year', '18 months').",
    )
    chart: bool = Field(
        default=False,
        title="Results Chart",
        description=HUMBLCOMPASS_QUERY_DESCRIPTIONS.get("chart", ""),
    )
    template: Literal[
        "humbl_dark",
        "humbl_light",
        "ggplot2",
        "seaborn",
        "simple_white",
        "plotly",
        "plotly_white",
        "plotly_dark",
        "presentation",
        "xgridoff",
        "ygridoff",
        "gridon",
        "none",
    ] = Field(
        default="humbl_dark",
        title="Plotly Template",
        description=HUMBLCOMPASS_QUERY_DESCRIPTIONS.get("template", ""),
    )
    recommendations: bool = Field(
        default=False,
        title="Investment Recommendations",
        description="Whether to include investment recommendations based on the HUMBL regime.",
    )
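A short construction sketch showing the fields above in use. Values are illustrative; `cli_start_date` and `cpi_start_date` are normally filled in later by the fetcher's `transform_query`, so they are omitted here:

```py
from humbldata.core.standard_models.toolbox.fundamental.humbl_compass import (
    HumblCompassQueryParams,
)

params = HumblCompassQueryParams(
    country="united_states",
    z_score="1 year",          # rolling window for CLI/CPI z-scores
    chart=True,                # request a plotly chart in the HumblObject
    template="humbl_dark",
    recommendations=True,      # attach regime-based recommendations to `extra`
)
print(params.country, params.z_score)
```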
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassData ¤

Bases: Data

Data model for the humbl_compass command, a Pandera.Polars Model.

This Data model is used to validate data in the .transform_data() method of the HumblCompassFetcher class.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
class HumblCompassData(Data):
    """
    Data model for the humbl_compass command, a Pandera.Polars Model.

    This Data model is used to validate data in the `.transform_data()` method of the `HumblCompassFetcher` class.
    """

    date_month_start: pl.Date = pa.Field(
        default=None,
        title="Date",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["date"],
    )
    country: pl.Utf8 = pa.Field(
        default=None,
        title="Country",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["country"],
    )
    cpi: pl.Float64 = pa.Field(
        default=None,
        title="Consumer Price Index (CPI)",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["cpi"],
    )
    cpi_3m_delta: pl.Float64 = pa.Field(
        default=None,
        title="Consumer Price Index (CPI) 3-Month Delta",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["cpi_3m_delta"],
    )
    cpi_zscore: pl.Float64 | None = pa.Field(
        default=None,
        title="Consumer Price Index (CPI) 1-Year Z-Score",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["cpi_1yr_zscore"],
    )
    cli: pl.Float64 = pa.Field(
        default=None,
        title="Composite Leading Indicator (CLI)",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["cli"],
    )
    cli_3m_delta: pl.Float64 = pa.Field(
        default=None,
        title="Composite Leading Indicator (CLI) 3-Month Delta",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["cli_3m_delta"],
    )
    cli_zscore: pl.Float64 | None = pa.Field(
        default=None,
        title="Composite Leading Indicator (CLI) 1-Year Z-Score",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["cli_1yr_zscore"],
    )
    humbl_regime: pl.Utf8 = pa.Field(
        default=None,
        title="HUMBL Regime",
        description=HUMBLCOMPASS_DATA_DESCRIPTIONS["humbl_regime"],
    )
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassFetcher ¤

Fetcher for the HumblCompass command.

Parameters:

context_params : ToolboxQueryParams
    The context parameters for the Toolbox query. (required)
command_params : HumblCompassQueryParams
    The command-specific parameters for the HumblCompass query. (required)

Attributes:

context_params : ToolboxQueryParams
    Stores the context parameters passed during initialization.
command_params : HumblCompassQueryParams
    Stores the command-specific parameters passed during initialization.
data : pl.DataFrame
    The raw data extracted from the data provider, before transformation.

Methods:

transform_query()
    Transform the command-specific parameters into a query.
extract_data()
    Extracts the data from the provider and returns it as a Polars DataFrame.
transform_data()
    Transforms the command-specific data according to the HumblCompass logic.
fetch_data()
    Execute TET Pattern.

Returns:

HumblObject
    results : HumblCompassData
        Serializable results.
    provider : Literal['fmp', 'intrinio', 'polygon', 'tiingo', 'yfinance']
        Provider name.
    warnings : Optional[List[Warning_]]
        List of warnings.
    chart : Optional[Chart]
        Chart object.
    context_params : ToolboxQueryParams
        Context-specific parameters.
    command_params : HumblCompassQueryParams
        Command-specific parameters.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
class HumblCompassFetcher:
    """
    Fetcher for the HumblCompass command.

    Parameters
    ----------
    context_params : ToolboxQueryParams
        The context parameters for the Toolbox query.
    command_params : HumblCompassQueryParams
        The command-specific parameters for the HumblCompass query.

    Attributes
    ----------
    context_params : ToolboxQueryParams
        Stores the context parameters passed during initialization.
    command_params : HumblCompassQueryParams
        Stores the command-specific parameters passed during initialization.
    data : pl.DataFrame
        The raw data extracted from the data provider, before transformation.

    Methods
    -------
    transform_query()
        Transform the command-specific parameters into a query.
    extract_data()
        Extracts the data from the provider and returns it as a Polars DataFrame.
    transform_data()
        Transforms the command-specific data according to the HumblCompass logic.
    fetch_data()
        Execute TET Pattern.

    Returns
    -------
    HumblObject
        results : HumblCompassData
            Serializable results.
        provider : Literal['fmp', 'intrinio', 'polygon', 'tiingo', 'yfinance']
            Provider name.
        warnings : Optional[List[Warning_]]
            List of warnings.
        chart : Optional[Chart]
            Chart object.
        context_params : ToolboxQueryParams
            Context-specific parameters.
        command_params : HumblCompassQueryParams
            Command-specific parameters.
    """

    def __init__(
        self,
        context_params: ToolboxQueryParams,
        command_params: HumblCompassQueryParams,
    ):
        """
        Initialize the HumblCompassFetcher with context and command parameters.

        Parameters
        ----------
        context_params : ToolboxQueryParams
            The context parameters for the Toolbox query.
        command_params : HumblCompassQueryParams
            The command-specific parameters for the HumblCompass query.
        """
        self.context_params = context_params
        self.command_params = command_params

    def transform_query(self):
        """
        Transform the command-specific parameters into a query.

        If command_params is not provided, it initializes a default HumblCompassQueryParams object.
        Calculates adjusted start dates for CLI and CPI data collection.
        """
        if not self.command_params:
            self.command_params = HumblCompassQueryParams()
        elif isinstance(self.command_params, dict):
            self.command_params = HumblCompassQueryParams(**self.command_params)

        # Calculate adjusted start dates
        if isinstance(self.context_params.start_date, str):
            start_date = pl.Series(
                [datetime.strptime(self.context_params.start_date, "%Y-%m-%d")]
            )
        else:
            start_date = pl.Series([self.context_params.start_date])

        # Calculate z-score window in months
        self.z_score_months = 0
        if (
            self.command_params.z_score is not None
            and self.context_params.membership != "humblPEON"
        ):
            z_score_months_str = _window_format(
                self.command_params.z_score, _return_timedelta=False
            )
            self.z_score_months = _window_format_monthly(z_score_months_str)
        elif self.context_params.membership == "humblPEON":
            logger.warning(
                "Z-score is not calculated for humblPEON membership level."
            )

        cli_start_date = start_date.dt.offset_by(
            f"-{4 + self.z_score_months}mo"
        ).dt.strftime("%Y-%m-%d")[0]
        cpi_start_date = start_date.dt.offset_by(
            f"-{3 + self.z_score_months}mo"
        ).dt.strftime("%Y-%m-%d")[0]

        # Update the command_params with the new start dates
        self.command_params = self.command_params.model_copy(
            update={
                "cli_start_date": cli_start_date,
                "cpi_start_date": cpi_start_date,
            }
        )

        logger.info(
            f"CLI start date: {self.command_params.cli_start_date} and CPI start date: {self.command_params.cpi_start_date}. "
            f"Dates are adjusted to account for CLI data release lag and z-score calculation window."
        )

    def extract_data(self):
        """
        Extract the data from the provider and return it as a Polars DataFrame.

        Returns
        -------
        self
            The HumblCompassFetcher instance with extracted data.
        """
        # Collect CLI Data
        self.oecd_cli_data = (
            obb.economy.composite_leading_indicator(
                start_date=self.command_params.cli_start_date,
                end_date=self.context_params.end_date,
                provider="oecd",
                country=self.command_params.country,
            )
            .to_polars()
            .lazy()
            .rename({"value": "cli"})
            .with_columns(
                [pl.col("date").dt.month_start().alias("date_month_start")]
            )
        )

        # Collect YoY CPI Data
        self.oecd_cpi_data = (
            obb.economy.cpi(
                start_date=self.command_params.cpi_start_date,
                end_date=self.context_params.end_date,
                frequency="monthly",
                country=self.command_params.country,
                transform="yoy",
                provider="oecd",
                harmonized=False,
                expenditure="total",
            )
            .to_polars()
            .lazy()
            .rename({"value": "cpi"})
            .with_columns(
                [pl.col("date").dt.month_start().alias("date_month_start")]
            )
        )
        return self

    def transform_data(self):
        """
        Transform the command-specific data according to the humbl_compass logic.

        Returns
        -------
        self
            The HumblCompassFetcher instance with transformed data.
        """
        # Combine CLI and CPI data
        # CLI data is released before CPI data, so we use a left join
        combined_data = (
            self.oecd_cli_data.join(
                self.oecd_cpi_data,
                on=["date_month_start", "country"],
                how="left",
                suffix="_cpi",
            )
            .sort("date_month_start")
            .with_columns(
                [
                    pl.col("country").cast(pl.Utf8),
                    pl.col("cli").cast(pl.Float64),
                    pl.col("cpi").cast(pl.Float64)
                    * 100,  # Convert CPI to percentage
                ]
            )
            .rename(
                {
                    "date": "date_cli",
                }
            )
            .select(
                [
                    "date_month_start",
                    "date_cli",
                    "date_cpi",
                    "country",
                    "cli",
                    "cpi",
                ]
            )
        )

        # Calculate 3-month deltas
        delta_window = 3
        transformed_data = combined_data.with_columns(
            [
                (pl.col("cli") - pl.col("cli").shift(delta_window)).alias(
                    "cli_3m_delta"
                ),
                (pl.col("cpi") - pl.col("cpi").shift(delta_window)).alias(
                    "cpi_3m_delta"
                ),
            ]
        )

        # Add this after calculating 3-month deltas in transform_data()
        transformed_data = transformed_data.with_columns(
            [
                pl.when(
                    (pl.col("cpi_3m_delta") > 0) & (pl.col("cli_3m_delta") < 0)
                )
                .then(pl.lit("humblBLOAT"))
                .when(
                    (pl.col("cpi_3m_delta") > 0) & (pl.col("cli_3m_delta") > 0)
                )
                .then(pl.lit("humblBOUNCE"))
                .when(
                    (pl.col("cpi_3m_delta") < 0) & (pl.col("cli_3m_delta") > 0)
                )
                .then(pl.lit("humblBOOM"))
                .when(
                    (pl.col("cpi_3m_delta") < 0) & (pl.col("cli_3m_delta") < 0)
                )
                .then(pl.lit("humblBUST"))
                .otherwise(None)
                .alias("humbl_regime")
            ]
        )

        # Calculate z-scores only if self.z_score_months is greater than 0 and membership is not humblPEON
        if (
            self.z_score_months > 0
            and self.context_params.membership != "humblPEON"
        ):
            transformed_data = transformed_data.with_columns(
                [
                    pl.when(
                        pl.col("cli").count().over("country")
                        >= self.z_score_months
                    )
                    .then(
                        (
                            pl.col("cli")
                            - pl.col("cli").rolling_mean(self.z_score_months)
                        )
                        / pl.col("cli").rolling_std(self.z_score_months)
                    )
                    .alias("cli_zscore"),
                    pl.when(
                        pl.col("cpi").count().over("country")
                        >= self.z_score_months
                    )
                    .then(
                        (
                            pl.col("cpi")
                            - pl.col("cpi").rolling_mean(self.z_score_months)
                        )
                        / pl.col("cpi").rolling_std(self.z_score_months)
                    )
                    .alias("cpi_zscore"),
                ]
            )

        # Select columns based on whether z-scores were calculated
        columns_to_select = [
            pl.col("date_month_start"),
            pl.col("country"),
            pl.col("cpi").round(2),
            pl.col("cpi_3m_delta").round(2),
            pl.col("cli").round(2),
            pl.col("cli_3m_delta").round(2),
            pl.col("humbl_regime"),
        ]

        if (
            self.z_score_months > 0
            and self.context_params.membership != "humblPEON"
        ):
            columns_to_select.extend(
                [
                    pl.col("cpi_zscore").round(2),
                    pl.col("cli_zscore").round(2),
                ]
            )

        self.transformed_data = transformed_data.select(columns_to_select)

        # Validate the data using HumblCompassData
        self.transformed_data = HumblCompassData(
            self.transformed_data.collect().drop_nulls()  # removes preceding 3 months used for delta calculations
        ).lazy()

        # Generate chart if requested
        self.chart = None
        if self.command_params.chart:
            self.chart = generate_plots(
                self.transformed_data,
                template=ChartTemplate(self.command_params.template),
            )

        # Add warning if z_score is None
        if self.command_params.z_score is None:
            if not hasattr(self, "warnings"):
                self.warnings = []
            self.warnings.append(
                HumblDataWarning(
                    category="HumblCompassFetcher",
                    message="Z-score defaulted to None. No z-score data will be calculated.",
                )
            )

        # Add recommendations if requested
        if self.command_params.recommendations:
            latest_regime = (
                self.transformed_data.select(pl.col("humbl_regime"))
                .collect()
                .row(-1)[0]
            )

            if latest_regime not in REGIME_RECOMMENDATIONS:
                if not hasattr(self, "warnings"):
                    self.warnings = []
                self.warnings.append(
                    HumblDataWarning(
                        category="HumblCompassFetcher",
                        message=f"No recommendations available for regime: {latest_regime}",
                    )
                )
            else:
                recommendations = REGIME_RECOMMENDATIONS[latest_regime]
                if not hasattr(self, "extra"):
                    self.extra = {}
                self.extra["humbl_regime_recommendations"] = (
                    recommendations.model_dump()
                )

        self.transformed_data = self.transformed_data.serialize(format="binary")
        return self

    @log_start_end(logger=logger)
    def fetch_data(self):
        """
        Execute TET Pattern.

        This method executes the query transformation, data fetching and
        transformation process by first calling `transform_query` to prepare the query parameters, then
        extracting the raw data using `extract_data` method, and finally
        transforming the raw data using `transform_data` method.

        Returns
        -------
        HumblObject
            The HumblObject containing the transformed data and metadata.
        """
        self.transform_query()
        self.extract_data()
        self.transform_data()

        # Initialize warnings list if it doesn't exist
        if not hasattr(self.context_params, "warnings"):
            self.context_params.warnings = []

        # Initialize fetcher warnings if they don't exist
        if not hasattr(self, "warnings"):
            self.warnings = []

        # Initialize extra dict if it doesn't exist
        if not hasattr(self, "extra"):
            self.extra = {}

        # Combine warnings from both sources
        all_warnings = self.context_params.warnings + self.warnings

        return HumblObject(
            results=self.transformed_data,
            provider=self.context_params.provider,
            warnings=all_warnings,  # Use combined warnings
            chart=self.chart,
            context_params=self.context_params,
            command_params=self.command_params,
            extra=self.extra,  # pipe in extra from transform_data()
        )
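The regime labels assigned in `transform_data` follow directly from the signs of the 3-month deltas. A plain-Python restatement of that mapping (a sketch of the same logic, not a replacement for the Polars expression above):

```py
def classify_regime(cpi_3m_delta: float, cli_3m_delta: float) -> str | None:
    """Mirror of the when/then chain in HumblCompassFetcher.transform_data."""
    if cpi_3m_delta > 0 and cli_3m_delta < 0:
        return "humblBLOAT"   # inflation accelerating, growth rolling over
    if cpi_3m_delta > 0 and cli_3m_delta > 0:
        return "humblBOUNCE"  # inflation and growth both accelerating
    if cpi_3m_delta < 0 and cli_3m_delta > 0:
        return "humblBOOM"    # inflation decelerating, growth accelerating
    if cpi_3m_delta < 0 and cli_3m_delta < 0:
        return "humblBUST"    # inflation and growth both decelerating
    return None               # zero deltas fall through, as in the Polars otherwise(None)
```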
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassFetcher.__init__ ¤
__init__(context_params: ToolboxQueryParams, command_params: HumblCompassQueryParams)

Initialize the HumblCompassFetcher with context and command parameters.

Parameters:

context_params : ToolboxQueryParams
    The context parameters for the Toolbox query. (required)
command_params : HumblCompassQueryParams
    The command-specific parameters for the HumblCompass query. (required)
Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
def __init__(
    self,
    context_params: ToolboxQueryParams,
    command_params: HumblCompassQueryParams,
):
    """
    Initialize the HumblCompassFetcher with context and command parameters.

    Parameters
    ----------
    context_params : ToolboxQueryParams
        The context parameters for the Toolbox query.
    command_params : HumblCompassQueryParams
        The command-specific parameters for the HumblCompass query.
    """
    self.context_params = context_params
    self.command_params = command_params
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassFetcher.transform_query ¤
transform_query()

Transform the command-specific parameters into a query.

If command_params is not provided, it initializes a default HumblCompassQueryParams object. Calculates adjusted start dates for CLI and CPI data collection.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
def transform_query(self):
    """
    Transform the command-specific parameters into a query.

    If command_params is not provided, it initializes a default HumblCompassQueryParams object.
    Calculates adjusted start dates for CLI and CPI data collection.
    """
    if not self.command_params:
        self.command_params = HumblCompassQueryParams()
    elif isinstance(self.command_params, dict):
        self.command_params = HumblCompassQueryParams(**self.command_params)

    # Calculate adjusted start dates
    if isinstance(self.context_params.start_date, str):
        start_date = pl.Series(
            [datetime.strptime(self.context_params.start_date, "%Y-%m-%d")]
        )
    else:
        start_date = pl.Series([self.context_params.start_date])

    # Calculate z-score window in months
    self.z_score_months = 0
    if (
        self.command_params.z_score is not None
        and self.context_params.membership != "humblPEON"
    ):
        z_score_months_str = _window_format(
            self.command_params.z_score, _return_timedelta=False
        )
        self.z_score_months = _window_format_monthly(z_score_months_str)
    elif self.context_params.membership == "humblPEON":
        logger.warning(
            "Z-score is not calculated for humblPEON membership level."
        )

    cli_start_date = start_date.dt.offset_by(
        f"-{4 + self.z_score_months}mo"
    ).dt.strftime("%Y-%m-%d")[0]
    cpi_start_date = start_date.dt.offset_by(
        f"-{3 + self.z_score_months}mo"
    ).dt.strftime("%Y-%m-%d")[0]

    # Update the command_params with the new start dates
    self.command_params = self.command_params.model_copy(
        update={
            "cli_start_date": cli_start_date,
            "cpi_start_date": cpi_start_date,
        }
    )

    logger.info(
        f"CLI start date: {self.command_params.cli_start_date} and CPI start date: {self.command_params.cpi_start_date}. "
        f"Dates are adjusted to account for CLI data release lag and z-score calculation window."
    )
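To make the offset arithmetic above concrete, here is a standalone sketch with illustrative values, assuming `z_score="1 year"` resolves to 12 months: the CLI start is pushed back `4 + z_score_months` months and the CPI start `3 + z_score_months` months.

```py
from datetime import datetime

import polars as pl

z_score_months = 12  # e.g. "1 year" after _window_format_monthly (illustrative value)
start_date = pl.Series([datetime(2024, 1, 1)])

cli_start = start_date.dt.offset_by(f"-{4 + z_score_months}mo").dt.strftime("%Y-%m-%d")[0]
cpi_start = start_date.dt.offset_by(f"-{3 + z_score_months}mo").dt.strftime("%Y-%m-%d")[0]

print(cli_start, cpi_start)  # 2022-09-01 2022-10-01
```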
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassFetcher.extract_data ¤
extract_data()

Extract the data from the provider and return it as a Polars DataFrame.

Returns:

self
    The HumblCompassFetcher instance with extracted data.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
def extract_data(self):
    """
    Extract the data from the provider and return it as a Polars DataFrame.

    Returns
    -------
    self
        The HumblCompassFetcher instance with extracted data.
    """
    # Collect CLI Data
    self.oecd_cli_data = (
        obb.economy.composite_leading_indicator(
            start_date=self.command_params.cli_start_date,
            end_date=self.context_params.end_date,
            provider="oecd",
            country=self.command_params.country,
        )
        .to_polars()
        .lazy()
        .rename({"value": "cli"})
        .with_columns(
            [pl.col("date").dt.month_start().alias("date_month_start")]
        )
    )

    # Collect YoY CPI Data
    self.oecd_cpi_data = (
        obb.economy.cpi(
            start_date=self.command_params.cpi_start_date,
            end_date=self.context_params.end_date,
            frequency="monthly",
            country=self.command_params.country,
            transform="yoy",
            provider="oecd",
            harmonized=False,
            expenditure="total",
        )
        .to_polars()
        .lazy()
        .rename({"value": "cpi"})
        .with_columns(
            [pl.col("date").dt.month_start().alias("date_month_start")]
        )
    )
    return self
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassFetcher.transform_data ¤
transform_data()

Transform the command-specific data according to the humbl_compass logic.

Returns:

self
    The HumblCompassFetcher instance with transformed data.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
def transform_data(self):
    """
    Transform the command-specific data according to the humbl_compass logic.

    Returns
    -------
    self
        The HumblCompassFetcher instance with transformed data.
    """
    # Combine CLI and CPI data
    # CLI data is released before CPI data, so we use a left join
    combined_data = (
        self.oecd_cli_data.join(
            self.oecd_cpi_data,
            on=["date_month_start", "country"],
            how="left",
            suffix="_cpi",
        )
        .sort("date_month_start")
        .with_columns(
            [
                pl.col("country").cast(pl.Utf8),
                pl.col("cli").cast(pl.Float64),
                pl.col("cpi").cast(pl.Float64)
                * 100,  # Convert CPI to percentage
            ]
        )
        .rename(
            {
                "date": "date_cli",
            }
        )
        .select(
            [
                "date_month_start",
                "date_cli",
                "date_cpi",
                "country",
                "cli",
                "cpi",
            ]
        )
    )

    # Calculate 3-month deltas
    delta_window = 3
    transformed_data = combined_data.with_columns(
        [
            (pl.col("cli") - pl.col("cli").shift(delta_window)).alias(
                "cli_3m_delta"
            ),
            (pl.col("cpi") - pl.col("cpi").shift(delta_window)).alias(
                "cpi_3m_delta"
            ),
        ]
    )

    # Add this after calculating 3-month deltas in transform_data()
    transformed_data = transformed_data.with_columns(
        [
            pl.when(
                (pl.col("cpi_3m_delta") > 0) & (pl.col("cli_3m_delta") < 0)
            )
            .then(pl.lit("humblBLOAT"))
            .when(
                (pl.col("cpi_3m_delta") > 0) & (pl.col("cli_3m_delta") > 0)
            )
            .then(pl.lit("humblBOUNCE"))
            .when(
                (pl.col("cpi_3m_delta") < 0) & (pl.col("cli_3m_delta") > 0)
            )
            .then(pl.lit("humblBOOM"))
            .when(
                (pl.col("cpi_3m_delta") < 0) & (pl.col("cli_3m_delta") < 0)
            )
            .then(pl.lit("humblBUST"))
            .otherwise(None)
            .alias("humbl_regime")
        ]
    )

    # Calculate z-scores only if self.z_score_months is greater than 0 and membership is not humblPEON
    if (
        self.z_score_months > 0
        and self.context_params.membership != "humblPEON"
    ):
        transformed_data = transformed_data.with_columns(
            [
                pl.when(
                    pl.col("cli").count().over("country")
                    >= self.z_score_months
                )
                .then(
                    (
                        pl.col("cli")
                        - pl.col("cli").rolling_mean(self.z_score_months)
                    )
                    / pl.col("cli").rolling_std(self.z_score_months)
                )
                .alias("cli_zscore"),
                pl.when(
                    pl.col("cpi").count().over("country")
                    >= self.z_score_months
                )
                .then(
                    (
                        pl.col("cpi")
                        - pl.col("cpi").rolling_mean(self.z_score_months)
                    )
                    / pl.col("cpi").rolling_std(self.z_score_months)
                )
                .alias("cpi_zscore"),
            ]
        )

    # Select columns based on whether z-scores were calculated
    columns_to_select = [
        pl.col("date_month_start"),
        pl.col("country"),
        pl.col("cpi").round(2),
        pl.col("cpi_3m_delta").round(2),
        pl.col("cli").round(2),
        pl.col("cli_3m_delta").round(2),
        pl.col("humbl_regime"),
    ]

    if (
        self.z_score_months > 0
        and self.context_params.membership != "humblPEON"
    ):
        columns_to_select.extend(
            [
                pl.col("cpi_zscore").round(2),
                pl.col("cli_zscore").round(2),
            ]
        )

    self.transformed_data = transformed_data.select(columns_to_select)

    # Validate the data using HumblCompassData
    self.transformed_data = HumblCompassData(
        self.transformed_data.collect().drop_nulls()  # removes preceding 3 months used for delta calculations
    ).lazy()

    # Generate chart if requested
    self.chart = None
    if self.command_params.chart:
        self.chart = generate_plots(
            self.transformed_data,
            template=ChartTemplate(self.command_params.template),
        )

    # Add warning if z_score is None
    if self.command_params.z_score is None:
        if not hasattr(self, "warnings"):
            self.warnings = []
        self.warnings.append(
            HumblDataWarning(
                category="HumblCompassFetcher",
                message="Z-score defaulted to None. No z-score data will be calculated.",
            )
        )

    # Add recommendations if requested
    if self.command_params.recommendations:
        latest_regime = (
            self.transformed_data.select(pl.col("humbl_regime"))
            .collect()
            .row(-1)[0]
        )

        if latest_regime not in REGIME_RECOMMENDATIONS:
            if not hasattr(self, "warnings"):
                self.warnings = []
            self.warnings.append(
                HumblDataWarning(
                    category="HumblCompassFetcher",
                    message=f"No recommendations available for regime: {latest_regime}",
                )
            )
        else:
            recommendations = REGIME_RECOMMENDATIONS[latest_regime]
            if not hasattr(self, "extra"):
                self.extra = {}
            self.extra["humbl_regime_recommendations"] = (
                recommendations.model_dump()
            )

    self.transformed_data = self.transformed_data.serialize(format="binary")
    return self
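The z-score columns above are plain rolling z-scores: for each series x, z = (x − rolling_mean(x, n)) / rolling_std(x, n) over an n-month window. A self-contained sketch of that expression on a toy frame (window length and values are illustrative):

```py
import polars as pl

window = 6  # stands in for z_score_months
df = pl.DataFrame({"cli": [99.5, 99.7, 99.9, 100.1, 100.4, 100.6, 100.7, 100.8]})

df = df.with_columns(
    (
        (pl.col("cli") - pl.col("cli").rolling_mean(window))
        / pl.col("cli").rolling_std(window)
    ).alias("cli_zscore")
)
print(df)  # the first window - 1 rows are null, as with any rolling statistic
```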
humbldata.core.standard_models.toolbox.fundamental.humbl_compass.HumblCompassFetcher.fetch_data ¤
fetch_data()

Execute TET Pattern.

This method executes the query transformation, data extraction, and data transformation process: it first calls transform_query to prepare the query parameters, then extracts the raw data with the extract_data method, and finally transforms the raw data with the transform_data method.

Returns:

HumblObject
    The HumblObject containing the transformed data and metadata.

Source code in src/humbldata/core/standard_models/toolbox/fundamental/humbl_compass.py
@log_start_end(logger=logger)
def fetch_data(self):
    """
    Execute TET Pattern.

    This method executes the query transformation, data fetching and
    transformation process by first calling `transform_query` to prepare the query parameters, then
    extracting the raw data using `extract_data` method, and finally
    transforming the raw data using `transform_data` method.

    Returns
    -------
    HumblObject
        The HumblObject containing the transformed data and metadata.
    """
    self.transform_query()
    self.extract_data()
    self.transform_data()

    # Initialize warnings list if it doesn't exist
    if not hasattr(self.context_params, "warnings"):
        self.context_params.warnings = []

    # Initialize fetcher warnings if they don't exist
    if not hasattr(self, "warnings"):
        self.warnings = []

    # Initialize extra dict if it doesn't exist
    if not hasattr(self, "extra"):
        self.extra = {}

    # Combine warnings from both sources
    all_warnings = self.context_params.warnings + self.warnings

    return HumblObject(
        results=self.transformed_data,
        provider=self.context_params.provider,
        warnings=all_warnings,  # Use combined warnings
        chart=self.chart,
        context_params=self.context_params,
        command_params=self.command_params,
        extra=self.extra,  # pipe in extra from transform_data()
    )

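Putting the pieces together, a minimal synchronous usage sketch. Field names for `ToolboxQueryParams` follow its attribute summary earlier on this page, and the final line assumes `HumblObject` exposes the `extra` mapping it is constructed with; adjust both to your configuration:

```py
from humbldata.core.standard_models.toolbox import ToolboxQueryParams
from humbldata.core.standard_models.toolbox.fundamental.humbl_compass import (
    HumblCompassFetcher,
    HumblCompassQueryParams,
)

context = ToolboxQueryParams(start_date="2020-01-01", end_date="2024-12-31")
command = HumblCompassQueryParams(country="united_states", z_score="1 year", recommendations=True)

result = HumblCompassFetcher(context, command).fetch_data()
print(result.results)  # binary-serialized HumblCompassData, per transform_data above
print(result.extra.get("humbl_regime_recommendations"))  # present when recommendations=True
```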
humbldata.core.standard_models.toolbox.technical ¤

Context: Toolbox || Category: Technical.

humbldata.core.standard_models.toolbox.technical.realized_volatility ¤

Volatility Standard Model.

Context: Toolbox || Category: Technical || Command: Volatility.

This module is used to define the QueryParams and Data model for the Volatility command.

humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityQueryParams ¤

Bases: QueryParams

QueryParams for the Realized Volatility command.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
class RealizedVolatilityQueryParams(QueryParams):
    """
    QueryParams for the Realized Volatility command.
    """
humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityData ¤

Bases: Data

Data model for the Realized Volatility command.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
class RealizedVolatilityData(Data):
    """
    Data model for the Realized Volatility command.
    """
humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityFetcher ¤

Bases: RealizedVolatilityQueryParams

Fetcher for the Realized Volatility command.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
class RealizedVolatilityFetcher(RealizedVolatilityQueryParams):
    """
    Fetcher for the Realized Volatility command.
    """

    data_list: ClassVar[list[RealizedVolatilityData]] = []

    def __init__(
        self,
        context_params: ToolboxQueryParams,
        command_params: RealizedVolatilityQueryParams,
    ):
        self._context_params = context_params
        self._command_params = command_params

    def transform_query(self):
        """Transform the params to the command-specific query."""

    def extract_data(self):
        """Extract the data from the provider."""
        # Assuming 'obb' is a predefined object in your context
        df = (
            obb.equity.price.historical(
                symbol=self.context_params.symbol,
                start_date=str(self.context_params.start_date),
                end_date=str(self.context_params.end_date),
                provider=self.command_params.provider,
                verbose=not self.command_params.kwargs.get("silent", False),
                **self.command_params.kwargs,
            )
            .to_df()
            .reset_index()
        )
        return df

    def transform_data(self):
        """Transform the command-specific data."""
        # Placeholder for data transformation logic

    def fetch_data(self):
        """Execute the TET pattern."""
        # Call the methods in the desired order
        query = self.transform_query()
        raw_data = (
            self.extract_data()
        )  # This should use 'query' to fetch the data
        transformed_data = (
            self.transform_data()
        )  # This should transform 'raw_data'

        # Validate with VolatilityData, unpack dict into pydantic row by row
        return transformed_data
humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityFetcher.transform_query ¤
transform_query()

Transform the params to the command-specific query.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
def transform_query(self):
    """Transform the params to the command-specific query."""
humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityFetcher.extract_data ¤
extract_data()

Extract the data from the provider.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
def extract_data(self):
    """Extract the data from the provider."""
    # Assuming 'obb' is a predefined object in your context
    df = (
        obb.equity.price.historical(
            symbol=self.context_params.symbol,
            start_date=str(self.context_params.start_date),
            end_date=str(self.context_params.end_date),
            provider=self.command_params.provider,
            verbose=not self.command_params.kwargs.get("silent", False),
            **self.command_params.kwargs,
        )
        .to_df()
        .reset_index()
    )
    return df
humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityFetcher.transform_data ¤
transform_data()

Transform the command-specific data.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
def transform_data(self):
    """Transform the command-specific data."""
humbldata.core.standard_models.toolbox.technical.realized_volatility.RealizedVolatilityFetcher.fetch_data ¤
fetch_data()

Execute the TET pattern.

Source code in src/humbldata/core/standard_models/toolbox/technical/realized_volatility.py
def fetch_data(self):
    """Execute the TET pattern."""
    # Call the methods in the desired order
    query = self.transform_query()
    raw_data = (
        self.extract_data()
    )  # This should use 'query' to fetch the data
    transformed_data = (
        self.transform_data()
    )  # This should transform 'raw_data'

    # Validate with VolatilityData, unpack dict into pydantic row by row
    return transformed_data
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel ¤

Mandelbrot Channel Standard Model.

Context: Toolbox || Category: Technical || Command: Mandelbrot Channel.

This module is used to define the QueryParams and Data model for the Mandelbrot Channel command.

humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelQueryParams ¤

Bases: QueryParams

QueryParams model for the Mandelbrot Channel command, a Pydantic v2 model.

Parameters:

Name Type Description Default
window str

The width of the window used for splitting the data into sections for detrending. Defaults to "1mo".

required
rv_adjustment bool

Whether to adjust the calculation for realized volatility. If True, the data is filtered to only include observations in the same volatility bucket that the stock is currently in. Defaults to True.

required
rv_method str

The method used to calculate the realized volatility. Only needs to be defined when rv_adjustment is True. Defaults to "std".

required
rs_method Literal['RS', 'RS_min', 'RS_max', 'RS_mean']

The method to use for the Range/STD calculation. This is either the min, max, or mean of all RS ranges per window. If not defined, the most recent RS window is used. Defaults to "RS".

required
rv_grouped_mean bool

Whether to calculate the mean value of realized volatility over multiple window lengths. Defaults to False.

required
live_price bool

Whether to calculate the ranges using the current live price, or the most recent 'close' observation. Defaults to False.

required
historical bool

Whether to calculate the Historical Mandelbrot Channel (over time) and return a time series of channels from the start date to the end date. If False, the Mandelbrot Channel calculation aggregates all of the data into one observation. If True, daily observations are calculated over time. Defaults to False.

required
chart bool

Whether to return a chart object. Defaults to False.

required
Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
class MandelbrotChannelQueryParams(QueryParams):
    """
    QueryParams model for the Mandelbrot Channel command, a Pydantic v2 model.

    Parameters
    ----------
    window : str
        The width of the window used for splitting the data into sections for
        detrending. Defaults to "1m".
    rv_adjustment : bool
        Whether to adjust the calculation for realized volatility. If True, the
        data is filtered
        to only include observations in the same volatility bucket that the
        stock is currently in. Defaults to True.
    rv_method : str
        The method to calculate the realized volatility. Only need to define
        when rv_adjustment is True. Defaults to "std".
    rs_method : Literal["RS", "RS_min", "RS_max", "RS_mean"]
        The method to use for Range/STD calculation. This is either, min, max
        or mean of all RS ranges
        per window. If not defined, just used the most recent RS window.
        Defaults to "RS".
    rv_grouped_mean : bool
        Whether to calculate the mean value of realized volatility over
        multiple window lengths. Defaults to False.
    live_price : bool
        Whether to calculate the ranges using the current live price, or the
        most recent 'close' observation. Defaults to False.
    historical : bool
        Whether to calculate the Historical Mandelbrot Channel (over-time), and
        return a time-series of channels from the start to the end date. If
        False, the Mandelbrot Channel calculation is done aggregating all of the
        data into one observation. If True, then it will enable daily
        observations over-time. Defaults to False.
    chart : bool
        Whether to return a chart object. Defaults to False.
    """

    window: str = Field(
        default="1mo",
        title="Window",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("window", ""),
    )
    rv_adjustment: bool = Field(
        default=True,
        title="Realized Volatility Adjustment",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("rv_adjustment", ""),
    )
    rv_method: Literal[
        "std",
        "parkinson",
        "garman_klass",
        "gk",
        "hodges_tompkins",
        "ht",
        "rogers_satchell",
        "rs",
        "yang_zhang",
        "yz",
        "squared_returns",
        "sq",
    ] = Field(
        default="std",
        title="Realized Volatility Method",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("rv_method", ""),
    )
    rs_method: Literal["RS", "RS_min", "RS_max", "RS_mean"] = Field(
        default="RS",
        title="R/S Method",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("rs_method", ""),
    )
    rv_grouped_mean: bool = Field(
        default=False,
        title="Realized Volatility Grouped Mean",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("rv_grouped_mean", ""),
    )
    live_price: bool = Field(
        default=False,
        title="Live Price",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("live_price", ""),
    )
    historical: bool = Field(
        default=False,
        title="Historical Mandelbrot Channel",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("historical", ""),
    )
    chart: bool = Field(
        default=False,
        title="Results Chart",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("chart", ""),
    )
    template: Literal[
        "humbl_dark",
        "humbl_light",
        "ggplot2",
        "seaborn",
        "simple_white",
        "plotly",
        "plotly_white",
        "plotly_dark",
        "presentation",
        "xgridoff",
        "ygridoff",
        "gridon",
        "none",
    ] = Field(
        default="humbl_dark",
        title="Plotly Template",
        description=MANDELBROT_QUERY_DESCRIPTIONS.get("template", ""),
    )

    @field_validator("window", mode="after", check_fields=False)
    @classmethod
    def window_format(cls, v: str) -> str:
        """
        Format the window string into a standardized format.

        Parameters
        ----------
        v : str
            The window size as a string.

        Returns
        -------
        str
            The window string in a standardized format.

        Raises
        ------
        ValueError
            If the input is not a string.
        """
        if isinstance(v, str):
            return _window_format(v, _return_timedelta=False)
        else:
            msg = "Window must be a string."
            raise ValueError(msg)
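
As a quick illustration (a minimal sketch), every field above has a default, so the model can be built empty or with selective overrides; the window validator then normalizes the string via the internal _window_format helper.

from humbldata.core.standard_models.toolbox.technical.mandelbrot_channel import (
    MandelbrotChannelQueryParams,
)

# Defaults: window="1mo", rv_adjustment=True, rv_method="std", rs_method="RS"
params = MandelbrotChannelQueryParams()

# Selective overrides; values outside the Literal choices (e.g. rv_method="foo")
# raise a pydantic ValidationError.
custom = MandelbrotChannelQueryParams(
    window="3m",
    rv_method="yang_zhang",
    rs_method="RS_mean",
    historical=True,
)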
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelQueryParams.window_format classmethod ¤
window_format(v: str) -> str

Format the window string into a standardized format.

Parameters:

Name Type Description Default
v str

The window size as a string.

required

Returns:

Type Description
str

The window string in a standardized format.

Raises:

Type Description
ValueError

If the input is not a string.

Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
@field_validator("window", mode="after", check_fields=False)
@classmethod
def window_format(cls, v: str) -> str:
    """
    Format the window string into a standardized format.

    Parameters
    ----------
    v : str
        The window size as a string.

    Returns
    -------
    str
        The window string in a standardized format.

    Raises
    ------
    ValueError
        If the input is not a string.
    """
    if isinstance(v, str):
        return _window_format(v, _return_timedelta=False)
    else:
        msg = "Window must be a string."
        raise ValueError(msg)
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelData ¤

Bases: Data

Data model for the Mandelbrot Channel command, a Pandera.Polars Model.

Parameters:

Name Type Description Default
date Union[date, datetime]

The date of the data point. Defaults to None.

required
symbol str

The stock symbol. Defaults to None.

required
bottom_price float

The bottom price in the Mandelbrot Channel. Defaults to None.

required
recent_price float

The most recent price within the Mandelbrot Channel. Defaults to None.

required
top_price float

The top price in the Mandelbrot Channel. Defaults to None.

required
Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
class MandelbrotChannelData(Data):
    """
    Data model for the Mandelbrot Channel command, a Pandera.Polars Model.

    Parameters
    ----------
    date : Union[dt.date, dt.datetime], optional
        The date of the data point. Defaults to None.
    symbol : str, optional
        The stock symbol. Defaults to None.
    bottom_price : float, optional
        The bottom price in the Mandelbrot Channel. Defaults to None.
    recent_price : float, optional
        The most recent price within the Mandelbrot Channel. Defaults to None.
    top_price : float, optional
        The top price in the Mandelbrot Channel. Defaults to None.
    """

    date: pl.Date = pa.Field(
        default=None,
        title="Date",
        description="The date of the data point.",
    )
    symbol: str = pa.Field(
        default=None,
        title="Symbol",
        description="The stock symbol.",
    )
    bottom_price: float = pa.Field(
        default=None,
        title="Bottom Price",
        description="The bottom price in the Mandelbrot Channel.",
    )
    recent_price: float = pa.Field(
        default=None,
        title="Recent Price",
        description="The most recent price within the Mandelbrot Channel.",
        alias="(close_price|recent_price|last_price)",
        regex=True,
    )
    top_price: float = pa.Field(
        default=None,
        title="Top Price",
        description="The top price in the Mandelbrot Channel.",
    )
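
A short validation sketch, mirroring how the fetcher's transform_data calls this model on a collected Polars DataFrame (the column values below are made up):

import datetime as dt

import polars as pl

from humbldata.core.standard_models.toolbox.technical.mandelbrot_channel import (
    MandelbrotChannelData,
)

df = pl.DataFrame(
    {
        "date": [dt.date(2024, 1, 2)],
        "symbol": ["AAPL"],
        "bottom_price": [180.0],
        # matched by the (close_price|recent_price|last_price) regex alias
        "recent_price": [185.5],
        "top_price": [191.2],
    }
)

validated = MandelbrotChannelData(df)  # raises if columns/dtypes do not conform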
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelFetcher ¤

Fetcher for the Mandelbrot Channel command.

Parameters:

Name Type Description Default
context_params ToolboxQueryParams

The context parameters for the toolbox query.

required
command_params MandelbrotChannelQueryParams

The command-specific parameters for the Mandelbrot Channel query.

required

Attributes:

Name Type Description
context_params ToolboxQueryParams

Stores the context parameters passed during initialization.

command_params MandelbrotChannelQueryParams

Stores the command-specific parameters passed during initialization.

equity_historical_data DataFrame

The raw data extracted from the data provider, before transformation.

Methods:

Name Description
transform_query

Transform the command-specific parameters into a query.

extract_data

Extracts the data from the provider and returns it as a Polars DataFrame.

transform_data

Transforms the command-specific data according to the Mandelbrot Channel logic.

fetch_data

Execute TET Pattern.

Returns:

Type Description
HumblObject

results : MandelbrotChannelData
    Serializable results.
provider : Literal['fmp', 'intrinio', 'polygon', 'tiingo', 'yfinance']
    Provider name.
warnings : Optional[List[Warning_]]
    List of warnings.
chart : Optional[Chart]
    Chart object.
context_params : ToolboxQueryParams
    Context-specific parameters.
command_params : MandelbrotChannelQueryParams
    Command-specific parameters.

Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
class MandelbrotChannelFetcher:
    """
    Fetcher for the Mandelbrot Channel command.

    Parameters
    ----------
    context_params : ToolboxQueryParams
        The context parameters for the toolbox query.
    command_params : MandelbrotChannelQueryParams
        The command-specific parameters for the Mandelbrot Channel query.

    Attributes
    ----------
    context_params : ToolboxQueryParams
        Stores the context parameters passed during initialization.
    command_params : MandelbrotChannelQueryParams
        Stores the command-specific parameters passed during initialization.
    equity_historical_data : pl.DataFrame
        The raw data extracted from the data provider, before transformation.

    Methods
    -------
    transform_query()
        Transform the command-specific parameters into a query.
    extract_data()
        Extracts the data from the provider and returns it as a Polars DataFrame.
    transform_data()
        Transforms the command-specific data according to the Mandelbrot Channel logic.
    fetch_data()
        Execute TET Pattern.

    Returns
    -------
    HumblObject
        results : MandelbrotChannelData
            Serializable results.
        provider : Literal['fmp', 'intrinio', 'polygon', 'tiingo', 'yfinance']
            Provider name.
        warnings : Optional[List[Warning_]]
            List of warnings.
        chart : Optional[Chart]
            Chart object.
        context_params : ToolboxQueryParams
            Context-specific parameters.
        command_params : MandelbrotChannelQueryParams
            Command-specific parameters.

    """

    def __init__(
        self,
        context_params: ToolboxQueryParams,
        command_params: MandelbrotChannelQueryParams,
    ):
        """
        Initialize the MandelbrotChannelFetcher with context and command parameters.

        Parameters
        ----------
        context_params : ToolboxQueryParams
            The context parameters for the toolbox query.
        command_params : MandelbrotChannelQueryParams
            The command-specific parameters for the Mandelbrot Channel query.
        """
        self.context_params = context_params
        self.command_params = command_params

    def transform_query(self):
        """
        Transform the command-specific parameters into a query.

        If command_params is not provided, it initializes a default MandelbrotChannelQueryParams object.
        """
        if not self.command_params:
            self.command_params = None
            # Set Default Arguments
            self.command_params: MandelbrotChannelQueryParams = (
                MandelbrotChannelQueryParams()
            )
        else:
            self.command_params: MandelbrotChannelQueryParams = (
                MandelbrotChannelQueryParams(**self.command_params)
            )

    def extract_data(self):
        """
        Extract the data from the provider and returns it as a Polars DataFrame.

        Drops unnecessary columns like dividends and stock splits from the data.

        Returns
        -------
        pl.DataFrame
            The extracted data as a Polars DataFrame.
        """
        self.equity_historical_data: pl.LazyFrame = (
            obb.equity.price.historical(
                symbol=self.context_params.symbols,
                start_date=self.context_params.start_date,
                end_date=self.context_params.end_date,
                provider=self.context_params.provider,
                adjustment="splits_and_dividends",
                # add kwargs
            )
            .to_polars()
            .lazy()
        ).drop(["dividend", "split_ratio"])  # TODO: drop `capital_gains` col

        if len(self.context_params.symbols) == 1:
            self.equity_historical_data = (
                self.equity_historical_data.with_columns(
                    symbol=pl.lit(self.context_params.symbols[0])
                )
            )
        return self

    def transform_data(self):
        """
        Transform the command-specific data according to the Mandelbrot Channel logic.

        Returns
        -------
        pl.DataFrame
            The transformed data as a Polars DataFrame
        """
        if self.command_params.historical is False:
            transformed_data = calc_mandelbrot_channel(
                data=self.equity_historical_data,
                window=self.command_params.window,
                rv_adjustment=self.command_params.rv_adjustment,
                rv_method=self.command_params.rv_method,
                rv_grouped_mean=self.command_params.rv_grouped_mean,
                rs_method=self.command_params.rs_method,
                live_price=self.command_params.live_price,
            )
        else:
            transformed_data = calc_mandelbrot_channel_historical_concurrent(
                data=self.equity_historical_data,
                window=self.command_params.window,
                rv_adjustment=self.command_params.rv_adjustment,
                rv_method=self.command_params.rv_method,
                rv_grouped_mean=self.command_params.rv_grouped_mean,
                rs_method=self.command_params.rs_method,
                live_price=self.command_params.live_price,
                use_processes=False,
            )

        self.transformed_data = MandelbrotChannelData(
            transformed_data.collect().drop_nulls()  ## HOTFIX - need to trace where coming from w/ unequal data
        ).lazy()

        if self.command_params.chart:
            self.chart = generate_plots(
                self.transformed_data,
                self.equity_historical_data,
                template=self.command_params.template,
            )
        else:
            self.chart = None

        self.transformed_data = self.transformed_data.serialize(format="binary")
        return self

    @log_start_end(logger=logger)
    def fetch_data(self):
        """
        Execute TET Pattern.

        This method executes the query transformation, data fetching and
        transformation process by first calling `transform_query` to prepare the query parameters, then
        extracting the raw data using `extract_data` method, and finally
        transforming the raw data using `transform_data` method.

        Returns
        -------
        pl.DataFrame
            The transformed data as a Polars DataFrame, ready for further analysis
            or visualization.
        """
        self.transform_query()
        self.extract_data()
        self.transform_data()

        if not hasattr(self.context_params, "warnings"):
            self.context_params.warnings = []

        return HumblObject(
            results=self.transformed_data,
            provider=self.context_params.provider,
            equity_data=self.equity_historical_data.serialize(),
            warnings=self.context_params.warnings,
            chart=self.chart,
            context_params=self.context_params,
            command_params=self.command_params,
        )
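
A minimal end-to-end sketch of using this fetcher (symbols and dates are illustrative). Note that transform_query re-instantiates MandelbrotChannelQueryParams(**command_params), so the command parameters are passed here as a plain mapping:

from humbldata.core.standard_models.toolbox import ToolboxQueryParams
from humbldata.core.standard_models.toolbox.technical.mandelbrot_channel import (
    MandelbrotChannelFetcher,
)

context_params = ToolboxQueryParams(
    symbols=["AAPL", "MSFT"],
    start_date="2020-01-01",
    end_date="2024-01-01",
    provider="yfinance",
    membership="humblPREMIUM",
)
command_params = {"window": "1mo", "chart": True}

fetcher = MandelbrotChannelFetcher(context_params, command_params)
humbl_object = fetcher.fetch_data()

# `results` holds the (binary-serialized) channel data, `chart` the plot object
# when chart=True, and `warnings` any membership-based start-date adjustments.
print(humbl_object.warnings)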
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelFetcher.__init__ ¤
__init__(context_params: ToolboxQueryParams, command_params: MandelbrotChannelQueryParams)

Initialize the MandelbrotChannelFetcher with context and command parameters.

Parameters:

Name Type Description Default
context_params ToolboxQueryParams

The context parameters for the toolbox query.

required
command_params MandelbrotChannelQueryParams

The command-specific parameters for the Mandelbrot Channel query.

required
Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
def __init__(
    self,
    context_params: ToolboxQueryParams,
    command_params: MandelbrotChannelQueryParams,
):
    """
    Initialize the MandelbrotChannelFetcher with context and command parameters.

    Parameters
    ----------
    context_params : ToolboxQueryParams
        The context parameters for the toolbox query.
    command_params : MandelbrotChannelQueryParams
        The command-specific parameters for the Mandelbrot Channel query.
    """
    self.context_params = context_params
    self.command_params = command_params
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelFetcher.transform_query ¤
transform_query()

Transform the command-specific parameters into a query.

If command_params is not provided, it initializes a default MandelbrotChannelQueryParams object.

Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
def transform_query(self):
    """
    Transform the command-specific parameters into a query.

    If command_params is not provided, it initializes a default MandelbrotChannelQueryParams object.
    """
    if not self.command_params:
        self.command_params = None
        # Set Default Arguments
        self.command_params: MandelbrotChannelQueryParams = (
            MandelbrotChannelQueryParams()
        )
    else:
        self.command_params: MandelbrotChannelQueryParams = (
            MandelbrotChannelQueryParams(**self.command_params)
        )
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelFetcher.extract_data ¤
extract_data()

Extract the data from the provider and return it as a Polars DataFrame.

Drops unnecessary columns like dividends and stock splits from the data.

Returns:

Type Description
DataFrame

The extracted data as a Polars DataFrame.

Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
def extract_data(self):
    """
    Extract the data from the provider and returns it as a Polars DataFrame.

    Drops unnecessary columns like dividends and stock splits from the data.

    Returns
    -------
    pl.DataFrame
        The extracted data as a Polars DataFrame.
    """
    self.equity_historical_data: pl.LazyFrame = (
        obb.equity.price.historical(
            symbol=self.context_params.symbols,
            start_date=self.context_params.start_date,
            end_date=self.context_params.end_date,
            provider=self.context_params.provider,
            adjustment="splits_and_dividends",
            # add kwargs
        )
        .to_polars()
        .lazy()
    ).drop(["dividend", "split_ratio"])  # TODO: drop `capital_gains` col

    if len(self.context_params.symbols) == 1:
        self.equity_historical_data = (
            self.equity_historical_data.with_columns(
                symbol=pl.lit(self.context_params.symbols[0])
            )
        )
    return self
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelFetcher.transform_data ¤
transform_data()

Transform the command-specific data according to the Mandelbrot Channel logic.

Returns:

Type Description
DataFrame

The transformed data as a Polars DataFrame

Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
def transform_data(self):
    """
    Transform the command-specific data according to the Mandelbrot Channel logic.

    Returns
    -------
    pl.DataFrame
        The transformed data as a Polars DataFrame
    """
    if self.command_params.historical is False:
        transformed_data = calc_mandelbrot_channel(
            data=self.equity_historical_data,
            window=self.command_params.window,
            rv_adjustment=self.command_params.rv_adjustment,
            rv_method=self.command_params.rv_method,
            rv_grouped_mean=self.command_params.rv_grouped_mean,
            rs_method=self.command_params.rs_method,
            live_price=self.command_params.live_price,
        )
    else:
        transformed_data = calc_mandelbrot_channel_historical_concurrent(
            data=self.equity_historical_data,
            window=self.command_params.window,
            rv_adjustment=self.command_params.rv_adjustment,
            rv_method=self.command_params.rv_method,
            rv_grouped_mean=self.command_params.rv_grouped_mean,
            rs_method=self.command_params.rs_method,
            live_price=self.command_params.live_price,
            use_processes=False,
        )

    self.transformed_data = MandelbrotChannelData(
        transformed_data.collect().drop_nulls()  ## HOTFIX - need to trace where coming from w/ unequal data
    ).lazy()

    if self.command_params.chart:
        self.chart = generate_plots(
            self.transformed_data,
            self.equity_historical_data,
            template=self.command_params.template,
        )
    else:
        self.chart = None

    self.transformed_data = self.transformed_data.serialize(format="binary")
    return self
humbldata.core.standard_models.toolbox.technical.mandelbrot_channel.MandelbrotChannelFetcher.fetch_data ¤
fetch_data()

Execute TET Pattern.

This method executes the query transformation, data extraction, and data transformation process: it first calls transform_query to prepare the query parameters, then extracts the raw data with the extract_data method, and finally transforms the raw data with the transform_data method.

Returns:

Type Description
DataFrame

The transformed data as a Polars DataFrame, ready for further analysis or visualization.

Source code in src/humbldata/core/standard_models/toolbox/technical/mandelbrot_channel.py
@log_start_end(logger=logger)
def fetch_data(self):
    """
    Execute TET Pattern.

    This method executes the query transformation, data fetching and
    transformation process by first calling `transform_query` to prepare the query parameters, then
    extracting the raw data using `extract_data` method, and finally
    transforming the raw data using `transform_data` method.

    Returns
    -------
    pl.DataFrame
        The transformed data as a Polars DataFrame, ready for further analysis
        or visualization.
    """
    self.transform_query()
    self.extract_data()
    self.transform_data()

    if not hasattr(self.context_params, "warnings"):
        self.context_params.warnings = []

    return HumblObject(
        results=self.transformed_data,
        provider=self.context_params.provider,
        equity_data=self.equity_historical_data.serialize(),
        warnings=self.context_params.warnings,
        chart=self.chart,
        context_params=self.context_params,
        command_params=self.command_params,
    )

humbldata.core.standard_models.toolbox.ToolboxQueryParams ¤

Bases: QueryParams

Query parameters for the ToolboxController.

This class defines the query parameters used by the ToolboxController, including the stock symbol, data interval, start date, and end date. It also includes a method to ensure the stock symbol is in uppercase. If no date constraints are given, it will collect the maximum amount of data available.

Parameters:

Name Type Description Default
symbol str | list[str] | set[str]

The symbol or ticker of the stock. You can pass multiple tickers like: "AAPL", "AAPL, MSFT" or ["AAPL", "MSFT"]. The input will be converted to uppercase.

""
interval str | None

The interval of the data. Can be None.

"1d"
start_date str

The start date for the data query.

""
end_date str

The end date for the data query.

""
provider OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS

The data provider to be used for the query.

"yfinance"
membership str

The membership level of the user.

"anonymous"

Methods:

Name Description
upper_symbol

A Pydantic @field_validator() that converts the stock symbols to uppercase. If a string, list, or set of symbols is provided, each symbol is trimmed, converted to uppercase, and returned as a list of symbols.

validate_interval

A Pydantic @field_validator() that validates the interval format. Ensures the interval is a number followed by one of 's', 'm', 'h', 'd', 'W', 'M', 'Q', 'Y'.

validate_date_format

A Pydantic @field_validator() that validates the date format to ensure it is YYYY-MM-DD.

validate_start_date

A Pydantic @model_validator() that validates and adjusts the start date based on membership level.

Raises:

Type Description
ValueError

If the symbol parameter is a list and not all elements are strings, or if symbol is not a string, list, or set. If the interval format is invalid. If the date format is invalid.

Notes

A Pydantic v2 Model

Source code in src/humbldata/core/standard_models/toolbox/__init__.py
class ToolboxQueryParams(QueryParams):
    """
    Query parameters for the ToolboxController.

    This class defines the query parameters used by the ToolboxController,
    including the stock symbol, data interval, start date, and end date. It also
    includes a method to ensure the stock symbol is in uppercase.
    If no dates constraints are given, it will collect the MAX amount of data
    available.

    Parameters
    ----------
    symbol : str | list[str] | set[str], default=""
        The symbol or ticker of the stock. You can pass multiple tickers like:
        "AAPL", "AAPL, MSFT" or ["AAPL", "MSFT"]. The input will be converted
        to uppercase.
    interval : str | None, default="1d"
        The interval of the data. Can be None.
    start_date : str, default=""
        The start date for the data query.
    end_date : str, default=""
        The end date for the data query.
    provider : OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS, default="yfinance"
        The data provider to be used for the query.
    membership : str, default="anonymous"
        The membership level of the user.

    Methods
    -------
    upper_symbol(cls, v: Union[str, list[str], set[str]]) -> Union[str, list[str]]
        A Pydantic `@field_validator()` that converts the stock symbol to
        uppercase. If a list or set of symbols is provided, each symbol in the
        collection is converted to uppercase and returned as a comma-separated
        string.
    validate_interval(cls, v: str) -> str
        A Pydantic `@field_validator()` that validates the interval format.
        Ensures the interval is a number followed by one of 's', 'm', 'h', 'd', 'W', 'M', 'Q', 'Y'.
    validate_date_format(cls, v: str | date) -> date
        A Pydantic `@field_validator()` that validates the date format to ensure it is YYYY-MM-DD.
    validate_start_date(self) -> 'ToolboxQueryParams'
        A Pydantic `@model_validator()` that validates and adjusts the start date based on membership level.

    Raises
    ------
    ValueError
        If the `symbol` parameter is a list and not all elements are strings, or
        if `symbol` is not a string, list, or set.
        If the `interval` format is invalid.
        If the `date` format is invalid.

    Notes
    -----
    A Pydantic v2 Model

    """

    symbols: str | list[str] | None = Field(
        default=None,
        title="Symbols/Tickers",
        description=QUERY_DESCRIPTIONS.get("symbols", ""),
    )
    interval: str | None = Field(
        default="1d",
        title="Interval",
        description=QUERY_DESCRIPTIONS.get("interval", ""),
    )
    start_date: dt.date | str = Field(
        default_factory=lambda: dt.date(1950, 1, 1),
        title="Start Date",
        description="The starting date for the data query.",
    )
    end_date: dt.date | str = Field(
        default_factory=lambda: dt.datetime.now(
            tz=pytz.timezone("America/New_York")
        ).date(),
        title="End Date",
        description="The ending date for the data query.",
    )
    provider: OBB_EQUITY_PRICE_HISTORICAL_PROVIDERS = Field(
        default="yfinance",
        title="Provider",
        description=QUERY_DESCRIPTIONS.get("provider", ""),
    )
    membership: Literal[
        "anonymous",
        "humblPEON",
        "humblPREMIUM",
        "humblPOWER",
        "humblPERMANENT",
        "admin",
    ] = Field(
        default="anonymous",
        title="Membership",
        description=QUERY_DESCRIPTIONS.get("membership", ""),
    )

    @field_validator("symbols", mode="before", check_fields=False)
    @classmethod
    def upper_symbol(cls, v: str | list[str] | set[str]) -> list[str]:
        """
        Convert the stock symbols to uppercase and remove empty strings.

        Parameters
        ----------
        v : Union[str, List[str], Set[str]]
            The stock symbol or collection of symbols to be converted.

        Returns
        -------
        List[str]
            A list of uppercase stock symbols with empty strings removed.
        """
        # Handle empty inputs
        if not v:
            return []

        # If v is a string, split it by commas into a list. Otherwise, ensure it's a list.
        v = v.split(",") if isinstance(v, str) else list(v)

        # Convert all elements to uppercase, trim whitespace, and remove empty strings
        valid_symbols = [
            symbol.strip().upper() for symbol in v if symbol.strip()
        ]

        if not valid_symbols:
            msg = "At least one valid symbol (str) must be provided"
            raise ValueError(msg)

        return valid_symbols

    @field_validator("interval", mode="before", check_fields=False)
    @classmethod
    def validate_interval(cls, v: str) -> str:
        """
        Validate the interval format.

        Parameters
        ----------
        v : str
            The interval string to be validated.

        Returns
        -------
        str
            The validated interval string.

        Raises
        ------
        ValueError
            If the interval format is invalid.
        """
        if not re.match(r"^\d*[smhdWMQY]$", v):
            msg = "Invalid interval format. Must be a number followed by one of 's', 'm', 'h', 'd', 'W', 'M', 'Q', 'Y'."
            raise ValueError(msg)
        return v

    @field_validator("start_date", "end_date", mode="before")
    @classmethod
    def validate_date_format(cls, v: str | dt.date) -> dt.date:
        """
        Validate and convert the input date to a datetime.date object.

        This method accepts either a string in 'YYYY-MM-DD' format or a datetime.date object.
        It converts the input to a datetime.date object, ensuring it's in the correct format.

        Parameters
        ----------
        v : str | dt.date
            The input date to validate and convert.

        Returns
        -------
        dt.date
            The validated and converted date.

        Raises
        ------
        ValueError
            If the input string is not in the correct format.
        TypeError
            If the input is neither a string nor a datetime.date object.
        """
        if isinstance(v, str):
            try:
                date = datetime.strptime(v, "%Y-%m-%d").replace(
                    tzinfo=pytz.timezone("America/New_York")
                )
            except ValueError as e:
                msg = f"Invalid date format. Must be YYYY-MM-DD: {e}"
                raise ValueError(msg) from e
        elif isinstance(v, dt.date):
            date = datetime.combine(v, datetime.min.time()).replace(
                tzinfo=pytz.timezone("America/New_York")
            )
        else:
            msg = f"Expected str or date, got {type(v)}"
            raise TypeError(msg)

        # Check if the date is in the correct format
        if date.strftime("%Y-%m-%d") != date.strftime("%Y-%m-%d"):
            msg = "Date must be in YYYY-MM-DD format"
            raise ValueError(msg)
        if date.date() < dt.date(1950, 1, 1):
            msg = "Date must be after 1950-01-01"
            raise ValueError(msg)

        return date.date()

    @model_validator(mode="after")
    def validate_start_date(self) -> "ToolboxQueryParams":
        end_date: dt.date = self.end_date  # type: ignore  # noqa: PGH003 the date has already been converted to date

        start_date_mapping = {
            "anonymous": (end_date - timedelta(days=365), "1Y"),
            "humblPEON": (end_date - timedelta(days=730), "2Y"),
            "humblPREMIUM": (end_date - timedelta(days=1825), "5Y"),
            "humblPOWER": (end_date - timedelta(days=7300), "20Y"),
            "humblPERMANENT": (end_date - timedelta(days=10680), "30Y"),
            "admin": (
                datetime(
                    1950, 1, 1, tzinfo=pytz.timezone("America/New_York")
                ).date(),
                "All",
            ),
        }

        allowed_start_date, data_length = start_date_mapping.get(
            self.membership, (end_date - timedelta(days=365), "1Y")
        )

        if self.start_date < allowed_start_date:  # type: ignore  # noqa: PGH003 the date has already been converted to date
            warning_msg = f"Start date adjusted to {allowed_start_date} based on {self.membership} membership ({data_length} of data)."
            logger.warning(warning_msg)
            self.start_date = allowed_start_date
            if not hasattr(self, "warnings"):
                self.warnings = []
            self.warnings.append(
                HumblDataWarning(
                    category="ToolboxQueryParams",
                    message=warning_msg,
                )
            )

        return self
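
A short sketch of how the validators behave in practice (symbols and dates are illustrative):

from humbldata.core.standard_models.toolbox import ToolboxQueryParams

params = ToolboxQueryParams(
    symbols="aapl, msft",      # normalized to ["AAPL", "MSFT"] by upper_symbol
    start_date="2000-01-01",
    end_date="2024-01-01",
    membership="anonymous",    # anonymous membership is capped at ~1Y of history
)

print(params.symbols)     # ['AAPL', 'MSFT']
print(params.start_date)  # adjusted forward to end_date - 365 days; a warning is logged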
humbldata.core.standard_models.toolbox.ToolboxQueryParams.upper_symbol classmethod ¤
upper_symbol(v: str | list[str] | set[str]) -> list[str]

Convert the stock symbols to uppercase and remove empty strings.

Parameters:

Name Type Description Default
v Union[str, List[str], Set[str]]

The stock symbol or collection of symbols to be converted.

required

Returns:

Type Description
List[str]

A list of uppercase stock symbols with empty strings removed.

Source code in src/humbldata/core/standard_models/toolbox/__init__.py
@field_validator("symbols", mode="before", check_fields=False)
@classmethod
def upper_symbol(cls, v: str | list[str] | set[str]) -> list[str]:
    """
    Convert the stock symbols to uppercase and remove empty strings.

    Parameters
    ----------
    v : Union[str, List[str], Set[str]]
        The stock symbol or collection of symbols to be converted.

    Returns
    -------
    List[str]
        A list of uppercase stock symbols with empty strings removed.
    """
    # Handle empty inputs
    if not v:
        return []

    # If v is a string, split it by commas into a list. Otherwise, ensure it's a list.
    v = v.split(",") if isinstance(v, str) else list(v)

    # Convert all elements to uppercase, trim whitespace, and remove empty strings
    valid_symbols = [
        symbol.strip().upper() for symbol in v if symbol.strip()
    ]

    if not valid_symbols:
        msg = "At least one valid symbol (str) must be provided"
        raise ValueError(msg)

    return valid_symbols
humbldata.core.standard_models.toolbox.ToolboxQueryParams.validate_interval classmethod ¤
validate_interval(v: str) -> str

Validate the interval format.

Parameters:

Name Type Description Default
v str

The interval string to be validated.

required

Returns:

Type Description
str

The validated interval string.

Raises:

Type Description
ValueError

If the interval format is invalid.

Source code in src/humbldata/core/standard_models/toolbox/__init__.py
@field_validator("interval", mode="before", check_fields=False)
@classmethod
def validate_interval(cls, v: str) -> str:
    """
    Validate the interval format.

    Parameters
    ----------
    v : str
        The interval string to be validated.

    Returns
    -------
    str
        The validated interval string.

    Raises
    ------
    ValueError
        If the interval format is invalid.
    """
    if not re.match(r"^\d*[smhdWMQY]$", v):
        msg = "Invalid interval format. Must be a number followed by one of 's', 'm', 'h', 'd', 'W', 'M', 'Q', 'Y'."
        raise ValueError(msg)
    return v
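
The accepted format can be checked in isolation with the same pattern (a standalone sketch):

import re

pattern = r"^\d*[smhdWMQY]$"
for interval in ["1d", "15m", "1W", "4Q", "daily", "1x"]:
    print(interval, bool(re.match(pattern, interval)))
# "1d", "15m", "1W" and "4Q" match; "daily" and "1x" do not and would raise ValueError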
humbldata.core.standard_models.toolbox.ToolboxQueryParams.validate_date_format classmethod ¤
validate_date_format(v: str | date) -> date

Validate and convert the input date to a datetime.date object.

This method accepts either a string in 'YYYY-MM-DD' format or a datetime.date object. It converts the input to a datetime.date object, ensuring it's in the correct format.

Parameters:

Name Type Description Default
v str | date

The input date to validate and convert.

required

Returns:

Type Description
date

The validated and converted date.

Raises:

Type Description
ValueError

If the input string is not in the correct format.

TypeError

If the input is neither a string nor a datetime.date object.

Source code in src/humbldata/core/standard_models/toolbox/__init__.py
@field_validator("start_date", "end_date", mode="before")
@classmethod
def validate_date_format(cls, v: str | dt.date) -> dt.date:
    """
    Validate and convert the input date to a datetime.date object.

    This method accepts either a string in 'YYYY-MM-DD' format or a datetime.date object.
    It converts the input to a datetime.date object, ensuring it's in the correct format.

    Parameters
    ----------
    v : str | dt.date
        The input date to validate and convert.

    Returns
    -------
    dt.date
        The validated and converted date.

    Raises
    ------
    ValueError
        If the input string is not in the correct format.
    TypeError
        If the input is neither a string nor a datetime.date object.
    """
    if isinstance(v, str):
        try:
            date = datetime.strptime(v, "%Y-%m-%d").replace(
                tzinfo=pytz.timezone("America/New_York")
            )
        except ValueError as e:
            msg = f"Invalid date format. Must be YYYY-MM-DD: {e}"
            raise ValueError(msg) from e
    elif isinstance(v, dt.date):
        date = datetime.combine(v, datetime.min.time()).replace(
            tzinfo=pytz.timezone("America/New_York")
        )
    else:
        msg = f"Expected str or date, got {type(v)}"
        raise TypeError(msg)

    # Check if the date is in the correct format
    if date.strftime("%Y-%m-%d") != date.strftime("%Y-%m-%d"):
        msg = "Date must be in YYYY-MM-DD format"
        raise ValueError(msg)
    if date.date() < dt.date(1950, 1, 1):
        msg = "Date must be after 1950-01-01"
        raise ValueError(msg)

    return date.date()

humbldata.core.standard_models.toolbox.ToolboxData ¤

Bases: Data

The Data for the ToolboxController.

WIP: I'm thinking that this is the final layer around which the HumblDataObject will be returned to the user, with all necessary information about the query, command, data and charts that they should want. This HumblDataObject will return values in json/dict format, with methods to allow transformation into polars_df, pandas_df, a list, a dict...

Source code in src/humbldata/core/standard_models/toolbox/__init__.py
class ToolboxData(Data):
    """
    The Data for the ToolboxController.

    WIP: I'm thinking that this is the final layer around which the
    HumblDataObject will be returned to the user, with all necessary information
    about the query, command, data and charts that they should want.
    This HumblDataObject will return values in json/dict format, with methods
    to allow transformation into polars_df, pandas_df, a list, a dict...
    """

    date: pl.Date = pa.Field(
        default=None,
        title="Date",
        description=DATA_DESCRIPTIONS.get("date", ""),
    )
    open: float = pa.Field(
        default=None,
        title="Open",
        description=DATA_DESCRIPTIONS.get("open", ""),
    )
    high: float = pa.Field(
        default=None,
        title="High",
        description=DATA_DESCRIPTIONS.get("high", ""),
    )
    low: float = pa.Field(
        default=None,
        title="Low",
        description=DATA_DESCRIPTIONS.get("low", ""),
    )
    close: float = pa.Field(
        default=None,
        title="Close",
        description=DATA_DESCRIPTIONS.get("close", ""),
    )
    volume: int = pa.Field(
        default=None,
        title="Volume",
        description=DATA_DESCRIPTIONS.get("volume", ""),
    )

humbldata.core.utils ¤

humbldata core utils.

Utils is used to keep; helpers, descriptions, constants, and other useful tools.

humbldata.core.utils.env ¤

The Env Module, to control a single instance of environment variables.

humbldata.core.utils.env.Env ¤

A singleton environment to hold all Environment variables.

Source code in src/humbldata/core/utils/env.py
class Env(metaclass=SingletonMeta):
    """A singleton environment to hold all Environment variables."""

    _environ: dict[str, str]

    def __init__(self) -> None:
        env_path = dotenv.find_dotenv()
        dotenv.load_dotenv(Path(env_path))

        self._environ = os.environ.copy()

    @property
    def OBB_PAT(self) -> str | None:  # noqa: N802
        """OpenBB Personal Access Token."""
        return self._environ.get("OBB_PAT", None)

    @property
    def LOGGER_LEVEL(self) -> int:
        """
        Get the global logger level.

        Returns
        -------
        int
            The numeric logging level (default: 20 for INFO).

        Notes
        -----
        Mapping of string levels to numeric values:
        DEBUG: 10, INFO: 20, WARNING: 30, ERROR: 40, CRITICAL: 50
        """
        level_map = {
            "DEBUG": 10,
            "INFO": 20,
            "WARNING": 30,
            "ERROR": 40,
            "CRITICAL": 50,
        }
        return level_map.get(
            self._environ.get("LOGGER_LEVEL", "INFO").upper(), 20
        )

    @property
    def OBB_LOGGED_IN(self) -> bool:
        return self.str2bool(self._environ.get("OBB_LOGGED_IN", False))

    @staticmethod
    def str2bool(value: str | bool) -> bool:
        """Match a value to its boolean correspondent.

        Args:
            value (str): The string value to be converted to a boolean.

        Returns
        -------
            bool: The boolean value corresponding to the input string.

        Raises
        ------
            ValueError: If the input string does not correspond to a boolean
            value.
        """
        if isinstance(value, bool):
            return value
        if value.lower() in {"false", "f", "0", "no", "n"}:
            return False
        if value.lower() in {"true", "t", "1", "yes", "y"}:
            return True
        msg = f"Failed to cast '{value}' to bool."
        raise ValueError(msg)
humbldata.core.utils.env.Env.OBB_PAT property ¤
OBB_PAT: str | None

OpenBB Personal Access Token.

humbldata.core.utils.env.Env.LOGGER_LEVEL property ¤
LOGGER_LEVEL: int

Get the global logger level.

Returns:

Type Description
int

The numeric logging level (default: 20 for INFO).

Notes

Mapping of string levels to numeric values: DEBUG: 10, INFO: 20, WARNING: 30, ERROR: 40, CRITICAL: 50
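
For example (a sketch; it assumes Env has not yet been instantiated elsewhere, since the singleton copies os.environ on first construction):

import os

from humbldata.core.utils.env import Env

os.environ["LOGGER_LEVEL"] = "debug"   # case-insensitive
print(Env().LOGGER_LEVEL)              # 10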

humbldata.core.utils.env.Env.str2bool staticmethod ¤
str2bool(value: str | bool) -> bool

Match a value to its boolean correspondent.

Parameters:

Name Type Description Default
value str | bool

The string value to be converted to a boolean.

required

Returns:

Type Description
bool

The boolean value corresponding to the input string.

Raises:

Type Description
ValueError

If the input string does not correspond to a boolean value.

Source code in src/humbldata/core/utils/env.py
@staticmethod
def str2bool(value: str | bool) -> bool:
    """Match a value to its boolean correspondent.

    Args:
        value (str): The string value to be converted to a boolean.

    Returns
    -------
        bool: The boolean value corresponding to the input string.

    Raises
    ------
        ValueError: If the input string does not correspond to a boolean
        value.
    """
    if isinstance(value, bool):
        return value
    if value.lower() in {"false", "f", "0", "no", "n"}:
        return False
    if value.lower() in {"true", "t", "1", "yes", "y"}:
        return True
    msg = f"Failed to cast '{value}' to bool."
    raise ValueError(msg)
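
A quick usage sketch:

from humbldata.core.utils.env import Env

print(Env.str2bool("yes"))   # True
print(Env.str2bool("0"))     # False
print(Env.str2bool(True))    # True
# Env.str2bool("maybe") raises ValueError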

humbldata.core.utils.descriptions ¤

Common descriptions for model fields.

humbldata.core.utils.constants ¤

A module to contain all project-wide constants.

humbldata.core.utils.logger ¤

humbldata.core.utils.logger.setup_logger ¤

setup_logger(name: str, level: int = logging.INFO) -> Logger

Set up a logger with the specified name and logging level.

Parameters:

Name Type Description Default
name str

The name of the logger.

required
level int

The logging level, by default logging.INFO.

INFO

Returns:

Type Description
Logger

A configured logger instance.

Notes

This function creates a logger with a StreamHandler that outputs to sys.stdout. It uses a formatter that includes timestamp, logger name, log level, and message. If the logger already has handlers, it skips the setup to avoid duplicate logging. The logger is configured not to propagate messages to the root logger.

Examples:

>>> logger = setup_logger("my_logger", logging.DEBUG)
>>> logger.debug("This is a debug message")
2023-05-20 10:30:45,123 - my_logger - DEBUG - This is a debug message
Source code in src/humbldata/core/utils/logger.py
def setup_logger(name: str, level: int = logging.INFO) -> logging.Logger:
    """
    Set up a logger with the specified name and logging level.

    Parameters
    ----------
    name : str
        The name of the logger.
    level : int, optional
        The logging level, by default logging.INFO.

    Returns
    -------
    logging.Logger
        A configured logger instance.

    Notes
    -----
    This function creates a logger with a StreamHandler that outputs to sys.stdout.
    It uses a formatter that includes timestamp, logger name, log level, and message.
    If the logger already has handlers, it skips the setup to avoid duplicate logging.
    The logger is configured not to propagate messages to the root logger.

    Examples
    --------
    >>> logger = setup_logger("my_logger", logging.DEBUG)
    >>> logger.debug("This is a debug message")
    2023-05-20 10:30:45,123 - my_logger - DEBUG - This is a debug message
    """
    logger = logging.getLogger(name)

    # Check if the logger already has handlers to avoid duplicate logging
    if not logger.handlers:
        logger.setLevel(level)

        # Install coloredlogs
        coloredlogs.install(
            level=level,
            logger=logger,
            fmt="%(levelname)s: %(name)s || %(message)s",
            level_styles={
                "debug": {"color": "green"},
                "info": {"color": "blue"},
                "warning": {"color": "yellow", "bold": True},
                "error": {"color": "red", "bold": True},
                "critical": {
                    "color": "red",
                    "bold": True,
                    "background": "white",
                },
            },
            field_styles={
                "asctime": {"color": "blue"},
                "levelname": {"color": "magenta", "bold": True},
                "name": {"color": "cyan"},
            },
        )

    # Prevent the logger from propagating messages to the root logger
    logger.propagate = False

    return logger
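
Because the setup is skipped when the logger already has handlers, calling setup_logger repeatedly with the same name is safe. A small sketch, assuming the coloredlogs dependency used in the source above is installed; the logger name is illustrative.

>>> import logging
>>> from humbldata.core.utils.logger import setup_logger
>>> log = setup_logger("humbldata.demo", level=logging.DEBUG)
>>> log is setup_logger("humbldata.demo")  # same logger object, no duplicate handlers
True
>>> log.propagate
False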

humbldata.core.utils.logger.log_start_end ¤

log_start_end(func: Callable | None = None, *, logger: Logger | None = None) -> Callable

Log the start and end of any function, including time tracking.

This decorator works with both synchronous and asynchronous functions. It logs the start and end of the function execution, as well as the total execution time. If an exception occurs, it logs the exception details.

Parameters:

    func (Callable | None, default None)
        The function to be decorated. If None, the decorator can be used with parameters.
    logger (Logger | None, default None)
        The logger to use. If None, a logger will be created using the function's module name.

Returns:

    Callable
        The wrapped function.

Notes
  • For asynchronous functions, the decorator uses an async wrapper.
  • For synchronous functions, it uses a sync wrapper.
  • If a KeyboardInterrupt occurs, it logs the interruption and returns an empty list.
  • If any other exception occurs, it logs the exception and re-raises it.

Examples:

>>> @log_start_end
... def example_function():
...     print("This is an example function")
...
>>> example_function()
START: example_function (sync)
This is an example function
END: example_function (sync) - Total time: 0.0001s
>>> @log_start_end(logger=custom_logger)
... async def async_example():
...     await asyncio.sleep(1)
...
>>> asyncio.run(async_example())
START: async_example (async)
END: async_example (async) - Total time: 1.0012s
Source code in src/humbldata/core/utils/logger.py
def log_start_end(
    func: Callable | None = None, *, logger: logging.Logger | None = None
) -> Callable:
    """
    Log the start and end of any function, including time tracking.

    This decorator works with both synchronous and asynchronous functions.
    It logs the start and end of the function execution, as well as the total
    execution time. If an exception occurs, it logs the exception details.

    Parameters
    ----------
    func : Callable | None, optional
        The function to be decorated. If None, the decorator can be used with parameters.
    logger : logging.Logger | None, optional
        The logger to use. If None, a logger will be created using the function's module name.

    Returns
    -------
    Callable
        The wrapped function.

    Notes
    -----
    - For asynchronous functions, the decorator uses an async wrapper.
    - For synchronous functions, it uses a sync wrapper.
    - If a KeyboardInterrupt occurs, it logs the interruption and returns an empty list.
    - If any other exception occurs, it logs the exception and re-raises it.

    Examples
    --------
    >>> @log_start_end
    ... def example_function():
    ...     print("This is an example function")
    ...
    >>> example_function()
    START: example_function (sync)
    This is an example function
    END: example_function (sync) - Total time: 0.0001s

    >>> @log_start_end(logger=custom_logger)
    ... async def async_example():
    ...     await asyncio.sleep(1)
    ...
    >>> asyncio.run(async_example())
    START: async_example (async)
    END: async_example (async) - Total time: 1.0012s
    """
    assert callable(func) or func is None

    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs) -> Any:
            nonlocal logger
            if logger is None:
                logger = logging.getLogger(func.__module__)

            start_time = time.time()
            logger.info(f"START: {func.__name__} (async)")

            try:
                result = await func(*args, **kwargs)
            except KeyboardInterrupt:
                end_time = time.time()
                total_time = end_time - start_time
                logger.info(
                    f"INTERRUPTED: {func.__name__} (async) - Total time: {total_time:.4f}s"
                )
                return []
            except Exception as e:
                end_time = time.time()
                total_time = end_time - start_time
                logger.exception(
                    f"EXCEPTION in {func.__name__} (async) - Total time: {total_time:.4f}s"
                )
                raise
            else:
                end_time = time.time()
                total_time = end_time - start_time
                logger.info(
                    f"END: {func.__name__} (async) - Total time: {total_time:.4f}s"
                )
                return result

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs) -> Any:
            nonlocal logger
            if logger is None:
                logger = logging.getLogger(func.__module__)

            start_time = time.time()
            logger.info(f"START: {func.__name__} (sync)")

            try:
                result = func(*args, **kwargs)
            except KeyboardInterrupt:
                end_time = time.time()
                total_time = end_time - start_time
                logger.info(
                    f"INTERRUPTED: {func.__name__} (sync) - Total time: {total_time:.4f}s"
                )
                return []
            except Exception as e:
                end_time = time.time()
                total_time = end_time - start_time
                logger.exception(
                    f"EXCEPTION in {func.__name__} (sync) - Total time: {total_time:.4f}s"
                )
                raise
            else:
                end_time = time.time()
                total_time = end_time - start_time
                logger.info(
                    f"END: {func.__name__} (sync) - Total time: {total_time:.4f}s"
                )
                return result

        if asyncio.iscoroutinefunction(func):
            return async_wrapper
        else:
            return sync_wrapper

    return decorator(func) if callable(func) else decorator
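
The decorator pairs naturally with setup_logger when the START/END lines should go through a named logger rather than the decorated function's module logger. A minimal sketch; the logger name and decorated function are illustrative.

>>> from humbldata.core.utils.logger import setup_logger, log_start_end
>>> logger = setup_logger("humbldata.demo")
>>> @log_start_end(logger=logger)
... def add(a, b):
...     return a + b
>>> result = add(1, 2)  # START/END lines (with timing) are logged around the call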

humbldata.core.utils.openbb_helpers ¤

Core Module - OpenBB Helpers.

This module contains functions used to interact with OpenBB, or wrap commands to have specific data outputs.

humbldata.core.utils.openbb_helpers.obb_login ¤

obb_login(pat: str | None = None) -> bool

Log into the OpenBB Hub using a Personal Access Token (PAT).

This function wraps the obb.account.login method to provide a simplified interface for logging into OpenBB Hub. It optionally accepts a PAT. If no PAT is provided, it attempts to use the PAT stored in the environment variable OBB_PAT.

Parameters:

    pat (str | None, default None)
        The personal access token for authentication. If None, the token is retrieved from the environment variable OBB_PAT.

Returns:

    bool
        True if login is successful, False otherwise.

Raises:

    HumblDataError
        If an error occurs during the login process.

Examples:

>>> # obb_login("your_personal_access_token_here")
True
>>> # obb_login()  # Assumes `OBB_PAT` is set in the environment
True
Source code in src/humbldata/core/utils/openbb_helpers.py
def obb_login(pat: str | None = None) -> bool:
    """
    Log into the OpenBB Hub using a Personal Access Token (PAT).

    This function wraps the `obb.account.login` method to provide a simplified
    interface for logging into OpenBB Hub. It optionally accepts a PAT. If no PAT
    is provided, it attempts to use the PAT stored in the environment variable
    `OBB_PAT`.

    Parameters
    ----------
    pat : str | None, optional
        The personal access token for authentication. If None, the token is
        retrieved from the environment variable `OBB_PAT`. Default is None.

    Returns
    -------
    bool
        True if login is successful, False otherwise.

    Raises
    ------
    HumblDataError
        If an error occurs during the login process.

    Examples
    --------
    >>> # obb_login("your_personal_access_token_here")
    True

    >>> # obb_login()  # Assumes `OBB_PAT` is set in the environment
    True

    """
    if pat is None:
        pat = Env().OBB_PAT
    try:
        obb.account.login(pat=pat, remember_me=True)
        # obb.account.save()

        # dotenv.set_key(dotenv.find_dotenv(), "OBB_LOGGED_IN", "true")

        return True
    except Exception as e:
        from humbldata.core.standard_models.abstract.warnings import (
            HumblDataWarning,
        )

        # dotenv.set_key(dotenv.find_dotenv(), "OBB_LOGGED_IN", "false")

        warnings.warn(
            "An error occurred while logging into OpenBB. Details below:\n"
            + repr(e),
            category=HumblDataWarning,
            stacklevel=1,
        )
        return False

humbldata.core.utils.openbb_helpers.get_latest_price ¤

get_latest_price(symbol: str | list[str] | Series, provider: OBB_EQUITY_PRICE_QUOTE_PROVIDERS | None = 'yfinance') -> LazyFrame

Context: Core || Category: Utils || Subcategory: OpenBB Helpers || Command: get_latest_price.

Queries the latest stock price data for the given symbol(s) using the specified provider. Defaults to YahooFinance (yfinance) if no provider is specified. Returns a LazyFrame with the stock symbols and their latest prices.

Parameters:

    symbol (str | list[str] | Series, required)
        The stock symbol(s) to query for the latest price. Accepts a single symbol, a list of symbols, or a Polars Series of symbols.
    provider (OBB_EQUITY_PRICE_QUOTE_PROVIDERS | None, default 'yfinance')
        The data provider for fetching stock prices. Default is yfinance; if None, a default provider is used.

Returns:

    LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and their latest prices ('recent_price', renamed from the provider's 'last_price').

Source code in src/humbldata/core/utils/openbb_helpers.py
def get_latest_price(
    symbol: str | list[str] | pl.Series,
    provider: OBB_EQUITY_PRICE_QUOTE_PROVIDERS | None = "yfinance",
) -> pl.LazyFrame:
    """
    Context: Core || Category: Utils || Subcategory: OpenBB Helpers || **Command: get_latest_price**.

    Queries the latest stock price data for the given symbol(s) using the
    specified provider. Defaults to YahooFinance (`yfinance`) if no provider is
    specified. Returns a LazyFrame with the stock symbols and their latest prices.

    Parameters
    ----------
    symbol : str | list[str] | pl.Series
        The stock symbol(s) to query for the latest price. Accepts a single
        symbol, a list of symbols, or a Polars Series of symbols.
    provider : OBB_EQUITY_PRICE_QUOTE_PROVIDERS, optional
        The data provider for fetching stock prices. Default is `yfinance`;
        if None, a default provider is used.

    Returns
    -------
    pl.LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and
        their latest prices ('recent_price').
    """
    logging.getLogger("openbb_terminal.stocks.stocks_model").setLevel(
        logging.CRITICAL
    )

    return (
        obb.equity.price.quote(symbol, provider=provider)
        .to_polars()
        .lazy()
        .select(["symbol", "last_price"])
        .rename({"last_price": "recent_price"})
    )
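
A usage sketch, assuming a working OpenBB installation and network access; the symbols are illustrative.

>>> from humbldata.core.utils.openbb_helpers import get_latest_price
>>> prices = get_latest_price(["AAPL", "MSFT"])  # the OpenBB query runs here; a LazyFrame is returned
>>> prices.collect()  # materializes the 'symbol' and 'recent_price' columns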

humbldata.core.utils.openbb_helpers.aget_latest_price async ¤

aget_latest_price(symbols: str | list[str] | Series, provider: OBB_EQUITY_PRICE_QUOTE_PROVIDERS | None = 'yfinance') -> LazyFrame

Asynchronous version of get_latest_price.

Context: Core || Category: Utils || Subcategory: OpenBB Helpers || Command: get_latest_price_async.

Queries the latest stock price data for the given symbol(s) using the specified provider asynchronously. This function collects the latest prices for ETFs and equities, but not futures or options. Defaults to YahooFinance (yfinance) if no provider is specified. Returns a LazyFrame with the stock symbols and their latest prices.

Parameters:

    symbols (str | list[str] | Series, required)
        The stock symbol(s) to query for the latest price. Accepts a single symbol, a list of symbols, or a Polars Series of symbols. You can pass multiple symbols as a single string, e.g. 'AAPL,XLE', and it will be split into a list of symbols.
    provider (OBB_EQUITY_PRICE_QUOTE_PROVIDERS | None, default 'yfinance')
        The data provider for fetching stock prices.

Returns:

    LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and their latest prices ('last_price').

Notes

If entering symbols as a string, DO NOT include spaces between the symbols.

Source code in src/humbldata/core/utils/openbb_helpers.py
async def aget_latest_price(
    symbols: str | list[str] | pl.Series,
    provider: OBB_EQUITY_PRICE_QUOTE_PROVIDERS | None = "yfinance",
) -> pl.LazyFrame:
    """
    Asynchronous version of get_latest_price.

    Context: Core || Category: Utils || Subcategory: OpenBB Helpers || **Command: get_latest_price_async**.

    Queries the latest stock price data for the given symbol(s) using the
    specified provider asynchronously. This function collects the latest prices
    for ETFs and equities, but not futures or options. Defaults to YahooFinance
    (`yfinance`) if no provider is specified. Returns a LazyFrame with the stock
    symbols and their latest prices.

    Parameters
    ----------
    symbols : str | List[str] | pl.Series
        The stock symbol(s) to query for the latest price. Accepts a single
        symbol, a list of symbols, or a Polars Series of symbols.
        You can pass multiple symbols as a string; `'AAPL,XLE'`, and it will
        split the string into a list of symbols.
    provider : OBB_EQUITY_PRICE_QUOTE_PROVIDERS, optional
        The data provider for fetching stock prices. Default is `yfinance`.

    Returns
    -------
    pl.LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and
        their latest prices ('last_price').

    Notes
    -----
    If entering symbols as a string, DO NOT include spaces between the symbols.
    """
    loop = asyncio.get_event_loop()
    result = await loop.run_in_executor(
        None, lambda: obb.equity.price.quote(symbols, provider=provider)
    )
    out = result.to_polars().lazy()
    if {"last_price", "prev_close"}.issubset(out.collect_schema().names()):
        out = out.select(
            [
                pl.when(pl.col("asset_type") == "ETF")
                .then(pl.col("prev_close"))
                .otherwise(pl.col("last_price"))
                .alias("last_price"),
                pl.col("symbol"),
            ]
        )
    elif "last_price" not in out.collect_schema().names():
        out = out.select(
            pl.col("symbol"), pl.col("prev_close").alias("last_price")
        )
    else:
        out = out.select(pl.col("symbol"), pl.col("last_price"))

    return out
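
Because this is a coroutine, it must be awaited or driven by an event loop. A sketch using asyncio.run, assuming OpenBB access is configured; the symbols are illustrative.

>>> import asyncio
>>> from humbldata.core.utils.openbb_helpers import aget_latest_price
>>> lf = asyncio.run(aget_latest_price("AAPL,XLE"))  # comma-separated string, no spaces
>>> lf.collect()  # one row per symbol with 'symbol' and 'last_price'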

humbldata.core.utils.openbb_helpers.aget_last_close async ¤

aget_last_close(symbols: str | list[str] | Series, provider: OBB_EQUITY_PRICE_QUOTE_PROVIDERS = 'yfinance') -> LazyFrame

Context: Core || Category: Utils || Subcategory: OpenBB Helpers || Command: aget_last_close.

Asynchronously retrieves the last closing price for the given stock symbol(s) using OpenBB's equity price quote data.

Parameters:

    symbols (str | list[str] | Series, required)
        The stock symbol(s) to query for the last closing price. Accepts a single symbol, a list of symbols, or a Polars Series of symbols. You can pass multiple symbols as a single string, e.g. 'AAPL,XLE', and it will be split into a list of symbols.
    provider (OBB_EQUITY_PRICE_QUOTE_PROVIDERS, default 'yfinance')
        The data provider for fetching stock prices.

Returns:

    LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and their last closing prices ('prev_close').

Notes

This function uses OpenBB's equity price quote data to fetch the last closing price. It returns a lazy frame for efficient processing, especially with large datasets.

If entering symbols as a string, DO NOT include spaces between the symbols.

Source code in src/humbldata/core/utils/openbb_helpers.py
async def aget_last_close(
    symbols: str | list[str] | pl.Series,
    provider: OBB_EQUITY_PRICE_QUOTE_PROVIDERS = "yfinance",
) -> pl.LazyFrame:
    """
    Context: Core || Category: Utils || Subcategory: OpenBB Helpers || **Command: aget_last_close**.

    Asynchronously retrieves the last closing price for the given stock symbol(s) using OpenBB's equity price quote data.

    Parameters
    ----------
    symbols : str | List[str] | pl.Series
        The stock symbol(s) to query for the last closing price. Accepts a single
        symbol, a list of symbols, or a Polars Series of symbols. You can pass
        multiple symbols as a string; `'AAPL,XLE'`, and it will split the string
        into a list of symbols.
    provider : OBB_EQUITY_PRICE_QUOTE_PROVIDERS, optional
        The data provider for fetching stock prices. Default is `yfinance`.

    Returns
    -------
    pl.LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and
        their last closing prices ('prev_close').

    Notes
    -----
    This function uses OpenBB's equity price quote data to fetch the last closing price.
    It returns a lazy frame for efficient processing, especially with large datasets.

    If entering symbols as a string, DO NOT include spaces between the symbols.
    """
    loop = asyncio.get_event_loop()
    result = await loop.run_in_executor(
        None, lambda: obb.equity.price.quote(symbols, provider=provider)
    )
    out = result.to_polars().lazy()

    return out.select(pl.col("symbol"), pl.col("prev_close"))

humbldata.core.utils.openbb_helpers.get_equity_sector ¤

get_equity_sector(symbols: str | list[str] | Series, provider: OBB_EQUITY_PROFILE_PROVIDERS | None = 'yfinance') -> LazyFrame

Context: Core || Category: Utils || Subcategory: OpenBB Helpers || Command: get_sector.

Retrieves the sector information for the given stock symbol(s) using OpenBB's equity profile data.

Parameters:

    symbols (str | list[str] | Series, required)
        The stock symbol(s) to query for sector information. Accepts a single symbol, a list of symbols, or a Polars Series of symbols.
    provider (str | None, default 'yfinance')
        The data provider to use for fetching sector information. If None, the default provider will be used.

Returns:

    LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and their corresponding sectors ('sector').

Notes

This function uses OpenBB's equity profile data to fetch sector information. It returns a lazy frame for efficient processing, especially with large datasets.

Source code in src/humbldata/core/utils/openbb_helpers.py
def get_equity_sector(
    symbols: str | list[str] | pl.Series,
    provider: OBB_EQUITY_PROFILE_PROVIDERS | None = "yfinance",
) -> pl.LazyFrame:
    """
    Context: Core || Category: Utils || Subcategory: OpenBB Helpers || **Command: get_sector**.

    Retrieves the sector information for the given stock symbol(s) using OpenBB's equity profile data.

    Parameters
    ----------
    symbols : str | list[str] | pl.Series
        The stock symbol(s) to query for sector information. Accepts a single
        symbol, a list of symbols, or a Polars Series of symbols.
    provider : str | None, optional
        The data provider to use for fetching sector information. If None, the default
        provider will be used.

    Returns
    -------
    pl.LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and
        their corresponding sectors ('sector').

    Notes
    -----
    This function uses OpenBB's equity profile data to fetch sector information.
    It returns a lazy frame for efficient processing, especially with large datasets.
    """
    try:
        result = obb.equity.profile(symbols, provider=provider)
        return result.to_polars().select(["symbol", "sector"]).lazy()
    except pl.exceptions.ColumnNotFoundError:
        # If an error occurs, return a LazyFrame with symbol and null sector
        if isinstance(symbols, str):
            symbols = [symbols]
        elif isinstance(symbols, pl.Series):
            symbols = symbols.to_list()
        return pl.LazyFrame(
            {"symbol": symbols, "sector": [None] * len(symbols)}
        )

humbldata.core.utils.openbb_helpers.aget_equity_sector async ¤

aget_equity_sector(symbols: str | list[str] | Series, provider: OBB_EQUITY_PROFILE_PROVIDERS | None = 'yfinance') -> LazyFrame

Asynchronous version of get_sector.

Context: Core || Category: Utils || Subcategory: OpenBB Helpers || Command: get_sector_async.

Retrieves the sector information for the given stock symbol(s) using OpenBB's equity profile data asynchronously. If an ETF is passed, it will return a NULL sector for the symbol. The sector returned has not been normalized to GICS_SECTORS; it is the raw OpenBB sector output. Sectors are normalized to GICS_SECTORS in the aget_sector_filter function.

Parameters:

    symbols (str | list[str] | Series, required)
        The stock symbol(s) to query for sector information. Accepts a single symbol, a list of symbols, or a Polars Series of symbols.
    provider (str | None, default 'yfinance')
        The data provider to use for fetching sector information. If None, the default provider will be used.

Returns:

    LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and their corresponding sectors ('sector').

Notes

This function uses OpenBB's equity profile data to fetch sector information. It returns a lazy frame for efficient processing, especially with large datasets.

If you pass only an ETF to the obb.equity.profile function, it returns data without the all-NULL columns (the sector column included); it only returns columns where there is data, so we need to handle that edge case. If an ETF is included alongside an equity, the response keeps the sector column with a NULL value for the ETF, so we can select the sector column as usual and return a NULL sector for the ETF.

Source code in src/humbldata/core/utils/openbb_helpers.py
async def aget_equity_sector(
    symbols: str | list[str] | pl.Series,
    provider: OBB_EQUITY_PROFILE_PROVIDERS | None = "yfinance",
) -> pl.LazyFrame:
    """
    Asynchronous version of get_sector.

    Context: Core || Category: Utils || Subcategory: OpenBB Helpers || **Command: get_sector_async**.

    Retrieves the sector information for the given stock symbol(s) using
    OpenBB's equity profile data asynchronously. If an ETF is passed, it will
    return a NULL sector for the symbol. The sector returned has not been
    normalized to GICS_SECTORS; it is the raw OpenBB sector output.
    Sectors are normalized to GICS_SECTORS in the `aget_sector_filter` function.

    Parameters
    ----------
    symbols : str | List[str] | pl.Series
        The stock symbol(s) to query for sector information. Accepts a single
        symbol, a list of symbols, or a Polars Series of symbols.
    provider : str | None, optional
        The data provider to use for fetching sector information. If None, the default
        provider will be used.

    Returns
    -------
    pl.LazyFrame
        A Polars LazyFrame with columns for the stock symbols ('symbol') and
        their corresponding sectors ('sector').

    Notes
    -----
    This function uses OpenBB's equity profile data to fetch sector information.
    It returns a lazy frame for efficient processing, especially with large datasets.

    If you pass only an ETF to the `obb.equity.profile` function, it returns
    data without the all-NULL columns (the sector column included); it only
    returns columns where there is data, so we need to handle that edge case.
    If an ETF is included alongside an equity, the response keeps the sector
    column with a NULL value for the ETF, so we can select the sector column
    as usual and return a NULL sector for the ETF.
    """
    loop = asyncio.get_event_loop()
    try:
        result = await loop.run_in_executor(
            None, lambda: obb.equity.profile(symbols, provider=provider)
        )
        return result.to_polars().select(["symbol", "sector"]).lazy()
    except pl.exceptions.ColumnNotFoundError:
        # If an error occurs, return a LazyFrame with symbol and null sector
        if isinstance(symbols, str):
            symbols = [symbols]
        elif isinstance(symbols, pl.Series):
            symbols = symbols.to_list()
        return pl.LazyFrame(
            {"symbol": symbols, "sector": [None] * len(symbols)}
        ).cast(pl.Utf8)

humbldata.core.utils.openbb_helpers.aget_etf_category async ¤

aget_etf_category(symbols: str | list[str] | Series, provider: OBB_ETF_INFO_PROVIDERS | None = 'yfinance') -> LazyFrame

Asynchronously retrieves the category information for the given ETF symbol(s).

This function uses the obb.etf.info function and selects the category column to get the sector information. It handles EQUITY symbols that are not ETFs the same way that aget_equity_sector does. The sector returned (under the OpenBB column name category) has not been normalized to GICS_SECTORS; it is the raw OpenBB category output. Sectors are normalized to GICS_SECTORS in the aget_sector_filter function.

Parameters:

    symbols (str | list[str] | Series, required)
        The ETF symbol(s) to query for category information.
    provider (OBB_ETF_INFO_PROVIDERS | None, default 'yfinance')
        The data provider to use for fetching category information.

Returns:

    LazyFrame
        A Polars LazyFrame with columns for the ETF symbols ('symbol') and their corresponding categories ('category').

Source code in src/humbldata/core/utils/openbb_helpers.py
async def aget_etf_category(
    symbols: str | list[str] | pl.Series,
    provider: OBB_ETF_INFO_PROVIDERS | None = "yfinance",
) -> pl.LazyFrame:
    """
    Asynchronously retrieves the category information for the given ETF symbol(s).

    This function uses the `obb.etf.info` function and selects the `category`
    column to get the sector information. This function handles EQUITY
    symbols that are not ETFs the same way that `aget_equity_sector` does.
    The sector returned (under the OpenBB column name `category`) has not been
    normalized to GICS_SECTORS; it is the raw OpenBB category output.
    Sectors are normalized to GICS_SECTORS in the `aget_sector_filter` function.

    Parameters
    ----------
    symbols : str | list[str] | pl.Series
        The ETF symbol(s) to query for category information.
    provider : OBB_ETF_INFO_PROVIDERS | None, optional
        The data provider to use for fetching category information.

    Returns
    -------
    pl.LazyFrame
        A Polars LazyFrame with columns for the ETF symbols ('symbol') and
        their corresponding categories ('category').
    """
    loop = asyncio.get_event_loop()
    try:
        result = await loop.run_in_executor(
            None, lambda: obb.etf.info(symbols, provider=provider)
        )
        out = result.to_polars().lazy().select(["symbol", "category"])
        # Create a LazyFrame with all input symbols
        all_symbols = pl.LazyFrame({"symbol": symbols})

        # Left join to include all input symbols, filling missing sectors with null
        out = all_symbols.join(out, on="symbol", how="left").with_columns(
            [
                pl.when(pl.col("category").is_null())
                .then(None)
                .otherwise(pl.col("category"))
                .alias("category")
            ]
        )
    except OpenBBError:
        if isinstance(symbols, str):
            symbols = [symbols]
        elif isinstance(symbols, pl.Series):
            symbols = symbols.to_list()
        return pl.LazyFrame(
            {"symbol": symbols, "category": [None] * len(symbols)}
        ).cast(pl.Utf8)
    return out
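
Because the sector and category lookups are both coroutines, they can be gathered concurrently before the normalization step mentioned above. A minimal sketch, assuming a live OpenBB session; the wrapper function and symbols are illustrative.

>>> import asyncio
>>> from humbldata.core.utils.openbb_helpers import (
...     aget_equity_sector,
...     aget_etf_category,
... )
>>> async def sector_and_category(symbols):
...     return await asyncio.gather(
...         aget_equity_sector(symbols),  # ETFs come back with a null sector
...         aget_etf_category(symbols),   # plain equities come back with a null category
...     )
>>> sectors, categories = asyncio.run(sector_and_category(["AAPL", "XLE"]))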

humbldata.core.utils.core_helpers ¤

A module to contain core helper functions for the program.

humbldata.core.utils.core_helpers.is_debug_mode ¤

is_debug_mode() -> bool

Check if the current system is in debug mode.

Returns:

    bool
        True if the system is in debug mode, False otherwise.

Source code in src/humbldata/core/utils/core_helpers.py
def is_debug_mode() -> bool:
    """
    Check if the current system is in debug mode.

    Returns
    -------
    bool
        True if the system is in debug mode, False otherwise.
    """
    return False

humbldata.core.utils.core_helpers.run_async ¤

run_async(coro)

Run an async function in a new thread and return the result.

Source code in src/humbldata/core/utils/core_helpers.py
def run_async(coro):
    """Run an async function in a new thread and return the result."""
    with ThreadPoolExecutor() as executor:
        future = executor.submit(lambda: asyncio.run(coro))
        return future.result()
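
run_async is useful when calling the async OpenBB helpers above from synchronous code (or from a context that already owns an event loop), since the coroutine is executed in a fresh thread with its own loop. A short sketch, assuming network access; the symbol is illustrative.

>>> from humbldata.core.utils.core_helpers import run_async
>>> from humbldata.core.utils.openbb_helpers import aget_last_close
>>> run_async(aget_last_close("AAPL")).collect()  # 'symbol' and 'prev_close' columns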