scale - Implementing a scale
According to "On the theory of scales of measurement" by S.S. Stevens, scales can be classified in four ways -- nominal, ordinal, interval and ratio. Using current (2016) terminology, nominal data is made up of unordered categories and ordinal data is made up of ordered categories; the two can be classified as discrete. On the other hand, both interval and ratio data are continuous.
The scale classes below show how the rest of the Mizani package can be used to implement the two categories of scales. The key tasks are training and mapping, and they correspond to the train and map methods.
To train a scale on data means to make the scale learn the limits of the data. This is involved enough to deserve a dedicated method for two reasons (a short sketch follows the list):
Practical -- data may be split up across more than one object, yet all will be represented by a single scale.
Conceptual -- training is a key action that may need to be inserted into multiple locations of the data processing pipeline before a graphic can be created.
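For example, a minimal sketch (with arbitrary numbers) of training a continuous scale on data that arrives in two batches; the limits learned from the first batch are fed back in as old so the final limits cover both batches:

```python
import numpy as np

from mizani.scale import scale_continuous

# Data split across two objects, but meant to share a single scale
batch1 = np.array([3.0, 7.5, 1.2])
batch2 = np.array([9.9, -2.0, 4.4])

limits = scale_continuous.train(batch1)          # limits of batch1 alone
limits = scale_continuous.train(batch2, limits)  # limits covering both batches
print(limits)                                    # expect (-2.0, 9.9)
```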
To map data onto a scale means to associate data values with values (potential readings) on the scale. This is perhaps the most important concept underpinning a scale.
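A minimal sketch of mapping, using a toy palette callable purely for illustration (a real palette would typically come from mizani.palettes):

```python
import numpy as np

from mizani.scale import scale_continuous

def toy_palette(x):
    # Stand-in palette: turns values in the unit interval into
    # "readings" between 0 and 100
    return np.asarray(x) * 100

x = np.array([1.0, 5.0, 10.0])
limits = (0.0, 10.0)                              # e.g. obtained from train
readings = scale_continuous.map(x, toy_palette, limits)
```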
The apply methods are simple examples of how to put it all together.
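A rough sketch of the continuous case with the same toy palette; apply first trains on the data to learn the limits and then maps the data through the palette:

```python
import numpy as np

from mizani.scale import scale_continuous

x = np.array([1.0, 5.0, 10.0])
# Train (learn the limits of x) and map (pass x through the palette) in one call
readings = scale_continuous.apply(x, lambda v: np.asarray(v) * 100)
```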
- class mizani.scale.scale_continuous
Continuous scale
- classmethod apply(x: FloatArrayLike, palette: ContinuousPalette, na_value: Any = None, trans: Trans | None = None) -> NDArrayFloat
Scale data continuously
- Parameters:
- x : array_like
Continuous values to scale
- palette : callable f(x)
Palette to use
- na_value : object
Value to use for missing values.
- trans : Trans
How to transform the data before scaling. If None, no transformation is done.
- Returns:
- out : array_like
Scaled values
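A hedged sketch of the na_value argument, assuming that missing inputs come out of the scaling as na_value rather than propagating as NaN (the palette is again a toy callable):

```python
import numpy as np

from mizani.scale import scale_continuous

x = np.array([1.0, np.nan, 10.0])
# The missing value should come out as na_value (-1 here) instead of NaN
readings = scale_continuous.apply(x, lambda v: np.asarray(v) * 100, na_value=-1)
```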
- classmethod train(new_data: FloatArrayLike, old: tuple[float, float] | None = None) -> tuple[float, float]
Train a continuous scale
- Parameters:
- new_data : array_like
New values
- old : tuple
Old range
- Returns:
- out : tuple
Limits (range) of the scale
- classmethod map(x: FloatArrayLike, palette: ContinuousPalette, limits: tuple[float, float], na_value: Any = None, oob: Callable[[TVector], TVector] = censor) -> NDArrayFloat
Map values to a continuous palette
- Parameters:
- x : array_like
Continuous values to scale
- palette : callable f(x)
Palette to use
- limits : tuple
Limits (range) of the scale
- na_value : object
Value to use for missing values.
- oob : callable f(x)
Function to deal with values that are beyond the limits
- Returns:
- out : array_like
Values mapped onto a palette
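A hedged sketch of the oob argument. It assumes mizani.bounds provides squish (clamp values to the limits) alongside the default censor (turn out-of-bounds values into missing values, which then become na_value):

```python
import numpy as np

from mizani.bounds import squish
from mizani.scale import scale_continuous

def toy_palette(v):
    return np.asarray(v) * 100

x = np.array([-5.0, 5.0, 15.0])    # first and last values lie outside the limits
limits = (0.0, 10.0)

censored = scale_continuous.map(x, toy_palette, limits)              # default oob=censor
squished = scale_continuous.map(x, toy_palette, limits, oob=squish)  # clamp instead
```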
- class mizani.scale.scale_discrete
Discrete scale
- classmethod apply(x: AnyArrayLike, palette: DiscretePalette, na_value: Any = None)
Scale data discretely
- Parameters:
- x : array_like
Discrete values to scale
- palette : callable f(x)
Palette to use
- na_value : object
Value to use for missing values.
- Returns:
- out : array_like
Scaled values
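A hedged sketch with a toy discrete palette; the assumption (consistent with the DiscretePalette annotation above) is that the palette is a callable that takes the number of categories and returns that many values:

```python
from mizani.scale import scale_discrete

def toy_discrete_palette(n):
    # Stand-in palette: one value per category
    return [f"value-{i}" for i in range(n)]

x = ["a", "b", "a", "c"]
scaled = scale_discrete.apply(x, toy_discrete_palette)
# Each entry of x should pick up the palette value for its category
```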
- classmethod train(new_data: AnyArrayLike, old: Sequence[Any] | None = None, drop: bool = False, na_rm: bool = False) -> Sequence[Any]
Train a discrete scale
- Parameters:
- new_data : array_like
New values
- old : array_like
Old range. List of values known to the scale.
- drop : bool
Whether to drop (not include) unused categories
- na_rm : bool
If True, remove missing values. Missing values are either NaN or None.
- Returns:
- out : list
Values covered by the scale
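As with the continuous scale, a minimal sketch of training on data that arrives in batches; the values known to the scale accumulate through old:

```python
from mizani.scale import scale_discrete

batch1 = ["a", "b"]
batch2 = ["b", "c"]

limits = scale_discrete.train(batch1)          # categories seen so far
limits = scale_discrete.train(batch2, limits)  # union of both batches
print(limits)                                  # expect ['a', 'b', 'c']
```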
- classmethod map(x: AnyArrayLike, palette: DiscretePalette, limits: Sequence[Any], na_value: Any = None) -> AnyArrayLike
Map values to a discrete palette
- Parameters:
- x : array_like
Discrete values to scale
- palette : callable f(x)
Palette to use
- limits : list
Values covered by the scale
- na_value : object
Value to use for missing values.
- Returns:
- out : array_like
Values mapped onto a palette
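A hedged sketch of mapping discrete values onto a toy palette, given limits such as those returned by train (the palette callable is an assumption, as above):

```python
from mizani.scale import scale_discrete

def toy_discrete_palette(n):
    # Stand-in palette: one value per category
    return [f"value-{i}" for i in range(n)]

limits = ["a", "b", "c"]                 # e.g. from scale_discrete.train
x = ["c", "a", "c"]
mapped = scale_discrete.map(x, toy_discrete_palette, limits)
# Each value of x should map to the palette entry for its category
```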