dataquality.dq_start package#
Module contents#
- class BaseInsights(model, *args, **kwargs)#
Bases: ABC
Base class for dq start integrations.
Initialize the base class.
- Parameters:
  - model (Any) – The model to be tracked
  - args (Any) – Positional arguments to be passed to the watch function
  - kwargs (Any) – Keyword arguments to be passed to the watch function
- framework: ModelFramework#
- watch: Optional[Callable]#
- unwatch: Optional[Callable]#
- call_finish: bool = True#
- enter()#
Call the watch function (called in __enter__).
- Return type: None
- exit()#
Call the unwatch function (called in __exit__).
- Return type: None
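The enter()/exit() pair follows Python's context-manager protocol. A minimal sketch of the call order, assuming insights is an instance of a BaseInsights subclass and model.fit(train_data) is a placeholder for your own training call:

# Illustrative only: the concrete watch/unwatch callables come from the
# framework-specific subclasses (e.g. TorchInsights).
insights.enter()             # calls the watch function, as in __enter__
try:
    model.fit(train_data)    # placeholder training call
finally:
    insights.exit()          # calls the unwatch function, as in __exit__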
- set_project_run(project='', run='', task=TaskType.text_classification)#
Set the project and run names on the class. If project and run are not provided, they are generated.
- Parameters:
  - project (str) – The project name
  - run (str) – The run name
  - task (TaskType) – The task type
- Return type: None
- init_project(task, project='', run='')#
Initialize the project and call dq.init().
- Parameters:
  - task (TaskType) – The task type
  - project (str) – The project name
  - run (str) – The run name
- Return type: None
- setup_training(labels, train_data, test_data=None, val_data=None)#
Log dataset and labels to the run.
- Parameters:
  - labels (Optional[List[str]]) – The labels
  - train_data (Any) – The training dataset
  - test_data (Optional[Any]) – The test dataset
  - val_data (Optional[Any]) – The validation dataset
- Return type: None
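For orientation, a hedged example of a direct call; in normal use the DataQuality context manager documented below drives this for you, and insights, train_df, and val_df are hypothetical placeholders:

insights.setup_training(
    labels=["neg", "pos"],   # class names for the run
    train_data=train_df,     # training dataset to log
    val_data=val_df,         # optional validation dataset
)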
- class TorchInsights(model)#
Bases: BaseInsights
Initialize the base class.
- Parameters:
  - model (Any) – The model to be tracked
  - args (Any) – Positional arguments to be passed to the watch function
  - kwargs (Any) – Keyword arguments to be passed to the watch function
- framework: ModelFramework = 'torch'#
- class TFInsights(model)#
Bases: BaseInsights
Initialize the base class.
- Parameters:
  - model (Any) – The model to be tracked
  - args (Any) – Positional arguments to be passed to the watch function
  - kwargs (Any) – Keyword arguments to be passed to the watch function
- framework: ModelFramework = 'keras'#
- class TrainerInsights(model)#
Bases: BaseInsights
Initialize the base class.
- Parameters:
  - model (Any) – The model to be tracked
  - args (Any) – Positional arguments to be passed to the watch function
  - kwargs (Any) – Keyword arguments to be passed to the watch function
- framework: ModelFramework = 'hf'#
- class AutoInsights(model)#
Bases: BaseInsights
Initialize the base class.
- Parameters:
  - model (Any) – The model to be tracked
  - args (Any) – Positional arguments to be passed to the watch function
  - kwargs (Any) – Keyword arguments to be passed to the watch function
- framework: ModelFramework = 'auto'#
- call_finish: bool = False#
- auto_kwargs: Dict[str, Any]#
- setup_training(labels, train_data, test_data=None, val_data=None)#
Set up auto by creating the parameters for the auto function.
- Parameters:
  - labels (Optional[List[str]]) – Labels for the training
  - train_data (Any) – Training dataset
  - test_data (Optional[Any]) – Test dataset
  - val_data (Optional[Any]) – Validation dataset
- Return type: None
- init_project(task, project='', run='')#
Initialize the project and run; dq.init() is not called.
- Parameters:
  - task (TaskType) – The task type
  - project (str) – The project name
  - run (str) – The run name
- Return type: None
- enter()#
Call the auto function with the generated parameters.
- Return type: None
- detect_model(model, framework)#
Detect the model type in a lazy way and return the appropriate class.
- Parameters:
  - model (Any) – The model to inspect; if a string, it will be assumed to be auto
  - framework (Optional[ModelFramework]) – The framework to use; if provided, it will be used instead of the model
- Return type: Type[BaseInsights]
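A hedged sketch of how the returned class might be used, assuming detect_model is importable from this module and model is a torch model; note that it returns a BaseInsights subclass, not an instance:

from dataquality.dq_start import detect_model

insights_cls = detect_model(model, framework=None)  # e.g. TorchInsights for a torch model
insights = insights_cls(model)                       # wrap the model for watching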
- class DataQuality(model=None, task=TaskType.text_classification, labels=None, train_data=None, test_data=None, val_data=None, project='', run='', framework=None, *args, **kwargs)#
Bases: object
- Parameters:
  - model (Optional[Any]) – The model to inspect; if a string, it will be assumed to be auto
  - task (TaskType) – Task type, for example "text_classification"
  - project (str) – Project name
  - run (str) – Run name
  - train_data (Optional[Any]) – Training data
  - test_data (Optional[Any]) – Optional test data
  - val_data (Optional[Any]) – Optional validation data
  - labels (Optional[List[str]]) – The labels for the run
  - framework (Optional[ModelFramework]) – The framework to use; if provided, it will be used instead of inferring it from the model. For example, if you have a torch model, you can pass framework="torch".
  - args (Any) – Additional arguments
  - kwargs (Any) – Additional keyword arguments
from dataquality import DataQuality

with DataQuality(model, "text_classification",
                 labels=["neg", "pos"],
                 train_data=train_data) as dq:
    model.fit(train_data)
If you want to train without a model, you can use the auto framework:
from dataquality import DataQuality

with DataQuality(labels=["neg", "pos"],
                 train_data=train_data) as dq:
    dq.finish()
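If you prefer not to rely on automatic detection, the framework can be set explicitly; a variant of the first example under the same assumptions about model and train_data:

from dataquality import DataQuality

with DataQuality(model, "text_classification",
                 labels=["neg", "pos"],
                 train_data=train_data,
                 framework="torch") as dq:   # skip inference, use the torch integration
    model.fit(train_data)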
- get_metrics(split=Split.train)#
- Return type: Dict[str, Any]
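A hedged usage sketch, assuming Split is importable from dataquality.schemas.split (an assumption about the import path) and that training completed inside the context manager:

from dataquality import DataQuality
from dataquality.schemas.split import Split  # assumed import path

with DataQuality(model, "text_classification",
                 labels=["neg", "pos"],
                 train_data=train_data) as dq:
    model.fit(train_data)

train_metrics = dq.get_metrics(split=Split.train)  # Dict[str, Any] of run metrics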