backprop.models.hf_seq_tc_model

backprop.models.hf_seq_tc_model.model

class HFSeqTCModel(model_path=None, tokenizer_path=None, name: Optional[str] = None, description: Optional[str] = None, tasks: Optional[List[str]] = None, details: Optional[Dict] = None, model_class=<class 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification'>, tokenizer_class=<class 'transformers.models.auto.tokenization_auto.AutoTokenizer'>, device=None)[source]

Bases: backprop.models.generic_models.HFModel

Class for Hugging Face sequence classification models.

model_path

Path to the Hugging Face model.

tokenizer_path

Path to the Hugging Face tokenizer.

name

String identifier for the model. Lowercase letters and numbers only; no spaces or special characters except dashes.

description

String description of the model.

tasks

List of supported task strings.

details

Dictionary of additional details about the model.

model_class

Class used to initialise the model.

tokenizer_class

Class used to initialise the tokenizer.

device

Device for the model. Defaults to "cuda" if available.
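
With the attributes above, a minimal instantiation might look as follows. This is a sketch rather than a documented recipe: the checkpoint name is a placeholder Hugging Face identifier, not a default of this class.

    from backprop.models.hf_seq_tc_model.model import HFSeqTCModel

    # Placeholder checkpoint: any Hugging Face sequence classification
    # checkpoint (or local path) should work for model_path/tokenizer_path.
    model = HFSeqTCModel(
        model_path="distilbert-base-uncased-finetuned-sst-2-english",
        tokenizer_path="distilbert-base-uncased-finetuned-sst-2-english",
        name="my-seq-tc-model",
        description="Example sequence classification model",
        tasks=["text-classification"],
        device="cpu",  # omit to default to "cuda" when available
    )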

__call__(task_input, task='text-classification', train=False)[source]

Uses the model for text classification. The model must already have been finetuned, as finetuning is what sets up the final classification layer.

Parameters
  • task_input – Input dictionary conforming to the text-classification task specification.

  • task – The task to perform; defaults to "text-classification".
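
A usage sketch for __call__. The exact shape of task_input is defined by the text-classification task specification, so the "text" key below is an assumption rather than documented behaviour:

    # Assumed input shape; consult the text-classification task
    # specification for the authoritative format.
    task_input = {"text": "This library is easy to use!"}
    output = model(task_input, task="text-classification")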

encode(text, target, max_input_length=128)[source]
get_label_probabilities(outputs, top_k)[source]
init_pre_finetune(labels)[source]
static list_models()[source]
process_batch(params, task='text-classification')[source]
training: bool
training_step(batch, task='text-classification')[source]
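
Taken together, the finetuning-related methods suggest a flow along the following lines. This is a hypothetical sketch: the return values of encode and training_step, and the assumption that a single encoded example can be passed directly as a batch, are not documented here.

    # Hypothetical finetuning flow. init_pre_finetune is assumed to set up
    # the final classification layer from the label list (see __call__ above).
    labels = ["negative", "positive"]
    model.init_pre_finetune(labels)

    # encode is assumed to return tensors usable by training_step; in
    # practice, examples would typically be collated into batches first.
    batch = model.encode("A great example sentence.", target="positive",
                         max_input_length=128)
    loss = model.training_step(batch, task="text-classification")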

backprop.models.hf_seq_tc_model.models_list