backprop.models.clip

backprop.models.clip.clip

available_models()[source]
load(name: str, device: Union[str, torch.device] = 'cpu', jit=False)[source]
tokenize(tokenizer, texts: Union[str, List[str]], context_length: int = 77, truncation=True)[source]
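The three helpers above can be combined along these lines. A minimal sketch: the assumption that load() returns a (model, preprocess) pair and that tokenize() returns a tensor padded to context_length follows the OpenAI CLIP reference implementation this module is based on.

    from backprop.models.clip.clip import available_models, load, tokenize
    from backprop.models.clip.simple_tokenizer import SimpleTokenizer

    print(available_models())                     # e.g. ["RN50", "RN101", "RN50x4", "ViT-B/32"]

    # assumed: returns the model and its preprocessing transform, as in OpenAI CLIP
    model, preprocess = load("ViT-B/32", device="cpu", jit=False)

    tokenizer = SimpleTokenizer()
    tokens = tokenize(tokenizer, ["a photo of a dog", "a photo of a cat"],
                      context_length=77, truncation=True)
    print(tokens.shape)                           # assumed: (2, 77), padded to context_length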

backprop.models.clip.model

class AttentionPool2d(spacial_dim: int, embed_dim: int, num_heads: int, output_dim: Optional[int] = None)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class Bottleneck(inplanes, planes, stride=1)[source]

Bases: torch.nn.modules.module.Module

expansion = 4
forward(x: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class CLIP(embed_dim: int, image_resolution: int, vision_layers: Union[Tuple[int, int, int, int], int], vision_width: int, vision_patch_size: int, context_length: int, vocab_size: int, transformer_width: int, transformer_heads: int, transformer_layers: int)[source]

Bases: torch.nn.modules.module.Module

build_attention_mask()[source]
property dtype
encode_image(image)[source]
encode_text(text)[source]
forward(image, text)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

initialize_parameters()[source]
training: bool
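A hedged sketch of how the encoder methods fit together, assuming load() returns a (model, preprocess) pair and that forward(image, text) returns per-image and per-text similarity logits, as in the OpenAI reference implementation:

    import torch
    from backprop.models.clip.clip import load, tokenize
    from backprop.models.clip.simple_tokenizer import SimpleTokenizer

    model, preprocess = load("ViT-B/32", device="cpu")          # assumed (model, preprocess) return
    tokens = tokenize(SimpleTokenizer(), ["a dog", "a cat"])    # (2, context_length)
    image = torch.randn(1, 3, 224, 224)                         # stand-in for a preprocessed image

    with torch.no_grad():
        image_features = model.encode_image(image)              # (1, embed_dim)
        text_features = model.encode_text(tokens)               # (2, embed_dim)
        logits_per_image, logits_per_text = model(image, tokens)
        probs = logits_per_image.softmax(dim=-1)                # image-to-text similarity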
class LayerNorm(normalized_shape: Union[int, List[int], torch.Size], eps: float = 1e-05, elementwise_affine: bool = True)[source]

Bases: torch.nn.modules.normalization.LayerNorm

Subclass torch’s LayerNorm to handle fp16.

elementwise_affine: bool
eps: float
forward(x: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

normalized_shape: Union[int, List[int], torch.Size]
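The fp16 handling typically amounts to computing the normalisation in fp32 and casting back to the input dtype; a minimal sketch of that pattern (hypothetical class name):

    import torch
    import torch.nn as nn

    class Fp32LayerNorm(nn.LayerNorm):
        """Normalise in fp32, return in the caller's dtype (fp16-safe)."""

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            orig_type = x.dtype
            out = super().forward(x.to(torch.float32))  # statistics computed in fp32
            return out.to(orig_type)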
class ModifiedResNet(layers, output_dim, heads, input_resolution=224, width=64)[source]

Bases: torch.nn.modules.module.Module

A ResNet class that is similar to torchvision’s but contains the following changes:

  • There are now 3 “stem” convolutions as opposed to 1, with an average pool instead of a max pool.

  • Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 (see the sketch after this class listing).

  • The final pooling layer is a QKV attention instead of an average pool.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
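The anti-aliased striding described in the class docstring can be illustrated with a small sketch (a hypothetical helper, not part of this module): the downsampling is done by an average pool, and the convolution itself runs with stride 1.

    import torch.nn as nn

    def antialiased_conv3x3(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
        """AvgPool2d performs the striding; the conv keeps stride 1 (anti-aliasing)."""
        layers = []
        if stride > 1:
            layers.append(nn.AvgPool2d(stride))
        layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1,
                                padding=1, bias=False))
        return nn.Sequential(*layers)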
class QuickGELU[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
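QuickGELU is commonly implemented as the sigmoid approximation of GELU; a minimal sketch (hypothetical class name):

    import torch
    import torch.nn as nn

    class QuickGELUSketch(nn.Module):
        """Sigmoid approximation of GELU: x * sigmoid(1.702 * x)."""

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x * torch.sigmoid(1.702 * x)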
class ResidualAttentionBlock(d_model: int, n_head: int, attn_mask: Optional[torch.Tensor] = None)[source]

Bases: torch.nn.modules.module.Module

attention(x: torch.Tensor)[source]
forward(x: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
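A sketch of the pre-norm residual pattern this block follows: LayerNorm, self-attention and MLP, each wrapped in a residual connection. The real block uses the QuickGELU and fp16-safe LayerNorm defined in this module; the stand-ins below are assumptions made to keep the example self-contained.

    import torch
    import torch.nn as nn

    class ResidualAttentionBlockSketch(nn.Module):
        def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_head)
            self.ln_1 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, d_model * 4),
                nn.GELU(),                       # stand-in for QuickGELU
                nn.Linear(d_model * 4, d_model),
            )
            self.ln_2 = nn.LayerNorm(d_model)
            self.attn_mask = attn_mask

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x is (sequence, batch, d_model), as expected by nn.MultiheadAttention
            a = self.ln_1(x)
            x = x + self.attn(a, a, a, need_weights=False, attn_mask=self.attn_mask)[0]
            x = x + self.mlp(self.ln_2(x))
            return x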
class Transformer(width: int, layers: int, heads: int, attn_mask: Optional[torch.Tensor] = None)[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class VisualTransformer(input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int)[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
build_model(state_dict: dict)[source]
convert_weights(model: torch.nn.modules.module.Module)[source]

Convert applicable model parameters to fp16
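A minimal sketch of what converting “applicable” parameters usually means here: halve the weights of conv and linear layers while leaving, for example, LayerNorm parameters in fp32. The exact set of converted modules (such as attention projections) is an assumption.

    import torch.nn as nn

    def convert_weights_sketch(model: nn.Module) -> None:
        def _to_fp16(layer: nn.Module) -> None:
            if isinstance(layer, (nn.Conv1d, nn.Conv2d, nn.Linear)):
                layer.weight.data = layer.weight.data.half()
                if layer.bias is not None:
                    layer.bias.data = layer.bias.data.half()

        model.apply(_to_fp16)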

backprop.models.clip.models_list

backprop.models.clip.module

class CLIP(model_path='ViT-B/32', init_model=<function load>, init_tokenizer=<class 'backprop.models.clip.simple_tokenizer.SimpleTokenizer'>, name: Optional[str] = None, description: Optional[str] = None, tasks: Optional[List[str]] = None, details: Optional[Dict] = None, device=None)[source]

Bases: backprop.models.generic_models.BaseModel

CLIP (Contrastive Language-Image Pre-training) is a model by OpenAI that maps images and text into a shared embedding space.

model_path

one of ViT-B/32, RN50, RN101, RN50x4

init_model

callable that initialises the model from model_path

init_tokenizer

callable that initialises the tokenizer

name

string identifier for the model. Lowercase letters and numbers. No spaces/special characters except dashes.

description

String description of the model.

tasks

List of supported task strings

details

Dictionary of additional details about the model

device

Device for model. Defaults to “cuda” if available.

__call__(task_input, task='image-classification', return_tensor=False)[source]

Do inference with the model.

Parameters
  • task_input – input dictionary according to task

  • task – one of supported tasks

  • return_tensor – return a tensor instead of a list for vectorisation output

image_classification(image: torch._C.TensorType, text: torch._C.TensorType, labels, top_k=10000)[source]
image_text_vectorisation(image: torch._C.TensorType, text: torch._C.TensorType)[source]
image_vectorisation(image: torch._C.TensorType)[source]
static list_models()[source]
process_batch(params, task)[source]
text_vectorisation(text: torch._C.TensorType)[source]
training: bool
training_step(params, task)[source]
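A hedged usage sketch of the wrapper class. The task_input keys shown (“image”, “labels”) are illustrative assumptions inferred from the task name, not a documented contract:

    from backprop.models.clip.module import CLIP

    clip_model = CLIP(model_path="ViT-B/32", device="cpu")

    # score an image against candidate labels; the dictionary keys are assumed
    preds = clip_model({"image": "dog.jpg", "labels": ["a dog", "a cat", "a car"]},
                       task="image-classification")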

backprop.models.clip.simple_tokenizer

class SimpleTokenizer[source]

Bases: object

bpe(token)[source]
decode(tokens)[source]
encode(text)[source]
basic_clean(text)[source]
bytes_to_unicode()[source]

Returns a list of utf-8 bytes and a corresponding list of unicode strings. The reversible bpe codes work on unicode strings. This means you need a large number of unicode characters in your vocab if you want to avoid UNKs. When you’re at something like a 10B token dataset you end up needing around 5K for decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings. This also avoids mapping to whitespace/control characters that the bpe code barfs on.
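The mapping described above can be sketched as follows; this is a minimal reimplementation of the idea (printable bytes map to themselves, the rest are shifted to unused code points), not necessarily the exact table used here.

    def bytes_to_unicode_sketch():
        # bytes that already map to printable characters
        bs = (list(range(ord("!"), ord("~") + 1))
              + list(range(ord("¡"), ord("¬") + 1))
              + list(range(ord("®"), ord("ÿ") + 1)))
        cs = bs[:]
        n = 0
        for b in range(2 ** 8):
            if b not in bs:
                bs.append(b)
                cs.append(2 ** 8 + n)  # shift remaining bytes to unused code points
                n += 1
        return dict(zip(bs, [chr(c) for c in cs]))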

default_bpe()[source]
get_pairs(word)[source]

Return the set of symbol pairs in a word. A word is represented as a tuple of symbols (symbols being variable-length strings).
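A minimal sketch of the behaviour described here (hypothetical function name): collect every adjacent pair of symbols in the word.

    def get_pairs_sketch(word):
        pairs = set()
        prev = word[0]
        for symbol in word[1:]:
            pairs.add((prev, symbol))
            prev = symbol
        return pairs

    get_pairs_sketch(("h", "e", "ll", "o"))   # {("h", "e"), ("e", "ll"), ("ll", "o")}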

whitespace_clean(text)[source]