backprop.models.clip

backprop.models.clip.clip

backprop.models.clip.model
class AttentionPool2d(spacial_dim: int, embed_dim: int, num_heads: int, output_dim: Optional[int] = None)
    Bases: torch.nn.modules.module.Module

    forward(x)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
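A minimal usage sketch, assuming the module behaves like the attention pool in OpenAI's reference CLIP: it consumes an NCHW feature map whose spatial size equals spacial_dim and returns one pooled vector per image. The sizes below are RN50-like and chosen for illustration:

    import torch
    from backprop.models.clip.model import AttentionPool2d

    # Hypothetical shapes: a 7x7 feature map with 2048 channels, pooled to 1024 dims.
    pool = AttentionPool2d(spacial_dim=7, embed_dim=2048, num_heads=32, output_dim=1024)
    features = torch.randn(4, 2048, 7, 7)  # (batch, channels, height, width)
    pooled = pool(features)
    print(pooled.shape)  # expected: torch.Size([4, 1024])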
class Bottleneck(inplanes, planes, stride=1)
    Bases: torch.nn.modules.module.Module

    expansion = 4

    forward(x: torch.Tensor)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
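The expansion attribute fixes the block's channel growth: as in torchvision's ResNet bottleneck, the final 1x1 convolution emits planes * expansion channels. A shape sketch with hypothetical sizes:

    import torch
    from backprop.models.clip.model import Bottleneck

    block = Bottleneck(inplanes=256, planes=64)  # output channels = 64 * 4 = 256
    x = torch.randn(1, 256, 56, 56)
    print(block(x).shape)  # expected: torch.Size([1, 256, 56, 56])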
class CLIP(embed_dim: int, image_resolution: int, vision_layers: Union[Tuple[int, int, int, int], int], vision_width: int, vision_patch_size: int, context_length: int, vocab_size: int, transformer_width: int, transformer_heads: int, transformer_layers: int)
    Bases: torch.nn.modules.module.Module

    property dtype

    forward(image, text)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
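An end-to-end sketch, assuming this class mirrors OpenAI's reference CLIP: passing vision_layers as a 4-tuple selects the ModifiedResNet backbone while an int selects the VisualTransformer, and forward(image, text) returns per-image and per-text similarity logits. The hyperparameters below are the standard ViT-B/32 values, used here for illustration:

    import torch
    from backprop.models.clip.model import CLIP

    model = CLIP(
        embed_dim=512, image_resolution=224, vision_layers=12, vision_width=768,
        vision_patch_size=32, context_length=77, vocab_size=49408,
        transformer_width=512, transformer_heads=8, transformer_layers=12,
    )
    images = torch.randn(2, 3, 224, 224)
    texts = torch.randint(0, 49408, (2, 77))  # already-tokenised text ids
    logits_per_image, logits_per_text = model(images, texts)
    print(logits_per_image.shape)  # expected: torch.Size([2, 2])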
class LayerNorm(normalized_shape: Union[int, List[int], torch.Size], eps: float = 1e-05, elementwise_affine: bool = True)
    Bases: torch.nn.modules.normalization.LayerNorm

    Subclass of torch's LayerNorm that handles fp16.

    elementwise_affine: bool

    eps: float

    forward(x: torch.Tensor)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    normalized_shape: Union[int, List[int], torch.Size]
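"Handles fp16" here likely follows the pattern from OpenAI's reference CLIP: upcast to float32 for the normalisation reductions (which are numerically fragile in half precision), then cast back to the input's dtype. A sketch of that pattern, as an assumption about this implementation rather than a verbatim copy:

    import torch
    import torch.nn as nn

    class Fp16SafeLayerNorm(nn.LayerNorm):
        """LayerNorm that upcasts fp16 inputs to fp32 for the reduction."""

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            orig_type = x.dtype
            out = super().forward(x.to(torch.float32))
            return out.to(orig_type)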
class ModifiedResNet(layers, output_dim, heads, input_resolution=224, width=64)
    Bases: torch.nn.modules.module.Module

    A ResNet class that is similar to torchvision's but contains the following changes:

    - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
    - Strided convolutions are anti-aliased: an avgpool is prepended to any convolution with stride > 1.
    - The final pooling layer is QKV attention instead of an average pool.

    forward(x)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
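A usage sketch with RN50-like arguments (the layer counts and head count are assumptions taken from the standard RN50 CLIP configuration):

    import torch
    from backprop.models.clip.model import ModifiedResNet

    backbone = ModifiedResNet(layers=(3, 4, 6, 3), output_dim=1024, heads=32)
    images = torch.randn(2, 3, 224, 224)
    print(backbone(images).shape)  # expected: torch.Size([2, 1024])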
class QuickGELU
    Bases: torch.nn.modules.module.Module

    forward(x: torch.Tensor)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
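QuickGELU in OpenAI's CLIP is the cheap sigmoid-based GELU approximation x * sigmoid(1.702 * x); assuming this class follows that convention, it reduces to:

    import torch
    import torch.nn as nn

    class QuickGELUSketch(nn.Module):
        # Cheap GELU approximation: x * sigmoid(1.702 * x).
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x * torch.sigmoid(1.702 * x)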
class ResidualAttentionBlock(d_model: int, n_head: int, attn_mask: Optional[torch.Tensor] = None)
    Bases: torch.nn.modules.module.Module

    forward(x: torch.Tensor)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
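A shape-preserving usage sketch; OpenAI's reference CLIP feeds these blocks sequence-first input of shape (seq_len, batch, d_model), which is assumed here:

    import torch
    from backprop.models.clip.model import ResidualAttentionBlock

    block = ResidualAttentionBlock(d_model=512, n_head=8)
    x = torch.randn(77, 2, 512)  # (seq_len, batch, d_model)
    print(block(x).shape)        # expected: torch.Size([77, 2, 512])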
class Transformer(width: int, layers: int, heads: int, attn_mask: Optional[torch.Tensor] = None)
    Bases: torch.nn.modules.module.Module

    forward(x: torch.Tensor)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
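On the text side, CLIP passes a causal attn_mask so each token attends only to earlier positions. A sketch of building such a mask, assuming the additive-mask convention used by torch.nn.MultiheadAttention (0 where attention is allowed, -inf where it is blocked):

    import torch
    from backprop.models.clip.model import Transformer

    context_length = 77
    # -inf above the diagonal blocks attention to future tokens.
    mask = torch.full((context_length, context_length), float("-inf")).triu(1)
    text_encoder = Transformer(width=512, layers=12, heads=8, attn_mask=mask)
    x = torch.randn(context_length, 2, 512)  # (seq_len, batch, width)
    print(text_encoder(x).shape)             # expected: torch.Size([77, 2, 512])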
class VisualTransformer(input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int)
    Bases: torch.nn.modules.module.Module

    forward(x: torch.Tensor)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance rather than forward() directly; the instance runs any registered hooks, while a direct forward() call silently skips them.

    training: bool
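A usage sketch with ViT-B/32-like arguments (assumed from the standard configuration): the image is split into patch_size x patch_size patches and encoded into a single output_dim vector per image:

    import torch
    from backprop.models.clip.model import VisualTransformer

    vit = VisualTransformer(input_resolution=224, patch_size=32, width=768,
                            layers=12, heads=12, output_dim=512)
    images = torch.randn(2, 3, 224, 224)
    print(vit(images).shape)  # expected: torch.Size([2, 512])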
backprop.models.clip.models_list

backprop.models.clip.module
class CLIP(model_path='ViT-B/32', init_model=<function load>, init_tokenizer=<class 'backprop.models.clip.simple_tokenizer.SimpleTokenizer'>, name: Optional[str] = None, description: Optional[str] = None, tasks: Optional[List[str]] = None, details: Optional[Dict] = None, device=None)
    Bases: backprop.models.generic_models.BaseModel

    CLIP is a vision-language model by OpenAI that scores images against arbitrary text labels.

    model_path
        One of: ViT-B/32, RN50, RN101, RN50x4.

    init_model
        Initialises the model from model_path.

    init_tokenizer
        Initialises the tokenizer.

    name
        String identifier for the model. Lowercase letters and numbers; no spaces or special characters except dashes.

    description
        String description of the model.

    tasks
        List of supported task strings.

    details
        Dictionary of additional details about the model.

    device
        Device for the model. Defaults to "cuda" if available.

    __call__(task_input, task='image-classification', return_tensor=False)
        Do inference with the model.

        Parameters:
        - task_input – input dictionary according to task
        - task – one of the supported tasks
        - return_tensor – return a tensor instead of a list for vectorisation output

    image_classification(image: torch._C.TensorType, text: torch._C.TensorType, labels, top_k=10000)

    training: bool
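A hypothetical usage sketch for the wrapper; the exact task_input keys are an assumption inferred from image_classification's image/labels parameters and are not confirmed by this page:

    from backprop.models.clip.module import CLIP

    model = CLIP(model_path="ViT-B/32")

    # Hypothetical input dictionary: an image plus candidate labels to score.
    probs = model(
        {"image": "photo.jpg", "labels": ["a photo of a dog", "a photo of a cat"]},
        task="image-classification",
    )
    print(probs)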
backprop.models.clip.simple_tokenizer
bytes_to_unicode()
    Returns a list of utf-8 bytes and a corresponding list of unicode strings. The reversible BPE codes work on unicode strings, so you need a large number of unicode characters in your vocab if you want to avoid UNKs. At something like a 10B-token dataset you end up needing around 5K characters for decent coverage, a significant percentage of a normal, say, 32K BPE vocab. To avoid that, this function provides lookup tables between utf-8 bytes and unicode strings, while avoiding the whitespace/control characters the BPE code barfs on.
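A quick illustration of the mapping's key property, assuming it matches the GPT-2 byte-level BPE table (a dict from each of the 256 byte values to a distinct printable unicode character):

    from backprop.models.clip.simple_tokenizer import bytes_to_unicode

    byte_encoder = bytes_to_unicode()
    assert len(byte_encoder) == 256                # every byte value is covered...
    assert len(set(byte_encoder.values())) == 256  # ...by a distinct character, so it reverses

    # The mapping round-trips arbitrary utf-8 bytes through unicode strings.
    byte_decoder = {v: k for k, v in byte_encoder.items()}
    encoded = "".join(byte_encoder[b] for b in "héllo".encode("utf-8"))
    assert bytes(byte_decoder[c] for c in encoded).decode("utf-8") == "héllo"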