BERT

Bert

We currently support loading the following checkpoints via Bert.from_pretrained(identifier); a usage sketch follows the list:

  • bert-base-cased

  • bert-base-uncased

  • bert-large-cased

  • bert-large-uncased

  • bert-base-chinese

  • bert-base-multilingual-cased
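For instance, loading one of these checkpoints could look like the following sketch (the import path matches the model_center.model.Bert class documented below; the rest is illustrative):

    from model_center.model import Bert

    # Load pretrained weights by passing one of the supported identifiers.
    bert = Bert.from_pretrained("bert-base-uncased")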

BertConfig

class model_center.model.BertConfig(vocab_size=119547, type_size=2, position_size=512, dim_model=768, num_heads=12, dim_head=64, dim_ff=3072, num_layers=12, dropout_p=0.1, emb_init_mean=0.0, emb_init_std=1, pos_bias_type='none', position_bias_max_distance=1024, norm_init_var=1.0, norm_bias=True, norm_eps=1e-12, att_init_mean=0.0, att_init_std=0.02, att_bias=True, att_mask_value=-10000.0, ffn_init_mean=0.0, ffn_init_std=0.02, ffn_bias=True, ffn_activate_fn='gelu', proj_init_mean=0.0, proj_init_std=1, proj_bias=True, length_scale=False, attn_scale=True, half=True, int8=False, tied=False, cls_head=None, post_layer_norm=True)

This is a configuration class that stores the configuration of the BERT model. It inherits from the Config class and is used to instantiate a Bert model according to the specified parameters, defining the model architecture. You can set specific parameters to control the model's output.

For example, dim_model determines the dimension of the encoder layers and the pooler layer. You can keep the default value of 768 or customize it, as in the sketch below.
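A minimal sketch of building a config, assuming only the keyword names from the signature above (the specific values are illustrative):

    from model_center.model import Bert, BertConfig

    # Default configuration: dim_model=768, num_heads=12, num_layers=12, ...
    default_config = BertConfig()

    # Customized configuration: a smaller hidden size with a matching head count
    # (8 heads * dim_head 64 = dim_model 512) and a smaller feed-forward size.
    small_config = BertConfig(dim_model=512, num_heads=8, dim_ff=2048)

    # A model built from the custom config starts from randomly initialized weights.
    model = Bert(small_config)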

BertModel

class model_center.model.Bert(config: model_center.model.config.bert_config.BertConfig)
forward(input_ids=None, length=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=True, return_logits=False)
This model inherits from BaseModel and is also a PyTorch torch.nn.Module subclass.

You can use it as a regular PyTorch Module. You can also control what the model returns, and in what form, by changing the values of return_dict and return_logits.

Parameters
  • input_ids (torch.Tensor of shape (batch, seq_length)) – Indices of input sequence tokens. They will be embedded by the model’s internal embedding lookup matrix.

  • length (torch.Tensor of shape (batch)) – Length of input sequence before padding.

  • attention_mask (torch.Tensor of shape (batch, seq_length)) – Used to avoid performing attention on padding token indices.

  • token_type_ids (torch.Tensor of shape (batch, seq_length)) – Unused.

  • position_ids (torch.Tensor of shape (batch, seq_length)) – Unused.

  • head_mask (torch.Tensor of shape (num_layers, num_heads)) – Unused.

  • inputs_embeds (torch.Tensor of shape (batch, seq_length, dim_model)) – Embedding of the input. You can pass the input embeddings directly instead of input_ids to control how the input is embedded.

  • encoder_hidden_states (torch.Tensor of shape(batch, seq_length, dim_model)) – Unused.

  • encoder_attention_mask (torch.Tensor of shape (batch, seq_length)) – Unused.

  • output_attentions (torch.Tensor of shape (batch, num_heads, seq_length, seq_length)) – Unused.

  • output_hidden_states (torch.Tensor of shape (batch, seq_length, dim_model)) – Unused.

  • return_dict (bool) – Whether to return a BaseModelOutputWithPoolingAndCrossAttentions instead of just a tuple.

  • return_logits (bool) – Whether to return the prediction score for each token in vocabulary (before softmax).

Returns

The Bert output, depending on the values of return_dict and return_logits.

Return type

BaseModelOutputWithPoolingAndCrossAttentions or tuple or torch.Tensor of shape (batch, seq_length, vocab_output_size) or (batch, seq_length, cls_head)
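A minimal forward-pass sketch under the signature above; the token ids are toy values, and the output attribute names (last_hidden_state, pooler_output) follow the Hugging Face BaseModelOutputWithPoolingAndCrossAttentions convention, which is an assumption here:

    import torch
    from model_center.model import Bert

    bert = Bert.from_pretrained("bert-base-uncased")

    # Two already-tokenized sequences padded to length 8 (ids are illustrative).
    input_ids = torch.tensor([[101, 7592, 2088,  102,    0,   0, 0, 0],
                              [101, 2129, 2024, 2017, 1029, 102, 0, 0]])
    attention_mask = (input_ids != 0).int()  # 1 for real tokens, 0 for padding

    # return_dict=True: a BaseModelOutputWithPoolingAndCrossAttentions-style object.
    outputs = bert(input_ids=input_ids, attention_mask=attention_mask, return_dict=True)
    hidden = outputs.last_hidden_state   # (batch, seq_length, dim_model)
    pooled = outputs.pooler_output       # (batch, dim_model)

    # return_logits=True: per-token vocabulary scores (before softmax),
    # shape (batch, seq_length, vocab_output_size).
    logits = bert(input_ids=input_ids, attention_mask=attention_mask, return_logits=True)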

BertTokenizer

class model_center.tokenizer.BertTokenizer

The current implementation is mainly an alias of the BertTokenizer from Hugging Face Transformers. We will switch to our own SAM implementation in the future, which will be a more efficient tokenizer.
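Because the tokenizer currently aliases the Hugging Face BertTokenizer, a typical round trip might look like this sketch (the padding and return_tensors arguments and the returned field names follow the Hugging Face API and are assumptions here):

    from model_center.model import Bert
    from model_center.tokenizer import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    bert = Bert.from_pretrained("bert-base-uncased")

    # Tokenize a small batch into padded PyTorch tensors.
    batch = tokenizer(["Hello world!", "How are you?"],
                      padding=True, return_tensors="pt")

    # Feed the tokenized ids and attention mask to the model.
    outputs = bert(input_ids=batch["input_ids"],
                   attention_mask=batch["attention_mask"],
                   return_dict=True)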