GPT2
We currently support loading the following checkpoints via GPT2.from_pretrained(identifier):
gpt2-base
gpt2-medium
gpt2-large
gpt2-xl
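For example, a pretrained checkpoint can be loaded as follows (a minimal sketch; it assumes the weights for the listed identifiers can be downloaded or are already cached):

```python
from model_center.model import GPT2

# Load GPT-2 base weights; any of the identifiers listed above works here.
model = GPT2.from_pretrained("gpt2-base")
```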
GPT2Config
- class model_center.model.GPT2Config(vocab_size=50258, dim_model=768, num_heads=12, dim_head=64, dim_ff=3072, num_layers=12, dropout_p=0.1, emb_init_mean=0.0, emb_init_std=1, pos_bias_type='none', position_size=1024, norm_init_var=1.0, norm_bias=True, norm_eps=1e-05, att_init_mean=0.0, att_init_std=0.02, att_bias=True, att_mask_value=-10000.0, ffn_init_mean=0.0, ffn_init_std=0.02, ffn_bias=True, ffn_activate_fn='gelu', proj_init_mean=0.0, proj_init_std=1, proj_bias=True, length_scale=False, attn_scale=True, half=True, int8=False, tied=True, cls_head=None, post_layer_norm=False)
This is a configuration class that stores the configuration of the GPT-2 model and inherits from the Config class. It is used to instantiate a GPT-2 model according to the specified parameters and defines the model architecture. You can set specific parameters to control the model's output.
For example, [dim_model] determines the dimension of the transformer layers. You can use the default value of 768 or customize the dimension.
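As a sketch of how the configuration can be customized (assuming keyword overrides for the parameters shown in the signature above):

```python
from model_center.model import GPT2Config, GPT2

# Override a few architecture parameters; everything else keeps the
# defaults from the signature above.
config = GPT2Config(dim_model=1024, num_heads=16, num_layers=24, dim_ff=4096)

# Instantiate a randomly initialized model from this configuration.
model = GPT2(config)
```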
GPT2Model
- class model_center.model.GPT2(config: model_center.model.config.gpt2_config.GPT2Config)
- forward(input_ids=None, length=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=True, return_logits=False)
- The GPT-2 Model transformer that outputs raw hidden states or logits, as you want.
This model inherits from BaseModel. It is also a PyTorch torch.nn.Module subclass, so you can use it as a regular PyTorch Module. You can also select the data and data type the model returns by changing the values of return_dict and return_logits.
- Parameters
input_ids (torch.Tensor of shape (batch, seq_length)) – Indices of input sequence tokens. They will be embedded by the model's internal embedding lookup matrix.
length (torch.Tensor of shape (batch)) – Length of each input sequence before padding.
attention_mask (torch.Tensor of shape (batch, seq_length)) – Used to avoid performing attention on padding token indices.
token_type_ids (torch.Tensor of shape (batch, seq_length)) – Unused.
position_ids (torch.Tensor of shape (batch, seq_length)) – Unused.
head_mask (torch.Tensor of shape (num_layers, num_heads)) – Unused.
inputs_embeds (torch.Tensor of shape (batch, seq_length, dim_model)) – Embeddings of the input. You can pass the input embeddings directly to control how the input is embedded.
encoder_hidden_states (torch.Tensor of shape (batch, seq_length, dim_model)) – Unused.
encoder_attention_mask (torch.Tensor of shape (batch, seq_length)) – Unused.
output_attentions (torch.Tensor of shape (batch, num_heads, seq_length, seq_length)) – Unused.
output_hidden_states (torch.Tensor of shape (batch, seq_dec, dim_model)) – Unused.
return_dict (bool) – Whether to return a BaseModelOutputWithPastAndCrossAttentions instead of just a tuple.
return_logits (bool) – Whether to return the prediction scores for each token in the vocabulary (before softmax).
- Returns
The GPT-2 output, depending on the values of return_dict and return_logits.
- Return type
BaseModelOutputWithPastAndCrossAttentions or tuple or torch.Tensor of shape (batch, seq_length, vocab_output_size) or (batch, seq_length, cls_head)
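A minimal forward-pass sketch follows. The shapes, dtypes, and device placement are illustrative assumptions, and it also assumes any required distributed initialization (e.g. via BMTrain) has already been performed and that a GPU is available, since the configuration defaults to half precision:

```python
import torch
from model_center.model import GPT2

model = GPT2.from_pretrained("gpt2-base").cuda()

batch, seq_length = 2, 16
input_ids = torch.randint(0, 50257, (batch, seq_length), dtype=torch.int32).cuda()
length = torch.tensor([seq_length, seq_length], dtype=torch.int32).cuda()
attention_mask = torch.ones(batch, seq_length, dtype=torch.int32).cuda()

# With return_logits=True the model returns the per-token vocabulary
# scores (before softmax) of shape (batch, seq_length, vocab_output_size).
logits = model(input_ids=input_ids, length=length,
               attention_mask=attention_mask, return_logits=True)

# With return_dict=True (the default) the model instead returns a
# BaseModelOutputWithPastAndCrossAttentions; its last_hidden_state
# has shape (batch, seq_length, dim_model).
outputs = model(input_ids=input_ids, length=length, attention_mask=attention_mask)
hidden_states = outputs.last_hidden_state
```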
GPT2Tokenizer
- class model_center.tokenizer.GPT2Tokenizer
The current implementation is mainly an alias of the GPT2Tokenizer from Hugging Face Transformers. We plan to switch to our SAM implementation in the future, which will be a more efficient tokenizer.
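Since the tokenizer currently aliases its Hugging Face counterpart, it is assumed to expose the familiar from_pretrained and call interfaces, for example:

```python
from model_center.tokenizer import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-base")

# Tokenize a sentence into model-ready input ids (PyTorch tensors).
inputs = tokenizer("Hello, world!", return_tensors="pt")
input_ids = inputs["input_ids"]
```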