GPT-j

GPTj

We currently support loading the following checkpoint via GPTj.from_pretrained(identifier), as shown in the example below:

  • gptj-6b
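
For instance, a minimal sketch of loading the pretrained checkpoint (assuming the checkpoint is available under this identifier):

    from model_center.model import GPTj

    # Load the 6B-parameter GPT-J checkpoint by its identifier.
    model = GPTj.from_pretrained("gptj-6b")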

GPTjConfig

class model_center.model.GPTjConfig(vocab_size=50400, dim_model=4096, num_heads=16, dim_head=256, dim_ff=16384, num_layers=28, dropout_p=0, emb_init_mean=0.0, emb_init_std=1, pos_bias_type='rotary', pos_rotary_dim=64, norm_init_var=1.0, norm_bias=True, norm_eps=1e-05, att_init_mean=0.0, att_init_std=0.1, att_bias=False, att_mask_value=-inf, ffn_init_mean=0.0, ffn_init_std=0.1, ffn_bias=True, ffn_activate_fn='gelu', proj_init_mean=0.0, proj_init_std=1, proj_bias=True, length_scale=False, attn_scale=True, half=True, int8=False, tied=False, cls_head=None, post_layer_norm=False)

This is a configuration class that stores the configuration of the GPT-J model and inherits from the Config class. It is used to instantiate a GPT-J model according to the specified parameters and defines the model architecture. You can set specific parameters to control the output of the model.

For example, dim_model sets the dimension of the encoder layers. You can use the default value of 4096 or customize it.
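
A minimal sketch of building a model from a customized configuration (the overridden values below are purely illustrative):

    from model_center.model import GPTj, GPTjConfig

    # Start from the default GPT-J configuration and override selected fields.
    config = GPTjConfig(dim_model=4096, num_heads=16, dropout_p=0.1)
    model = GPTj(config)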

GPTjModel

class model_center.model.GPTj(config: model_center.model.config.gptj_config.GPTjConfig)
forward(input_ids=None, length=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=True, return_logits=False)
The GPT-J Model transformer, which outputs raw hidden-states or logits as required.

This model inherits from BaseModel. It is also a PyTorch torch.nn.Module subclass, so you can use it as a regular PyTorch Module. You can also control what the model returns by changing the values of return_dict and return_logits; see the usage sketch after the Returns section below.

Parameters
  • input_ids (torch.Tensor of shape (batch, seq_length)) – Indices of input sequence tokens. It will be embedded by model’s internal embedding lookup matrix.

  • length (torch.Tensor of shape (batch)) – Length of input sequence before padding.

  • attention_mask (torch.Tensor of shape (batch, seq_length)) – Used to avoid performing attention on padding token indices.

  • token_type_ids (torch.Tensor of shape (batch, seq_length)) – Unused.

  • position_ids (torch.Tensor of shape (batch, seq_length)) – Unused.

  • head_mask (torch.Tensor of shape (num_layers, num_heads)) – Unused.

  • inputs_embeds (torch.Tensor of shape (batch, seq_length, dim_model)) – Embedding of the input. You can choose to directly pass the inputs embedding to control the way of embedding.

  • output_attentions (torch.Tensor of shape (batch, num_heads, seq_length, seq_length)) – Unused.

  • output_hidden_states (torch.Tensor of shape (batch, seq_dec, dim_model)) – Unused.

  • return_dict (bool) – Whether to return a BaseModelOutputWithPastAndCrossAttentions instead of just a tuple.

  • return_logits (bool) – Whether to return the prediction score for each token in vocabulary (before softmax).

Returns

The GPT-J output, depending on the values of return_dict and return_logits.

Return type

BaseModelOutputWithPastAndCrossAttentions or tuple or torch.Tensor of shape (batch, seq_length, vocab_output_size) or (batch, seq_length, cls_head)
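
For example, a minimal sketch of one forward pass that returns logits. The dummy inputs, integer dtype, device placement, and the BMTrain initialization call are assumptions for illustration, not part of this API reference:

    import torch
    import bmtrain as bmt
    from model_center.model import GPTj

    # ModelCenter models run on top of BMTrain; the exact setup is assumed here.
    bmt.init_distributed(seed=0)

    model = GPTj.from_pretrained("gptj-6b")

    batch, seq_length = 2, 16
    # Dummy token ids and an all-ones attention mask, for illustration only.
    input_ids = torch.randint(0, 50400, (batch, seq_length), dtype=torch.int32).cuda()
    attention_mask = torch.ones(batch, seq_length, dtype=torch.int32).cuda()

    # return_logits=True asks for the per-token vocabulary scores (before softmax).
    logits = model(input_ids=input_ids, attention_mask=attention_mask, return_logits=True)
    # logits: torch.Tensor of shape (batch, seq_length, vocab_size)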

GPTjTokenizer

class model_center.tokenizer.GPTjTokenizer

The current implementation is mainly an alias to AutoTokenizer from Hugging Face Transformers. We will switch to our SAM implementation in the future, which will be a more efficient tokenizer.
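
Since it is an alias to AutoTokenizer, the tokenizer can be used in the familiar Hugging Face way; a minimal sketch (the identifier passed to from_pretrained is assumed to match the checkpoint name):

    from model_center.tokenizer import GPTjTokenizer

    tokenizer = GPTjTokenizer.from_pretrained("gptj-6b")
    # Encode text into input_ids / attention_mask tensors, as with any Hugging Face tokenizer.
    inputs = tokenizer("ModelCenter supports GPT-J.", return_tensors="pt")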