Generative Pre-trained Transformer – A New AI-Based Text Generation Technology

Generative Pre-trained Transformer 3 (GPT-3) is an artificial intelligence model that uses deep learning to generate human-like English text. GPT-3 is a very large neural network with 175 billion trainable parameters (the weights of the network); its architecture is defined by hyperparameters such as the number of layers, the number of neurons per layer, the number of hidden layers, and the number of training iterations.


According to Lambda Computing, a single GPU would take 355 years to perform this amount of computation, at a cost of about $4.6 million. Furthermore, keeping all the weight values in memory would require 700 GB of capacity for the 175 billion parameters of GPT-3 (lebigdata, 2021).
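The 700 GB figure follows from simple arithmetic, assuming each of the 175 billion weights is stored as a standard 32-bit (4-byte) float; the precision actually used in deployment may differ:

```python
# Back-of-envelope estimate of the memory needed to hold GPT-3's weights,
# assuming one 4-byte (float32) value per parameter.
NUM_PARAMETERS = 175e9        # 175 billion weights
BYTES_PER_PARAMETER = 4       # size of a float32

total_bytes = NUM_PARAMETERS * BYTES_PER_PARAMETER
total_gb = total_bytes / 1e9  # decimal gigabytes

print(f"{total_gb:.0f} GB")   # → 700 GB
```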

The first version, GPT-1, was launched in June 2018, followed by GPT-2 in February 2019 and GPT-3 in May 2020. OpenAI, the creator of the GPT models, has announced an API, accessible over HTTPS, for developers who might use it for many purposes such as “semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples or by specifying your task in English” (OpenAI, 2021).
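Since the API is reached over HTTPS, using it amounts to sending an authenticated JSON request. The sketch below assembles such a request; the endpoint path, engine name ("davinci"), and parameter names follow OpenAI's early public completions API and are assumptions here, so the current documentation should be checked before use. A personal API key from OpenAI is required:

```python
import json

# Assumed endpoint of OpenAI's early text-completion API (verify against
# the current documentation before relying on it).
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_request(prompt: str, api_key: str, max_tokens: int = 64) -> dict:
    """Assemble the HTTPS request (URL, headers, JSON body) for a completion."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt, "max_tokens": max_tokens}),
    }

request = build_request("Summarize: GPT-3 is a language model that", api_key="YOUR_KEY")
```

Posting `request["body"]` to `request["url"]` with those headers (for example with the `requests` library) would return the generated text in the JSON response.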


RAUTOR AI is another generative pre-trained text model, created by Yoctobe in September 2018. The model uses deep learning to generate texts of acceptable quality in any language (Yoctobe, 2018).

Unlike GPT-3, RAUTOR is naturally slower to train, but it probably offers a more efficient resource-consumption/performance ratio: on a single CPU, the learning process takes approximately 45 minutes to build the model from a training dataset of 150,000 words. The resulting model can generate unique texts of 1,000–3,000 words at an acceptable quality level.
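The figures above imply a rough single-CPU training throughput, which puts the "efficient resource consumption" claim in concrete terms (this is only arithmetic on the numbers quoted in the text, not a benchmark):

```python
# Training throughput implied by the quoted figures: a 150,000-word
# dataset processed in about 45 minutes on one CPU.
TRAINING_WORDS = 150_000
TRAINING_MINUTES = 45

words_per_minute = TRAINING_WORDS / TRAINING_MINUTES
print(f"{words_per_minute:.0f} words/minute")  # → 3333 words/minute
```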


lebigdata, 2021. Open AI GPT-3 tout savoir. [Online]
Available at:

OpenAI, 2021. OpenAI. [Online]
Available at:

Yoctobe, 2018. Yoctobe Twitter. [Online]
Available at:
