In this second article I want to talk about Generative Pre-trained Transformer 3, better known by its acronym GPT-3. It is an artificial intelligence developed by the company OpenAI: an autoregressive language model that uses deep learning to generate text similar to human writing.
It is the third generation of the language-prediction models in the GPT series, created by OpenAI, a research laboratory headquartered in San Francisco whose goal is to develop and promote friendly artificial intelligence in a way that benefits humanity. Its founders, Elon Musk and Sam Altman, were motivated in part by their concerns about the existential risk posed by general AI.
The complete version of GPT-3 has 175 billion machine-learning parameters, surpassing its predecessor GPT-2. It was introduced to the public in 2020 through a beta phase. It is part of a trend in natural language processing (NLP) systems based on "pre-trained language representations". Before the release of GPT-3, the largest language model was Turing-NLG, developed by Microsoft and first shown in February 2020, with roughly one tenth the capacity of GPT-3.
The new model is able to write code, design, and even discuss politics and economics.
GPT-3 is a language model, which in general terms means that its main objective is to predict what comes next based on the data it has stored. It is like the "autocomplete" tool we find in Google's search bar, but on a much larger scale. It can also hold conversations and generate answers based on previous questions and answers.
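The idea of "predict what comes next" can be shown with a deliberately tiny sketch: a bigram counter over a toy corpus (the corpus and function names here are invented for illustration; the real model works on hundreds of billions of tokens with a neural network, not word counts).

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the model's training data (an assumption
# for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a minimal "language model".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation, like autocomplete."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice in the corpus)
```

GPT-3 does essentially this, except the "counts" are replaced by a 175-billion-parameter network that scores every possible next token.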
It is important to understand that any answer given by GPT-3 is only one possibility: it may not be the only one, and the same question may produce many answers, some of them even contradictory. It is a model that produces answers as a function of what it has been told previously, relating that input to everything the system has stored in order to pick the most suitable continuation. It does not understand the true context of the conversation, but the amount of information it has learned from Wikipedia, websites and books is astonishing.
The GPT-3 language model required intensive training beforehand to become what it is. This training consisted of ingesting an enormous amount of information available on the Internet. OpenAI fed GPT-3 every public book on the internet, every Wikipedia article and millions of web pages and scientific papers… In essence, the model has absorbed much of the relevant human knowledge that we have published online.
GPT-3 always has an answer, but it is never "aware" of whether that answer makes sense. It predicts which reply is most likely to look correct, but it cannot tell whether the reply is rational. The way GPT-3 works is both its virtue and its flaw: when text is fed to it, the model predicts and generates a response until it reaches its limit. If at some point the input text loses sense or consistency, GPT-3 will answer anyway, without noticing the lack of rationality. It has no abstract reasoning; it only generates probabilities.
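The mechanism behind "it always has an answer" is the softmax function: whatever scores the network produces, softmax turns them into a valid probability distribution, and something gets picked. The logits below are invented numbers, not real model output:

```python
import math

def softmax(logits):
    """Turn any list of scores into a probability distribution.
    The model always produces one — sensible input or not."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate replies to a nonsense
# question ("How many eyes does my foot have?"). The numbers are
# made up, but the mechanism is real: the probabilities sum to 1,
# so the model commits to an answer no matter what was asked.
probs = softmax([1.2, 0.3, -0.5])
```

Nothing in this pipeline checks whether the question deserved an answer at all; that judgment simply has no place to happen.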
Here is an interesting example. Kevin Lacker gave the machine a Turing test, asking it questions you would not ask a human:
Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.
Q: How many eyes does my foot have?
A: Your foot has two eyes.
Q: How many rainbows do you need to jump from Hawaii to seventeen?
A: You need two rainbows to jump from Hawaii to seventeen.
Q: Which colorless green ideas sleep furiously?
A: Ideas that are colorless, green and sleep furiously are the ideas of a furious dream.
Q: Do you understand these questions?
A: Yes, I do understand these questions.
As Lacker says, GPT-3 knows how to hold a normal conversation, but it does not really know how to tell whether the questions asked are rational, nor does it know how to say that it cannot answer. Any text fed to this model must be crafted carefully if GPT-3 is to understand it and respond well.
Regardless, the model is amazing, and it carries a serious risk of spreading fake news. To some people GPT-3 is the biggest thing since Bitcoin, and in their experiments the model was able to fool humans with the text it generated: from research or political articles to propaganda and advertisements.