Advancement in Artificial Intelligence: What is GPT-3?

In the world of artificial intelligence, a technology known as GPT-3 caused great excitement in 2020. Simply put, GPT-3, developed by OpenAI, is an artificial intelligence model that is better than any previous model at producing content with the structure of language, whether human language or machine code. But there is some confusion about exactly what it does (and does not do). This article therefore covers the subject in a simple way for non-technical readers who want to understand the basic principles behind it.

What is GPT-3?

First, GPT-3 stands for "Generative Pre-trained Transformer 3". As the name suggests, it is the third version of OpenAI's model series that generates text using a pre-trained algorithm.

All the data it needs to carry out its task is provided in advance. Specifically, the OpenAI team drew on publicly available web data known as "Common Crawl", along with the entire text of Wikipedia and other sources, and fed GPT-3 approximately 570 GB of collected text.

If you ask GPT-3 a question, it will give you the most helpful answer it can. If you ask it to perform a task such as writing a summary or a poem, it will write one. More technically speaking, GPT-3 is the largest neural network ever created. We will return to this point shortly.

What Can GPT-3 Do?

GPT-3 can create anything that has a language structure: it can answer questions, write articles, summarize long texts, translate between languages, take notes, and even generate computer code. One demonstration video shows a freely available online version creating an app that looks and works much like Instagram, using a plug-in for Figma, a software tool commonly used for app design.

GPT-3 is potentially revolutionary: if it proves usable and useful in the long run, it could have enormous implications for how software and applications are developed in the future. Because the code itself is not yet publicly available, selected developers can access it only through an API (Application Programming Interface) provided by OpenAI.[2] An API is an interface that allows an application or platform to access the capabilities of a program, within certain limits.
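For developers who have been granted access, a request looks roughly like the minimal sketch below, written against OpenAI's Python client as it existed at launch; the prompt, `max_tokens`, and `temperature` values are arbitrary illustrations, and the API key is a placeholder.

```python
# A minimal sketch of querying GPT-3 through OpenAI's API (pre-1.0 Python
# client). The prompt and sampling settings are arbitrary illustrations.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keys are issued by OpenAI to approved developers

response = openai.Completion.create(
    engine="davinci",      # the largest GPT-3 engine
    prompt="Write a two-line poem about the sea:",
    max_tokens=50,         # cap on the length of the generated continuation
    temperature=0.7,       # higher values give more varied output
)

print(response.choices[0].text)
```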

Examples of poetry, prose, news reports, and creative fiction have appeared since API access was made available. One article, written entirely by GPT-3, is of particular interest: GPT-3 was asked to convince us that artificial intelligence will not harm humans in the future. GPT-3 writes:

(…) I know that I cannot avoid destroying humanity.

How GPT-3 Works and Why Is It Important?

GPT-3 is a cutting-edge language model that uses machine learning to generate human-like text. Put another way, it is an algorithmic construct designed to take a piece of language, i.e. text input, and transform it into what it predicts will be the most useful text for the user. It can produce these outputs thanks to the analysis it performed on the huge body of text used for its "pre-training".

We mentioned at the beginning of the article that GPT-3 read the entire Wikipedia; in artificial intelligence terminology, that amounts to about 3 billion "tokens". Besides Wikipedia, GPT-3 also "read" two large collections of digitized books, containing 12 billion and 55 billion tokens respectively. In short, GPT-3 "knows" encyclopedias, books, and a vast slice of everything written on the internet, roughly 410 billion tokens in total. However, intelligence requires not just knowing information but processing it; in other words, learning matters just as much.
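To make the "token" unit concrete, the sketch below runs the public GPT-2 tokenizer from the Hugging Face `transformers` library as a stand-in; OpenAI did not release GPT-3's own tokenizer, but the two are closely related, and the example sentence is arbitrary.

```python
# A rough illustration of what a "token" is, using the public GPT-2
# tokenizer as a stand-in for GPT-3's (which was not released).
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

text = "The house has a red door."
tokens = tokenizer.tokenize(text)

print(tokens)       # e.g. ['The', 'Ġhouse', 'Ġhas', 'Ġa', 'Ġred', 'Ġdoor', '.']
print(len(tokens))  # 7 tokens for this short sentence ('Ġ' marks a leading space)
```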

As humans, we process information through roughly 100 trillion connections (synapses) in our brains. The analogous connections in an artificial intelligence model are called parameters. The reason GPT-3 is such a huge leap is the number of these parameters, though still far fewer than the brain's connections: 175 billion! That is over 100 times the number of parameters of the previous version, GPT-2. In other words, GPT-3 processes the roughly 410 billion tokens it has collected through 175 billion connections. And while our brains run on three meals and eight hours of sleep, OpenAI reportedly spent about $4.6 million in computing power for GPT-3 to learn how languages work and are structured. The large number of parameters and the enormous training data also matter because together they virtually eliminate the need for fine-tuning.
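To see what a "parameter" means in practice, the toy function below counts the connection weights in a small stack of fully connected layers; the layer sizes are invented for illustration and bear no relation to GPT-3's actual Transformer architecture.

```python
# A toy illustration of "parameters": every connection between two layers
# carries one learned weight, plus one bias per output unit. The layer
# sizes below are invented and unrelated to GPT-3's real architecture.
def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases
    return total

print(count_parameters([512, 2048, 512]))  # 2,099,712 parameters
```

GPT-3 simply takes this idea to an extreme scale: 175 billion such learned weights.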

GPT-3 uses semantic analysis to learn how to construct language structures such as sentences. It examines not only words and their meanings, but also how a word's use varies depending on the other words around it in the text. This is a form of machine learning called unsupervised learning, because the training data contains no information about which answer is "right" or "wrong", as it would in supervised learning. All the information the model needs to estimate the probability that its output is what the user wants is gathered from the training texts themselves. It does this by studying how words and sentences are used, then breaking them apart and trying to reconstruct them.
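A drastically simplified picture of this label-free learning is a bigram model: counting, in raw unlabeled text, how often each word follows another. GPT-3's Transformer is vastly more sophisticated, but the sketch below, which uses an invented toy corpus, shows how a "right answer" can be extracted from plain text with no human annotation.

```python
# A hypothetical, drastically simplified illustration of unsupervised
# learning from raw text: the text itself supplies the training signal.
from collections import Counter, defaultdict

corpus = "the house has a red door . the barn has a red roof .".split()

# Count which word follows which -- no labels, no "right answer" given.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

print(following["red"].most_common())  # [('door', 1), ('roof', 1)]
print(following["has"].most_common())  # [('a', 2)]
```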

For example, during training the algorithm may encounter the phrase "the house has a red door". The phrase is then presented again with a word removed, such as "the house has a red X". The model scans the training data, hundreds of billions of words arranged as meaningful language, and tries to determine which word reconstructs the original expression. At first it will probably get it wrong, potentially millions of times. Eventually, though, it will land on the right word. By checking against the original input it knows it has the correct output, and it assigns a "weight" to the algorithmic process that produced the correct answer. In this way it gradually "learns" which methods are most likely to give correct answers in the future. The scale of this dynamic "weighting" process is what makes GPT-3 the largest neural network ever created.
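The "guess, check, re-weight" loop described above can be caricatured in a few lines. Real training adjusts billions of weights by gradient descent; in this hypothetical toy, one weight per candidate word stands in for the whole process.

```python
# A cartoon of the "guess, check, re-weight" idea: reinforce whatever
# choice produced the word that matches the original text.
import random

target = "door"  # the word hidden from the model in "the house has a red X"
candidates = ["roof", "door", "window", "cat"]
weights = {word: 1.0 for word in candidates}

for _ in range(1000):
    # Guess a word in proportion to the current weights.
    guess = random.choices(candidates, weights=[weights[w] for w in candidates])[0]
    if guess == target:
        weights[guess] *= 1.05  # strengthen what matched the original text
    else:
        weights[guess] *= 0.95  # weaken what did not

print(max(weights, key=weights.get))  # almost certainly "door"
```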

In some ways, you could say that what GPT-3 does is nothing new, since Transformer-based language-prediction models have been around for years. However, the number of connections GPT-3 dynamically holds in memory and uses to process each query is 175 billion, roughly ten times that of its closest predecessor, Microsoft's 17-billion-parameter Turing-NLG!

What Are Some of the Issues with GPT-3?

GPT-3's language-processing ability is widely recognized as the best in AI; however, it also has some important problems. Sam Altman, CEO of OpenAI, puts it this way:

The GPT-3 hype is way too much. It's impressive, but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.

First, because of the large amount of computing power required to run it, GPT-3 is currently quite expensive to use. This puts the cost beyond the budget of smaller organizations.

Second, it is a closed system. Contrary to what its name suggests, OpenAI has not revealed the full details of how its algorithms work. Anyone who relies on it to answer questions or to build useful products therefore cannot, at present, be entirely sure how its outputs are produced.

Third, the system's outputs are not yet perfect. While it handles tasks such as creating short texts or basic applications well, its output becomes less useful, even meaningless, when it is asked to produce something longer or more complex. These are issues that will presumably be addressed over time.

Conclusion

In conclusion, it is undeniable that GPT-3 has produced results far beyond anything we have seen before. Anyone familiar with AI language models knows that output quality can be variable, yet GPT-3's outputs genuinely look like a huge step forward. The performance of GPT-3 and its successors will no doubt become more and more impressive, at an accelerating pace.

Resources and Further Reading
Derivative Content Source: Forbes | Archive Link
[1] GPT-3. "A Robot Wrote This Entire Article. Are You Scared Yet, Human?" (September 8, 2020). Retrieved September 8, 2020, from The Guardian | Archive Link
[2] OpenAI. "OpenAI API." (June 11, 2020). Retrieved June 11, 2020, from OpenAI | Archive Link
[3] A. Chugh. "OpenAI's GPT-3: The End of Cargo Cult Programmers." (July 25, 2020). Retrieved July 25, 2020, from Towards Data Science | Archive Link
