GPT-3 – The Next Big Technology After Blockchain
July 29, 2020 News

With great prompts come great outputs. Well, at least for the largest language model yet, released to beta testers in mid-July this year: OpenAI's GPT-3 (Generative Pre-trained Transformer 3). GPT-3 is the third instalment of OpenAI's general-purpose language models, capable of generating text, answering questions, writing code in programming languages and even completing the pixels of images based on users' inputs.

With 175 billion parameters (the learned weights of its neural network, loosely analogous to the connections between a brain's neurons), GPT-3 dwarfs its predecessor GPT-2, which had 1.5 billion. What does this mean for the future of technology? GPT-3 has absorbed far more information from its training data, which translates into smarter and more comprehensive results.

OpenAI had already published research about the model in May, and with the official beta under way, many tech-savvy people have tested GPT-3 for various purposes and shared their experiences on the Internet.

Some used the model to create text-based results, such as poems written in the style of a named poet; one used it to write an article about GPT-3 itself, and others tested the model's ability to generate news articles. Beyond these, users can also generate songs, manuals, short stories, essays, recipes and other text outputs, either with a prompt that describes what they are looking for or by starting with an incomplete sentence or phrase they want the model to complete.
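As a concrete illustration, a "write in a poet's style" request boils down to building a descriptive prompt string and sending it to the model. The prompt wording and the helper below are assumptions for illustration; the commented-out call mirrors the Completion endpoint that OpenAI exposed during the 2020 beta.

```python
# Hypothetical sketch of prompting GPT-3 for a poem in a given poet's style.
# The prompt format and build_style_prompt helper are assumptions, not an
# official recipe from OpenAI.

def build_style_prompt(poet: str, topic: str) -> str:
    """Build a prompt asking the model to imitate a poet on a topic."""
    return (
        f"The following is a poem about {topic}, "
        f"written in the style of {poet}:\n\n"
    )

# With beta API access, the prompt would be sent roughly like this:
#
#   import openai
#   openai.api_key = "YOUR_KEY"
#   response = openai.Completion.create(
#       engine="davinci",
#       prompt=build_style_prompt("Emily Dickinson", "autumn"),
#       max_tokens=150,
#       temperature=0.9,
#   )
#   print(response["choices"][0]["text"])

prompt = build_style_prompt("Emily Dickinson", "autumn")
print(prompt)
```

The same pattern covers songs, recipes or essays: only the descriptive text of the prompt changes, not the call itself.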

The model can also answer questions from users, or generate questions itself for use in quizzes. It is not limited to text-based outputs, however.

On Twitter, many users shared their experiences with GPT-3 on logical inquiries and programming languages. Given the first 12 primes as a prompt, for example, the model can continue the sequence with the primes that follow.
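The prime-continuation experiment can be sketched without API access: build the prompt from the first 12 primes and compute locally which numbers a correct completion should contain. The prompt wording below is an assumption; the actual GPT-3 output would come from the API, which this sketch does not call.

```python
# Minimal sketch of the prime-continuation test described above.

def primes(n: int) -> list:
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        # candidate is prime if no smaller prime divides it
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

first_12 = primes(12)  # 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37
prompt = "The prime numbers: " + ", ".join(map(str, first_12)) + ","

# A correct continuation starts with the next primes in sequence:
expected_continuation = primes(16)[12:]  # [41, 43, 47, 53]
print(prompt)
print(expected_continuation)
```

GPT-3 was not given the algorithm, only the sequence; continuing it correctly is what impressed users.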

In one post, a user tested the model by giving it a plain-English description of a to-do list application. GPT-3 then generated React code for what the user described as a "fully functioning app within a few seconds". Another user even prompted the model to generate code for a machine learning model, just by describing the dataset and the required output.

For non-text results, GPT-3 can also complete a picture from an initial portion of the image, as demonstrated in this article. According to one user, it can even generate music videos from text input, achieved by replacing the final layers of GPT-3 with a Flow-GAN architecture.

With a model as large as GPT-3, the future of technology looks bright, as the model can assist with a plethora of tasks that people and companies need done. Some have even described GPT-3 as the next big thing after blockchain because of its potential as a disruptive technology.

However, like any technology, GPT-3 has its benefits and drawbacks. It can help industries automate mundane or complex tasks, but it can also be used to spread false information, since it will generate text to match whatever a user asks for. In this age of information, that setback can be detrimental. The model could likewise be used to spread hate against certain individuals.

For this reason, the model should be used with caution, as its output depends on the user's input. What GPT-3 generates next is in your hands, and with good prompts to start from, it will produce good results.
