Just four months after the release of ChatGPT, OpenAI has announced its next-generation artificial intelligence (AI) model, and it's bigger and better than ever. GPT-4 not only blows ChatGPT out of the water on a range of standardized tests like the SATs and the bar exam, it also adds a key new feature: it can see.
GPT-4 is a "large multimodal model," meaning it can analyze not only text, but images as well. The exciting addition of "computer vision" allows the AI's users to input photos and drawings, which the model can analyze and, seemingly, understand.
In a promotional video released by OpenAI, the company claims that, given a photograph of helium balloons tied to an anvil alongside a question such as "What would happen if the strings were cut?", GPT-4 can logically deduce that the balloons would fly away.
Though GPT-4 can't generate images (OpenAI's DALL-E has that covered), the applications of its computer vision are stunning.
In a live demonstration of the AI's capabilities, OpenAI's president and co-founder Greg Brockman showed that the AI could create a complete website based solely on a hand-drawn note.
GPT-4 can generate and take in up to 25,000 words, an eight-fold improvement over ChatGPT. The company says it can be used to help with "composing songs, writing screenplays, or learning a user's writing style."
In a research paper released with the launch of GPT-4, OpenAI shared GPT-4's scores on a variety of academic tests, including AP exams, the bar exam, the Graduate Record Examination (GRE), and even sommelier certification exams.
The results show just how far the AI has come since its predecessor, GPT-3.5, which powered ChatGPT. While GPT-3.5 scored in the 10th percentile, eking out just above 50 per cent on the Uniform Bar Exam, GPT-4 could land itself squarely in the courtroom with a 90th percentile score.
Similar large jumps in test scores were seen on the LSAT, the quantitative and verbal sections of the GRE, and the Medical Knowledge Self-Assessment Program.
GPT-4's test scores suggest it excels at basic reasoning and comprehension but still struggles with creative thought, as demonstrated by its poor performance on the AP English Literature and Language exams and the GRE writing exam. There appears to have been little improvement in this area, as those test scores were unchanged from GPT-3.5's performance.
OpenAI claims that its new model "exhibits human-level performance" while still being "less capable than humans in many real-world scenarios." The company added that it finished training the new AI last August but has been withholding it to make the chatbot safer for users.
"GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations," the company said.
Tempering expectations, the company added that GPT-4 is still susceptible to "hallucinations" like its predecessors, the term for when the AI produces sentences that are coherent but factually inaccurate or not based in reality. OpenAI CEO Sam Altman says the model hallucinates significantly less than GPT-3.5, but the company is looking forward to "feedback on its shortcomings."
Another way GPT-4 improves on its predecessor is that it now works across dozens of languages. In testing, GPT-4 performed better on multiple-choice questions in languages such as Latvian, Welsh and Swahili than GPT-3.5 could perform in English.
GPT-4 uses a transformer-style architecture in its neural network. The technique relies on an attention mechanism, loosely inspired by human cognition, that allows the neural network to work out which pieces of information are more relevant than others, improving the model's accuracy and cutting down on training time.
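To give a sense of what that attention mechanism actually computes, here is a minimal sketch of scaled dot-product attention, the core operation described in the 2017 transformer paper. This is an illustrative toy in NumPy, not OpenAI's implementation; the function and variable names are our own.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K, V: arrays of shape (seq_len, d_k). Each output row is a
    weighted average of the rows of V, where the weights reflect how
    relevant each position is to the query position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

# Toy example: 3 token positions, each with a 4-dimensional representation
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (3, 4)
```

Because every position attends to every other position in a single matrix multiplication, the computation parallelizes well, which is part of why transformers train faster than the sequential models that preceded them.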
The transformer was introduced and popularized by Google Brain researchers in the 2017 paper "Attention Is All You Need." One of the paper's lead researchers was Aidan Gomez, now the CEO of Cohere, a Toronto-based natural language processing company that operates in the same space as OpenAI.
Many of the world's most important advancements in machine learning, which laid the bedrock for ChatGPT, were pioneered by Canadian scientists.
Three men are lauded as the godfathers of AI, and two of them are Canadian: Yoshua Bengio of the Université de Montréal and Geoffrey Hinton of the University of Toronto (U of T). The third, Yann LeCun, is French, but some of his most groundbreaking research was done at Bell Labs and U of T.
In fact, OpenAI's chief science officer and co-founder, Ilya Sutskever, was educated at U of T and was a PhD student of Hinton's.
As for Bengio, he is the most cited computer scientist in the world. When asked whether he could draw a direct line from his work to ChatGPT, he said, point-blank, "Yeah, definitely."
Bengio warns of AI's potential to disrupt the social and economic fabric of the world through job losses, misinformation campaigns and the prospect of AI-equipped weapons. He and other scientists have called for greater regulation of AI to ensure its benefits are enjoyed by all.
He also points out that ChatGPT is far from being able to reason like a human, and that such technology is still a ways away. But he is certain that a day will come when humans are able to create an artificial general intelligence with human-level cognition.
"What's inevitable is that the scientific progress will get there. What is not is what we decide to do with it."
OpenAI acknowledged the potential for its tool to be used with malicious intent in its GPT-4 research paper, writing, "GPT-4 and successor models have the potential to significantly influence society in both beneficial and harmful ways. We are collaborating with external researchers to improve how we understand and assess potential impacts, as well as to build evaluations for dangerous capabilities that may emerge in future systems."
The company added that it will publish a follow-up paper with "recommendations on steps society can take to prepare for AI's effects and initial ideas for projecting AI's possible economic impacts."
© 2023 Global News, a division of Corus Entertainment Inc.