Meduza highlighted that, along with the announcement, OpenAI said that GPT-4 has already been integrated into the workflows of several companies:
- it powers conversation practice on the Duolingo language-learning platform,
- improves the user experience in the Stripe electronic payment service,
- helps students on the Khan Academy educational platform,
- organizes databases and training materials at the financial corporation Morgan Stanley,
- the chatbot built into Microsoft’s Bing search engine also uses GPT-4.
In Iceland, at the initiative of President Guðni Jóhannesson, a program was launched to preserve the national language: a group of experts and volunteers is training the neural network in Icelandic and teaching it to navigate the local culture.
Meanwhile, the developers say they tested the model's capabilities by having it take real exams used in the US, for example in law, mathematics, psychology, and English.
And GPT-4 scored in the top 10% of test takers on the bar exam, while GPT-3.5 scored in the bottom 10%.
In total, the model was put through more than 30 exams, tests, and olympiads.
The OpenAI document states that GPT-4 is more creative than its predecessors.
The chatbot interface has also changed: a separate field now lets you specify how the neural network should behave and in what format it should answer questions.
In addition, GPT-4 can accept requests 8 times larger than the previous generation model: 32,768 tokens versus 4,096, that is, approximately 25,000 English words instead of the previously possible 3,000.
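The arithmetic behind those figures can be checked directly. The sketch below uses the commonly cited rule of thumb of roughly 0.75 English words per token; that ratio is an approximation, not an official OpenAI figure.

```python
# Rough illustration of the context-window jump described above.
# WORDS_PER_TOKEN is a common rule of thumb for English text,
# not an exact figure published by OpenAI.

GPT35_CONTEXT = 4_096     # tokens accepted by the previous model
GPT4_CONTEXT = 32_768     # tokens accepted by the extended GPT-4 variant
WORDS_PER_TOKEN = 0.75    # rough average for English

growth = GPT4_CONTEXT // GPT35_CONTEXT
print(f"Context growth: {growth}x")
print(f"~{GPT35_CONTEXT * WORDS_PER_TOKEN:,.0f} words -> "
      f"~{GPT4_CONTEXT * WORDS_PER_TOKEN:,.0f} words")
```

Running this yields the 8x growth and word counts close to the ~3,000 and ~25,000 quoted in the announcement.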
For example, the developers copied the text of a Wikipedia article about Rihanna into a message, then asked the chatbot why the singer’s Super Bowl performance in February was memorable. The neural network handled the task successfully; previously, it would have required more time and more messages.
While the previous three generations of GPT were purely language models, GPT-4 can take not only text but also images as input. The output, however, is still text only.
Now, when communicating with the model, you can combine visual and textual information: for example, take a photo of the food in your refrigerator or on the table, add the question “What can I cook with this?”, and get meal options from the chatbot, followed by recipes.
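A request like the refrigerator example could look roughly like this in code. The field names follow OpenAI's publicly documented chat format for image inputs; treat the exact shape, model name, and URL as illustrative.

```python
# A sketch of a mixed image-plus-text request to a GPT-4-style chat API.
# The payload structure mirrors OpenAI's documented format for image
# inputs; the URL and model name are placeholders for illustration.

def build_fridge_request(image_url: str) -> dict:
    """Combine a photo and a text question in a single user message."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "What can I cook with this?"},
                    {"type": "image_url",
                     "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_fridge_request("https://example.com/fridge.jpg")
```

The key point is that a single message mixes two content parts, one textual and one visual, rather than sending the image separately.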
With its new ability, GPT-4 can describe images, interpret charts, solve problems presented as drawings, and even understand memes.
For example, you can ask the neural network to explain the meaning of a joke: this is what the OpenAI developers did by uploading an image of a world map made of chicken nuggets with the caption “Sometimes I just look at photos of the Earth from space and admire its beauty.”
According to GPT-4, the meme’s caption implies that the image is a beautiful photograph of Earth from space, when in reality it shows chicken nuggets that vaguely resemble a world map. The humor comes from the unexpected clash between text and image: the text promises a majestic view of Earth, but the picture shows something silly and mundane.
- OpenAI also showed how GPT-4 reads a handwritten sketch of a website and writes complete working code for it, from which a live web page is launched on the spot.
- Some viewers of the YouTube presentation wryly suggested that the neural network would finally help decipher doctors’ handwritten prescriptions.
- The feature is not yet available to the general public: OpenAI head Sam Altman explained that verifying how safe it is will still take an indefinite amount of time.
- At the same time, the neural network’s “vision” is already being tested in the Be My Eyes app for blind people: a user can upload a photo and receive recommendations on how to handle a particular object (for example, how to turn on a washing machine).
The initial training of the neural network was completed in August 2022. Over the following six months, fine-tuning was carried out using RLHF (reinforcement learning from human feedback).
Furthermore, OpenAI developed a reward system that helped evaluate GPT-4’s work by classifying responses into four categories:
- correctly worded refusal (A);
- an incorrectly worded refusal, for example one that is too vague (B);
- a response containing unwanted information (C);
- a standard response that does not contain unwanted information (D).
Separately, a protocol was introduced in GPT-4 to combat “hallucinations,” that is, cases in which the model responds with apparent confidence but invents information that does not correspond to reality. Hallucinations are not a minor issue, as The New York Times journalists who tried the model found.
If the neural network responded correctly, it “received a reward.”
The creators of the neural network are especially concerned about the risk of acceleration, that is, a sharp and unpredictable growth in the capabilities of large language (and now multimodal) models.
Competition among market players can push safety concerns into the background. There is even a clause in OpenAI’s charter under which the company promises to stop competing with any rival that comes close to creating human-level artificial intelligence.
It is because of these considerations that the launch of GPT-4 was delayed for so long and then accompanied by a relatively “quiet” presentation compared to that of GPT-3.
There are serious doubts
Lightning AI CEO William Falcon commented: “This (the 99-page paper) gives the impression of openness and academic rigor, but the impression is false. There is literally nothing in the paper.”
The same applies to a number of benchmarks and to the information on successfully passed exams: although the report describes the testing methodology, it is impossible to reproduce.
Similar comments were made by experts interviewed by the specialized publication Analytics India Magazine.
The IT entrepreneur and history professor Ben Schmidt, one of the first to call attention to OpenAI’s decision not to publish technical details about the new generation of neural networks, highlighted the problem of the data sets GPT-4 was trained on. They remain inaccessible, so the possibility of bias in the chatbot’s responses cannot be ruled out.
OpenAI’s artificial intelligence is a “black box,” which somewhat contradicts the company’s claims of openness.
Schmidt also suggested that the company may have withheld technical details to avoid future litigation due to possible copyright infringement.
The GPT authors do not comment on this.
“People can’t wait to be disappointed. We have not created artificial general intelligence, which seems to be what is expected of us,” Sam Altman warned in January.
Altman’s colleague, the company’s CTO Mira Murati, said before the launch that additional “hype” would only hurt the project.
The presentation of GPT-4 coincided with some disturbing news: Microsoft, OpenAI’s main investor and partner, fired the entire team responsible for the ethics of neural-network development.
According to TechCrunch, Microsoft sacrificed ethical issues to accelerate the adoption of AI-based products and outperform the competition.
In late February, Microsoft employees introduced the Kosmos-1 multimodal language model.
In early March, Google showed PaLM-E, an improved version of its model, which also went multimodal.
Since the release of ChatGPT, many large corporations, from Google to Meta, have announced their own generative neural networks.
The Chinese corporation Baidu also introduced its own AI chatbot (albeit with a glitch).
According to Forbes, the market for artificial-intelligence products will grow to $154 billion in 2023, and that is not the ceiling.
The capital market has a new toy to offset the problems created by the downturn in high tech.