Artificial Intelligence has positioned itself as the technology that will drive the major projects of the coming years. It was with ChatGPT's public arrival, in particular, that the world began to see that this science was going to change everything. Little by little, the main companies in the sector have been presenting their models. The latest to do so is Meta; however, it was not Meta that presented its model to the world, since apparently someone leaked it ahead of time.
It is true that Meta has focused mainly on developing the Metaverse for the past few years, since Zuckerberg bet that this technology would be the leading edge; however, his prediction was wrong.
This has had three major consequences: thousands of workers have lost their jobs, the company has lost hundreds of millions invested in the development of this virtual world, and Meta has fallen to the back of the Artificial Intelligence (AI) race.
The fact is that OpenAI and Microsoft, with ChatGPT, and Google, with Bard (although it has not yet launched), the big competitors of the old Facebook, are a few steps ahead. And now things have become even more complicated.
This is because in February Meta announced LLaMA, saying its AI would soon be available under a non-commercial license, with the aim of limiting it to research or academic purposes and preventing misuse. And now, just as the company has released the model checkpoints to researchers, someone (no one knows who) has published a torrent with the model, so LLaMA can now be accessed without much effort.
What is LLaMA like?
The truth is that there are no big differences compared to the other AIs we have seen in recent months. As the leaked documents explain, LLaMA is very similar to DeepMind's project called Chinchilla.
But unlike Chinchilla, GPT-3, or PaLM, Meta's AI uses public data, which makes the research work compatible with open source. Apparently, LLaMA takes text input in 20 different languages and has been trained on sources such as Wikipedia and GitHub.
According to Meta, its AI takes sequences of words and predicts the ones that follow, generating text word by word. Like all the others, it is still prone to making mistakes and giving incorrect information, but it stands out because, being a smaller model, it requires less power and fewer computing resources.
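To give a feel for the "predict the next word" idea, here is a deliberately tiny sketch, not Meta's code or anything resembling LLaMA's transformer architecture: a bigram model that counts which word follows each word in a small corpus and predicts the most frequent follower. The corpus string and function names are illustrative assumptions.

```python
# Toy next-word prediction via bigram counts.
# Real models like LLaMA use neural networks over huge corpora,
# but the training objective (predict the next token) is the same idea.
from collections import Counter, defaultdict


def train_bigram(corpus):
    """For each word, count how often each other word follows it."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model


def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]


corpus = "the model predicts the next word and the next word again"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "next" follows "the" most often here
```

A real language model replaces the raw counts with learned probabilities over tens of thousands of tokens, conditioned on long contexts rather than a single preceding word.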