We just found this in our back pocket and thought some of you might be interested in your own LLaMA.
Whether you create your own instance or just read about what went into the dataset and training of this one, it's still pretty interesting. The current release is only a preview of what the complete OpenLLaMA release will offer. We are currently focused on completing the training process on the entire RedPajama dataset. This will give us a good apples-to-apples comparison between the original LLaMA and our OpenLLaMA. Besides the 7B model, we are also training a smaller 3B model in the hope of facilitating language model usage in low-resource use cases. https://github.com/openlm-research/open_llama
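For anyone who wants to spin up their own instance, below is a minimal sketch of loading an OpenLLaMA checkpoint with the Hugging Face transformers library, assuming the weights have been converted to that format. The checkpoint id "openlm-research/open_llama_7b" and the exact model names are assumptions on our part; check the repository linked above for the actual checkpoint names.

# A minimal sketch, assuming a transformers-format OpenLLaMA checkpoint.
# The model id below is an assumed example; see the repo README for real names.
from transformers import LlamaTokenizer, LlamaForCausalLM

model_id = "openlm-research/open_llama_7b"  # assumed checkpoint id

tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)

# Generate a short completion from a simple prompt.
prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))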
Author: <see article>
These links serve as tributes to those who have written them. Please find contributor details in the links provided.