# LLaMA2-Accessory 🚀

LLaMA2-Accessory is an open-source toolkit for pre-training, fine-tuning and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo is mainly inherited from LLaMA-Adapter with more advanced features. 🧠

## News

- [2023.08.05] We release the multimodal fine-tuning code and checkpoints 🔥🔥🔥
- [2023.07.23] Initial release 📌

## Features
* 💡 Support More Datasets and Tasks
  - 🎯 Pre-training with RefinedWeb and StarCoder.
  - 📚 Single-modal fine-tuning with Alpaca, ShareGPT, LIMA, UltraChat and MOSS.
  - 🌈 Multi-modal fine-tuning with image-text pairs (LAION, COYO and more), interleaved image-text data (MMC4 and OBELISC), and visual instruction data (LLaVA, Shikra, Bard).
  - 🔧 LLM for API Control (GPT4Tools and Gorilla).
* ⚡ Efficient Optimization and Deployment
  - 🚝 Parameter-efficient fine-tuning with Zero-init Attention and Bias-norm Tuning (minimal sketches of both follow this list).
  - 💻 Fully Sharded Data Parallel (FSDP), Flash Attention 2 and QLoRA (see the FSDP and LoRA sketches below).
* 🏋️‍♀️ Support More Visual Encoders and LLMs
  - 👁️‍🗨️ Visual Encoders: CLIP, Q-Former and ImageBind.
  - 🧩 LLMs: LLaMA and LLaMA2.
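
For readers curious how Zero-init Attention works, here is a minimal PyTorch sketch of the idea from LLaMA-Adapter: learnable adapter prompts are attended to through a per-head gating factor initialized to zero, so fine-tuning starts exactly at the pretrained model's behavior. Module and parameter names are illustrative, and causal masking / rotary embeddings are omitted; this is not LLaMA2-Accessory's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitGatedAttention(nn.Module):
    """Self-attention plus gated, zero-initialized adapter prompts (illustrative)."""

    def __init__(self, dim: int, n_heads: int, n_prompts: int):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)
        # Learnable adapter prompts, served as extra keys/values.
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        # The key trick: a per-head gate initialized to zero, so the adapter
        # contributes nothing at step 0 and pretrained behavior is preserved.
        self.gate = nn.Parameter(torch.zeros(n_heads, 1, 1))

    def _split(self, t: torch.Tensor, batch: int) -> torch.Tensor:
        # (B, L, dim) -> (B, n_heads, L, head_dim)
        return t.view(batch, -1, self.n_heads, self.head_dim).transpose(1, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        q = self._split(self.wq(x), B)
        k = self._split(self.wk(x), B)
        v = self._split(self.wv(x), B)
        pk = self._split(self.wk(self.prompts).expand(B, -1, -1), B)
        pv = self._split(self.wv(self.prompts).expand(B, -1, -1), B)

        scale = self.head_dim ** -0.5
        # Ordinary attention over the input tokens (causal mask omitted for brevity).
        token_out = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v
        # Adapter attention is softmaxed separately, then scaled by the zero-init gate.
        prompt_out = F.softmax(q @ pk.transpose(-2, -1) * scale, dim=-1) @ pv
        out = token_out + self.gate * prompt_out
        return self.wo(out.transpose(1, 2).reshape(B, T, C))
```

In LLaMA-Adapter the prompts are inserted only into the topmost transformer layers; the sketch shows a single layer for clarity.
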
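Bias-norm Tuning is the other parameter-efficient option named above: the whole model is frozen except bias vectors and normalization weights, leaving only a tiny fraction of parameters trainable. A minimal sketch, assuming LLaMA-style parameter naming (norm layers contain "norm" in their names); the exact set of unfrozen tensors in LLaMA2-Accessory may differ.

```python
import torch.nn as nn

def apply_bias_norm_tuning(model: nn.Module) -> None:
    """Freeze everything except biases and normalization weights (illustrative)."""
    for name, param in model.named_parameters():
        # Assumption: norm layers are identifiable by "norm" in their parameter
        # names, as in common LLaMA implementations (attention_norm, ffn_norm, ...).
        param.requires_grad = name.endswith("bias") or "norm" in name.lower()

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable:,} / {total:,} params ({100 * trainable / total:.2f}%)")
```
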
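FSDP shards parameters, gradients and optimizer state across ranks instead of replicating them on every GPU. Below is a minimal sketch using PyTorch's built-in wrapper; `build_model` is a placeholder for any `nn.Module`, not an actual LLaMA2-Accessory entry point.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = build_model().cuda()  # placeholder: any torch.nn.Module
model = FSDP(model)           # each rank now stores only a shard of the weights

# Create the optimizer *after* wrapping, over the sharded parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```
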
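QLoRA combines 4-bit quantization of the frozen base weights with LoRA adapters; the quantization half needs dedicated kernels (e.g. bitsandbytes), so this sketch shows only the LoRA half: a trainable low-rank update B·A added to a frozen linear layer, with B zero-initialized so the adapted layer starts identical to the original. Class and hyperparameter names are illustrative.

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad = False  # pretrained weight stays frozen
        # Low-rank factors: update = (alpha / r) * B @ A, with rank r << min(in, out).
        self.lora_a = nn.Parameter(torch.empty(r, base.in_features))
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))
        self.scaling = alpha / r  # zero-init B => the update starts at exactly zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```
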
## Installation

See docs/install.md.

## Training & Inference

See docs/pretrain.md and docs/finetune.md.

## Demos

* Instruction-tuned LLaMA2: alpaca & gorilla.
* Chatbot LLaMA2: dialog_sharegpt & dialog_lima & llama2-chat.
* Multimodal LLaMA2: in-context
## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.