Full fine-tuning recipe: DPO on Mistral Nemo 12B via LLaMA-Factory, targeting Lambda Labs 8xH100, with data mix and eval plan.
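A minimal sketch of how this entry could be driven: generate a LLaMA-Factory DPO config from Python and launch its CLI. Key names follow LLaMA-Factory's published DPO examples, but the dataset name, hyperparameters, and DeepSpeed config path are placeholders to verify against your installed version.

```python
# Sketch: write a LLaMA-Factory full-parameter DPO config and launch training.
import subprocess
import yaml

config = {
    "stage": "dpo",                      # LLaMA-Factory's preference-tuning stage
    "do_train": True,
    "model_name_or_path": "mistralai/Mistral-Nemo-Base-2407",
    "finetuning_type": "full",           # full fine-tune, not LoRA
    "pref_beta": 0.1,                    # DPO beta
    "pref_loss": "sigmoid",              # standard DPO loss
    "dataset": "dpo_en_demo",            # placeholder: swap in your preference mix
    "template": "mistral",
    "cutoff_len": 2048,
    "output_dir": "out/nemo12b-dpo-full",
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 5e-7,
    "num_train_epochs": 1.0,
    "bf16": True,
    # ZeRO-3 sharding so 12B full-FT state (~192GB) spreads across 8x H100.
    "deepspeed": "examples/deepspeed/ds_z3_config.json",
}

with open("nemo12b_dpo.yaml", "w") as f:
    yaml.safe_dump(config, f)

subprocess.run(["llamafactory-cli", "train", "nemo12b_dpo.yaml"], check=True)
```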
Full fine-tuning recipe: IPO on Mistral Small 3 via Hugging Face TRL, targeting 8x H100, with data mix and eval plan.
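In TRL, IPO is selected through DPOTrainer's loss_type rather than a separate trainer. A sketch under assumed model and dataset choices (the prompt/chosen/rejected schema is TRL's standard preference format; hyperparameters are illustrative):

```python
# Sketch: IPO via TRL's DPOTrainer (loss_type="ipo").
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "mistralai/Mistral-Small-24B-Base-2501"  # assumption: adjust to your checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference set with prompt/chosen/rejected columns.
train_ds = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="out/small3-ipo-full",
    loss_type="ipo",          # IPO: squared loss on the log-ratio margin
    beta=0.1,                 # tau in the IPO paper
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    bf16=True,
    gradient_checkpointing=True,
)

trainer = DPOTrainer(
    model=model,              # ref_model=None: TRL clones a frozen reference copy
    args=args,
    train_dataset=train_ds,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()
```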
Full fine-tuning recipe: IPO on Mistral Nemo 12B via Megatron-LM (which ships no built-in preference trainer, so the IPO objective is supplied as a custom loss; sketched below), targeting 8x H100, with data mix and eval plan.
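The custom piece is small. A standalone PyTorch version of the IPO objective, suitable for wiring into a Megatron-LM loss function once you have summed per-sequence log-probs for both completions under the policy and a frozen reference (this mirrors TRL's loss_type="ipo" formulation):

```python
# Sketch: the IPO objective as a plain PyTorch function.
import torch

def ipo_loss(
    policy_chosen_logps: torch.Tensor,    # [batch]
    policy_rejected_logps: torch.Tensor,  # [batch]
    ref_chosen_logps: torch.Tensor,       # [batch]
    ref_rejected_logps: torch.Tensor,     # [batch]
    beta: float = 0.1,                    # tau in the IPO paper
) -> torch.Tensor:
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    margin = chosen_logratios - rejected_logratios
    # IPO regresses the margin onto 1/(2*beta) with a squared loss.
    return ((margin - 1.0 / (2.0 * beta)) ** 2).mean()
```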
Full fine-tuning recipe: IPO on Qwen 2.5 32B sharded with PyTorch FSDP (FSDP is a parallelism strategy rather than a trainer, so it is driven here through TRL), targeting 8x H100, with data mix and eval plan.
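The FSDP wiring can live entirely in the trainer config: DPOConfig inherits the fsdp fields from transformers' TrainingArguments. A sketch, assuming a launch like `accelerate launch --num_processes 8 train_ipo_qwen.py`; the wrap-class name and fsdp_config keys should be checked against your transformers version:

```python
# Sketch: TRL IPO config with full-shard FSDP for a 32B model on 8 GPUs.
from trl import DPOConfig

args = DPOConfig(
    output_dir="out/qwen32b-ipo-full",
    loss_type="ipo",
    beta=0.1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    bf16=True,
    fsdp="full_shard auto_wrap",   # shard params, grads, and optimizer state
    fsdp_config={
        "transformer_layer_cls_to_wrap": ["Qwen2DecoderLayer"],
        "activation_checkpointing": True,
    },
)
```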
Full fine-tuning recipe: IPO on Qwen 2.5-Coder 7B via Hugging Face TRL, targeting 4x H100 (full fine-tuning a 7B model carries well over 100GB of training state, so a single 24GB RTX 4090 cannot hold it; see the memory estimate below), with data mix and eval plan.
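A back-of-envelope helper for sizing these runs, assuming mixed-precision AdamW (bf16 params and grads plus fp32 master weights and both Adam moments, 16 bytes per parameter). Activations, the IPO reference model, and fragmentation come on top, so treat the result as a floor:

```python
# Sketch: minimum training-state memory for full fine-tuning with AdamW.
def full_ft_state_gb(n_params_billions: float) -> float:
    bytes_per_param = 2 + 2 + 4 + 4 + 4  # bf16 param, bf16 grad, fp32 master, Adam m, Adam v
    return n_params_billions * bytes_per_param  # 1e9 params * 16 bytes == 16 GB

for name, size_b in [("Qwen2.5-Coder-7B", 7.6), ("Mistral-Nemo-12B", 12.2)]:
    print(f"{name}: ~{full_ft_state_gb(size_b):.0f} GB of training state")
# Qwen2.5-Coder-7B: ~122 GB -- hence 4x H100 rather than one 24GB card.
```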
Full fine-tuning recipe: IPO on Gemma 2 27B via Megatron-LM (same custom-loss caveat as the Nemo 12B entry), targeting 8x H100 (roughly 430GB of full-FT state rules out a single 24GB card), with data mix and eval plan.
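For the data-mix half of these entries, Hugging Face datasets can interleave preference sources by weight. Dataset names and mixing ratios below are placeholders for whatever the recipe's data plan specifies; both sources must share the prompt/chosen/rejected columns:

```python
# Sketch: a weighted preference-data mix.
from datasets import load_dataset, interleave_datasets

helpfulness = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
code_prefs = load_dataset("your-org/code-preferences", split="train")  # placeholder source

mix = interleave_datasets(
    [helpfulness, code_prefs],
    probabilities=[0.7, 0.3],            # 70/30 mix, tune to taste
    seed=42,
    stopping_strategy="all_exhausted",   # keep sampling until every source is used up
)
mix = mix.shuffle(seed=42)
```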
Full fine-tuning recipe: IPO on Phi-3.5-mini via PyTorch FSDP (again driven through TRL), targeting 4x RTX 3090 (96GB total; AdamW state for 3.8B parameters is ~60GB, beyond any single 24GB card), with data mix and eval plan.
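For the eval-plan half, lm-evaluation-harness exposes a Python entry point. Task names below exist in the harness; the checkpoint path is an assumption from the run above:

```python
# Sketch: post-training evals via lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=out/phi35-ipo-full,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag", "gsm8k"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```

Pairing a static benchmark sweep like this with a held-out preference-accuracy check on the IPO eval split gives both a regression signal and a direct measure of what the objective optimized.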
Full fine-tuning recipe: IPO on DeepSeek-V3 base via LLaMA-Factory, targeting a multi-node H100 cluster (a 671B-parameter MoE carries on the order of 10TB of full-FT state; no single consumer GPU is remotely in range), with data mix and eval plan.
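At this scale the DeepSpeed configuration does most of the work. A sketch of a ZeRO-3 config with parameter and optimizer offload, the kind of file the deepspeed: key in the LLaMA-Factory config above points at; values are illustrative, and "auto" defers to the trainer's settings:

```python
# Sketch: emit a DeepSpeed ZeRO-3 config with CPU offload.
import json

ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                                           # shard params, grads, optimizer
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,    # reassemble weights at save time
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_clipping": "auto",
}

with open("ds_z3_offload.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```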
Full fine-tuning recipe: IPO on Mixtral 8x7B via Axolotl, targeting 16x H100 across two nodes (the ~47B-parameter MoE needs ~750GB of full-FT state), with data mix and eval plan.
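A sketch of the Axolotl side, written as Python that emits the YAML and launches the CLI. Axolotl's rl: field routes preference objectives to TRL's trainers; key names follow its DPO examples, but the dataset path and type are placeholders, and the CLI entry point varies by release:

```python
# Sketch: an Axolotl IPO config generated from Python.
import subprocess
import yaml

cfg = {
    "base_model": "mistralai/Mixtral-8x7B-v0.1",
    "rl": "ipo",                    # preference objective handed to TRL
    "datasets": [
        # Placeholder path and type; match these to your data's actual format.
        {"path": "your-org/mixtral-pref-mix", "split": "train", "type": "chatml.intel"}
    ],
    "sequence_len": 2048,
    "micro_batch_size": 1,
    "gradient_accumulation_steps": 16,
    "learning_rate": 5e-7,
    "num_epochs": 1,
    "bf16": True,
    "gradient_checkpointing": True,
    "output_dir": "out/mixtral-ipo-full",
}

with open("mixtral_ipo.yml", "w") as f:
    yaml.safe_dump(cfg, f)

# Older releases use: accelerate launch -m axolotl.cli.train mixtral_ipo.yml
subprocess.run(["axolotl", "train", "mixtral_ipo.yml"], check=True)
```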
Full fine-tuning recipe: IPO on Yi 1.5 34B via LitGPT (which has no built-in preference objective, so the loss and log-prob plumbing are custom; see the helper below), targeting 8x H100 (~540GB of full-FT state), with data mix and eval plan.
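The piece a custom LitGPT preference loop needs, beyond the ipo_loss function sketched earlier, is per-sequence log-probs: sum log p(token) over completion positions while masking the prompt. A self-contained helper:

```python
# Sketch: summed completion log-probs from model logits.
import torch

def sequence_logps(logits: torch.Tensor, labels: torch.Tensor,
                   loss_mask: torch.Tensor) -> torch.Tensor:
    """logits: [B, T, V]; labels: [B, T]; loss_mask: [B, T] with 1 on
    completion tokens and 0 on prompt/padding. Returns shape [B]."""
    logits = logits[:, :-1, :]          # position t predicts token t+1
    labels = labels[:, 1:]
    mask = loss_mask[:, 1:]
    logps = torch.log_softmax(logits, dim=-1)
    token_logps = torch.gather(logps, 2, labels.unsqueeze(-1)).squeeze(-1)
    return (token_logps * mask).sum(dim=-1)
```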
Full fine-tuning recipe: IPO on Llama 3.3 70B via torchtune (its DPO recipes with the loss swapped for IPO), targeting 2x 8x H100 nodes (~1.1TB of full-FT state, far beyond 2x RTX 4090), with data mix and eval plan.
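At 70B scale it helps to precompute the frozen reference model's log-probs once, so the reference never shares GPU memory with the training run. A sketch under assumed dataset schema and checkpoint choice; for simplicity it scores the full sequence, whereas in practice prompt tokens should be masked out (TRL offers a built-in switch for the same idea, precompute_ref_log_probs):

```python
# Sketch: cache reference log-probs as extra dataset columns.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"  # reference = the init checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
ref = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # shard across available GPUs
)
ref.eval()

@torch.no_grad()
def ref_logp(text: str) -> float:
    enc = tok(text, return_tensors="pt").to(ref.device)
    logits = ref(**enc).logits[:, :-1]          # position t predicts token t+1
    targets = enc["input_ids"][:, 1:]
    lp = torch.log_softmax(logits, dim=-1).gather(2, targets.unsqueeze(-1))
    return lp.sum().item()  # simplification: mask prompt tokens in practice

ds = load_dataset("your-org/llama-pref-mix", split="train")  # placeholder dataset
ds = ds.map(lambda r: {
    "ref_chosen_logp": ref_logp(r["prompt"] + r["chosen"]),
    "ref_rejected_logp": ref_logp(r["prompt"] + r["rejected"]),
})
```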
Full fine-tuning recipe: IPO on Llama 3.1 8B via Axolotl, targeting AWS g5.12xlarge (4x A10G, 96GB total; full FT of an 8B model needs ZeRO-3 or FSDP with CPU optimizer offload to fit), with data mix and eval plan.
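Finally, every recipe above assumes data already in the prompt/chosen/rejected schema. A sketch of normalizing a raw pairwise source into that shape; the source dataset and its column names are placeholders:

```python
# Sketch: reshape raw pairwise feedback into preference records.
from datasets import load_dataset

raw = load_dataset("your-org/raw-pairwise-feedback", split="train")  # placeholder source

def to_preference(row):
    better, worse = (
        (row["answer_a"], row["answer_b"])
        if row["preferred"] == "a"
        else (row["answer_b"], row["answer_a"])
    )
    return {"prompt": row["question"], "chosen": better, "rejected": worse}

prefs = raw.map(to_preference, remove_columns=raw.column_names)
prefs.push_to_hub("your-org/llama31-8b-ipo-mix")  # optional: publish the finished mix
```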