Claude-llm-trainer is a Google Colab project that fine-tunes Llama-2-7B from a single description of the downstream task.
Is it better than just putting the prompt in context though?
Cool system regardless
I always test in-context learning before committing to fine-tuning, so it depends a lot on the application. Fortunately, in this case comparing the two is not difficult, since the full fine-tune takes less than an hour, and even less with Colab Pro.