Webinar "How to Instruction Tune a Base Language Model"
Thu Dec 7, 2023 10:00 AM - 11:00 AM PST
Online
Description
The Data Phoenix team invites you to our upcoming webinar, taking place on December 7 at 10 am PST.
- Topic: How to Instruction Tune a Base Language Model
- Speaker: Harpreet Sahota (Deep Learning Developer Relations Manager at Deci)
- Participation: free (registration required)
While LLMs have showcased exceptional language understanding, tailoring them for specific tasks can pose a challenge. This webinar delves into the nuances of supervised fine-tuning, instruction tuning, and the powerful techniques that bridge the gap between model objectives and user-specific requirements.
Here's what we'll cover:
- Specialized Fine-Tuning: Adapt LLMs for niche tasks using labeled data.
- Introduction to Instruction Tuning: Enhance LLM capabilities and controllability.
- Dataset Preparation: Format datasets for effective instruction tuning (see the formatting sketch after this list).
- BitsAndBytes & Model Quantization: Optimize memory and speed with the BitsAndBytes library.
- PEFT & LoRA: Understand the benefits of the PEFT library from Hugging Face and the role of LoRA in fine-tuning.
- TRL Library Overview: Delve into the TRL (Transformers Reinforcement Learning) library's functionalities.
- SFTTrainer Explained: Navigate the SFTTrainer class from TRL for efficient supervised fine-tuning (see the end-to-end sketch after this list).
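As a taste of the dataset-preparation step, here is a minimal, illustrative sketch of flattening instruction records into a single training text column. It assumes Alpaca-style records with instruction/input/output fields; the dataset name, prompt template, and the "text" column are assumptions for illustration, not necessarily what the webinar will use.

```python
# Minimal sketch: format an instruction dataset for supervised fine-tuning.
# Assumes Alpaca-style records with "instruction", "input", and "output" fields;
# the prompt template and dataset name below are illustrative.
from datasets import load_dataset

PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(example):
    """Flatten one record into a single training string under a 'text' key."""
    example["text"] = PROMPT_TEMPLATE.format(
        instruction=example["instruction"],
        input=example.get("input", ""),
        output=example["output"],
    )
    return example

# "tatsu-lab/alpaca" is used here purely as an example instruction dataset.
dataset = load_dataset("tatsu-lab/alpaca", split="train")
dataset = dataset.map(format_example)
print(dataset[0]["text"][:300])
```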
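And here is a hedged, end-to-end sketch of how bitsandbytes quantization, a PEFT LoRA config, and TRL's SFTTrainer typically fit together. The base model name, hyperparameters, and the reuse of the "text" column from the previous sketch are assumptions, and the exact SFTTrainer keyword arguments vary across TRL versions.

```python
# Minimal sketch of QLoRA-style fine-tuning with bitsandbytes, PEFT, and TRL.
# Model name and hyperparameters are illustrative; SFTTrainer's keyword
# arguments differ between TRL versions.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, prepare_model_for_kbit_training
from trl import SFTTrainer

model_id = "mistralai/Mistral-7B-v0.1"  # illustrative base model

# 4-bit NF4 quantization via bitsandbytes to reduce memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA: train small low-rank adapter matrices instead of the full weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

training_args = TrainingArguments(
    output_dir="instruction-tuned-model",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,      # the formatted dataset from the sketch above
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    peft_config=lora_config,
)
trainer.train()
```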
Speaker:
I'm a Data Scientist turned Generative AI practitioner who loves to learn, hack, and share what I figure out along the way. I hold graduate degrees in math and statistics, have worked as an actuary and a biostatistician, and during my data science career I built two data science teams from scratch. I've been in DevRel for the last couple of years, with a focus more on product and developer experience than on content.
Follow Data Phoenix on LinkedIn & YouTube to stay up to date with our community events and the latest AI/data industry news.