Udemy - LLM Reinforcement Learning Fine-Tuning DeepSeek Method GRPO

Category: Other
Type: Tutorials
Language: English
Total Size: 1.8 GB
Uploaded By: freecoursewb
Downloads: 36177
Last checked: Mar. 24th '26
Date uploaded: Mar. 24th '26
Seeders: 25477
Leechers: 9631
INFO HASH: F59A7AB8A353750371BF1F139AF5B0A92032B223

LLM Reinforcement Learning Fine-Tuning DeepSeek Method GRPO


Last updated 6/2025
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz, 2 Ch
Language: English | Duration: 3h 45m | Size: 1.85 GB

[EN] LLM Fine-Tuning and Reinforcement Learning with SFT, LoRA, DPO, and GRPO Custom Data HuggingFace

What you'll learn
You will grasp the core principles of Large Language Models (LLMs) and the overall structure behind their training processes.
You will learn the differences between base models and instruct models, as well as the methods for preparing data for each.
You’ll learn data preprocessing techniques and essential tips: how to identify the special tokens a model requires, how to understand data formats, and related preparation methods.
You’ll gain practical, hands-on experience and detailed knowledge of how LoRA and Data Collator work.
You’ll gain a detailed understanding of crucial hyperparameters used in training, including their purpose and how they function.
You’ll practically learn, in detail, how trained LoRA matrices are merged with the base model, as well as key considerations and best practices to follow during the merge process.
You’ll learn what Direct Preference Optimization (DPO) is, how it works, the expected data format, and the specific scenarios in which it’s used.
You’ll learn key considerations when preparing data for DPO, as well as understanding how the DPO data collator functions.
You’ll learn about the specific hyperparameters used in DPO training, their roles, and how they function.
You’ll learn how to upload your trained model to platforms like Hugging Face and manage hyperparameters effectively after training.
You’ll learn in detail how Group Relative Policy Optimization (GRPO), a reinforcement learning method, works, including an in-depth understanding of its learning process.
You’ll learn how to prepare data specifically for Group Relative Policy Optimization (GRPO).
You’ll learn how to create reward functions—the most critical aspect of Group Relative Policy Optimization (GRPO)—through various practical reward function examples.
You’ll learn thoroughly what format data should be provided to GRPO reward functions in, and how to process that data within the functions.
You’ll learn how to define rewards within functions and establish clear reward templates for GRPO.
You’ll practically learn numerous details, such as extracting reward-worthy parts from raw responses and defining rewards based on these extracted segments.
You’ll learn how to transform an Instruct model into one capable of generating “Chain of Thought” reasoning through GRPO (Group Relative Policy Optimization).
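To make the LoRA merge step above concrete, here is a minimal pure-Python sketch of the arithmetic: a trained low-rank update is folded into the base weight as W' = W + (alpha / r) * B @ A. The toy matrices and values are illustrative assumptions; in practice this is done on real checkpoints with peft's merge utilities.

```python
# Illustrative sketch of merging a trained LoRA adapter into a base
# weight matrix: W_merged = W + (alpha / r) * (B @ A).
# Pure-Python lists of rows stand in for real model tensors.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def merge_lora(W, A, B, alpha, r):
    """Fold the low-rank update (alpha / r) * B @ A into W."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j]
             for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy 2x2 base weight with a rank-1 adapter (r=1, alpha=2).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]          # r x d_in
B = [[0.5], [0.5]]        # d_out x r
merged = merge_lora(W, A, B, alpha=2, r=1)
print(merged)  # [[2.0, 1.0], [1.0, 2.0]]
```

Note that after merging, the adapter matrices can be discarded: inference runs on the single merged weight with no extra latency.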
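For the DPO data-preparation topics above, a short sketch of the preference-pair format that DPO training expects: each example pairs one prompt with a preferred ("chosen") and a dispreferred ("rejected") response, and the trainer (e.g. TRL's DPOTrainer) learns to raise the likelihood of the chosen response over the rejected one. The example texts are made up for illustration.

```python
# Minimal sketch of a DPO preference dataset: one prompt plus a
# chosen/rejected response pair per record.

dpo_dataset = [
    {
        "prompt": "Explain what a tokenizer does.",
        "chosen": "A tokenizer splits text into tokens the model maps to IDs.",
        "rejected": "It makes the text better.",
    },
]

# Every record must carry all three fields for the DPO data collator.
required = {"prompt", "chosen", "rejected"}
ok = all(required <= set(row) for row in dpo_dataset)
print(ok)  # True
```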
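The GRPO bullets above revolve around reward functions that score raw completions. A hypothetical sketch, assuming an `<answer>...</answer>` reward template: one function rewards following the format, another extracts the tagged segment and rewards correctness against a target. The tag names, scoring values, and function names are illustrative assumptions, not the course's exact code.

```python
import re

# Hypothetical GRPO-style reward functions: take a batch of raw
# completions, extract the reward-worthy part (text inside
# <answer>...</answer> tags), and return one float per completion.

ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)

def format_reward(completions):
    """1.0 if the completion follows the <answer> template, else 0.0."""
    return [1.0 if ANSWER_RE.search(c) else 0.0 for c in completions]

def correctness_reward(completions, target):
    """2.0 when the extracted answer matches the target, else 0.0."""
    rewards = []
    for c in completions:
        m = ANSWER_RE.search(c)
        extracted = m.group(1).strip() if m else None
        rewards.append(2.0 if extracted == target else 0.0)
    return rewards

outs = ["<answer>42</answer>", "the answer is 42"]
print(format_reward(outs))             # [1.0, 0.0]
print(correctness_reward(outs, "42"))  # [2.0, 0.0]
```

Stacking several such functions (format, correctness, length, style) and summing their outputs is a common way to shape GRPO training signals.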

Requirements
Basic knowledge of Python programming.
Introductory-level familiarity with artificial intelligence and machine learning concepts.
Ideally, prior experience with Jupyter Notebook or Google Colab.