Chapter 5: Pretraining on Unlabeled Data

 

Main Chapter Code

  • 01_main-chapter-code contains the main chapter code

Bonus Materials

  • 02_alternative_weight_loading contains code to load the GPT model weights from alternative sources in case the model weights become unavailable from OpenAI
  • 03_bonus_pretraining_on_gutenberg contains code to pretrain the LLM longer on the whole corpus of books from Project Gutenberg
  • 04_learning_rate_schedulers contains code implementing a more sophisticated training function that includes a learning rate scheduler and gradient clipping (a minimal scheduler sketch follows this list)
  • 05_bonus_hparam_tuning contains an optional hyperparameter tuning script
  • 06_user_interface implements an interactive user interface for interacting with the pretrained LLM
  • 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2, along with code to load the pretrained weights from Meta AI (a RoPE sketch follows this list)
  • 08_memory_efficient_weight_loading contains a bonus notebook showing how to load model weights more efficiently via PyTorch's load_state_dict method (see the sketch after this list)
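
As a quick illustration of the ideas in 04_learning_rate_schedulers, below is a minimal sketch (not the notebook's actual implementation) of a linear-warmup plus cosine-decay learning rate schedule combined with gradient clipping; the model, dummy loss, and hyperparameter values are placeholders:

```python
import math
import torch

def get_lr(step, max_steps, warmup_steps, peak_lr, min_lr):
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Placeholder model and optimizer for illustration only
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0)
max_steps, warmup_steps = 100, 10

for step in range(max_steps):
    # Set the learning rate for the current step
    lr = get_lr(step, max_steps, warmup_steps, peak_lr=5e-4, min_lr=1e-5)
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr

    optimizer.zero_grad()
    loss = model(torch.randn(4, 8)).pow(2).mean()  # dummy loss
    loss.backward()
    # Clip gradients to a maximum global norm of 1.0 before the optimizer step
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```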
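
A central step in the 07_gpt_to_llama conversion is swapping GPT's learned absolute position embeddings for rotary position embeddings (RoPE), which rotate the query and key vectors by position-dependent angles. The snippet below is a simplified, self-contained RoPE sketch (illustrative only, not the notebook's exact code):

```python
import torch

def precompute_rope_params(head_dim, context_length, theta_base=10_000.0):
    # Per-dimension rotation frequencies and per-position angles
    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(context_length).float()
    angles = positions[:, None] * inv_freq[None, :]   # (context_length, head_dim // 2)
    angles = torch.cat([angles, angles], dim=1)       # (context_length, head_dim)
    return torch.cos(angles), torch.sin(angles)

def apply_rope(x, cos, sin):
    # x: (batch, num_heads, seq_len, head_dim)
    seq_len, head_dim = x.shape[2], x.shape[3]
    x1, x2 = x[..., : head_dim // 2], x[..., head_dim // 2:]
    rotated = torch.cat([-x2, x1], dim=-1)
    return x * cos[:seq_len] + rotated * sin[:seq_len]

# Example usage with placeholder dimensions
cos, sin = precompute_rope_params(head_dim=16, context_length=8)
queries = torch.randn(2, 4, 6, 16)  # (batch, num_heads, seq_len, head_dim)
queries_rot = apply_rope(queries, cos, sin)
```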
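
Similarly, the core idea behind 08_memory_efficient_weight_loading can be summarized in a few lines: instantiate the model on the "meta" device so its parameters take no memory, then memory-map the checkpoint and assign the loaded tensors directly instead of copying them. The sketch below uses a toy model and file name as placeholders and assumes PyTorch >= 2.1 (for the mmap and assign arguments):

```python
import torch

# Save a toy checkpoint so the example is self-contained
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Linear(64, 64))
torch.save(model.state_dict(), "model.pth")  # placeholder file name

# Instantiate the model on the "meta" device so no real weight memory is allocated
with torch.device("meta"):
    empty_model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Linear(64, 64))

# mmap=True memory-maps the checkpoint instead of reading it all into RAM;
# assign=True reuses the loaded tensors directly rather than copying them
state_dict = torch.load("model.pth", map_location="cpu", mmap=True, weights_only=True)
empty_model.load_state_dict(state_dict, assign=True)
```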