Of course, when you think AI, you think GPU. But NPU and LPU companies are game changers.
Main characteristics of an LLM
Model size (parameters): see the parameter-count sketch below
Pre-trained
Fine-tuned for chat use cases
Main steps: Dataset > Pre-processing > Pre-training > Post-training > Optimization
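To make "model size (parameters)" concrete, here is a minimal sketch of counting a model's parameters. It assumes the Hugging Face transformers library and the public gpt2 checkpoint (both are assumptions, not named above); the point is simply that model size is the total number of trainable weights.

```python
# Minimal sketch: "model size" is just the total count of trainable weights.
# Assumes the Hugging Face transformers library and the public "gpt2" checkpoint.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # GPT-2 small, ~124M parameters

total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e6:.1f}M")
```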
In post-training, you can use:
Supervised Fine-Tuning (SFT): Create the prompt -> Write the response
Reinforcement Learning from Human Feedback (RLHF): Create the prompt -> AI-generated responses -> Humans choose the best response
Of course, LLMs mix both of them (see the data-format sketch below)
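To show the difference between the two post-training data formats, here is a minimal sketch in plain Python. The field names (prompt, response, chosen, rejected) and the example texts are illustrative assumptions, not a specific dataset schema: SFT pairs a prompt with a human-written response, while RLHF preference data pairs a prompt with a preferred (chosen) and a less-preferred (rejected) model response.

```python
# Illustrative data shapes only; field names and examples are assumptions,
# not a specific dataset schema.

# SFT: a human writes the target response for each prompt.
sft_example = {
    "prompt": "Explain what an NPU is in one sentence.",
    "response": "An NPU is a processor specialized for neural-network workloads.",
}

# RLHF preference data: the model generates several candidate responses,
# and a human labels which one is better.
rlhf_example = {
    "prompt": "Explain what an NPU is in one sentence.",
    "chosen": "An NPU is a chip built to accelerate neural-network inference and training.",
    "rejected": "NPU means new processing unit.",
}

# These preferences are typically used to train a reward model, which should
# score the chosen response above the rejected one.
def reward_model_is_consistent(chosen_score: float, rejected_score: float) -> bool:
    return chosen_score > rejected_score

assert reward_model_is_consistent(0.9, 0.2)
```

In practice, such preference pairs either train a reward model used during the reinforcement-learning step or are consumed directly by preference-based methods such as DPO.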
For RLHF, one company the market trusts: https://www.turing.com/
Helpfulness: the capacity of an LLM to give accurate, precise, and useful responses
Frontier model: a general-purpose LLM that is best in class across several domains