Gradient Boosting
Ankit Tomar, July 1, 2025

As we continue our journey into ML algorithms, in this post we'll go deeper into gradient boosting — how it works, what's happening behind the scenes mathematically, and why it performs so well.

🌟 What is gradient boosting?

Gradient boosting is an ensemble method in which multiple weak learners (usually shallow decision trees) are combined sequentially. Each new tree corrects the errors (residuals) of the combined previous trees.

🧠 How does it actually work?

1. Initial prediction: start with a simple model, such as predicting the mean target value.
2. Compute residuals: find the difference between the true values and the current predictions.
3. Fit a new tree: train a tree to predict these residuals (i.e., the model's mistakes).
4. Update: add this new tree's output to the current prediction, scaled by a learning rate.
5. Repeat: build many such trees iteratively. The final prediction is the sum of all the trees.

A from-scratch code sketch of these steps is included at the end of this post.

🧮 Why is it called "gradient" boosting?

At each step, instead of just predicting residuals, the algorithm fits a tree to the negative gradient of the loss function (how the error changes as the predictions change). This is a form of numerical optimization: we take steps in the direction that reduces the error most quickly.

For example, with mean squared error (MSE), the negative gradient is simply the residual (actual − predicted). For log loss (classification), the gradient is different. This makes gradient boosting very flexible — it can optimize almost any differentiable loss function.

✏️ How does it pick the best split in each tree?

When building each tree:

- For each feature and threshold, it computes how much splitting at that point reduces the chosen loss (e.g., MSE or log loss).
- It picks the split with the highest improvement.
- Efficient calculation: libraries like XGBoost and LightGBM use clever tricks (histograms, sampling) to make this fast even on large datasets.

📐 Formulas that help in interviews

Gini impurity: $\text{Gini} = 1 - \sum_{k} p_k^2$

Entropy: $H = -\sum_{k} p_k \log_2 p_k$

In regression, the typical objective is to minimize mean squared error: $L = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

Its negative gradient with respect to each prediction is proportional to the residual $(y_i - \hat{y}_i)$, and it tells us how to adjust the predictions to reduce this error.

⚙️ Why is gradient boosting powerful?

- Focuses learning on hard-to-predict data.
- Works with different loss functions.
- Builds complex nonlinear models.
- Can handle numerical and categorical data.

But it can overfit, so tuning is essential.

🛡️ How to control overfitting

- Reduce tree depth.
- Use a lower learning rate.
- Add subsampling (random rows or columns).
- Add regularization such as shrinkage.

We will discuss XGBoost, CatBoost and LightGBM in upcoming blogs.
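To make the steps above concrete, here is a minimal from-scratch sketch of gradient boosting for regression. Treat it as an illustration of the idea, not a reference implementation: the function names, the choice of scikit-learn's DecisionTreeRegressor as the weak learner, and the specific learning rate and depth are assumptions made for the example.

```python
# Minimal from-scratch gradient boosting for regression (illustrative sketch).
# Assumes scikit-learn is available; the weak learner, depth, and learning
# rate below are arbitrary choices for demonstration, not recommendations.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_trees=100, learning_rate=0.1, max_depth=2):
    """Fit shallow trees to residuals, following the steps in the post."""
    # 1. Initial prediction: the mean of the target.
    f0 = y.mean()
    pred = np.full_like(y, f0, dtype=float)
    trees = []
    for _ in range(n_trees):
        # 2. Compute residuals (the negative gradient of MSE, up to a constant).
        residuals = y - pred
        # 3. Fit a shallow tree to the residuals.
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # 4. Update the prediction, scaled by the learning rate.
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict(X, f0, trees, learning_rate=0.1):
    # 5. Final prediction: initial value plus the sum of all tree outputs.
    pred = np.full(X.shape[0], f0, dtype=float)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
f0, trees = fit_gradient_boosting(X, y)
print(predict(X[:5], f0, trees))
```

The overfitting controls listed above map directly onto hyperparameters in library implementations. Continuing with the same X and y, a short scikit-learn example follows; the values are illustrative starting points, not tuned settings.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Shallow trees, a small learning rate, and row subsampling all act as
# regularizers; the values here are illustrative starting points only.
model = GradientBoostingRegressor(
    n_estimators=500,
    learning_rate=0.05,  # shrinkage: smaller steps per tree
    max_depth=3,         # shallow trees reduce variance
    subsample=0.8,       # random row sampling per tree
)
model.fit(X, y)
```

XGBoost, CatBoost and LightGBM expose the same kinds of knobs under similar names (learning rate, maximum depth, row and column subsampling).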