<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Untitled Publication]]></title><description><![CDATA[Untitled Publication]]></description><link>https://blog.gopinathbalu.com</link><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 10:57:12 GMT</lastBuildDate><atom:link href="https://blog.gopinathbalu.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Understanding Quantization: Part I]]></title><description><![CDATA[Introduction
Quantization, in general, can be defined as mapping values from a large set of real numbers (e.g., FP32 or even FP16) to values in a small discrete set, most commonly Int8 or Int4. There are recent works that even attempt to map models to 1-bit weights.
Typically ...]]></description><link>https://blog.gopinathbalu.com/understanding-quantization-part-i</link><guid isPermaLink="true">https://blog.gopinathbalu.com/understanding-quantization-part-i</guid><category><![CDATA[faster inference]]></category><category><![CDATA[llm]]></category><category><![CDATA[quantization]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Gopinath Balu]]></dc:creator><pubDate>Thu, 26 Sep 2024 15:47:56 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Quantization, in general, can be defined as mapping values from a large set of real numbers (e.g., FP32 or even FP16) to values in a small discrete set, most commonly Int8 or Int4. There are recent works that even attempt to map models to 1-bit weights.</p>
<p>Typically, it involves mapping continuous inputs to fixed values at the output. Quantization in digital signal processing can usually be achieved in two ways:</p>
<ul>
<li><p>Rounding</p>
</li>
<li><p>Truncating</p>
</li>
</ul>
<h3 id="heading-rounding">Rounding</h3>
<p>Mapping a value to its nearest integer:</p>
<p>\(1.8 \rightarrow 2\)</p>
<p>\(1.4 \rightarrow 1\)</p>
<h3 id="heading-truncating">Truncating</h3>
<p>Simply discarding the fractional part (or the least significant digits):</p>
<p>\(1.8 \rightarrow 1\)</p>
<p>\(1.4 \rightarrow 1\)</p>
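<p>In Python, these two schemes correspond to the built-in <code>round()</code> and the standard library's <code>math.trunc()</code>:</p>

```python
import math

# Rounding maps to the nearest integer; truncation drops the fractional part.
for x in (1.8, 1.4):
    print(x, "->", round(x), "(rounded),", math.trunc(x), "(truncated)")
```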
<h2 id="heading-motivation">Motivation</h2>
<p>The main motivation is to improve inference speed. Needless to say, both training and inference of LLMs are really costly, and with the recent advent of very large language models the memory footprint keeps growing. Some common representations of model weights are:</p>
<ul>
<li><p>FP32</p>
</li>
<li><p>FP16</p>
</li>
<li><p>BF16</p>
</li>
<li><p>TF32</p>
</li>
<li><p>Int8</p>
</li>
</ul>
<p>In general, floating-point numbers offer a broad range of values, consume relatively little memory, and enable the execution of complex computations with ease. They also tend to deliver fast performance and come in various formats, making them suitable for a wide range of applications including gaming, simulations, and machine learning.</p>
<h2 id="heading-how-numbers-are-represented-as-floating-points">How are numbers represented as floating points?</h2>
<p>As you are aware, the floating point data type is widely used for its ability to represent a large range of values, including fractional components. This makes it particularly useful in scenarios that require scientific notation, such as scientific computations and precise data representation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727367010212/e7be985f-17ed-4a0f-b302-e2529092a057.png" alt class="image--center mx-auto" /></p>
<p>So this is how the floating point value 34.4669 is represented under the hood. To break it down part by part:</p>
<ul>
<li><p>Sign: whether the value is positive or negative.</p>
</li>
<li><p>Mantissa: the significand, i.e., the significant digits of the number.</p>
</li>
<li><p>Exponent: the integer power applied to the base, combined with the significand.</p>
</li>
<li><p>Base: the base in which values are encoded (2 for binary floating point).</p>
</li>
</ul>
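<p>A quick way to see these fields in practice, sketched in Python with the standard <code>struct</code> module (the value 34.4669 is the one from the figure above):</p>

```python
import struct

def fp32_fields(x: float) -> tuple[str, str, str]:
    """Split the IEEE 754 single-precision bit pattern of x into its fields."""
    (raw,) = struct.unpack(">I", struct.pack(">f", x))  # reinterpret float bits as uint32
    bits = f"{raw:032b}"
    return bits[0], bits[1:9], bits[9:]  # sign (1 bit), exponent (8 bits), mantissa (23 bits)

sign, exponent, mantissa = fp32_fields(34.4669)
print(f"sign={sign} exponent={exponent} mantissa={mantissa}")
```

<p>Since \(34.4669 \in [32, 64)\), the unbiased exponent is 5 and the stored (biased) exponent is \(5 + 127 = 132\), i.e., <code>10000100</code>.</p>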
<h3 id="heading-commonly-used-data-types-in-machine-learning">Commonly used data types in Machine Learning</h3>
<ol>
<li><p>Floating point32 → Single/Full precision</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727365854588/0cfe3869-d702-4153-969a-16b07b794593.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Floating point16 → Half precision</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727365875809/02bc0828-186d-4227-8f26-c7f40482c290.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Brain Float16 → Half precision</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727365889867/98dbcf8e-7df8-436e-8506-53db6e166311.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Tensor Float32</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727365902810/3b58635a-0940-4d0b-bf92-b1b37aec359f.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>TF32 offers a significant advantage in that it requires minimal compiler support, limited mainly to the bare-bones level, particularly within the CUDA compiler. For the rest of the code base, TF32 behaves like FP32, maintaining the same dynamic range but with reduced precision. Leveraging TF32 is primarily about adjusting library calls to specify whether using TF32 is acceptable. Its design allows for quick integration, unlike BF16 and FP16, which require more work since they involve different bit layouts, enabling developers to harness the performance benefits of Tensor Cores with minimal effort.</p>
<p>Now that we have revised the floating point representation, we dive right back into Quantization.</p>
<p>In simple terms, quantization is the conversion of weights from a higher-memory format to a lower-memory format. There are two cases of quantization.</p>
<h3 id="heading-i-uniform-quantization">I) Uniform Quantization.</h3>
<p>In uniform quantization, the conversion maps the input to the output with a linear function, resulting in uniformly spaced outputs for uniformly spaced inputs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727365928387/ae907849-290e-41db-8b4f-f2d01dee4a85.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-ii-non-uniform-quantization">II) Non-Uniform Quantization.</h3>
<p>The mapping in this case is a non-linear function so the output wouldn’t be uniformly spaced for uniformly spaced inputs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727365981719/f7a41dbe-1c86-4f04-a0bb-dec18ffda1a4.png" alt class="image--center mx-auto" /></p>
<p>It’s quite easy to see that the input values are not equally distributed, so they are not equally represented in the output representation.</p>
<h2 id="heading-uniform-quantization">Uniform Quantization.</h2>
<p><strong>I) Symmetric Quantization</strong></p>
<p>In uniform quantization, the linear mapping function typically involves scaling, rounding, or both.</p>
<p>$$\begin{array}{l} Q = \mathrm{round}\left(\dfrac{x}{S}\right) \\ \text{where:} \\ x = \text{original floating point value} \\ S = \text{scaling factor} \end{array}$$</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727366339767/ab6ca861-8b1c-4ab2-a3ed-2b03e61dd36b.png" alt class="image--center mx-auto" /></p>
<p>To transform the input representation into a lightweight quantized representation, a scaling factor \(S\) is involved. This scaling factor restricts the Float32 values to the Int8 range of \(-127\) to \(127\), and when the zero point of the input maps exactly onto the zero point of the output, it is called symmetric quantization.</p>
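<p>A minimal NumPy sketch of symmetric Int8 quantization (the example tensor and the max-abs calibration are illustrative assumptions, not a prescribed recipe):</p>

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int = 8):
    """Symmetric uniform quantization: Q = round(x / S), with zero point Z = 0."""
    qmax = 2 ** (bits - 1) - 1              # 127 for Int8
    scale = np.abs(x).max() / qmax          # S maps the largest |x| onto +/-127
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

x = np.array([-1.5, -0.4, 0.0, 0.6, 1.5], dtype=np.float32)
q, s = quantize_symmetric(x)
print(q)   # input zero stays at zero; the extremes land on -127 and 127
```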
<p><strong>II) Asymmetric Quantization</strong></p>
<p>For symmetric quantization, the transformed range should fall within \(-127\) to \(127\) to map the input values to Int8. But if the range goes off the rails, say \(-129\) to \(125\), the input zero is usually pushed away from the output zero; this is called asymmetric quantization. Since zero is off its place, we account for it by adding a zero point, and the equation becomes</p>
<p>$$\begin{array}{l} Q = \mathrm{round}\left(\dfrac{x}{S}\right) + Z \\ \text{where:} \\ Z = \text{zero point} \end{array}$$</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727366360067/8666a6a9-487e-4244-925a-0b724fc81923.png" alt class="image--center mx-auto" /></p>
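<p>The asymmetric case can be sketched the same way; here the zero point <code>Z</code> shifts the range so that the minimum input lands on the lowest Int8 value (the example tensor is an assumption for illustration):</p>

```python
import numpy as np

def quantize_asymmetric(x: np.ndarray, bits: int = 8):
    """Asymmetric uniform quantization: Q = round(x / S) + Z."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1   # -128, 127 for Int8
    scale = (x.max() - x.min()) / (qmax - qmin)            # S = (x_max - x_min) / (2^b - 1)
    zero_point = int(round(qmin - x.min() / scale))        # Z: where real 0.0 lands
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

x = np.array([-2.0, 0.0, 1.0], dtype=np.float32)
q, s, z = quantize_asymmetric(x)
print(q, z)   # the real zero maps to the (nonzero) zero point z
```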
<h2 id="heading-what-is-scaling-and-zero-factor">What are the scaling factor and zero point?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727366383664/16706cf6-18ed-4d51-ba42-270784417328.png" alt class="image--center mx-auto" /></p>
<p>The scaling factor is a number that divides the entire input range, from \(x_{min}\) to \(x_{max}\), into uniform partitions. We can choose to clip the input range beyond its most dense regions, because values beyond \(x_{min}\) and \(x_{max}\) simply fall onto the min and max of the output range, i.e., \(-127\) and \(127\) here. The process of choosing \(\alpha\) and \(\beta\), which determine the clipping range, is called <strong>Calibration</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727366390735/971c3460-159a-4903-a4a5-5543c0a58d72.png" alt class="image--center mx-auto" /></p>
<p>To avoid excessive clipping, we can compute the scaling factor as</p>
<p>$$\begin{array}{l} S = \dfrac{\beta-\alpha}{2^b-1} = \dfrac{x_{max}-x_{min}}{2^b-1} \\ \text{where:} \\ \alpha = x_{min} \\ \beta = x_{max} \\ b = \text{quantization bit width} \end{array}$$</p><p>Sometimes the \(x_{min}\) and \(x_{max}\) values have different magnitudes and end up asymmetric, e.g., \(x_{min} = -2.0\) and \(x_{max} = 1.0\). To constrain this to a symmetric quantization:</p>
<p>$$\begin{array}{l} -\alpha = \beta = \max(|x_{min}|, |x_{max}|) \\ Z = 0 \end{array}$$</p><p>ReLU and GELU are the most common examples that call for asymmetric quantization, because their output values are skewed to one side, i.e., these activations are (almost entirely) non-negative. It is also worth considering that the activation range changes with each varying input.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727366405867/6de29e22-eeeb-40e3-acc6-9e2180931ec1.png" alt class="image--center mx-auto" /></p>
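<p>Using the example above (\(x_{min} = -2.0\), \(x_{max} = 1.0\)), enforcing the symmetric constraint can be sketched as:</p>

```python
# Enforce symmetry: -alpha = beta = max(|x_min|, |x_max|), so the zero point Z = 0.
x_min, x_max = -2.0, 1.0                 # asymmetric calibration range from the text
beta = max(abs(x_min), abs(x_max))       # 2.0
alpha = -beta                            # -2.0
scale = beta / 127                       # symmetric Int8 scale
zero_point = 0                           # zero stays at zero by construction
print(alpha, beta, scale)
```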
<h2 id="heading-de-quantization">De-quantization</h2>
<p>One cool thing about quantization is that one can always map back to an approximation \(\bar{x}\) of the original floating point representation.</p>
<p>$$\begin{array}{l} \bar{x} = S(Q-Z) \\ \bar{x} = x+e \\ \text{i.e., } x \neq \bar{x} \end{array}$$</p><p>Here \(\bar{x}\) is just an approximation of the original data point \(x\), and \(e\) is the quantization error. Usually in machine learning, unless the error is large enough to hurt accuracy or perplexity, we don't really worry about the output representation \(\bar{x}\) deviating from the original input representation \(x\).</p>
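<p>A quick round-trip sketch shows that de-quantization only approximates the original values, with an error bounded by half the scale (the tensor values here are illustrative):</p>

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.7, 1.0], dtype=np.float32)
S = (x.max() - x.min()) / 255.0            # Int8 has 255 steps between -128 and 127
Z = int(round(-128 - x.min() / S))         # zero point
q = np.clip(np.round(x / S) + Z, -128, 127)
x_bar = S * (q - Z)                        # de-quantized approximation of x
err = np.abs(x - x_bar).max()              # quantization error e, at most ~S/2
print(x_bar, err)
```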
<h2 id="heading-types-of-quantization">Types of Quantization</h2>
<ul>
<li><p>Post Training Quantization</p>
</li>
<li><p>Quantization Aware Training</p>
</li>
</ul>
<p><strong>Post Training Quantization</strong></p>
<p>We start with an existing pre-trained model; however, we don't train it further. Instead, we use calibration data to find the clipping range, scaling factor, and zero point, deriving these values from the model itself.</p>
<p><strong>Quantization Aware Training</strong></p>
<p>This is a trickier quantization technique, because to train a model its operations must be differentiable in the first place, but the quantization operation is non-differentiable. To circumvent this issue, fake quantizers using the Straight-Through Estimator (STE) are used. During fine-tuning, both the forward and backward passes of the quantized model use floating point values, but the parameters are quantized after each gradient update.</p>
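<p>A minimal PyTorch sketch of a fake quantizer with the Straight-Through Estimator (the scale value and input tensor are illustrative assumptions):</p>

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Fake quantization: forward quantizes then de-quantizes; backward uses the STE."""
    @staticmethod
    def forward(ctx, x, scale):
        # Quantize then immediately de-quantize, so downstream ops see quantized values.
        return torch.clamp(torch.round(x / scale), -127, 127) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-Through Estimator: treat round() as the identity for the gradient.
        return grad_output, None

x = torch.tensor([0.314, -0.718], requires_grad=True)
y = FakeQuant.apply(x, 0.01)
y.sum().backward()
print(y.detach(), x.grad)   # gradient flows straight through the non-differentiable round
```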
<p>QAT helps in recovering the accuracy (or any other metric) lost to quantization during training. Although it provides good accuracy compared to PTQ, one downside is that QAT needs more data to get desirable results.</p>
<h3 id="heading-references">References:</h3>
<p><a target="_blank" href="https://arxiv.org/pdf/2103.13630">A Survey of Quantization Methods for Efficient Neural Network Inference</a></p>
]]></content:encoded></item><item><title><![CDATA[New beginning!]]></title><description><![CDATA[Hello there, 
Vanakkam and welcome to my blog. I've always wanted to use the medium of blogging to share my thoughts and ideas with the world, but was never able to for varied reasons. But now I'm here with all of my collective experiences and best e...]]></description><link>https://blog.gopinathbalu.com/new-beginning</link><guid isPermaLink="true">https://blog.gopinathbalu.com/new-beginning</guid><category><![CDATA[sample]]></category><category><![CDATA[first post]]></category><category><![CDATA[#first-article]]></category><category><![CDATA[newbie]]></category><dc:creator><![CDATA[Gopinath Balu]]></dc:creator><pubDate>Mon, 03 Oct 2022 12:31:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1664800162722/Qv6jWrUnc.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello there, </p>
<p>Vanakkam and welcome to my blog. I've always wanted to use the medium of blogging to share my thoughts and ideas with the world, but was never able to for varied reasons. But now I'm here with all of my collective experiences and best efforts to make it possible, consistent, and successful at it. </p>
<h3 id="heading-who-am-i">Who am I?</h3>
<p>My name is Gopi, and I am a research engineer from Chennai. I work with data to identify patterns, explore new machine learning research ideas, and when business requires I experiment/implement them into product as a new feature. Python, C++, SBCL, PyTorch, TensorFlow, Scikit-Learn, Pandas, Numpy, and Git are just a few of the tools in my arsenal. Outside of work, I enjoy travelling, motorsports, astronomy, and stargazing. </p>
<h3 id="heading-what-can-you-expect-from-my-blog-posts">What can you expect from my blog posts?</h3>
<p>A slice of my life, but with a focus on technical knowledge and travel experience rather than personal reflection on life events. I just love learning new things, and now I'm documenting them here; maybe it will be useful to some, or to no one but myself. For the time being, I plan to have two categories: Technology and Travel. The goal is to contribute something useful to the web, rather than just more noise and useless data. But if you have any suggestions or find a problem, please let me know so I can fix it and make it better. </p>
<blockquote>
<p>So here's another item crossed off my bucket list. </p>
</blockquote>
<p>Stay tuned!</p>
]]></content:encoded></item></channel></rss>