Fix the comments on coefficient precision in quantization

Fix the comments on the precision of the quantizers and tx coefficients
during quantization for different input bit depths and tx sizes.

The original comment says "quantized/coded coefficients" where it most
likely means "de-quantized/de-coded coefficients", so make that explicit
to avoid any possible misunderstanding.

Change-Id: Ib92ac7dcfddcbe58cf3adfb9448497512381c1f5
diff --git a/av1/common/quant_common.c b/av1/common/quant_common.c
index 0ea57d2..5bec9a2 100644
--- a/av1/common/quant_common.c
+++ b/av1/common/quant_common.c
@@ -290,8 +290,9 @@
 // 64x32 and 64x64 transforms return Q1 coefficients.  However, the
 // quantizers are de-scaled down on-the-fly by the same amount
 // (av1_tx_get_scale()) during quantization, and as such the
-// quantized/coded coefficients, even for large TX blocks, are always
-// effectively Q3.
+// dequantized/decoded coefficients, even for large TX blocks, are always
+// effectively Q3. Meanwhile, quantized/coded coefficients are Q0
+// because Qn quantizers are applied to Qn tx coefficients.
 
 // Note that encoder decision making (which uses the quantizer to
 // generate several bespoke lamdas for RDO and other heuristics)
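For reference, here is a minimal sketch (not libaom code) of the Q-format
bookkeeping the updated comment describes. "Qn" means a fixed-point value
carrying n fractional bits. The helper tx_scale_for_size() below is
hypothetical and merely stands in for the extra down-shift applied to large
transforms; the point illustrated is that shifting the Q3 quantizer and the
Q3 coefficient by the same amount leaves the coded level at Q0 and the
dequantized coefficient effectively at Q3, independent of tx size.

#include <stdio.h>

/* Hypothetical helper: extra down-shift applied on the fly to both the Q3
 * quantizer and the transform output for large TX sizes (0 for small
 * blocks, larger for 64-point transforms). */
static int tx_scale_for_size(int is_large_tx) { return is_large_tx ? 2 : 0; }

int main(void) {
  const int dequant_q3 = 88;     /* quantizer step in Q3 (step size 11.0) */
  const int coeff_q3 = 8 * 100;  /* a transform coefficient in Q3 (value 100) */

  for (int is_large_tx = 0; is_large_tx <= 1; ++is_large_tx) {
    const int shift = tx_scale_for_size(is_large_tx);
    /* Large TX blocks return Q(3 - shift) coefficients, and the quantizer
     * is de-scaled by the same shift before being applied. */
    const int coeff = coeff_q3 >> shift;          /* Qn coefficient */
    const int dequant = dequant_q3 >> shift;      /* Qn quantizer */
    const int level = coeff / dequant;            /* Qn / Qn -> Q0 coded level */
    const int dqcoeff_q3 = level * dequant_q3;    /* Q0 * Q3 -> Q3 reconstruction */
    printf("shift=%d  level(Q0)=%d  dequantized(Q3)=%d\n", shift, level,
           dqcoeff_q3);
  }
  return 0;
}

Running this prints the same Q0 level and the same Q3 dequantized value for
both the small and large TX cases, which is the invariant the comment is
trying to convey.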