What
Let's support serialization of mxint8-quantized models. Currently, tico/quantization/wrapq/examples/quantize_linear.py fails to save the model when the observer is set to MXObserver with elem_format set to int8: neither decomposition nor serialization is performed.
Why
We need this to produce quantized LLM models in Circle format.