A Summary of Model Compression Methods (Distillation / Pruning / Quantization)

Distillation

1. Distilling Object Detectors with Fine-grained Feature Imitation (CVPR19)

https://github.com/twangnh/Distilling-Object-Detectors
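
The core idea of the paper is to distill a student detector by making it imitate the teacher's feature responses, but only at locations near ground-truth objects. Below is a minimal PyTorch sketch of such a feature-imitation loss; the function name `imitation_loss`, the 1x1 adaptation layer, and the random stand-in mask are illustrative assumptions, not the authors' released code (see the repository above for that).

```python
import torch
import torch.nn as nn

def imitation_loss(student_feat, teacher_feat, mask, adapt):
    """Masked L2 between adapted student features and teacher features.

    student_feat: (N, Cs, H, W) student backbone feature map
    teacher_feat: (N, Ct, H, W) teacher feature map at the same spatial size
    mask:         (N, 1, H, W)  binary mask marking near-object locations
    adapt:        1x1 conv mapping student channels Cs -> teacher channels Ct
    """
    adapted = adapt(student_feat)                   # (N, Ct, H, W)
    diff = (adapted - teacher_feat) ** 2 * mask     # imitate only near objects
    return diff.sum() / mask.sum().clamp(min=1)     # normalize by mask area

# Hypothetical wiring: 256-channel student, 512-channel teacher
adapt = nn.Conv2d(256, 512, kernel_size=1)
s, t = torch.randn(2, 256, 38, 38), torch.randn(2, 512, 38, 38)
m = (torch.rand(2, 1, 38, 38) > 0.7).float()       # stand-in for the anchor mask
loss = imitation_loss(s, t, m, adapt)
```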

Pruning

1. Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (FPGM, CVPR19 Oral)

https://github.com/he-y/filter-pruning-geometric-median

Quantization

1. EasyQuant: Post-training Quantization via Scale Optimization

https://github.com/deepglint/EasyQuant

EasyQuant (EQ) is a simple and efficient post-training quantization method that works by optimizing the quantization scales of both weights and activations.
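
The gist is that, instead of taking the usual max-value scale, EQ searches for the scale that best preserves a layer's output, measured by cosine similarity, alternating between weight and activation scales. The sketch below compresses this to a single-tensor search; `search_scale`, the candidate range, and the step count are assumptions for illustration, not the repository's implementation.

```python
import torch
import torch.nn.functional as F

def quant_dequant(x, scale, bits=8):
    """Symmetric linear quantization followed by dequantization."""
    qmax = 2 ** (bits - 1) - 1
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

def search_scale(x, bits=8, steps=100):
    """Return the scale maximizing cosine similarity between x and its
    quantized copy, searched around the naive max-based scale."""
    qmax = 2 ** (bits - 1) - 1
    base = x.abs().max().item() / qmax        # naive max-based starting point
    best_scale, best_sim = base, -1.0
    for s in torch.linspace(0.5 * base, 1.2 * base, steps):
        xq = quant_dequant(x, s.item(), bits)
        sim = F.cosine_similarity(x.flatten(), xq.flatten(), dim=0).item()
        if sim > best_sim:
            best_scale, best_sim = s.item(), sim
    return best_scale

scale = search_scale(torch.randn(1024))
```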

2. dnn-gating (PACT)

https://github.com/cornell-zhang/dnn-gating

PACT: Parameterized Clipping Activation for Quantized Neural Networks
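
PACT replaces ReLU with an activation clipped at a learnable level alpha, so the quantization range is trained along with the weights (the paper additionally applies an L2 penalty on alpha to keep it tight); rounding is handled by a straight-through estimator. A minimal PyTorch module along those lines, where the 4-bit default and the initial alpha of 10 are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class PACT(nn.Module):
    """Activation clipped at a learnable level alpha, then uniformly
    quantized to k bits with a straight-through estimator (STE)."""
    def __init__(self, bits=4, init_alpha=10.0):  # defaults are assumptions
        super().__init__()
        self.bits = bits
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x):
        # clip(x, 0, alpha); alpha receives gradient wherever x >= alpha
        y = torch.clamp(x, min=0.0)
        y = torch.where(y < self.alpha, y, self.alpha)
        # uniform quantization over [0, alpha] with 2^k - 1 levels
        scale = self.alpha / (2 ** self.bits - 1)
        q = torch.round(y / scale) * scale
        # STE: forward pass uses q, backward treats rounding as identity
        return y + (q - y).detach()

out = PACT(bits=4)(torch.randn(8, 16))
```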

3. scale-adjusted-training

https://github.com/jakc4103/scale-adjusted-training

Towards Efficient Training for Neural Network Quantization
