Tactic: Use Quantization-Aware Training
Tactic sort: Awesome Tactic
Type: Architectural Tactic
Category: green-ml-enabled-systems
Title
Use Quantization-Aware Training
Description
Quantization-aware training (QAT) trains a neural network while simulating the effects of low-precision arithmetic, so the model learns weights that remain accurate after its data types are converted to lower precision. The trained model can then use fixed-point or integer representations instead of the more commonly used higher-precision floating-point representations, which improves the performance and energy efficiency of the model, for example in federated learning. A minimal code sketch follows this card.
Participant
Data Scientist
Related software artifact
Model
Context
Machine Learning
Software feature
Model Training
Tactic intent
Improve energy efficiency by using quantization-aware training to replace high-precision data types with lower-precision ones
Target quality attribute
Energy Efficiency
Other related quality attributes
Accuracy
Measured impact
< unknown >
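The catalog entry contains no code; the following is an illustrative sketch, not part of the original tactic description. It shows one common way to apply quantization-aware training, using PyTorch's eager-mode torch.ao.quantization API. The TinyNet model, its layer sizes, and the random training data are placeholder assumptions; the relevant pattern is the prepare_qat/convert flow, in which fake-quantization ops simulate int8 arithmetic during training and convert() then produces a genuinely low-precision model.

```python
import torch
import torch.nn as nn

# Hypothetical toy model for illustration. Eager-mode QAT needs explicit
# QuantStub/DeQuantStub markers at the float/quantized boundaries.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc1 = nn.Linear(16, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(32, 2)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)          # fp32 -> (fake-)quantized domain
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)     # back to fp32 for the loss

model = TinyNet()
model.train()
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
torch.ao.quantization.prepare_qat(model, inplace=True)

# Placeholder training loop on random data: the inserted fake-quantize ops
# let the weights adapt to int8 rounding/clamping while training in fp32.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(10):
    x = torch.randn(8, 16)
    y = torch.randint(0, 2, (8,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Convert to a real int8 model for deployment.
model.eval()
int8_model = torch.ao.quantization.convert(model)
print(int8_model(torch.randn(1, 16)))
```

Post-training quantization is the cheaper alternative when retraining is not an option, but quantization-aware training usually retains more accuracy at the same low precision, which is why it is the tactic of choice when the training pipeline is available.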