Tactic: Use Input Quantization
Tactic sort: Awesome Tactic
Type: Architectural Tactic
Category: green-ml-enabled-systems
Title
Use Input Quantization
Description
Input quantization in machine learning refers to converting input data to a lower precision (e.g., reducing the number of bits used to represent the data). For example, Abreu et al. (2022) investigated different input widths (in bits) and found that 10-bit precision is sufficient to achieve good model accuracy, and that increasing the number of bits further does not improve accuracy; using higher precision is therefore a waste of resources. In addition, using lower-precision data values through input quantization can even have a positive impact on the machine learning model by reducing overfitting.
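As a minimal sketch of this tactic (assuming a NumPy feature matrix; the helper name quantize_inputs and the n_bits parameter are illustrative, not from the source), input data can be uniformly quantized to a given bit width before training:

```python
import numpy as np

def quantize_inputs(x: np.ndarray, n_bits: int = 10) -> np.ndarray:
    """Uniformly quantize each feature column to n_bits of precision."""
    levels = 2 ** n_bits - 1                  # number of quantization steps
    x_min = x.min(axis=0)                     # per-feature minimum
    x_max = x.max(axis=0)                     # per-feature maximum
    # Avoid division by zero for constant features.
    scale = np.where(x_max > x_min, (x_max - x_min) / levels, 1.0)
    # Snap each value to the nearest of the 2**n_bits levels,
    # then map back to the original scale.
    return np.round((x - x_min) / scale) * scale + x_min

# Example: quantize a feature matrix to 10-bit precision before training.
X = np.random.rand(1000, 8).astype(np.float32)
X_q = quantize_inputs(X, n_bits=10)
```

The quantized matrix stays in the original value range, so it can be passed to an unmodified training pipeline; only the effective precision of the inputs is reduced.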
Participant
Data Scientist
Related software artifact
Data
Context
Machine Learning
Software feature
< unknown >
Tactic intent
Improve accuracy (and energy efficiency) by reducing data precision through input quantization
Target quality attribute
Accuracy
Other related quality attributes
Energy Efficiency
Measured impact
< unknown >