All Tags
AWS
ai
algorithm-design
architecture
browser
cloud
cloud-efficiency
cloud-principles
cost-reduction
data-centric
data-compression
data-processing
deployment
design
documentation
edge-computing
email-sharing
energy-efficiency
energy-footprint
enterprise-optimization
green-ai
hardware
libraries
llm
locality
machine-learning
maintainability
management
measured
microservices
migration
mobile
model-optimization
model-training
multi-objective
network-traffic
parameter-tuning
performance
queries
rebuilding
scaling
services
storage-optimization
strategies
tabs
template
testing
workloads
Tactic(s) tagged with "machine-learning"
- Apply Cloud Fog Network Architecture (AT)
- Apply Sampling Techniques (AT)
- Choose a Lightweight Algorithm Alternative (AT)
- Choose an Energy Efficient Algorithm (AT)
- Consider Energy-Aware Pruning (AT)
- Consider Federated Learning (AT)
- Consider Graph Substitution (AT)
- Consider Knowledge Distillation (AT)
- Consider Reinforcement Learning for Energy Efficiency (AT)
- Consider Transfer Learning (AT)
- Decrease Model Complexity (AT)
- Design for Memory Constraints (AT)
- Enhance Model Sparsity (AT)
- Minimize Referencing to Data (AT)
- Monitor Computing Power (AT)
- Reduce Number of Data Features (AT)
- Remove Redundant Data (AT)
- Retrain the Model If Needed (AT)
- Set Energy Consumption as a Model Constraint (AT)
- Use Built-In Library Functions (AT)
- Use Checkpoints During Training (AT)
- Use Computation Partitioning (AT)
- Use Data Projection (AT)
- Use Dynamic Parameter Adaptation (AT)
- Use Energy-Aware Scheduling (AT)
- Use Energy-Efficient Hardware (AT)
- Use Informed Adaptation (AT)
- Use Input Quantization (AT)
- Use Power Capping (AT)
- Use Quantization-Aware Training (AT)
- Choose an Energy-Efficient Drift Detection Algorithm (SP)
- Using Adaptive Response for Sustainable LLM Inference (AT)
- Using Energy-Efficient Multi-Objective Optimization for AI Training and Inference (AT)
- Using Storage Optimization for Efficient LLM Inference (AT)
- Limit Ensemble Size (AT)
- RAG Context Caching (AT)
- RAG Context Filtering and Compression (AT)
- Use Majority Voting (AT)
- Adaptive Ensemble (AT)
- Detection Based Model Reconstruction (AT)
- Detection Based Model Repository (AT)
- Energy Efficient Hardware (AT)