(1) Alexander Müller. Optimizing Large Language Model Inference: Strategies for Latency Reduction, Energy Efficiency, and Cybersecurity Applications. arlijcsis 2025, 10, 93–97.