1. Müller A. Optimizing Large Language Model Inference: Strategies for Latency Reduction, Energy Efficiency, and Cybersecurity Applications. arlijcsis [Internet]. 2025 Nov 30 [cited 2026 Jan 17];10(11):93-7. Available from: https://colomboscipub.com/index.php/arlijcsis/article/view/58