ALEXANDER MÜLLER. Optimizing Large Language Model Inference: Strategies for Latency Reduction, Energy Efficiency, and Cybersecurity Applications. Academic Research Library for International Journal of Computer Science & Information System, [S. l.], v. 10, n. 11, p. 93–97, 2025. Available at: https://colomboscipub.com/index.php/arlijcsis/article/view/58. Accessed: 17 Jan. 2026.