[1]
Alexander Müller. 2025. Optimizing Large Language Model Inference: Strategies for Latency Reduction, Energy Efficiency, and Cybersecurity Applications. Academic Research Library for International Journal of Computer Science & Information System. 10, 11 (Nov. 2025), 93–97.