[1]
Alexander Müller, “Optimizing Large Language Model Inference: Strategies for Latency Reduction, Energy Efficiency, and Cybersecurity Applications,” arlijcsis, vol. 10, no. 11, pp. 93–97, Nov. 2025.