I am a PhD student in the Computer Science Department at Columbia University, advised by Prof. Kostis Kaffes. My research integrates LLM agents with traditional machine learning to improve performance and efficiency in operating systems, with an emphasis on CPU scheduling, workload-aware adaptation, and safe, auditable control.
Previously, I was a research affiliate with the Atlas Systems Group at Brown University, advised by Prof. Nikos Vasilakis. I contributed to PaSh and developed hs, an out-of-order shell script executor, as well as a lightweight shell-based sandbox.
Before Brown, I was a research associate at BALab with Prof. Diomidis Spinellis, where I worked on empirical software engineering topics.
Today my work centers on agents for system optimization (e.g., LLM-driven parameter tuning and control) that blend robust systems abstractions with data-driven adaptation.
Honored to receive a fellowship from the Gerondelis Foundation!
Jun. 2025
Our research proposal “From Noisy Signals to Clear Decisions: Optimizing Long-Term Outcomes with Relative Feedback and Causal Inference” has been selected for funding by the Columbia–Dream Sports AI Innovation Center. Excited to pursue this work!
Despite growing interest in AI agents across industry and academia, agent execution in real environments is often slow, hampering training, evaluation, and deployment. Inspired by speculative execution in microprocessors and speculative decoding in LLM inference, we propose speculative actions, a lossless framework that predicts likely actions using faster models, enabling multiple steps to be executed in parallel. We evaluate this framework across four agentic environments: gaming, e-commerce, web search, and operating systems. In all cases, speculative actions achieve substantial accuracy in next-action prediction, translating into significant reductions in end-to-end latency.
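A minimal sketch of the speculative-actions idea described above — a toy model, not the paper's implementation. `ToyEnv`, the draft/target policies, and every name here are illustrative assumptions; the point is the lossless contract: the trajectory is identical to running the expensive policy alone.

```python
class ToyEnv:
    """Counter environment: state is an int, actions are '+1' or '+2'."""
    def __init__(self):
        self.state = 0
    def snapshot(self):
        return self.state
    def restore(self, s):
        self.state = s
    def apply(self, a):
        self.state += {"+1": 1, "+2": 2}[a]

def plain_rollout(env, target, steps):
    """Reference semantics: the expensive policy decides every step."""
    out = []
    for _ in range(steps):
        a = target(env.state)
        env.apply(a)
        out.append(a)
    return out

def speculative_rollout(env, draft, target, k=4, steps=16):
    """Run `steps` actions; output matches running `target` alone."""
    executed = []
    while len(executed) < steps:
        snapshot = env.snapshot()
        # Draft phase: the cheap policy speculatively runs up to k steps.
        chain = []
        for _ in range(min(k, steps - len(executed))):
            state, action = env.state, draft(env.state)
            chain.append((state, action))
            env.apply(action)
        # Verify phase: the expensive policy re-decides at each drafted
        # state (in a real system verification overlaps with execution).
        n_ok = 0
        for state, action in chain:
            if target(state) != action:
                break
            n_ok += 1
        if n_ok == len(chain):                    # all speculation accepted
            executed += [a for _, a in chain]
        else:                                     # mismatch: roll back
            env.restore(snapshot)
            for _, a in chain[:n_ok]:             # replay verified prefix
                env.apply(a)
            correction = target(env.state)        # then the target's action
            env.apply(correction)
            executed += [a for _, a in chain[:n_ok]] + [correction]
    return executed
```

When the draft agrees with the target most of the time, most steps commit without waiting for the expensive model, which is where the latency savings come from.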
@inproceedings{liargkovas2025speculative_actions,title={Speculative Actions: A Lossless Framework for Faster Agentic Systems},author={Liargkovas*, Georgios and Ye*, Naimeng and Ahuja*, Arnav and Lu*, Yunan and Kaffes, Kostis and Peng, Tianyi},booktitle={Under Review},year={2026},}
ML4Sys ’25
An Expert in Residence: LLM Agents for Always-On Operating System Tuning
Classical machine-learning auto-tuners for OS control struggle with semantic gaps, brittle rewards, and unsafe exploration. We introduce an online, LLM-driven agent that emulates expert reasoning for continuous OS optimization. When tuning the Linux Completely Fair Scheduler’s hyperparameters, the agent outperforms Bayesian optimization by 5% in single-parameter tuning, 7.1% in two-parameter co-tuning, and a human expert by 2.98% overall, while converging faster and adapting more quickly to workload changes. When application counters are unavailable, system-level proxies (e.g., Instructions Per Cycle (IPC)) preserved tail latency in our setup. Putting this together, we propose adopting the Model Context Protocol (MCP) for tool/resource discovery and invocation and a logging channel; on top of that, we propose adding transactional apply–commit–revert, host-mediated approval gates, and policy controls in the OS-tuning server and host to ensure safe, auditable operation. Our results and the proposed architecture point toward a new generation of self-adapting, expert-level OS tuners.
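The transactional apply–commit–revert pattern proposed above can be sketched as follows. This is a hypothetical illustration, not the paper's MCP server: the knob name, the dict-backed "system", and the probe are all stand-ins for real sysctl writes and performance measurements.

```python
class KnobTransaction:
    """Apply kernel-knob changes tentatively; commit or revert as a unit."""
    def __init__(self, system):
        self.system = system          # stand-in for a /proc/sys writer
        self.saved = {}               # old values to restore on revert
        self.log = []                 # audit trail of every action

    def apply(self, knob, value):
        self.saved.setdefault(knob, self.system[knob])
        self.system[knob] = value
        self.log.append(("apply", knob, value))

    def commit(self):
        self.saved.clear()
        self.log.append(("commit",))

    def revert(self):
        for knob, old in self.saved.items():
            self.system[knob] = old
        self.log.append(("revert", dict(self.saved)))
        self.saved.clear()

def tune_once(system, proposal, probe, baseline):
    """Apply an agent's proposal; keep it only if the probe doesn't regress."""
    txn = KnobTransaction(system)
    for knob, value in proposal.items():
        txn.apply(knob, value)
    if probe(system) >= baseline:     # e.g. throughput, or negated tail latency
        txn.commit()
    else:
        txn.revert()
    return txn.log
```

The log is the auditable record the abstract calls for; in a full design, approval gates would sit between `apply` and `commit`.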
@inproceedings{liargkovas2025llm_tuning,title={An Expert in Residence: LLM Agents for Always-On Operating System Tuning},author={Liargkovas, Georgios and Jabrayilov, Vahab and Franke, Hubertus and Kaffes, Kostis},booktitle={ML for Systems Workshop at NeurIPS},year={2025},url={https://mlforsystems.org/},note={To appear},}
PACMI ’25
Set It and Forget It: Zero-Mod ML Magic for Linux Tuning
Georgios Liargkovas, Prabhpreet Singh Sodhi, and Kostis Kaffes
In Proceedings of the 2025 ACM Workshop on Practical Adoption Challenges of ML for Systems (PACMI ’25), 2025
Machine learning can turbocharge OS optimization—if one is willing to reinvent the whole stack. Recent work pushes exotic instrumentation or new OS designs that break real-world constraints, demanding app metrics nobody can (or wants to) provide. The alternative—naively optimizing for simple system proxies like IPC—is just as flawed, leading to misleading results that fail to generalize across real-world workloads. Our framework sidesteps this dilemma by learning to optimize without direct visibility. Instead of building brittle models to predict absolute performance, we reframe the problem to learn the relative ranking of system configurations, using a diversified performance signature built from the system counters the OS already has. The outcome is a scalable, robust, and ML-driven performance boost for real applications—delivered without demanding radical shifts in the OS landscape.
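The relative-ranking reframing can be illustrated with a toy pairwise ranker — made-up data and a plain perceptron, not the paper's pipeline. Each configuration gets a "signature" of counters the OS already exposes (here a hypothetical (IPC, cache-miss-rate) pair), and the model learns only which of two configurations is better, never their absolute performance.

```python
def train_pairwise_ranker(pairs, dim, epochs=50, lr=0.1):
    """pairs: list of (sig_better, sig_worse) counter-signature vectors.
    Learns linear weights w such that score(better) > score(worse)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in pairs:
            diff = [b - c for b, c in zip(better, worse)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:            # ranked the wrong way: perceptron update
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def rank(w, signatures):
    """Order configurations best-first by learned score."""
    score = lambda s: sum(wi * si for wi, si in zip(w, s))
    return sorted(range(len(signatures)), key=lambda i: -score(signatures[i]))
```

Because only the ordering matters, the model never needs the application-level metrics that the abstract argues nobody can (or wants to) provide.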
@inproceedings{liargkovas2025proxy_tuning,title={Set It and Forget It: Zero-Mod ML Magic for Linux Tuning},author={Liargkovas, Georgios and Sodhi, Prabhpreet Singh and Kaffes, Kostis},booktitle={Proceedings of the 2025 ACM Workshop on Practical Adoption Challenges of ML for Systems (PACMI '25)},year={2025},pages={1--7},address={Seoul, Republic of Korea},publisher={ACM},url={https://dl.acm.org/doi/10.1145/3766882.3767175},doi={10.1145/3766882.3767175},}
eBPF ’25
Empowering machine-learning assisted kernel decisions with eBPFML
Prabhpreet Singh Sodhi, Georgios Liargkovas, and Kostis Kaffes
In Proceedings of the 3rd Workshop on eBPF and Kernel Extensions (eBPF ’25), 2025
Machine-learning (ML) techniques can optimize core operating system paths—scheduling, I/O, power, and memory—yet practical deployments remain rare. Existing prototypes either (i) bake simple heuristics directly into the kernel or (ii) off-load inference to user space to exploit discrete accelerators, both of which incur unacceptable engineering or latency cost. We argue that eBPF, the Linux kernel’s safe, hot-swappable byte-code runtime, is the missing substrate for moderately complex in-kernel ML. We present eBPFML, a design that (1) extends the eBPF instruction set with matrix-multiply helpers, (2) leverages upcoming CPU matrix engines such as Intel Advanced Matrix Extensions (AMX) through the eBPF JIT, and (3) retains verifier guarantees and CO-RE portability.
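To make the case for matrix-multiply helpers concrete, here is a back-of-the-envelope sketch (in Python, standing in for eBPF bytecode) of the kind of computation such a helper would accelerate: a tiny two-layer MLP scoring a kernel decision from a feature vector. In the eBPFML design, the two matrix products below would become single helper calls JIT-compiled to AMX tile operations; everything here, including the weights, is illustrative.

```python
def matmul(A, x):
    """Dense matrix-vector product: the operation the proposed helper exposes."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def relu(v):
    return [max(0.0, z) for z in v]

def mlp_decide(x, W1, W2):
    """Two-layer policy: returns the index of the highest-scoring action."""
    h = relu(matmul(W1, x))           # hidden layer: one matmul helper call
    scores = matmul(W2, h)            # output layer: a second helper call
    return max(range(len(scores)), key=scores.__getitem__)
```

Hand-rolling these loops in eBPF bytecode today is what makes "moderately complex" in-kernel ML impractical; a verified helper collapses each loop nest into one instruction.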
@inproceedings{10.1145/3748355.3748363,author={Sodhi, Prabhpreet Singh and Liargkovas, Georgios and Kaffes, Kostis},title={Empowering machine-learning assisted kernel decisions with eBPFML},year={2025},isbn={9798400720840},publisher={Association for Computing Machinery},address={New York, NY, USA},url={https://doi.org/10.1145/3748355.3748363},doi={10.1145/3748355.3748363},booktitle={Proceedings of the 3rd Workshop on eBPF and Kernel Extensions},pages={28--30},numpages={3},keywords={Operating systems, eBPF, hardware acceleration, machine learning},location={Coimbra, Portugal},series={eBPF '25},}
HotOS ’23
Executing Shell Scripts in the Wrong Order, Correctly
Shell scripts are critical infrastructure for developers, administrators, and scientists, and ought to enjoy the performance benefits of the full suite of advances in compiler optimizations. But between the shell’s inherent challenges and neglect from the community, shell tooling and performance lag far behind the state of the art. We propose executing scripts out-of-order to better use modern computational resources. Optimizing any part of an arbitrary shell script is very challenging: the shell language’s complex, late-bound semantics makes extensive use of opaque external commands with arbitrary side effects. We work with the grain of the shell’s challenges, meeting dynamism with dynamism: we optimize at runtime, speculatively executing commands in an isolated and monitored environment to determine and contain their behavior. Our proposed approach can yield serious performance benefits (up to 3.9x for a bioinformatics script on a 16-core machine) for arbitrarily complex scripts without modifying their behavior. Contained out-of-order execution obviates the need for command specifications, operates on external commands, and yields a much more general framework for the shell. Script writers need not change a thing and observe no differences: they get improved performance with the interpretability of sequential output.
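The contained out-of-order idea can be modeled in a few lines — a toy abstraction, not hs itself. "Commands" read and write keys of a dict "filesystem"; each runs speculatively against a snapshot (in the real system, in parallel inside isolated sandboxes), and results commit in program order, with a speculation discarded and re-run whenever an earlier command changed something it read.

```python
def run_out_of_order(commands, fs):
    """commands: list of (reads, fn); fn maps read contents -> files written."""
    snapshot = dict(fs)
    # Speculation phase: every command runs immediately against the initial
    # snapshot (in a real shell, in parallel inside isolated sandboxes).
    spec = [fn({p: snapshot.get(p) for p in reads}) for reads, fn in commands]
    # Commit phase, in program order: keep a speculation only if nothing it
    # read changed since the snapshot; otherwise re-execute it sequentially.
    for (reads, fn), result in zip(commands, spec):
        if all(fs.get(p) == snapshot.get(p) for p in reads):
            fs.update(result)                              # speculation safe
        else:
            fs.update(fn({p: fs.get(p) for p in reads}))   # conflict: re-run
    return fs

def run_sequential(commands, fs):
    """Reference semantics: run each command in program order."""
    for reads, fn in commands:
        fs.update(fn({p: fs.get(p) for p in reads}))
    return fs
```

Because commits happen in program order and conflicting speculations are re-executed, the final state always matches sequential execution — the "wrong order, correctly" guarantee.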
@inproceedings{LKGV23,author={Liargkovas, Georgios and Kallas, Konstantinos and Greenberg, Michael and Vasilakis, Nikos},booktitle={The 19th Workshop on Hot Topics in Operating Systems},title={Executing Shell Scripts in the Wrong Order, Correctly},year={2023},doi={10.1145/3593856.3595891},}
Personal
In my free time, I enjoy a variety of indoor and outdoor activities. Long-distance running is close to my heart, whether solo or in the company of others. I also love cooking, often trying recipes from many cuisines and adding my own twists; hopefully I'll share some of my recipes here soon. Additionally, I'm passionate about curating playlists: Alternative Rock, Indie, and Jazz are among my top genres. In earlier days, I played the piano and took music theory lessons.