About

I'm a researcher working on Federated Learning and Computer Vision, currently wrapping up my PhD at City St George's, University of London, where I design architectures and aggregation methods for decentralized training under data heterogeneity.

Before London, I studied Electrical and Computer Engineering in Thessaloniki, then moved to Imperial College for my MSc in AI. Somewhere in between, I worked at Equideum Health on contribution evaluation for federated data markets.

My research focuses on federated learning for distributed, heterogeneous data: making decentralized training practical when clients hold data that varies in distribution, modality, or imaging characteristics, as is the norm in real-world medical and multimodal settings.

Architectures and Training Pipelines for FL

I study how architecture choice, weight initialization, and aggregation method interact in federated visual classification. My benchmarking work shows these elements must be chosen jointly — ImageNet pretraining helps, but self-supervised pretraining on domain-relevant data can match or exceed it; normalization-free networks and transformers each have distinct failure modes under heterogeneity.
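To make the joint-choice point concrete, here is an illustrative sketch of the kind of factorial sweep such a benchmark runs; the specific architecture, initialization, and aggregator names are stand-ins, not the exact configurations from my papers.

```python
# Illustrative sketch: jointly varying architecture, initialization, and aggregation
from itertools import product

architectures = ["resnet50", "nf_resnet50", "vit_small"]   # includes a normalization-free net and a transformer
initializations = ["random", "imagenet", "ssl_domain"]     # random vs. ImageNet vs. domain self-supervised
aggregators = ["fedavg", "fedadam", "fedprox"]             # example server-side strategies

for arch, init, agg in product(architectures, initializations, aggregators):
    config = {"arch": arch, "init": init, "aggregator": agg, "rounds": 100}
    print(config)  # in practice: launch a federated run and log per-round accuracy
```

The point of sweeping the full grid rather than one factor at a time is that the best choice along one axis shifts depending on the other two.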

Aggregation Under Heterogeneity

I design aggregation methods that adapt to client-level training dynamics rather than treating all participants uniformly. FedCLAM derives per-client momentum and dampening factors from local validation progress and pairs them with a foreground intensity matching loss that handles scanner-specific brightness and contrast biases.
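As a rough illustration of the client-adaptive idea (not the published FedCLAM implementation), the sketch below weights each client's update by a signal derived from its local validation progress; the val_improvements input and the fallback to a uniform average are assumptions made for the example.

```python
import numpy as np

def client_adaptive_aggregate(client_updates, val_improvements):
    """Weight each client's parameter update by its local validation progress.
    Illustrative sketch only, not the published FedCLAM code."""
    gains = np.clip(np.asarray(val_improvements, dtype=float), 0.0, None)
    if gains.sum() == 0.0:                      # no client improved: fall back to a uniform average
        gains = np.ones_like(gains)
    weights = gains / gains.sum()
    stacked = np.stack([np.asarray(u, dtype=float) for u in client_updates])
    return (weights[:, None] * stacked).sum(axis=0)

# Three hypothetical clients with flattened parameter updates of length 4
updates = [np.ones(4), 2 * np.ones(4), -np.ones(4)]
print(client_adaptive_aggregate(updates, val_improvements=[0.02, 0.00, 0.01]))
```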

Federated Multimodal Learning

I'm building benchmarks and methods for federated fine-tuning of multimodal large language models, where clients may hold different modalities entirely — missing images, text-only data, or mixtures. This introduces a new axis of heterogeneity beyond the label skew typically studied.
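A minimal sketch of what modality heterogeneity looks like at the data level, with hypothetical hospital clients that are text-only, image-only, or mixed; the field names and client identifiers are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClientExample:
    text: Optional[str] = None        # None when the client holds no text
    image_path: Optional[str] = None  # None when the client holds no images
    label: Optional[int] = None

clients = {
    "hospital_a": [ClientExample(text="no acute findings", image_path="xr_001.png", label=0)],
    "hospital_b": [ClientExample(text="follow-up note only", label=1)],   # text-only client
    "hospital_c": [ClientExample(image_path="ct_017.png", label=1)],      # image-only client
}

for name, examples in clients.items():
    has_text = any(ex.text is not None for ex in examples)
    has_image = any(ex.image_path is not None for ex in examples)
    print(name, {"text": has_text, "image": has_image})  # each client exposes a different modality mix
```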

Federated Reinforcement Learning

My MSc thesis developed a framework formalizing how multiple data owners can collaboratively train RL agents without sharing raw trajectories, covering privacy-preserving representations, aggregation strategies, and evaluation under client heterogeneity. I applied this to sepsis treatment across real ICU partitions in MIMIC-III and released FeRaL, an open-source library for FRL experimentation. As autonomous agents are deployed more widely and begin to interact with one another, the intersection of federated learning and multi-agent RL feels ripe for revisiting.
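The core constraint is easy to state in code: clients share policy parameters, never trajectories. The sketch below is a generic federated-averaging round over policy weights, not the FeRaL API, and the local update is a placeholder rather than a real RL step.

```python
import numpy as np

def local_policy_update(policy, trajectories, lr=1e-2):
    """Placeholder for a local RL step; a real client would run policy-gradient
    updates on its own trajectories, which never leave the client."""
    rng = np.random.default_rng(len(trajectories))
    return policy - lr * rng.normal(size=policy.shape)

def federated_round(global_policy, client_trajectories):
    # Each client trains locally, then the server averages parameters only.
    local_policies = [local_policy_update(global_policy.copy(), traj) for traj in client_trajectories]
    return np.mean(local_policies, axis=0)

policy = np.zeros(8)
client_trajectories = [list(range(n)) for n in (5, 12, 3)]   # stand-ins for per-client trajectory sets
for _ in range(3):
    policy = federated_round(policy, client_trajectories)
print(policy)
```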

Earlier Work

My diploma thesis at Aristotle University focused on learning under label noise in image classification — an early encounter with the challenge of training on imperfect supervision that has informed my subsequent work on robustness under data heterogeneity.

Education

2022 – present
PhD in Computer Science
City St George's, University of London

2020 – 2021
MSc in Computing (AI & ML) — Distinction
Imperial College London · Bodossaki Scholar

2013 – 2019
MEng in Electrical & Computer Engineering — Top 5%
Aristotle University of Thessaloniki

When I'm not working, I'm usually maintaining a 20-year-old sports car, hiking, or spending time at the range.