I'm a self-taught machine learning developer with a background in biological sciences, currently focused on natural language processing, voice interfaces, and building tools around large language models (LLMs).
My work combines local model deployment, speech processing, and preference modeling. I build full-stack voice-enabled assistants, contribute to open-source projects, and compete in Kaggle competitions centered on NLP and classification tasks.
Projects I've built include a voice-interactive assistant powered by local LLMs with emotion-driven behavior, a RoBERTa-based preference classifier for LLM outputs, and tools for real-time voice emotion recognition. I also design plugin architectures and lightweight GUIs that make these tools easier to extend and use.
I’m particularly interested in the intersection of voice, memory, and personalization in AI tools. I aim to make intelligent systems more adaptable, private, and capable of running on everyday hardware.
Open to connecting with others working on machine learning, natural language processing, and locally deployed AI systems.