I'm a freelance AI Trainer specializing in English-language reasoning and STEM subject matter. Over the past few years, I've worked at the intersection of human expertise and machine learning, contributing directly to the development, alignment, and evaluation of large language models (LLMs). My work improves model accuracy, contextual understanding, and overall reliability in both general and domain-specific tasks.
With a strong academic foundation in the sciences and a passion for language and logic, I bring a dual competency that bridges technical rigor with communicative clarity. I’ve collaborated with teams building cutting-edge LLMs, contributing to training data pipelines through meticulous annotation, prompt engineering, adversarial testing, and in-depth evaluation of model outputs. My contributions often involve multi-step reasoning tasks, math and science problem-solving, instructional content creation, and the design of edge-case scenarios to test and refine model behavior.
My skill set includes:
- Meticulous annotation and in-depth evaluation of model outputs
- Prompt engineering and adversarial testing
- Multi-step reasoning, math, and science problem-solving
- Instructional content creation and edge-case scenario design
I’m comfortable working in fast-paced, collaborative environments with research scientists, engineers, and other annotators. My experience includes contributing to confidential and experimental projects, often under NDA, where attention to detail and intellectual rigor are paramount.
I take pride in producing training data that doesn’t just “feed the machine” but actively shapes the next generation of intelligent, responsible AI systems. Whether it’s refining a complex math solution, stress-testing a model’s reasoning abilities, or designing novel task formats, I approach every project with precision, adaptability, and curiosity.
I’m always interested in opportunities that push the boundaries of what AI can understand and achieve — particularly those that value deep reasoning, subject-matter accuracy, and ethical responsibility in AI development.