My name is Ahmed, and I come from a medical background that naturally evolved into a strong interest in AI, data quality, and human-in-the-loop systems. I grew up in Egypt speaking Arabic as my native language, using it at home, in school, and in every social setting. My dialect is Egyptian Arabic, known for its clarity, its wide intelligibility across the Arab world, and its strong presence in regional media. English became my second major language through school, university, clinical training, research writing, and daily interaction with global medical and AI communities. Over the years, I've developed full professional proficiency in reading and writing both languages, and I'm comfortable analyzing complex content in either.
My medical education and internship gave me a solid foundation in structured decision-making, evidence evaluation, and guideline-based reasoning — all of which transfer directly into AI annotation, content evaluation, policy analysis, and safety review work. Working in hospitals trained me to be detail-oriented, consistent, and precise in applying rules where ambiguity is not an option, especially in high-pressure environments. These habits shaped the way I approach annotation tasks: I break down information, test assumptions, question edge cases, and ensure my judgment aligns with the intended policy.
I have hands-on experience with AI tools and have worked on annotation, prompt evaluation, content assessment, and model-assisted tasks. I've also contributed to academic research, systematic reviews, and data analysis, which built my ability to interpret messy datasets, identify patterns, and maintain accuracy across repetitive workflows. I'm particularly drawn to roles that require critical thinking, linguistic sensitivity, and the ability to move between strict policy application and subtle contextual judgment — exactly the balance required for AI safety evaluation and RLHF-style tasks.
At my core, I’m practical, adaptable, and committed to delivering work that actually moves the needle. I take quality personally. I don’t cut corners. And I’m not afraid to ask: “Does this actually make sense?” That mix of curiosity and skepticism is what keeps me honest when evaluating AI outputs. I enjoy this field because it blends structure with nuance, and it rewards people who can think clearly, communicate well, and stay consistent.