I’m a data analyst and statistician with over two decades of experience helping organizations, universities, and individual researchers turn data into clear, actionable insights. I hold a B.A. from Cornell University and a Master’s from UCLA, and I’ve built my career at the intersection of research, analytics, and applied technology.
Over the years, I’ve completed thousands of projects spanning academic research, program evaluation, and business intelligence, supporting clients who need more than just numbers—they need interpretation, rigor, and clarity. I’m fluent in R, Python, SQL, and SPSS, and I specialize in building reproducible, transparent workflows that move smoothly from data cleaning and transformation to analysis and reporting.
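As a small illustration of what that kind of workflow can look like in practice, here is a minimal Python sketch of a single cleaning-to-summary step; the file name, column names, and grouping variable are hypothetical placeholders, not drawn from any real client project.

```python
# Minimal sketch of a reproducible cleaning-to-summary step in Python.
# File and column names ("survey.csv", "income", "group") are hypothetical.
import pandas as pd

def load_and_clean(path: str) -> pd.DataFrame:
    """Read raw data, standardize column names, and handle obvious issues."""
    df = pd.read_csv(path)
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    # Coerce malformed numeric entries to NaN so they can be handled explicitly later.
    df["income"] = pd.to_numeric(df["income"], errors="coerce")
    return df

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Group-level summary that feeds directly into the analysis and reporting stage."""
    return df.groupby("group")["income"].agg(["count", "mean", "median", "std"])

if __name__ == "__main__":
    cleaned = load_and_clean("survey.csv")
    print(summarize(cleaned))
```

Keeping each stage in small, named functions like these is what makes the pipeline easy to rerun, audit, and hand off.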
My technical background covers ANOVA, regression, clustering, time-series forecasting, and multivariate modeling, with parametric or non-parametric methods chosen according to the data structure and the results of assumption testing. I routinely handle data wrangling, recoding, integration, and imputation, and I apply bootstrapping, resampling, and robust estimation techniques when needed to ensure the reliability of results.
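To give a concrete sense of the resampling side of that toolkit, here is a minimal Python sketch of a percentile bootstrap for a difference in group means; the data are simulated purely for illustration.

```python
# Minimal sketch of a percentile bootstrap for a difference in group means.
# The two groups below are simulated solely for illustration.
import numpy as np

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=80)   # simulated outcome, group A
group_b = rng.normal(loc=54, scale=12, size=75)   # simulated outcome, group B

def bootstrap_mean_diff(a, b, n_boot=10_000, rng=rng):
    """Resample each group with replacement and collect the mean difference."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        a_resample = rng.choice(a, size=a.size, replace=True)
        b_resample = rng.choice(b, size=b.size, replace=True)
        diffs[i] = b_resample.mean() - a_resample.mean()
    return diffs

diffs = bootstrap_mean_diff(group_a, group_b)
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"Observed difference: {group_b.mean() - group_a.mean():.2f}")
print(f"95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")
```

The same pattern extends naturally to medians, regression coefficients, or any other statistic whose sampling distribution is awkward to derive analytically.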
In recent years, I’ve also worked on AI-driven data projects, combining traditional statistical techniques with natural language processing and generative AI to analyze large, unstructured datasets. This blend of classical statistics and modern data science allows me to design forward-looking analytic systems that stay grounded in sound methodology.
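As a simple illustration of the classical side of that blend, the sketch below pairs TF-IDF text features with k-means clustering to group a handful of made-up support messages; it stands in for the kind of structure-finding step that often precedes deeper statistical or generative analysis.

```python
# Minimal sketch: TF-IDF features plus k-means clustering on unstructured text.
# The documents and the choice of k=2 are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "billing question about last month's invoice",
    "invoice total does not match the contract",
    "app crashes when exporting the report",
    "export to PDF fails with an error message",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)               # sparse TF-IDF matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, label in zip(docs, labels):
    print(label, doc)
```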
Clients describe my approach as precise, organized, and responsive. I deliver analyses that are both statistically rigorous and easy to understand, whether the end goal is a publication, report, or data-driven business decision.
If you’re looking for an experienced analyst who combines academic training, technical fluency, and real-world practicality, I’d be glad to discuss how I can help you get meaningful results from your data.