Computational Biology and Data Science, Translational Medicine Co-Op
Job Summary
The Translational Medicine Computational Biology & Data Science group is seeking an independent and self-motivated Co-Op student to support a broad range of data-driven projects in the Hematology-Oncology and Immunology/Autoimmune therapeutic areas. The successful candidate will provide collaborative data science, analytical, and machine learning/AI support to Discovery and Translational projects, including target identification, biomarker discovery, and the interpretation of high-dimensional data sets to inform translational and therapeutic decisions. In addition, the candidate may contribute to the implementation of advanced analytics pipelines and tools for novel data integration, visualization, and analysis.
Key Responsibilities
- Implement AI/ML algorithms and tools to support Discovery and Translational programs and projects.
- Provide data engineering support and Shiny application development for translational data.
- Provide AI/ML analytical support for translational data in preclinical projects and clinical programs.
- Analyze and interpret internal and external NGS (DNA-seq/RNA-seq/scRNA/CITE-seq/spatial profiling) and clinicogenomics data sets.
- Identify and evaluate pharmacodynamic and predictive biomarkers for translational programs.
- Provide technical expertise and biological interpretation of analysis results within a highly collaborative environment.
- Integrate external and internal data sets into a unified data lake and develop query interfaces to address stakeholder needs.
Minimum Qualifications
- Ph.D. candidate in Bioinformatics, Computational Biology, Systems Biology, Data Science, or Biostatistics.
- Proficiency in R (tidyverse/Shiny/Bioconductor) and Python.
- Strong Linux skills and working knowledge of AWS cloud computing infrastructure.
- Training in mining high-dimensional data, with a strong background in statistics and machine learning.
- Excellent verbal and written communication skills.
Preferred Qualifications
- Experience with RStudio and R Markdown reporting, Git, Jupyter Notebooks, and VS Code.
- Experience with Python ML frameworks (e.g., PyTorch, TensorFlow/Keras, scikit-learn, XGBoost).
- Experience with HPC on AWS, Microsoft Fabric, and Nextflow workflow development.
- Experience with database management systems and knowledge graphs.
- Working knowledge of public data resources including GEO, TCGA, ICGC, CCLE, DepMap, cBioPortal, dbGaP, HPA, UK Biobank.
- Experience working with and developing AI/ML platforms.
Disclaimer: The above statements are intended to describe the general nature and level of work performed by employees assigned to this job. They are not intended to be an exhaustive list of all duties, responsibilities, and qualifications. Management reserves the right to change or modify such duties as required.