
[SK Telecom] [Proprietary AI Foundation Model Internship] Model Benchmarking & Data Processing Intern


This is an off-campus position, so you do not need to apply via Handshake. Please read the details carefully and apply through the company's own application process.

Description

Join SKT’s AI Research team to advance our in-house LLM platform A.X into a state-of-the-art foundation model (hundreds of billions of parameters). You will contribute to the development, training, and commercialization of next-generation LLMs and MLLMs that power SKT’s conversational AI services.

Job Information

Position: AI Research Intern (Foundation Model)

Duration: 3 months (until end of December, extendable upon agreement)

Process: Application → Coding Test → Interview → Final Offer

Workplace: Seongsu, Seoul

Responsibilities

  • Research and develop architectures & training techniques for large-scale foundation models (LLM, MLLM)
  • Design distributed training methods for models spanning multiple GPUs and servers
  • Build benchmarks for language understanding, reasoning, math/logic, coding, multimodal tasks
  • Research data augmentation methods for large-scale training

Qualifications

  • Enrolled in or a graduate of a Master's/Ph.D. program in Computer Science, NLP, Machine Learning, Math/Statistics, or a related field
  • 3+ years of research/engineering experience in NLP, dialogue systems, multimodal AI, or vision-language tasks
  • Hands-on experience in deep learning model training, serving, and applied AI services
  • Experience with distributed training on multi-GPU systems
  • Publication record in major ML/NLP/Multimodal conferences or journals
  • Awards in AI-related challenges
  • Large-scale data processing experience (Hadoop, Spark, Dask, etc.)

Preferred

  • 3+ years using frameworks such as Megatron-LM, NeMo, DeepSpeed
  • Expertise in multi-GPU, multi-node model optimization & distributed training
  • Research/development experience in very large LLM/Multimodal architectures
  • Data synthesis & evaluation methodology design experience
  • Ph.D. in AI-related field (NLP, ML, DL preferred)
  • Strong publication record in top-tier AI/ML/NLP venues

Other Information

Any false information in the application may lead to cancellation of the offer.

National veterans and applicants with disabilities will be given preference according to relevant laws.

If you need help with your resume or cover letter, or want to schedule a mock interview with Jacob Lee for this position: bit.ly/calendly-jacobhw-lee