Hiring: ML Research Engineer (ASR & Fine-tuning Specialist)
Location: Bangalore | Experience: 2+ Years | Full-time | On-site
What You’ll Do
- Train and fine-tune ASR models (Whisper, Wav2Vec2, Conformer) for multilingual, healthcare-focused speech data.
- Build, optimize, and fine-tune NLP components (intent classification, NER, entity linking) using transformer-based architectures.
- Research and implement SOTA fine-tuning techniques — LoRA, QLoRA, or full fine-tuning — to push real-world model performance.
- Design data pipelines for collection, annotation, augmentation, and synthetic generation in low-resource languages.
- Develop robust evaluation frameworks — precision, recall, F1, and domain benchmarks.
- Collaborate with AI engineers to deploy optimized models into production-ready pipelines powering healthcare voice systems.
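To give a concrete flavor of the evaluation work above, here is a minimal, dependency-free sketch of two metrics the role would routinely compute: word error rate (WER) for ASR output and per-label precision/recall/F1 for a classification task such as intent detection. This is an illustrative sketch, not the team's actual evaluation framework.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                             # deletion
                       d[j - 1] + 1,                         # insertion
                       prev + (ref[i - 1] != hyp[j - 1]))    # substitution
            prev = cur
    return d[-1] / len(ref)

def precision_recall_f1(gold: list[str], pred: list[str], label: str):
    """Per-label precision/recall/F1 for a classification task (e.g. intent)."""
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In practice these feed into domain benchmarks (e.g. medical-term WER sliced by language), but the core arithmetic is no more than this.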
You’re a Great Fit If You Have
- 2+ years of ML/DL research or applied model training experience.
- Hands-on expertise in ASR systems or LLM fine-tuning workflows.
- Strong command of PyTorch or TensorFlow, distributed training, and model evaluation.
- Practical exposure to multilingual datasets, cross-lingual transfer, and speech/NLP hybrid tasks.
- A passion for research-backed experimentation and model optimization in production settings.
Bonus Points
- Published research or open-source contributions.
- Background in low-resource language modeling or few-shot learning.
- Experience with quantization, distillation, or pruning for deployment optimization.
- Understanding of knowledge graphs or entity linking frameworks.
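For candidates curious what "quantization for deployment optimization" boils down to, here is a toy sketch of symmetric int8 quantization of a weight vector — the core idea behind shrinking models for production. Real deployments use library implementations (e.g. PyTorch or ONNX Runtime quantization); this illustrative example only shows the underlying mapping.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: map floats into [-127, 127] with one scale."""
    # `or 1.0` guards against an all-zero weight vector (scale would be 0).
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]
```

Each dequantized weight differs from the original by at most half a quantization step (scale / 2), which is the accuracy-vs-size trade-off this bonus skill is about.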