While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people and take pride in fostering a culture built on transparency, diversity, integrity, learning, and growth.
If working in an environment that encourages you to innovate and excel, both professionally and personally, appeals to you, you will enjoy your career with Quantiphi!
Role : Associate Architect - Platform (MLOps)
Experience Level : 6 to 9 Years
Location : Mumbai / Bangalore / Trivandrum (Hybrid)
Roles and Responsibilities:
Orchestrating LLM Workflows & Development: Design, implement, and scale the underlying platform that supports GenAI workloads, whether real-time or batch. Workloads may range from fine-tuning and distillation to inference.
LLMOps (LLM Operations): Build and manage operational pipelines for training, fine-tuning, and deploying LLMs such as Llama, Mistral, GPT-3/4, BERT, or similar. Ensure smooth integration of these models into production systems.
GPU Optimization: Optimize GPU utilization and resource management for AI workloads, ensuring efficient scaling, low latency, and high throughput in model training and inference. Develop techniques to manage multi-GPU systems for high-performance computation. Maintain a clear understanding of LLM parallelization techniques as well as other inference optimization techniques.
Infrastructure Design & Automation: Design, deploy, and automate scalable, secure, and cost-effective infrastructure for training and running AI models. Work with cloud providers (AWS, GCP, Azure) to provision the necessary resources, implement auto-scaling, and manage distributed training environments.
Platform Reliability & Monitoring: Implement robust monitoring systems to track the performance, health, and efficiency of deployed AI models and workflows. Troubleshoot issues in real time and optimize system performance for seamless operations. Transferable knowledge from monitoring traditional software in production is acceptable; experience monitoring ML/GenAI workloads is preferred.
Database Knowledge: Good knowledge of database concepts, including performance tuning, RBAC, and sharding, along with exposure to different types of databases from relational to object and vector databases, is preferred.
Collaboration with AI/ML Teams: Work closely with data scientists, machine learning engineers, and product teams to understand and support their platform requirements, ensuring the infrastructure is capable of meeting the needs of AI model deployment and experimentation.
Security & Compliance: Ensure that platform infrastructure is secure, compliant with organizational policies, and follows best practices for managing sensitive data and AI model deployment.
Required Skills & Qualifications:
Experience:
3+ years of experience in platform engineering, DevOps, or systems engineering, with a strong focus on machine learning and AI workloads.
Proven experience working with LLM workflows and GPU-based machine learning infrastructure.
Hands-on experience in managing distributed computing systems, training large-scale models, and deploying AI systems in cloud environments.
Strong knowledge of GPU architectures (e.g., NVIDIA A100, V100, etc.), multi-GPU systems, and optimization techniques for AI workloads.
Technical Skills:
Proficiency in Linux systems and command-line tools. Strong scripting skills (Python, Bash, or similar).
Expertise in containerization and orchestration technologies (e.g., Docker, Kubernetes, Helm).
Experience with cloud platforms (AWS, GCP, Azure); tools such as Terraform, Terragrunt, or similar infrastructure-as-code solutions; and exposure to automating CI/CD pipelines using Jenkins, GitLab, GitHub, etc.
Familiarity with machine learning frameworks (TensorFlow, PyTorch, etc.) and deep learning model deployment pipelines. Exposure to vLLM or NVIDIA software stack for data & model management is preferred.
Expertise in performance optimization tools and techniques for GPUs, including memory management, parallel processing, and hardware acceleration.
Soft Skills:
Strong problem-solving skills and ability to work on complex system-level challenges.
Excellent communication skills, with the ability to collaborate across technical and non-technical teams.
Self-motivated and capable of driving initiatives in a fast-paced environment.
Preferred Skills & Qualifications:
Experience in building or managing machine learning platforms, specifically for generative AI models or large-scale NLP tasks.
Familiarity with distributed computing frameworks (e.g., Dask, MPI, PyTorch DDP) and data pipeline orchestration tools (e.g., AWS Glue, Apache Airflow, etc.).
Knowledge of AI model deployment frameworks such as TensorFlow Serving, TorchServe, vLLM, and Triton Inference Server.
Good understanding of LLM inference and how to optimize self-managed infrastructure.
Understanding of AI model explainability, fairness, and ethical AI considerations.
Experience in automating and scaling the deployment of AI models on a global infrastructure.
Preferred Experience:
Prior hands-on experience with, or strong familiarity with, the NVIDIA ecosystem: Triton Inference Server, CUDA, NVAIE, TensorRT, NeMo, etc.
Proficiency with Kubernetes (including the GPU Operator), Linux, and AI deployment and experimentation tools.
If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
