
100% Remote Job // Senior ML Platform Engineer (Serving Infrastructure, Kubernetes, Cloud) // Contract at Remote, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=2239707&uid=

Hello Professionals,

We have openings for:

Role: Senior ML Platform Engineer (Serving Infrastructure)

Location: Remote

Contract

Role Overview: We're looking for an experienced engineer to build our ML serving infrastructure. You'll create the platforms and systems that enable reliable, scalable model deployment and inference. This role focuses on the runtime infrastructure that powers our production ML capabilities.

Key Responsibilities:

Design and implement scalable model serving platforms for both batch and real-time inference

Build model deployment pipelines with automated testing and validation

Develop monitoring, logging, and alerting systems for ML services

Create infrastructure for A/B testing and model experimentation

Implement model versioning and rollback capabilities

Design efficient scaling and load balancing strategies for ML workloads

Collaborate with data scientists to optimize model serving performance

Technical Requirements:

7+ years of software engineering experience, with 3+ years in ML serving/infrastructure

Strong expertise in container orchestration (Kubernetes) and cloud platforms

Experience with model serving technologies (TensorFlow Serving, Triton, KServe)

Deep knowledge of distributed systems and microservices architecture

Proficiency in Python and experience with high-performance serving

Strong background in monitoring and observability tools

Experience with CI/CD pipelines and GitOps workflows

Nice to Have:

Experience with model serving frameworks:
  - TorchServe for PyTorch models
  - TensorFlow Serving for TF models
  - Triton Inference Server for multi-framework support
  - BentoML for unified model serving

Expertise in model runtime optimizations:
  - Model quantization (INT8, FP16)
  - Model pruning and compression
  - Kernel optimizations
  - Batching strategies
  - Hardware-specific optimizations (CPU/GPU)

Experience with model inference workflows:
  - Pre/post-processing pipeline optimization
  - Feature transformation at serving time
  - Caching strategies for inference
  - Multi-model inference orchestration
  - Dynamic batching and request routing

Experience with GPU infrastructure management

Knowledge of low-latency serving architectures

Familiarity with ML-specific security requirements

Background in performance profiling and optimization

Experience with model serving metrics collection and analysis

Years of Experience: 12 years

Thanks,

Lishy A

Talent Acquisition Group

Email : [email protected]

Phone : +1 201-201-8239  (Ext: 1061)

Smart IT Frame LLC

Keywords: continuous integration continuous deployment machine learning information technology
05:12 AM 08-Mar-25
