AI/ML Backend Developer

Position Overview

We are looking for a highly skilled AI/ML Backend Developer to design and deliver scalable backend services that integrate seamlessly with advanced machine learning workflows. This role sits at the intersection of software engineering and applied AI—requiring a balance of backend development expertise, cloud-native architecture knowledge, and hands-on experience deploying and supporting ML models in production.

Why This Role Matters

AI models only generate business value when they are effectively integrated into reliable systems. This role ensures that data science teams and product stakeholders can trust their models to run at scale, in real time, and with full observability. By building resilient backend services and enabling continuous learning pipelines, the AI/ML Backend Developer directly accelerates innovation, improves decision-making, and safeguards the reliability of AI-driven solutions.

About the Role

As an AI/ML Backend Developer, you will:

  • Develop and maintain backend microservices using Python, Java, and Spring Boot
  • Build and integrate APIs (both GraphQL and REST) for scalable service communication
  • Deploy and manage services on Google Kubernetes Engine (GKE) in Google Cloud Platform
  • Work with Google Cloud Spanner (PostgreSQL dialect) and pub/sub tools such as Confluent Kafka or similar
  • Automate CI/CD pipelines using GitHub Actions and Argo CD
  • Design and implement AI-driven microservices
  • Collaborate with Data Scientists and MLOps teams to integrate ML models into production
  • Implement NLP pipelines and enable continuous learning workflows using Vertex AI or Kubeflow
  • Ensure observability of AI decisions by logging predictions, confidence scores, and fallbacks into monitoring tools or data lakes

Minimum Qualifications

  • Bachelor’s degree or equivalent experience (High School Diploma + 4 years of relevant experience)
  • 5+ years of backend development experience with Java and Spring Boot
  • 2+ years of experience working with APIs (GraphQL and REST) in microservices architectures
  • 2+ years of experience integrating or consuming ML/AI models in production environments (e.g., RESTful ML APIs, TensorFlow Serving, Vertex AI Endpoints)
  • Experience with structured and unstructured data (e.g., Rx claim metadata, clinical documents, NLP processing)
  • Familiarity with the ML model lifecycle (data ingestion, training, deployment, real-time inference, MLOps)
  • 2+ years of hands-on experience with GCP, AWS, or Azure
  • 2+ years of experience with pub/sub tools such as Kafka or similar
  • 2+ years of experience with databases (Postgres or similar)
  • 2+ years of experience with CI/CD tools (GitHub Actions, Jenkins, Argo CD, or similar)

Preferred Qualifications

  • Hands-on experience with Google Cloud Platform
  • Familiarity with Kubernetes concepts; deploying services on GKE is a plus
  • Strong understanding of microservice best practices and distributed systems
  • Familiarity with Vertex AI, Kubeflow, or similar AI platforms for model training and serving
  • Understanding of GenAI use cases, LLM prompt engineering, and orchestration tools (e.g., LangChain, Transformers)
  • Experience deploying Python-based ML services into Java microservice ecosystems (via REST, gRPC, or sidecar patterns)
  • Knowledge of claim adjudication, Rx domain logic, or healthcare-specific workflow automation

Job Category: Artificial Intelligence / Search and Machine Learning
Job Type: Full Time
Job Location: Remote
