Software Engineering LMTS/SMTS – Big Data & Distributed Systems, Salesforce

Job Description

Salesforce is hiring Software Engineering LMTS/SMTS professionals to join the Unified Intelligence Platform (UIP) team. This role focuses on building scalable, cloud-based big data platforms using Spark, Trino, and modern data engineering tools to support AI, analytics, and enterprise services across Salesforce.

Date Posted: January 23, 2026

Expiration Date: N/A

Qualification: Degree in Computer Science or equivalent experience

Experience: Relevant software engineering experience in big data and distributed systems

Job ID: JR326185


Main Responsibilities

  • Lead the architecture, design, development, and support of mission-critical data and platform services with full ownership and accountability.
  • Build and manage large-scale, metadata-driven data pipelines to ingest diverse data sources into a multi-cloud, petabyte-scale data platform.
  • Collaborate with product managers and client teams to translate business requirements into scalable technical solutions.
  • Architect secure, governed data solutions covering ingestion, processing, quality, and discovery.
  • Promote a service ownership model using automation, monitoring, alerting, and telemetry best practices.
  • Develop reusable data frameworks to standardize recurring data tasks and simplify tool migrations.
  • Implement advanced data quality services for continuous monitoring and compliance.
  • Build Salesforce-integrated applications to manage and monitor the full data lifecycle.
  • Establish and maintain CI/CD pipelines for seamless deployment across cloud environments.
  • Operate and optimize core technologies such as Spark, Trino, Airflow, Iceberg, Kubernetes, and cloud services (AWS, GCP).

Essential Qualifications

  • Strong software engineering skills, with experience in distributed systems and big data platforms.
  • Hands-on experience with Spark, Java, and cloud-native data technologies.
  • Proven ability to build data pipelines and services that sustain high traffic without degrading performance.
  • Versatility across platform, data, backend, DevOps, and support engineering roles.
  • Excellent collaboration, communication, and problem-solving skills.
  • Ability to deliver high-quality results in a fast-paced environment using agile development methods.

Preferred Qualifications

  • Experience with data lake and analytics technologies such as Trino, Iceberg, and DBT.
  • Experience with machine learning and AI platforms such as SageMaker and Jupyter Notebooks.
  • Experience working in multi-cloud environments, including AWS and GCP.
  • Familiarity with CI/CD frameworks, Kubernetes, and contemporary DevOps methodologies.

This role offers the opportunity to develop data platforms that serve large enterprises and power AI-driven systems across Salesforce.