DataOps Engineer

Placer.ai

On Site
Full Time
$175,000
Ramat Gan, Tel Aviv District, Israel

Job Overview

Job Title: DataOps Engineer
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $175,000
Location: Ramat Gan, Tel Aviv District, Israel

Job Description

About Placer.ai

Placer.ai is transforming how organizations understand the physical world. Our location analytics platform provides unprecedented visibility into locations, markets, and consumer behavior. Placer empowers thousands of customers—from Fortune 500 companies to local governments and nonprofits—to make smarter, data-driven decisions.

What sets us apart? We've built the most advanced location intelligence platform in the market while maintaining an uncompromising commitment to privacy, proving that powerful analytics and responsible data practices can coexist.

Our growth reflects the market's demand: we reached $100M in annual recurring revenue within just 6 years of launching, achieved unicorn status with a $1B+ valuation in 2022, and continue to expand rapidly as one of North America's fastest-growing tech companies. We're creating a $100B+ market opportunity, and we're just getting started.

Named one of Forbes America's Best Startup Employers and a Deloitte Technology Fast 500 company, we're building a culture where innovation thrives, collaboration is the norm, and every team member contributes to reshaping how the world understands location.

Summary

Placer.ai is looking for a DataOps Engineer to own the infrastructure powering its large-scale data processing platform. This platform-facing role sits at the intersection of data engineering and infrastructure. You will be responsible for making Spark run reliably and efficiently on Kubernetes, enabling data engineers to build with confidence.

This role requires a deep understanding of data workloads to inform smart infrastructure decisions, along with strong production instincts to keep complex systems running reliably at scale. If you are passionate about optimizing Spark job runtimes, right-sizing cluster autoscalers, and building internal tooling that makes the data platform effortless to use, this opportunity is for you.

Responsibilities

  • Design, deploy, and operate the Kubernetes-based infrastructure running Apache Spark and large-scale data processing workloads.
  • Own the reliability, performance, and cost-efficiency of the data platform, including SLAs, autoscaling, resource quotas, and workload isolation.
  • Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost.
  • Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components.
  • Develop observability tooling (metrics, logging, alerting, data quality dashboards) to proactively surface issues across the pipeline stack.
  • Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions.
  • Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure.
  • Drive platform improvements end-to-end: from design through deployment and ongoing ownership.

Requirements

  • 5+ years of experience in a production infrastructure, SRE, or DevOps role.
  • 2+ years of hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production.
  • Strong Kubernetes experience, including Spark-on-K8s, autoscaling, resource management, and the broader K8s ecosystem.
  • 2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar).
  • Proficiency in at least one general-purpose language—Python or Go preferred.
  • Experience with workflow orchestration tools, particularly Apache Airflow.
  • Solid understanding of cloud infrastructure—GCP preferred (GCS, GKE, IAM).
  • Strong observability skills: metrics pipelines, structured logging, alerting frameworks.

Other Requirements

  • Familiarity with Delta Lake, Parquet, and columnar storage formats.
  • Experience with data quality frameworks and pipeline lineage tooling.
  • Knowledge of query optimization, partition strategies, and Spark performance tuning.
  • Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar).

Why Join Placer.ai?

  • Join a rocketship! Pioneer a new market in location analytics.
  • Take a central and critical role at Placer.ai.
  • Work with, and learn from, top-notch talent.
  • Competitive salary and excellent benefits.

Noteworthy Links To Learn More About Placer.ai

  • Placer.ai's $100M Series C funding round (unicorn valuation!)
  • See our data in action at The Anchor
  • Placer.ai in the news
  • Video: About Placer for Commercial Real Estate
  • Video Playlist: Placer Brand & Explainer Videos

Placer.ai is committed to maintaining a drug-free workplace and promoting a safe, healthy working environment for all employees. Placer.ai is an equal opportunity employer and has a global remote workforce. Placer.ai’s applicants are considered solely based on their qualifications, without regard to an applicant’s disability or need for accommodation. Any Placer.ai applicant who requires reasonable accommodations during the application process should contact Placer.ai’s Human Resources Department to make the need for an accommodation known.

Key skills/competency

  • Kubernetes
  • Apache Spark
  • Data Processing
  • Infrastructure as Code (IaC)
  • DevOps / SRE
  • Apache Airflow
  • Google Cloud Platform (GCP)
  • Observability
  • Delta Lake
  • CI/CD

Tags:

DataOps Engineer
Data Processing
Infrastructure
Kubernetes
Apache Spark
DevOps
SRE
Airflow
GCP
Terraform
Observability
CI/CD
Delta Lake
Python
Go
Kafka
PostgreSQL
Redis
Autoscaling
Data Quality

How to Get Hired at Placer.ai

  • Research Placer.ai's vision: Study their mission in location analytics, market impact, and growth story as a unicorn company.
  • Tailor your resume: Highlight your experience with Kubernetes, Apache Spark, and cloud infrastructure (GCP preferred) for DataOps Engineer roles.
  • Showcase problem-solving skills: Prepare to discuss how you optimized data pipelines, managed large-scale infrastructure, or improved system reliability.
  • Demonstrate cloud expertise: Emphasize hands-on experience with GCP services, IaC tools like Terraform, and observability frameworks in your Placer.ai application.
  • Network effectively: Connect with Placer.ai employees on LinkedIn to gain insights and potentially secure a referral.
