Staff Data Engineer – Cloud Data Platform

Calix

Hybrid
Full Time
$210,000

Job Overview

Job Title: Staff Data Engineer – Cloud Data Platform
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $210,000
Location: Hybrid

Job Description

About Calix

Calix is leading a transformation among service providers, enabling them to deliver a distinct subscriber experience for smart homes and businesses and to monetize their networks through role-based cloud services, telemetry, analytics, automation, and software-driven adaptive networks.

The Opportunity: Staff Data Engineer – Cloud Data Platform

As a key member of a high-performing global team, the Staff Data Engineer – Cloud Data Platform will play a pivotal role in Calix's Cloud Data initiatives. The position focuses on architecture design, implementation, and technical leadership across data ingestion, extraction, transformation, and analytics on Calix's cloud data platform.

Responsibilities and Duties

  • Collaborate closely with Cloud product owners to understand and analyze product requirements, providing valuable feedback.
  • Develop conceptual, logical, and physical data models, alongside comprehensive metadata solutions.
  • Design and manage various data design deliverables, including data models, diagrams, flows, and corresponding data dictionary documentation.
  • Determine database structural requirements by thoroughly analyzing client operations, applications, and existing system data.
  • Provide technical leadership in software design to ensure service stability, reliability, scalability, and security.
  • Guide technical discussions within the engineering group and provide clear technical recommendations.
  • Conduct design reviews and code reviews with peer engineers to maintain high quality standards.
  • Guide testing architecture specifically for large-scale data ingestion and transformation processes.
  • Serve in a customer-facing engineering capacity for debugging and resolving field issues.

Qualifications

  • 10+ years of development experience in Data modeling, master data management, and building ETL/data pipeline implementations.
  • Proficiency in both Google Cloud Platform (GCP) services (BigQuery, Dataflow, Dataproc, PubSub/Kafka, Cloud Storage) and AWS.
  • Strong knowledge of big data processing frameworks such as Apache Spark and Flink.
  • Expertise in SQL and at least one programming language (Python, Java, or Scala), along with DBT.
  • Experience with BI tools like Google Data Studio, Looker, ThoughtSpot, and utilizing BigQuery BI Engine for optimized reporting.
  • Demonstrated strong analytical and troubleshooting skills, particularly in complex data scenarios.
  • Ability to collaborate effectively in a team environment and engage with cross-functional teams.
  • Proficient in conveying complex technical concepts to various stakeholders.
  • In-depth knowledge of data governance, security best practices, and compliance regulations across both GCP and AWS environments.
  • Bachelor’s degree in Computer Science, Information Technology, or a related field.

Location

This is a remote-based position, open to candidates located anywhere in the United States.

Key skills/competency

  • Data Modeling
  • ETL/Data Pipeline
  • Google Cloud Platform (GCP)
  • AWS
  • Apache Spark
  • Flink
  • SQL
  • Python
  • DBT
  • BigQuery

Tags:

Data Engineer
Data Modeling
ETL Implementation
Data Pipeline
Cloud Architecture
Analytics
Data Governance
Technical Leadership
Problem Solving
Design Review
Security Best Practices
GCP
AWS
BigQuery
Dataflow
Apache Spark
Flink
SQL
Python
DBT
PubSub

How to Get Hired at Calix

  • Research Calix's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume: Highlight 10+ years of experience in data modeling, ETL, and cloud platforms (GCP, AWS) using keywords like BigQuery, Spark, and DBT.
  • Showcase technical expertise: Prepare to discuss complex data pipeline designs, big data frameworks, and your proficiency in SQL, Python, or Java during interviews.
  • Emphasize problem-solving: Be ready to articulate your approach to troubleshooting intricate data scenarios and ensuring data reliability and scalability.
  • Demonstrate leadership and communication: Illustrate your experience guiding technical discussions, conducting code reviews, and effectively conveying technical concepts to diverse stakeholders.
