Sr. Data Engineer
Cloudera

Job Description
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world’s largest enterprises.
Cloudera Data Engineers are key contributors to Cloudera's Data Platform, responsible for ingesting, curating, and provisioning data to support Business Operations, Analytics, and AI/ML initiatives. In this Sr. Data Engineer role, you'll design and implement data pipelines, utilizing a broad array of Big Data and Cloud-based technologies to manage complex data ingestion and transformation workflows with diverse requirements and SLAs. These pipelines support complex machine learning workflows and business intelligence systems.
As a Cloudera Data Engineer, you will deepen your expertise in data ingestion and transformation while ensuring the quality and integrity of data pipelines. You’ll work in an agile development environment, creating flexible and reusable pipelines based on the specifications provided by Data, Business Intelligence, and Operations Architects. Beyond traditional engineering, you will focus on modernizing development workflows and building internal tools that enable the team and business users to interact with data more efficiently.
A successful candidate will contribute to building robust data management processes on Cloudera's native Data Platform. These processes will support internal analytics and serve as models to drive external customer success.
As a Senior Data Engineer, you will:
- Collaborate with Data Architects, Operational Architects, and Data Analysts to understand the data and operational requirements across different business units.
- Partner with data owners to ensure seamless, reliable data ingestion.
- Develop and implement data transformations to enrich and provision data, following established specifications and standards.
- Create real-time, near real-time, and point-in-time data flows to meet the operational demands of business systems.
- Implement monitoring processes to track data quality and ensure the reliability of data services.
- Leverage AI-orchestrated development techniques (such as "Vibe Coding") to accelerate the delivery of new data pipelines and reduce the end-to-end development lifecycle.
- Design and deploy internal self-service tools, including automated documentation generators and natural language interfaces, to empower business users and reduce routine engineering requests.
- Promote and standardize AI-first engineering workflows to ensure high-quality, auto-validated, and well-documented code delivery across the team.
Required Experience:
- 5+ years of experience as a Data Engineer.
- Proficient in coding with Python (primary) and SQL, with experience in ETL, Business Intelligence, and data processing.
- Proven track record of contributing to the architecture and implementation of reliable and scalable data pipelines.
- Hands-on experience with Distributed Systems and Big Data technologies, including Spark and the Hadoop ecosystem (Hive, Impala, Kafka).
- Proven proficiency in Data Modeling using industry best practices (e.g., Kimball, Inmon) to ensure data integrity.
- Hands-on experience or strong proficiency in using AI-assisted coding tools (Copilots/LLMs) to modernize engineering workflows ("Vibe Coding").
- Ability to monitor critical data pipelines for quality and resolve any issues effectively.
- Education: Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- Strong communication skills, both written and verbal.
- This role is not eligible for immigration sponsorship.
You may also have:
- Experience with Apache Airflow or Apache NiFi.
- Expertise in optimizing data storage using HDFS/Parquet/Avro, Kudu, or HBase.
- Experience developing data engineering processes to support AI/ML use cases.
- Familiarity with building automation scripts or interfaces that leverage LLMs for data discovery or documentation.
What you can expect from us:
- Generous PTO Policy
- Support for work-life balance with Unplugged Days
- Flexible WFH Policy
- Mental & Physical Wellness programs
- Phone and Internet Reimbursement program
- Access to Continued Career Development
- Comprehensive Benefits and Competitive Packages
- Paid Volunteer Time
- Employee Resource Groups
- EEO/VEVRAA
Key skills/competencies
- Data Engineering
- Python
- SQL
- ETL
- Data Pipelines
- Big Data
- Spark
- Hadoop
- AI/ML
- Data Modeling
How to Get Hired at Cloudera
- Research Cloudera's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume: Highlight Python, SQL, Spark, Hadoop, and AI-assisted coding experience for data pipelines.
- Showcase pipeline expertise: Detail your track record in designing and implementing reliable, scalable data pipelines.
- Prepare for technical deep-dives: Focus on distributed systems, data modeling (Kimball, Inmon), and Big Data technologies.
- Demonstrate agile collaboration: Provide examples of working effectively with architects and analysts in agile environments.