Data Engineer

Q4

Hybrid
Full Time
R$340,000

Job Overview

Job Title: Data Engineer
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: R$340,000
Location: Hybrid

Job Description

About Q4

At Q4, we make an impact together, obsess over our customer, operate with integrity, and bring big ideas to life. Q4 is charting a bold new path for investor relations as the first AI-driven IR Ops Platform, providing everything an IR team needs to succeed on a single, powerful platform. The Q4 Platform enables public companies to attract, manage, and understand investors - all in one place. Over 2,600 customers, including many of the most respected brands in the world, trust Q4 to help drive premium valuations for their companies. Only Q4 offers a tech stack holistically designed to equip IR teams with data, insights, and smart workflows that power remarkable outcomes. Learn more at q4inc.com.

We hire smart, curious, and talented people to push boundaries, reimagine what’s possible, and turn challenges into opportunities. All while keeping the needs of our clients at the heart of everything we do. Come grow with us!

About The Role

As a Data Engineer, you will be at the heart of our AI-driven evolution, implementing scalable pipelines and high-quality data governance that fuel our powerful IR Ops Platform. We’re looking for a technical trailblazer who operates with integrity and is ready to work cross-functionally to deliver remarkable outcomes for our global client base.

What You'll Do

  • Implement and maintain data pipeline architecture, and optimize data flow and collection for cross-functional teams.
  • Help implement governance across the data landscape, ensuring data ingress and egress follow established hygiene guidelines and sanity rules.
  • Work with stakeholders, including the executive, product, data, and design teams, to assist with data acquisition, data-related technical issues, and other analytics needs.
  • Work cross-functionally to explore and propose solutions to business problems that can be addressed using insights from data.
  • Produce documentation of various types, including business requirements, functional/technical specifications, process flows, unit test plans, and user acceptance plans.

Tasks

  • Build high-quality, scalable, optimized, and maintainable data pipelines based on Q4 best practices.
  • Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
  • Contribute to the definition and implementation of standards and best practices for Data Analytics and Data Governance.
  • Maintain and troubleshoot the infrastructure built for optimal extraction, transformation, and loading of data from a wide variety of data sources.
  • Identify, design, and implement process improvements: automate manual processes, optimize data delivery, and improve data reliability, efficiency, and quality.
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business metrics.

Qualifications

  • 4+ years of professional experience in a Data Engineer or related role.
  • Working familiarity with a variety of storage mechanisms, including SQL and NoSQL databases, data warehouses, and data lakes.
  • Experience working with AWS cloud platforms and related systems.
  • Experience building and optimizing data pipelines, architectures, and data sets.
  • Experience with big data tools: Databricks, Spark, Kafka, etc.
  • Experience with data pipeline and workflow management tools, such as Airflow or Snowpipe.
  • Experience with real-time data processing and stream-processing systems: Kinesis, Spark Streaming, etc.
  • Experience in requirements analysis, design, implementation, and testing of software solutions, especially data-related, using Python, Scala, and/or other programming languages.
  • A successful track record of manipulating, processing, and extracting value from large, disconnected datasets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong project management, organizational, and communication skills.

Compensation & Pay Transparency

The anticipated compensation for this role is R$280,000 – R$360,000 per year (gross, BRL). Final compensation is determined by a candidate's unique skills, experience, and internal equity. This job posting is for an existing vacancy currently open at Q4.

Artificial Intelligence (AI) Disclosure

In our commitment to an efficient and objective hiring process, Q4 utilizes machine-based systems (AI) to assist in the initial sourcing of applicants. All final hiring and selection decisions are reviewed and conducted by our human recruitment team.

Key skills/competency

  • Data Pipeline Architecture
  • Data Governance
  • AWS Cloud
  • SQL & NoSQL Databases
  • Big Data Tools (Databricks, Spark, Kafka)
  • Workflow Management (Airflow, Snowpipe)
  • Real-time Data Processing (Kinesis)
  • Python/Scala
  • Data Manipulation & Analysis
  • Process Improvement

Tags:

Data Engineer
data pipeline
data governance
data acquisition
analytics
process improvement
troubleshooting
optimization
documentation
stakeholder management
AWS
SQL
NoSQL
Databricks
Spark
Kafka
Airflow
Snowpipe
Kinesis
Python
Scala

How to Get Hired at Q4

  • Research Q4's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Customize your resume: Highlight your data engineering expertise, AWS cloud experience, and proficiency with big data tools relevant to Q4.
  • Showcase pipeline expertise: Detail your experience building and optimizing data pipelines using tools like Databricks, Spark, Kafka, and Airflow.
  • Prepare for technical deep dives: Be ready to discuss SQL and NoSQL databases, data warehousing, data lakes, and programming in Python or Scala.
  • Demonstrate problem-solving: Share examples of how you've manipulated large datasets, performed root cause analysis, and implemented process improvements.
