Sidley Austin LLP

Senior Data Engineer

Recruiting Location: US-IL-Chicago

Department: Data and AI

Summary

The Senior Data Engineer will design, build, and maintain the scalable data pipelines, models, and infrastructure that power analytics, business intelligence, and machine‑learning products across the company. Partnering closely with business, product, and analytics teams, you will translate complex requirements into elegant, reliable data solutions and help drive the delivery of innovative data products. This role reports to the Senior Manager, Data Engineering.

Duties and Responsibilities

  • Design, develop, and maintain robust, scalable data pipelines and ETL processes, ensuring efficient ingestion, transformation, and storage of data.
  • Build and optimize data models and schemas for analytics, reporting, and operational data stores.
  • Implement and maintain data quality frameworks, including data validation, monitoring, and alerting mechanisms.
  • Collaborate closely with data architects, analysts, data scientists, and product teams to align data engineering activities with business goals.
  • Leverage cloud data platforms (AWS, Azure, GCP) to build and optimize data storage solutions, including data warehouses, data lakehouses, and real-time data processing.
  • Develop automation processes and frameworks for CI/CD supported by version control, linting, automated testing, security scanning, and monitoring.
  • Contribute to the maintenance and improvement of data governance practices, helping to ensure data integrity, accessibility, and compliance with regulations such as GDPR.
  • Provide technical mentorship and guidance to junior team members, promoting best practices in software engineering, data engineering, and agile development.
  • Troubleshoot and resolve complex data infrastructure and pipeline issues, ensuring minimal downtime and optimal performance.

Salaries vary by location and are based on numerous factors, including, but not limited to, the relevant market, skills, experience, and education of the selected candidate. If an estimated salary range for this role is available, it will be provided in our Target Salary Range section. Our compensation package also includes bonus eligibility and a comprehensive benefits program. Benefits information can be found at Sidley.com/Benefits.

Target Salary Range

$148,000 - $164,000 if located in Illinois

Qualifications

To perform this job successfully, an individual must be able to perform the Duties and Responsibilities (Duties) above satisfactorily and meet the requirements below. The requirements listed below are representative of the minimum knowledge, skill, and/or ability required. Reasonable accommodations will be made to enable individuals with disabilities to perform the essential functions of the job. If you need such an accommodation, please email staffrecruiting@sidley.com (current employees should contact Human Resources). 

 

Education and/or Experience: 

Required:

  • Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
  • A minimum of 5 years of hands-on experience in data engineering, building scalable data pipelines and ETL/ELT processes.
  • Extensive experience with cloud data platforms such as Azure, AWS, and Google Cloud.
  • Strong proficiency with Python, SQL, and Apache Spark for data processing.
  • Hands-on experience with modern data-platform components (object storage, Lakehouse engines, orchestration tools, columnar warehouses, streaming services).
  • Proven experience with data modeling, schema design, and performance tuning of large-scale data systems.
  • Deep understanding of data engineering best practices: code repositories, CI/CD pipelines, test automation, monitoring, and alerting systems.
  • Skilled at crafting compelling data narratives through tables, reports, dashboards, and other visualization tools.
  • Strong problem-solving and analytical skills with excellent attention to detail.
  • Excellent communication skills and experience collaborating with technical and business stakeholders.

Preferred:

  • Master's degree in Computer Science or Engineering
  • Experience building data pipelines in an Azure Databricks environment
  • Experience migrating to, or building, data platforms from the ground up
  • Experience with Infrastructure as Code (IaC) and Governance as Code
  • Familiarity with machine-learning workloads and partnering on feature engineering
  • Experience working in an Agile delivery model

 

Other Skills and Abilities:

The following will also be required of the successful candidate:

  • Strong organizational skills
  • Strong attention to detail
  • Good judgment
  • Strong interpersonal communication skills
  • Strong analytical and problem-solving skills
  • Able to work harmoniously and effectively with others
  • Able to preserve confidentiality and exercise discretion
  • Able to work under pressure
  • Able to manage multiple projects with competing deadlines and priorities

 

Sidley Austin LLP is an Equal Opportunity Employer

#LI-Hybrid

#LI-OE1
