Sidley Austin LLP

Data Engineer

Recruiting Location: US-IL-Chicago
Department: Data and AI

Summary

The Data Engineer will help build and maintain the data pipelines, models, and infrastructure that power analytics, business-intelligence, and machine-learning initiatives across the company. You’ll work alongside senior engineers and cross-functional partners to turn raw data into reliable, well-documented datasets that drive informed decisions and innovative products. This role reports to the Senior Manager, Data Engineering.

Duties and Responsibilities

  • Build end-to-end data solutions on Azure Databricks
  • Design, develop, and maintain scalable ETL and streaming data pipelines on Azure Databricks, leveraging Apache Spark, Delta Lake, and Azure Data Lake Storage (ADLS Gen2) to enable reliable lakehouse architectures and ensure efficient ingestion, transformation, and storage of data
  • Build and optimize data models and schemas for analytics, reporting, and operational data stores
  • Build and optimize Delta Lake / Lakehouse patterns (Bronze/Silver/Gold), including schema evolution and time travel (illustrated in the sketch following this list)
  • Develop high-quality PySpark / Spark SQL transformations, optimizing joins, partitioning, caching, and shuffle behavior
  • Implement and maintain data quality frameworks, including data validation, monitoring, and alerting mechanisms
  • Collaborate closely with data architects, data analysts, BI engineers, and product teams to align data engineering activities with business goals
  • Contribute to CI/CD pipelines supported by version control, linting, automated testing, security scanning, and monitoring
  • Troubleshoot and resolve complex data infrastructure and pipeline issues on the Azure Databricks platform, ensuring minimal downtime and optimal performance
  • Follow coding standards, participate in code reviews, and document your work to foster knowledge sharing
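
By way of illustration, the PySpark sketch below shows one possible Bronze-to-Silver step in the Delta Lake medallion (Bronze/Silver/Gold) pattern referenced above. The catalog, table, and column names (main.bronze.matter_events, event_id, event_ts) are hypothetical placeholders, not an actual schema.

    # Minimal, illustrative Bronze -> Silver Delta Lake transformation.
    # All object names below are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read raw ingested records from the Bronze layer.
    bronze_df = spark.read.table("main.bronze.matter_events")

    # Cleanse and conform: drop duplicates, standardize types, add load metadata.
    silver_df = (
        bronze_df
        .dropDuplicates(["event_id"])
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .withColumn("_loaded_at", F.current_timestamp())
    )

    # Write to the Silver layer as a Delta table, allowing additive schema evolution.
    (
        silver_df.write
        .format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .saveAsTable("main.silver.matter_events")
    )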

Salaries vary by location and are based on numerous factors, including, but not limited to, the relevant market, skills, experience, and education of the selected candidate. If an estimated salary range for this role is available, it will be provided in our Target Salary Range section. Our compensation package also includes bonus eligibility and a comprehensive benefits program. Benefits information can be found at Sidley.com/Benefits.

Target Salary Range

$129,000 - $144,000 if located in Illinois

Qualifications

To perform this job successfully, an individual must be able to perform the Duties and Responsibilities (Duties) above satisfactorily and meet the requirements below. The requirements listed below are representative of the minimum knowledge, skill, and/or ability required. Reasonable accommodations will be made to enable individuals with disabilities to perform the essential functions of the job. If you need such an accommodation, please email staffrecruiting@sidley.com (current employees should contact Human Resources).


Education and/or Experience: 

Required:

  • Bachelor's degree in Computer Science, Engineering, Data Science, or a related field
  • A minimum of 3 years of hands-on experience designing, building, and operating data solutions
  • Knowledge of Databricks architecture and core components, including Databricks Lakehouse, Delta Lake, Databricks SQL, Apache Spark Clusters, Unity Catalog, Databricks Workflows (Jobs), and Databricks Notebooks
  • Proficiency with Python, SQL, and Apache Spark for data processing
  • Proven experience building reusable, metadata-driven data ingestion frameworks using Python and Scala
  • Familiarity with cloud data-platform components such as object storage, metadata / data catalog services, batch/streaming/CDC ingestion & processing
  • Proven experience with data modeling, schema design, and performance tuning of large-scale data systems
  • Experience working with AI & BI engineers to deliver high-quality data products to stakeholders
  • Skilled at crafting compelling data narratives through tables, reports, dashboards, and other visualization tools
  • Understanding of data engineering best practices: code repositories, CI/CD pipelines, test automation, monitoring, and alerting systems
  • Strong problem-solving and analytical skills with excellent attention to detail
  • Excellent communication skills and experience collaborating with technical and business stakeholders

Preferred:

  • Experience building data pipelines in an Azure Databricks environment
  • Hands-on experience integrating Azure Databricks with Azure DevOps, Azure Blob Storage / ADLS Gen2, Azure Key Vault, and Azure Data Factory
  • Familiarity with enterprise data modeling tools such as ERwin Data Modeler, including the ability to interpret and apply logical and physical data models to analytical and lakehouse architectures
  • Familiarity with Infrastructure as Code (IaC)
  • Experience working with regulated/sensitive data and controls
  • Experience working in an Agile delivery model

Other Skills and Abilities:

The following will also be required of the successful candidate:

  • Strong organizational skills
  • Strong attention to detail
  • Good judgment
  • Strong interpersonal communication skills
  • Strong analytical and problem-solving skills
  • Able to work harmoniously and effectively with others
  • Able to preserve confidentiality and exercise discretion
  • Able to work under pressure
  • Able to manage multiple projects with competing deadlines and priorities

Sidley Austin LLP is an Equal Opportunity Employer

#LI-Hybrid

#LI-OE1
