Sr Data Engineer

  • Location: Bridgeville, PA
  • Type: Direct Hire
  • Remote
  • Job #9032

The Senior Data Engineer is responsible for the design, development, implementation, and support of ELT processes for the Data Platform. Works as part of a team to develop data pipelines and data transformation logic in ADF, Python, Scala, and other technologies to move data from a variety of operational platforms into the Azure cloud. Conducts performance tuning of ELT processes for medium and large volumes of data, and develops and oversees monitoring systems to ensure data loads complete on schedule and data is accurate. Assumes responsibility for resolving production support incidents on data load processes and related analytic tools. Participates in project scope identification, design, development, testing, and deployment activities in coordination with other members of the Data team and across teams. Develops and improves standards and procedures to support quality development, testing, and production support. Participates in the on-call duty phone rotation. Provides leadership on projects and other Data initiatives as assigned. The Senior Engineer will be competent to complete most Data tasks independently and will provide guidance to other members of the team as required. The position requires on-call and off-hours support as needed for operations and projects.

 

Responsibilities

  • Designs and develops quality Data Warehouse solutions 
  • Develops high quality, scalable data pipelines and data processes in Azure distributed cloud environment 
  • Conducts testing, code reviews, data integrity, and performance optimization 
  • Creates and maintains technical design documentation 
  • Leads requirements gathering for data modeling and contributes to data architecture 
  • Writes and promotes high-quality code that produces accurate data 
  • Supports developers, data analysts, business partners, and data scientists who need to interact with the data platform 
  • Responsible for production support, including analyzing root cause and developing fixes to restore ETL and data operational readiness, planning and coordinating maintenance, conducting audits and validating jobs and data 
  • Mentors other team members, cross-trains and provides guidance 
  • Applies a solid understanding of the work estimation process to lead large or complex estimation activities 
  • Meets deadlines within budget, schedule, and quality expectations 
  • Adheres to enterprise architecture standards and contributes to development and testing standards 
  • Maintains pipelines in a Git repository 
  • Learns internal systems and applies experience to solving data problems 
  • Contributes to a collaborative work environment within and across teams 

Requirements

  • Bachelor’s Degree in Computer Science or a related technical field 
  • 7+ years of overall experience in Data Warehouse design and data modeling patterns (on-premises or cloud) 
  • 7+ years of overall experience in developing SQL/Synapse data warehouses and T-SQL coding 
  • 5+ years of experience in developing/supporting a data platform in Azure with a data lake, Azure SQL Server, or Databricks 
  • 5+ years of cloud experience with Azure/AWS and Databricks 
  • 5+ years of experience with ETL tools such as ADF or equivalent 
  • 5+ years of experience with Python in a distributed cloud environment 
  • Strong experience in performance tuning of SQL and PySpark with medium and large volumes of data 
  • Highly familiar with ETL tools and Python-based notebooks for data transformation 
  • Expert in writing T-SQL or equivalent for processing big data 
  • Well-rounded experience working in a DevOps environment, supporting data platform processes for business units 
  • Thorough knowledge of core data concepts, providing solutions for business use cases in a distributed computing environment 
  • Expert-level knowledge of writing Python code in a distributed computing environment, handling big data loads in lakehouse and Delta Lake environments 
  • Strong knowledge of performance improvement methods for data processes 
  • Experience working in an agile Data Warehouse team with 5+ members 
  • Knowledge of BI tools such as Power BI or equivalent to support Data Warehouse development, testing, and operational support activities 
  • Excellent written and verbal communication skills 
  • Ability to work independently, handle multiple tasks simultaneously, and adapt quickly to change with a variety of people and work styles 
  • Must be capable of fully yet concisely articulating technical concepts to non-technical audiences 
  • Keen to learn new concepts and keep up to date with the emerging technical stack 
Estimated Compensation: $99,900 - $144,650 Per Year

MEET YOUR RECRUITER

Vanda Alves

Recruiting Director, Data Engineering & Analytics/BI

(201) 537-0011

