Senior Data Engineer
Cleveland, Ohio | Contract | Remote
The Senior Data Engineer will join the Marketing Research and Analytics team to support the building of new data capabilities and data infrastructure. In this role, you will be an integral part of a team that supports the analysis, creative modeling and measurement, and optimization of advertising campaigns across channels. As a Senior Data Engineer, you will be responsible for designing, building, and maintaining data pipelines; integrating and productionizing models; and supporting dashboards/scorecards used for analytics to solve business problems.
You will use tools such as Python, Terraform, and GitHub Actions to deploy cloud infrastructure and develop internal tooling. You will support the building of Marketing's data infrastructure and integrate with external APIs to exchange data with vendor partners. You will work closely with data analysts and data scientists to prepare datasets for analytics and model training, automate scoring processes, and enable the use of new tools for the Marketing Research and Analytics team.
Day-to-Day Responsibilities:
• Work with Marketing data partners to build data pipelines that automate data feeds from the partners into internal systems on Snowflake.
• Work with Data Analysts to understand their data needs and prepare datasets for analytics.
• Work with Data Scientists to build the infrastructure to deploy models, monitor their performance, and build the necessary audit infrastructure.
Required skills:
• Experience building data pipelines, data pipeline infrastructure, and related tools and environments used in analytics and data science (e.g., Python, Unix).
• Experience developing analytic workloads with AWS services: S3, Simple Queue Service (SQS), Simple Notification Service (SNS), Lambda, EC2, ECR, and Secrets Manager.
• Strong proficiency in Python, SQL, Linux/Unix shell scripting, GitHub Actions or Docker, Terraform or CloudFormation, and Snowflake. (Order of importance: Terraform, Docker, GitHub Actions or Jenkins.)
• Experience with orchestration tools such as Prefect, dbt, or Airflow.
• Experience automating data ingestion, processing, and reporting/monitoring.
• Experience with other relevant tools used in data engineering (e.g., SQL, Git).
• Ability to set up environments (Dev, QA, and Prod) using GitHub repositories and GitHub branching rules/methodologies, and to maintain them via SQL coding and proper versioning.
Education/Experience:
- Bachelor's degree or higher in an Information Technology discipline or related field of study, plus a minimum of two years of work experience designing, programming, and supporting software programs or applications.
- In lieu of a degree, a minimum of four years of related work experience designing, programming, and supporting software programs or applications may be accepted.