Data Engineer

Responsibilities

Integrate data into the S3 data lake and define schemas (EL)
Own distributed data processing jobs from source systems (Glue)
Maintain streaming data jobs (Lambda, Kinesis Data Firehose)
Implement systems that track data quality and consistency (Great Expectations)
Experience
Relevant professional experience

Experience with a distributed computing framework (Spark, Glue)
Knowledge of a streaming data framework (Kinesis Data Firehose, Kafka, Lambda)
Strong skills in a scripting language (Python, Bash)
Master’s degree or 1+ years of experience with workflow management tools (Airflow, Prefect, Dagster)
Proficient in at least one SQL dialect or tool (Redshift, PostgreSQL, DB2, dbt)
Comfortable working directly with data analysts to bridge business goals with data engineering
You are a self-starter who can work effectively in a fast-paced, ambiguous environment with changing priorities and minimally defined processes.

Apply for this position

Allowed file type(s): .pdf, .doc, .docx