Scale our data infrastructure to multi-terabyte volumes as we expand our customer base. We are facing scaling challenges today and need someone who can jump in and start delivering performance improvements immediately.
Build ETL pipelines for data profiling, data cleaning, and data aggregation.
Ensure data integrity across all data sources.
Manage data ingestion and processing infrastructure stacks with proper monitoring and alerting.
Manage a team of overseas contractors.
What You Should Have
Bachelor’s degree in Computer Science, a related technical field, or equivalent practical experience.
Experience implementing and scaling production cloud systems for data-intensive applications such as automated ETL, streaming, and batch workloads.
5+ years of experience in engineering data solutions using big data technologies (Hive, Presto, Spark, Flink) on large-scale data sets.
Experience with at least one major cloud vendor (GCP, AWS, or Azure); experience with more than one is a plus.