Experience with data modeling, design patterns, and building highly scalable and secure solutions. Knowledge of the agile software development process and familiarity with performance-metric tools.
Qualifications:
- 5-7 years of experience as a Data Engineer, with extensive hands-on experience in PySpark, advanced SQL, Spark, Snowflake, and Glue.
- AWS expertise (Azure or Google Cloud will work too): Lambda, Glue, S3, etc.
- Experience in software development, CI/CD, and Agile methodology.
- Experience with big-data platforms such as Hadoop, HBase, CouchDB, Hive, and Pig. Experience with Teradata, Oracle, MySQL, Tableau, QlikView, or similar reporting and BI packages.
- Proficiency in SQL, PL/SQL, and similar languages, as well as UNIX shell scripting.
- Experience working with JSON, YAML, XML, etc.
- Data governance knowledge: metadata management, data cataloging, and data access. Experience with ETL scheduling tools such as Apache Airflow.
- Nice to have: SAP SuccessFactors experience.
Skills:
- Strong analytical and problem-solving skills paired with the ability to develop creative and efficient solutions; distinct customer focus and quality mindset.
- Ability to work at an abstract level and gain consensus.
- Ability to see from and sell to multiple viewpoints.
- Excellent interpersonal, leadership, and communication skills and the ability to work both independently and in various team settings.
- Ability to work under pressure with a solid sense of setting priorities.
- Ability to manage own learning and contribute to domain knowledge building.
Education:
- Bachelor's or master's degree in a business-related or technology-related field.
- Relevant AWS certifications, e.g., Cloud Practitioner or Solutions Architect Associate.
- Certification or experience in migration and data-integration strategies.
- Experience with an MPP data warehouse, e.g., Snowflake, BigQuery, Redshift, etc.