Job Description
Job Purpose:
We are looking for a smart, dynamic individual with a passion for coding,
complex problem solving, and a strong will to learn. The candidate must be an
expert data engineer who can help us design and develop our next-generation,
cloud-enabled data capabilities.
Mandatory Skills:
● Proven experience working as a Data Engineer or similar role.
● Strong programming skills in Python and experience with data processing
frameworks like Spark.
● Hands-on experience with AWS cloud services, including but not limited to
EC2, S3, Glue, Redshift, EMR, and EKS.
● Proficiency in workflow management tools like Apache Airflow.
● Experience with containerization technologies such as Docker.
● Solid understanding of data modelling, ETL processes, and data
warehousing concepts.
● Excellent problem-solving skills and attention to detail.
● Strong communication and collaboration skills, with the ability to work
effectively in a team environment.
● Experience with big data technologies such as Hadoop, Hive, or Presto.
● Familiarity with data analytics tools and techniques.
Responsibilities
Role Responsibilities:
· Design, develop, and maintain scalable data pipelines and infrastructure
on the AWS cloud platform.
· Implement and optimize data ingestion, processing, and storage solutions
using technologies such as Airflow, Spark, Python, and Docker.
· Collaborate with cross-functional teams to understand data requirements
and design solutions to meet business needs.
· Monitor and troubleshoot data pipeline performance and reliability issues,
ensuring data quality and integrity.
· Implement best practices for data security, compliance, and governance.
· Stay up to date with industry trends and emerging technologies in data engineering.
● 3+ years of hands-on experience as a Full Stack Developer.