
Data Engineer

Key Responsibilities:
 Design, build, and optimize ETL/ELT data pipelines for processing structured and unstructured data.
 Develop and maintain data models, data warehouses, and data lakes for efficient storage and retrieval.
 Work with large-scale relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, etc.).
 Implement data integration solutions across multiple systems, ensuring data consistency and quality.
 Develop and maintain batch and real-time data processing using tools like Apache Spark, Kafka, or Flink.
 Collaborate with Data Scientists, Analysts, and Software Engineers to support data-driven decision-making.
 Optimize database performance and ensure data security, governance, and compliance with industry standards.
 Automate data workflows and deployment using CI/CD pipelines and orchestration tools (Airflow, Prefect, etc.); a minimal Airflow sketch follows this list.
 Work with cloud platforms (AWS, GCP, Azure) to design and implement scalable data solutions.
 Troubleshoot and resolve data-related issues to maintain system reliability and accuracy.
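To give candidates a concrete feel for the orchestration work described above, here is a minimal sketch of an Airflow DAG that schedules a daily extract-and-load step. This is an illustration, not part of the role's requirements: the DAG name, task names, and the extract/load callables are invented, and it assumes Airflow 2.x.

```python
# Hypothetical sketch of a daily pipeline DAG, assuming Airflow 2.x.
# The dag_id, task names, and extract/load callables are invented here.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull the day's records from a source system.
    print("extracting orders for", context["ds"])


def load_orders(**context):
    # Placeholder: write the extracted records to the warehouse.
    print("loading orders for", context["ds"])


with DAG(
    dag_id="orders_daily",           # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)
    extract >> load                  # load runs only after extract succeeds
```

In day-to-day work, the placeholder callables would be replaced by real extract and load logic, and the DAG would be deployed through the CI/CD pipeline mentioned above.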

Required Skills & Qualifications:
 5+ years of experience in data engineering, database management, or software development.
 Strong experience with SQL, Python, or Scala for data processing.
 Hands-on experience with ETL/ELT frameworks and data pipeline orchestration (Apache Airflow, dbt, etc.).
 Expertise in big data processing (Apache Spark, Hadoop, Kafka); a minimal Spark sketch appears after this list.
 Experience with cloud-based data services (Amazon Redshift, Google BigQuery, Azure Synapse).
 Knowledge of data modeling, warehousing, and data lake architectures.
 Strong problem-solving skills and ability to work with cross-functional teams.
 Experience with containerization and DevOps practices (Docker, Kubernetes, Terraform) is a plus.
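As an illustration of the Spark skills listed above, the sketch below shows a small PySpark batch job: read semi-structured events, filter and aggregate them, and write the result as Parquet. The bucket paths, column names, and status value are all hypothetical.

```python
# Hypothetical PySpark batch job: the paths, column names, and
# "completed" status value are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_batch_example").getOrCreate()

# Read semi-structured input (one JSON object per line).
events = spark.read.json("s3://example-bucket/raw/events.json")

# Keep completed events and compute total spend per user.
totals = (
    events
    .filter(F.col("status") == "completed")
    .groupBy("user_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write a columnar output for downstream warehouse/lake consumers.
totals.write.mode("overwrite").parquet("s3://example-bucket/curated/user_totals/")

spark.stop()
```

A job like this would typically run on a managed cluster (EMR, Dataproc, Synapse Spark pools) and be triggered by the orchestration layer shown earlier.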