digixvalley

Data Engineer

2–3 Years

Total Experience

Full Time

Job Type

0

No. of Openings

We are seeking a talented and detail-oriented Data Engineer with 2–3 years of experience to join our growing data team. In this role, you will design, build, and maintain scalable data pipelines and infrastructure to support advanced analytics, reporting, and machine learning use cases. You should have a strong grasp of data engineering best practices, experience with modern data stack tools, and the ability to work independently on projects of medium to high complexity.

Key Responsibilities:
  • Design, develop, and maintain ETL/ELT pipelines to collect data from multiple sources.
  • Build and optimize data models (Star/Snowflake schema) for BI and analytical usage.
  • Work with cloud data platforms such as AWS (S3, Redshift, Glue), GCP (BigQuery, Dataflow), or Azure.
  • Collaborate with data analysts, scientists, and software engineers to understand data needs and deliver reliable, clean datasets.
  • Implement and maintain data quality checks, monitoring, and alerting systems.
  • Write efficient, reusable, and testable SQL and Python code.
  • Automate manual data-related processes for efficiency and scalability.
  • Document data pipelines, architecture, and operational workflows.
  • Support CI/CD pipelines for data infrastructure and deployment.
  • Ensure data governance, compliance, and security standards are met.
Required Skills & Qualifications:
  • Bachelor’s degree in Computer Science, Data Science, Engineering, or related field.
  • 2–3 years of hands-on experience in data engineering roles.
  • Proficiency in SQL, Python, or Scala for data manipulation and pipeline development.
  • Experience with big data tools (e.g., Apache Spark, Kafka, Hive).
  • Familiarity with data warehousing concepts and tools like Snowflake, BigQuery, Redshift, etc.
  • Good understanding of data structures, algorithms, and system design.
  • Experience working with Airflow, dbt, or similar workflow orchestration tools.
  • Exposure to containerization (Docker) and version control (Git).
  • Understanding of data privacy laws (e.g., GDPR, HIPAA) and best practices in data governance.
  • Exposure to real-time data streaming frameworks (Kafka, Flink).
  • Familiarity with machine learning pipelines and MLOps tools.
  • Experience with BI tools such as Power BI, Tableau, or Looker.
  • Certification in AWS/GCP/Azure is a plus.
