Freelance Big Data Engineer
Reference number: 2091
Last update: 13-10-2021, 17:37
Education: Academic degree in Computer Science, Mathematics, Engineering... or equivalent work experience with specialized training/certification
Start: 14 Oct 2021    End: 31 Dec 2021 (the contract might be extended)
Work experience: 5+ years of experience supporting or working with Big Data/Data Warehousing environments
Job type: Temporary/Contract
Job description
As part of the Usage Experience Analytics team, you support data scientists and work closely with the source domain teams to provide data that is accurate, consistent and reliable. This enables us to provide our stakeholders with insights and reporting about customers' experience of the TV and internet services.
What you’ll do
  • You work with data scientists to understand the service domain needs and translate these requirements into robust and scalable data solutions
  • You complete and structure the data that data scientists need to perform their tasks
  • Together with the TV and Internet service domains, you identify new data sources and make sure they become available in the Data Lake
  • You proactively search for possible improvements in our codebase, structure or pipelines
  • You troubleshoot reported inconsistencies in data loads or reconciliations
  • You work on highly complex and cross-functional use cases, using data in different formats and from various platforms
  • You implement tools and frameworks for automating report generation and identification of data-quality issues
  • You create scalable, efficient, automated processes for large scale data analyses, model development, model validation, and model implementation
Profile
  • You are dynamic, flexible and proactive
  • You are a team player and a good communicator
  • You have a positive and constructive mindset
  • You are able to work independently and manage your own time effectively
Required Skills
  • Strong knowledge of Big Data tools and technologies (Hadoop, Hive, Spark...)
  • Good experience coding in Scala and Python, and knowledge of code versioning and CI/CD pipelines
  • Knowledge of data streaming and batch processing technologies (Kafka, NiFi...)
  • Knowledge of log collection (e.g. syslog-ng) and API frameworks is a plus
  • General knowledge of Linux / Windows server, networking, firewalls...
  • Great sense of precision
Interested? Send us your resume
To apply for this job, please complete the form below and attach your resume. This instantly places your information into our database. Once we have received your information, we will be in touch by e-mail or phone. If you have not heard from us after 3 working days, please call us!

Thank you for your interest in working with Harvey Nash and we look forward to assisting you in your job search!

Only PDF, max. 10MB