Job Description
● Data Pipeline Construction: Design, build, install, test, and maintain highly scalable data management systems, including robust, fault-tolerant data pipelines.
● Data Flow Optimization: Work to enhance data flow and collection to improve data reliability, efficiency, and quality.
● Machine Learning Pipeline: Construct and maintain machine learning pipelines, ensuring they meet organizational needs and adhere to the necessary standards.
● Big Data Handling: Implement systems and tools for processing big data, leveraging platforms such as Kafka, Spark, and Flink.
● Cloud Computing: Use cloud resources, especially Azure Cloud Services, to streamline data processing.
● Dashboarding: Build interactive, insightful dashboards to communicate data and results to stakeholders.
● Team Collaboration: Collaborate with data scientists and architects on several projects. Translate complex functional and technical requirements into detailed designs.
● Documentation: Document and maintain the data pipeline architecture, and build data tools for analytics and data science team members.