MA Tech has partnered with a global healthcare organisation to assist with a strategic hire. I am seeking a hands-on Senior Infrastructure Engineer to join their analytics team. In this role, you will work closely with Data Scientists, Developers, System Administrators and Data Architects to ensure the platform meets changing business requirements and an evolving portfolio strategy.
Responsibilities:
- Support big data applications and infrastructure (Hadoop, Hive, Spark, Kafka).
- Manage containers in Kubernetes and package applications into Docker containers.
- Be part of the team as it begins its Azure public cloud journey.
- Integrate multiple analytics platforms to make the data science toolkit easier for end users to work with.
- Work hands-on with technologies and mock-up designs during software evaluations, PoCs and pilots.
- Address performance and scalability issues and perform the capacity planning needed to meet new business initiatives.
- Maintain servers for the Data Science Workbench using configuration management tools, applying an "Infrastructure as Code" mindset.
- Ensure security of the environment by managing/applying permissions and encryption standards.
- Deliver professional level technical work in support of the development of products, tools, platforms and services.
- Review issues, logs and errors to troubleshoot service tickets for the suite of tools used by end users.
- Operate within established methodologies, procedures and guidelines, applying knowledge of principles and techniques to solve technical problems through architecture design.
- Translate complex concepts in ways that can be understood by a variety of audiences.
- Provide hands-on support to our Data Scientist and Data Engineer users.
Requirements:
- Significant experience with Linux system administration and support in an enterprise IT or service provider organisation.
- Strong knowledge of IT/technical infrastructure practices and functional units
- Experience with CI/CD deployment pipelines and automated build and configuration tools such as Jenkins and Chef.
- Proficiency in a scripting language (such as Bash, Python or Go).
- Experience with big data processing/streaming/storage engines (e.g. Hadoop (MapR), Hive, Spark, Kafka, Presto) is desirable but not essential.
- Interest or knowledge in using Kubernetes for scaling data and services infrastructure would be beneficial.
- Experience with open-source data science tools (Jupyter, RStudio, Zeppelin) is a nice-to-have.
- Knowledge of job scheduling tools (Airflow, TWS, Autosys)
- Knowledge and experience in Public Cloud Platforms (Azure, AWS, GCP)
- Experience using Agile methodology.
For more information, please contact Ian Donnelly on 01 5222908 or apply below: