Digital Research Infrastructure Engineer - Linux Specialist
PML Operations Grade 4, £30,000 - £45,000 DOE
Full Time
Open Ended Appointment
The Role
We have an exciting opportunity at PML for an individual with skills in Linux system administration to join PML’s Digital Innovation and Marine Autonomy (DIMA) group. The role provides a business-critical link between scientists, PML Applications (commercial work) and our IT Group, supporting the Linux computing infrastructure as it continues to evolve and underpinning PML science in multiple areas and across all levels. This ranges from data generation (storage technologies and data management), through processing and analysis (high-performance computing and technologies such as JupyterHub), to producing visual outputs for end users (web technologies and virtualisation) to increase the reach and impact of PML science.
About You
You will enjoy working with others to help deliver a modern and reliable digital infrastructure underpinning the world-leading research carried out at PML. You will understand the importance of stability in existing infrastructure but will also be keen to learn and try new technologies. You will have experience of administering Linux systems, ideally Ubuntu, and will be able to use scripts and common tools such as Ansible to manage them. You will understand the importance of taking a proactive approach to identifying and resolving problems and will be able to use monitoring software (e.g., Nagios, Grafana) to accomplish this. You will understand best practices in cybersecurity and be able to apply them.
Skills Required
Linux systems administration and monitoring
Linux scripting (e.g., bash and Python)
Experience in managing data at the terabyte to petabyte scale, and with storage technologies such as NFS and S3
Cybersecurity (understand and apply best practices)
Container technologies (Docker and Kubernetes)
High-Performance Computing (Slurm)
Virtualisation (VMware)
Key Deliverables
Maintain our storage infrastructure to ensure data is distributed across servers based on existing capacity and projected changes in data volumes. This includes regular data moves and liaising with stakeholders to ensure data is backed up and archiving projects are completed as needed.
Monitor high-performance computing infrastructure to identify and resolve problems, either independently or by working with IT (depending on the nature of the problem).
Act as a point of contact between scientists and IT to answer questions, help identify solutions and provide training.
Work with the data architect to maintain and develop web infrastructure used to provide existing and planned data search and visualisation services.
Manage the NEODAAS GPU cluster (MAGEO), including liaising with IT, vendors and system users.
About PML
As a marine-focused charity we develop and apply innovative science with a view to ensuring ocean sustainability. With over 40 years of experience, we offer evidence-based solutions to societal challenges. Our impact spans from research publications to informing policies and training future scientists. The science undertaken at PML contributes to UN Sustainable Development Goals by promoting healthy, productive and resilient oceans and seas.
To support its science, PML operates in-house Linux infrastructure used for processing satellite data, running models and making outputs accessible through web visualisation tools. This infrastructure includes a large amount of storage (6 PB), a High-Performance Computing cluster with over 1,500 cores, a 40-GPU cluster (the MAssive GPU cluster for Earth Observation; MAGEO) and a virtual machine cluster. The role will be part of the Digital Innovation and Marine Autonomy (DIMA) group within PML. DIMA is a pioneering digital science group dedicated to advancing PML’s world-class environmental research through the use of state-of-the-art digital and autonomous technologies. The team comprises research software engineers, research infrastructure engineers, marine technologists and scientists who work on a variety of projects using autonomous vessels, satellite data, drones, Artificial Intelligence, High Performance Computing and data visualisation tools to help deliver PML’s goals. The team has an enthusiasm for solving problems through collaboration and shared learning.
Apr 11, 2024
Full time
Description
We are looking for a Data Engineer to help us build and maintain scalable and resilient pipelines that will ingest, process, and deliver the data needed for predictive and descriptive analytics. These data pipelines will further connect to machine learning pipelines to facilitate automatic retraining of our models.
We are a diverse group of data scientists, data engineers, software engineers and machine learning engineers from over 30 different countries. We are smart and fast-moving, operating in small teams, with freedom for independent work and fast decision-making.
To empower scientists and radically improve how science is published, evaluated and disseminated to researchers, innovators and the public, we have built our own state-of-the-art Artificial Intelligence Review Assistant (AIRA), backed by cutting-edge machine learning algorithms.
Key Responsibilities
Work in a team of machine learning engineers responsible for the productization of prototypes developed by data scientists.
Collaborate with data scientists, machine learning engineers, and other data engineers to design scalable, reliable, and maintainable ETL processes that ensure data scientists and automated ML processes have the necessary data available.
Research and adopt the best DataOps & MLOps standards to design and develop scalable end-to-end data pipelines.
Identify opportunities for data process automation.
Establish and enforce best practices (e.g. in development, quality assurance, optimization, release, and monitoring).
Requirements
Degree in Computer Science or similar
Proven experience as a Data Engineer
Proficiency in Python
Experience with a Cloud Platform (e.g. Azure, AWS, GCP)
Experience with a workflow engine (e.g. Data Factory, Airflow)
Experience with SQL and NoSQL (e.g. MongoDB) databases
Experience with Hadoop & Spark
Great communication, teamwork, problem-solving, and organizational skills.
Nice To Have
Understanding of supervised and unsupervised machine learning algorithms
Stream-processing frameworks (e.g. Kafka)
Benefits
Competitive salary
Participation in Frontiers’ annual bonus scheme
25 leave days + 4 well-being days (pro rata and expiring each year on 31st of December)
Great work-life balance
Opportunity to work remotely
Fresh fruit, snacks and coffee
English classes
Team building/sport activities and monthly social events
Lots of opportunities to work with exciting technologies and solve challenging problems
Who we are
Frontiers is an award-winning open science platform and leading open access scholarly publisher. We are one of the largest and most cited publishers globally. Our journals span science, health, humanities and social sciences, engineering, and sustainability and we continue to expand into new academic disciplines so more researchers can publish open access.
Dec 23, 2021
Full time
As a Senior Data Scientist, the candidate will work closely with the Product and Engineering teams and play a significant role in the team responsible for building the AI and Analytics capabilities that power the Insurwave platform. The team is self-sufficient and fully responsible for the design, development, testing, delivery and support of its solutions. The candidate will work across the full ML development lifecycle (data wrangling, model build, model evaluation, model deployment and model monitoring), actively participating in these processes and leading technology and design decisions. They will build solutions aligned with company-wide rules of engagement and standards, working closely with the Head of Data and AI to improve them when needed, and will support team members’ growth and promote an open, learning culture.
Responsibilities
Lead and manage complex data science projects from conception to deployment, including defining project scope, timelines, and deliverables.
Build high-performing AI/ML models that meet business-defined performance metrics, ensuring scalability, efficiency, and reliability.
Develop and deploy production-ready data science code and models using fully automated processes, including Continuous Integration/Continuous Deployment (CI/CD) and testing frameworks.
Continuously improve the performance, security, architecture, and maintainability of owned services through iterative development and optimization.
Work closely with data analysts, data engineers, data scientists, and other business areas to ensure solutions are aligned with requirements, delivered according to plans, and developed to expected quality and security standards.
Work closely with the AI product manager to review model monitoring reports and analyse datasets in order to inform model improvement needs.
Provide technical leadership and mentorship to junior data scientists, fostering a culture of learning, collaboration, and continuous improvement.
Ensure the team adheres to defined best practices, standards, and processes, promoting excellence in technical execution and project delivery.
Stay current with the latest advancements in data science and machine learning research and propose innovative solutions to address business challenges.
About Insurwave
Insurwave is where insurance buyers consolidate and visualise their data to understand their risk and make smarter transfer decisions. Our platform offers an integrated insurance management experience, from collecting and consolidating risk data to its distribution to all parties involved, keeping everyone in the insurance value chain connected and up to date. In one place, companies buying and selling risk can harness insightful data, view business exposure changes in real time and automate time-consuming tasks to focus on what they do best. We are looking forward to hearing from you!
Apr 28, 2024
Full time
Machine Learning Engineer III
Location: The Adelphi, London, GB | Full time | Job requisition ID: R-15358
Condé Nast is a global media company producing the highest quality content, with a footprint of more than 1 billion consumers in 32 territories through print, digital, video and social platforms. The company's portfolio includes many of the world's most respected and influential media properties, including Vogue, Vanity Fair, Glamour, Self, GQ, The New Yorker, Condé Nast Traveler/Traveller, Allure, AD, Bon Appétit and Wired, among others.
Job Description
Our award-winning content reaches 84 million consumers in print, 367 million in digital and 379 million across social platforms, and generates more than 1 billion video views each month. We are headquartered in London and New York, and operate in 31 markets worldwide, including China, France, Germany, India, Italy, Japan, Mexico & Latin America, Spain, Taiwan, the U.K. and the U.S., with local licence partners across the globe.
What will you be doing?
Participate in model design, optimization, testing, quality assurance, and defect resolution
Design and implement scalable systems to convert large volumes of data into useful model features, reports, and datasets
Deliver and orchestrate machine learning infrastructure within production environments
Collaborate with Engineers and Scientists to architect and implement a shared vision
Participate in the entire software development lifecycle, from concept to release
Provide technical expertise to junior team members
Who you are:
Applicants should have a degree (M.S. or Ph.D. preferred) in Computer Science or a related discipline, or relevant professional experience
Years of software development experience designing scalable systems related to machine learning or more general statistical analysis
Strong software development skills with proficiency in Python or C++
Experience with analytics frameworks such as Pandas, Apache Spark, Dask, or Flink
Experience with machine learning frameworks such as TensorFlow, JAX, PyTorch, Spark MLlib, Keras, or scikit-learn
Experience in cloud-based infrastructures such as AWS or GCP
Exposure to orchestration platforms such as Apache Airflow or Kubeflow
Proven attention to detail, critical thinking, and the ability to work independently within a cross-functional team
What benefits do we offer?
Condé Nast Learning Hub, where you'll find all Condé Nast-developed learning courses and trainings, and over 16,000 courses in seven local languages
25 days holiday and extra days of annual leave for life events like moving house or wanting to volunteer with a charity
Hybrid working and core hours
Competitive pension scheme
Bupa Private Healthcare
Season ticket loans
Cycle to work
Employee Assistance programme
Bring your dog to work
A wide variety of wellness benefits, including gym discounts
Discounts and Magazine Subscriptions
Employee Resource Groups to provide a platform for employees to identify shared objectives, exchange ideas, and work on community priorities for our global workforce
What happens next?
If you are interested in this opportunity, please apply below, and we will review your application as soon as possible. You can update your resume or upload a cover letter at any time by accessing your candidate profile. Condé Nast is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, age, familial status and other legally protected characteristics.
About Us
Condé Nast is a global media company home to iconic brands including Vogue, GQ, AD, Condé Nast Traveler, Vanity Fair, Wired, The New Yorker, Glamour, Allure, Bon Appétit, Self and many more. Headquartered in New York and London, the company produces award-winning journalism, content and entertainment for every platform today and operates in 32 markets worldwide, including China, France, Germany, India, Italy, Japan, Mexico, Spain, the U.K. and U.S., and Taiwan. At Condé Nast we value diversity of background, views and cultures. We celebrate people for their personal qualities, their skills and contributions. And we recognize the power our brands have to influence and shape culture, catalyze action and help make our world a better place for all.
Apr 26, 2024
Full time
Senior Data Scientist - Cyber Risk Modelling (Python/SQL/Databricks/PySpark/DBRX) - Bristol-based office
A pioneering reinsurance agency based in Bristol, UK, specializing in the cutting-edge domain of cyber risk. Leveraging advanced analytics and proprietary models, my client aims to redefine the landscape of cyber risk assessment and management. Their commitment to innovation and excellence has positioned them as a leader in the cyber reinsurance market.
Position Overview
A highly technical and experienced Data Scientist to join their dynamic modelling team. Reporting to the Head of Data Science & Modelling, the successful candidate will play a pivotal role in the development and operationalization of the client's proprietary stochastic cyber risk model. This position offers the opportunity to contribute significantly to the advancement of their analytical capabilities and to the broader field of cyber risk modelling.
Key Responsibilities
Model Development and Operation: Be a key member of the team responsible for the design, development, refinement, and execution of the stochastic cyber risk model, ensuring its accuracy, performance, and scalability.
Data Analysis: Perform complex data analysis to extract insights and identify trends in cyber risk, utilizing a variety of statistical and machine learning techniques.
Operationalization: Translate model insights into actionable strategies and tools for internal and external stakeholders.
Collaboration: Work closely with other team members, including underwriters, engineers, and cyber risk analysts, to integrate the cyber risk model with other systems and processes.
Innovation: Stay abreast of the latest developments in data science, cyber security, and risk modelling, and incorporate innovative techniques and technologies into the models.
Qualifications
Experience: At least 5 years of experience working as a data scientist/quantitative risk modeller, with a proven track record of operationalizing complex models and analytics.
Education: A degree in Computer Science, Engineering, Statistics, Mathematics, or a related field. Advanced degrees (MSc or PhD) are preferred.
Technical Skills: Expertise in applied machine learning, probability, statistics, and quantitative risk modelling. High level of proficiency in Python & SQL. Experience with big data technologies and tools; Databricks and PySpark are highly desirable. Experience of working in agile software development processes.
Industry Knowledge: Experience in insurance, cyber, or a related domain is ideal. Understanding of the reinsurance industry and its challenges is a plus.
Soft Skills: Excellent problem-solving abilities, strong communication skills, and the capacity to work effectively in a team-oriented environment.
How to Apply
Please apply by sending your CV via the links below. Mid-level Data Scientist and Data Engineer roles are also available.
Apr 26, 2024
Full time
Are you looking to work for a company doing something truly incredible, disruptive and exciting? Now is your chance! Intelligent Growth Solutions (IGS) was founded in 2013 and has brought together decades of farming and engineering experience to create a market-leading agritech business with a vision to revolutionise indoor growing. Over the last decade, IGS has built its reputation as a leading global provider of vertical farming technology, and is looking to recruit an experienced Data Engineer to join the team.
The purpose of this role is to create and productionise data-centric applications and features, data pipelines and data analysis tools in an Azure cloud environment. You will also provide real-time control and information for our market-leading vertical farming solution.
Responsibilities:
Build new data-centric applications that are tested at multiple levels and ready for our Azure/Kubernetes production environment
Update and add new features to existing applications
Work closely with data scientists to automate and productionise machine learning and other complex solutions to business problems
Collaborate and communicate with other team members, including product managers, crop scientists, and hardware, software and IoT engineers
Use, update and contribute to IGS standards and methods for modular, agile, continuous-delivery software development
Build tools to capture data from multiple sources, including operational products, and develop appropriate pipelines for those data, leveraging appropriate technologies
Develop simple customer-facing reporting tools
Participate in and promote a culture of learning and improvement, including code reviews and tech talks
The team:
Part of the wider software team, Data Applications is a cross-functional team of data scientists, data engineers, site-reliability and frontend engineers, data visualisers and product managers, who use data to add intelligence to IGS systems. Our work includes conceptualisation of difficult problems, research projects and proofs of concept, creation of machine learning models, and building production-ready features and products.
The person:
You are a Python coder with an aptitude and taste for data. You know how to decide which tools and approaches to choose. You can help the Data Applications team gather data in the best way, and you can build the right solutions to implement ML, DL and AI models at scale. You are committed to quality, testing and the importance of good engineering practice. You can work with a team of Data Engineers, Data Scientists, Data Visualisers and the wider business to deliver enterprise-grade products for both internal and external use. You are at home in a business that leads the world in a field combining plant science, hardware engineering and software. You are all that and a real person too, one who remembers that we all are, displaying kindness and patience. We have a list of skills and experiences that we need you to have.
Essential (you need all of these):
At least 2 years' experience working with Python
At least 6 months' experience working with SQL
Some exposure to AI, ML, DL or similar analysis tools
Experience in building production-ready data solutions
Experience working with Git or other version control systems
Excellent communication skills
University degree in a data/computer-related subject, or industry equivalent
Highly valued (you should have at least three of these):
Experience working in Linux environments
Experience building AI, ML, DL or similar analysis tools
Experience with microservice architecture
Design and implementation skills for unit, integration and UI testing
Experience with CI/CD pipelines
Experience working with graph data
Experience with Azure data offerings, especially Data Grid, Data Explorer and Kusto Query Language
Experience working with Docker and Kubernetes
High coding standards, with skills in code review and static code analysis
Desirable (you probably have at least one of these):
Experience with other software or analytical languages; C#, Angular and TypeScript would be especially useful
Data visualisation skills
Software design and architecture experience
Experience with or exposure to computer vision tools
Presentation and documentation skills
Experience with PLC control systems
Experience with cloud-based (Azure) SaaS user interfaces (e.g. Angular)
IGS is focused on delivering innovative solutions to enable our customers to sustainably grow high-quality crops all year round. This is made possible by a highly inclusive, empowered, constructive, challenging and team-driven culture. However, we are still a business, and people like you deserve to be well rewarded for your passion, energy, commitment and effort.
Your base salary is accompanied by core benefits including: 7 weeks' holiday, a solid pension, opt-in private health care, company sick pay, income protection, life assurance at 4x basic salary, lifestyle and recognition benefits, and personal development/training funding. Please apply via our website.
Apr 26, 2024
Full time
Our client has an exciting opportunity for a Senior Data Scientist to join the team.
Location: London, UK (remote) - must be based in the UK, as the role will require occasional travel into the office
Salary: £60k - £85k PA (dependent on experience)
Job Type: Full-time, Permanent
About The Company:
Our client is a leading EdTech firm specialising in child digital safety technology for use in primary and secondary schools in the UK. They aim to provide sophisticated child digital safety by monitoring students' online activity, filtering content and providing alerts to school faculty regarding student safety.
The Role:
Our client is looking for a Data Scientist to join their Data and AI team, working closely with partners across the business to tackle challenging problems and enhance company performance through algorithms, experimentation, and interactive dashboards. As part of the Data and AI team, you will set the standard for data-driven decision-making. You will also have the chance to contribute to innovative research studies and the development of production-grade algorithms benefiting students and educators.
Key Responsibilities:
- Take the lead in designing, developing, and refining advanced machine learning models aimed at personalising learning experiences, forecasting student outcomes, and improving the delivery of educational content
- Employ statistical analysis and data mining techniques to sift through extensive datasets, extracting actionable insights that drive product development, user engagement strategies, and enhancements to educational content
- Collaborate closely with product managers, software engineers, and educational experts to seamlessly integrate data science solutions into the platform
- Provide mentorship and guidance to junior data scientists and analysts, fostering a culture of learning and ongoing improvement within the team
- Regularly interface with senior management or executive leadership on matters pertaining to data science
- Stay abreast of the latest advancements in data science, machine learning, generative AI, and educational technology
- Uphold ethical standards in data usage, comply with privacy laws and regulations, and implement robust data governance practices to safeguard sensitive information pertaining to students and educators
You:
- 5+ years of data scientist experience
- Experience within EdTech or a similar field
- Degree or Higher Education qualification in Computer Science or another relevant field
- Advanced proficiency in Python and SQL is required
- Demonstrated leadership in steering data science initiatives towards impactful outcomes
- Proficiency in both written and verbal communication
- Strong expertise in traditional statistics and machine learning methodologies
- Familiarity with Git and cloud computing platforms (such as AWS, GCP, or Azure) is desirable
- Previous experience in mentoring and guiding junior to mid-level data scientists would be advantageous
Benefits:
- Comprehensive pension scheme
- 26 days' holiday, including an extra day for your birthday, plus the option to buy additional days
- Company car and expensed travel
- Flexible working
- Regular company events
- Death in service at 4 times annual salary
- Annual personal learning budget
- 7 hours of paid volunteering work
Application Process:
You must have the right to work in the UK in order to be eligible to apply for this position. This position is predominantly remote-based, but the role will require occasional travel into the office. To submit your CV for this exciting Senior Data Scientist opportunity, please click Apply.
Apr 26, 2024
Full time
Senior Data Engineer
Role purpose:
The Senior Data Engineer plays a pivotal role within the Data team of the Digital and Information function, assuming a leadership position in ensuring the availability, integrity, quality, and accuracy of data. With a wealth of experience, the Senior Data Engineer leads the design, development, and maintenance of data pipelines and workflows, driving the organisation's data processing capabilities and enabling the seamless delivery of BI reporting and analytics.
In addition to core responsibilities, the Senior Data Engineer takes a proactive role in advancing our Azure-based data platform, leveraging cutting-edge technologies like Azure Data Factory and Azure Databricks. This role presents exciting opportunities to spearhead digital transformation projects and innovate within the data landscape. Operating within Agile principles and methodologies, the Senior Data Engineer collaborates closely with fellow data engineers and key stakeholders, assuming a leading role in shaping and managing the portfolio of reporting and data requirements. Leveraging their extensive expertise, the Senior Data Engineer contributes significantly to refining the overall data strategy, ensuring alignment with organisational objectives and driving continuous improvement initiatives.
Key Responsibilities:
Ownership and delivery of the design, development and maintenance of highly scalable and robust data pipelines and workflows utilising Azure Data Factory and Databricks.
Lead data transformation and integration efforts to augment data from diverse sources, ensuring data governance and regulatory compliance throughout the process.
Establish and enforce data quality standards through the implementation of rigorous data validation processes, ensuring the utmost accuracy, completeness, and consistency of the data ecosystem.
Serve as a key liaison between cross-functional teams, including data scientists, analysts, and business stakeholders, to comprehensively understand data requirements, conduct User Acceptance Testing (UAT), and deliver tailored solutions that align with business objectives.
Proactively monitor pipeline performance, swiftly identifying and resolving any potential issues to uphold seamless data processing operations and minimise downtime.
Drive comprehensive documentation of data pipelines, workflows, and procedures to foster knowledge sharing and ensure ongoing system maintainability and scalability.
Demonstrate proficiency in managing and resolving second-line data and reporting support requests through designated ticketing systems, mitigating disruptions to business operations with timely and effective resolutions.
Actively engage in daily stand-ups, weekly stakeholder meetings, and retrospectives, providing valuable insights and contributing to continuous improvement initiatives across the team and broader organisation.
Technical / Professional Qualifications / Requirements:
Hands-on advanced experience in designing, developing, and managing data pipelines using Azure Data Factory.
Strong working knowledge of Databricks for data processing, analytics and machine learning.
Advanced experience with data modelling techniques and data warehousing concepts.
Advanced proficiency in SQL for data querying and manipulation, and good knowledge of Python.
Ability to work effectively in a fast-paced, dynamic environment and manage multiple priorities, following the Agile way of working.
Ability to lead and analyse complex data problems and propose effective solutions.
Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
Benefits Include:
25 days' annual leave plus bank holidays
1 wellbeing day
Private medical cover
Pension - 8% matched
Life assurance at 4x basic salary
Income protection
Dental plan - voluntary benefit
Apr 26, 2024
Full time
Unlock a Promising Career Opportunity in the Environmental Technology Sector
Our client, a forward-thinking company in the CleanTech industry, is seeking a talented Senior Data Scientist to join their team. If you have a strong background in data analysis, machine learning, and time series forecasting, this is the perfect opportunity for you to make a lasting impact in the renewable energy space. Don't miss out on this exciting chance to take your career to new heights.
Key Responsibilities:
Develop cutting-edge predictive models to analyse data related to energy consumption and renewable energy initiatives.
Collaborate with skilled software engineers to effectively implement and deploy models.
Establish and promote best practices in data science, ensuring continuous improvement and innovation.
Dive deep into the vast ocean of available data sources and explore novel methodologies to uncover groundbreaking insights.
Required Skills:
Proficient in Python 3 and popular data science tools such as pandas, scikit-learn, and more.
Extensive experience in creating and fine-tuning machine learning models.
Knowledge of time series forecasting and optimisation techniques for accurate predictions.
Strong analytical skills combined with a true passion for research-driven development.
Excellent communication skills to effectively convey findings and drive meaningful change.
Bonus: a background in forecasting is a significant advantage.
Our client's organisation offers a nurturing and supportive work environment where your expertise and contributions are highly valued. With a commitment to work-life balance, they provide remote work options and flexible hours that empower you to excel both professionally and personally. This is a full-time, permanent position, working predominantly remotely. As part of our client's visionary team, you'll enjoy a competitive salary ranging from 70,000 to 100,000 per year, commensurate with your experience and capabilities.
Join our client's team today and become an essential force in shaping a more sustainable future. Let's work together to make a positive impact in the environmental technology sector. Ready to take the leap and secure this incredible opportunity? Submit your CV now and let us propel your career towards greatness. We can't wait to hear from you.
Apr 26, 2024
Full time
GRADUATE/JUNIOR SOFTWARE DEVELOPER - LONDON/REMOTE/HYBRID
Graduate Software Developer, Junior Software Developer, Java, C++, C, Python, JavaScript, C#, .Net, SQL, Ruby on Rails, Machine Learning, Data Science, Data Engineering, Agile
If you are driven by innovation and want to be at the forefront of groundbreaking solutions that shape the future of finance, please read on.
Join a team of brilliant Software Engineers, Data Scientists, and R&D Engineers in a collaborative environment to develop cutting-edge software solutions that address the most complex challenges in finance. From advanced algorithmic trading systems and predictive analytics platforms to blockchain-based solutions and decentralised finance (DeFi) applications, they leverage emerging technologies to transform the financial landscape.
Agility and adaptability are the keys to success in a rapidly evolving industry. They embrace continuous integration and delivery practices, ensuring that products are always up to date and responsive to the ever-changing needs of clients and the market. A highly iterative approach allows them to rapidly prototype, test, and refine solutions.
Data is the lifeblood of finance, and they specialise in developing sophisticated data analytics platforms and machine learning models that extract valuable insights from vast volumes of financial data. Their solutions empower businesses and individuals to make informed decisions, optimise operations, and seize new opportunities in the market.
Fuelled by a culture of collaboration, innovation, and constant learning, the goal is to foster an environment that encourages team members to think outside the box, challenge conventions, and bring their unique perspectives to the table. They value diversity and believe that the best ideas come from embracing different backgrounds and experiences. You can expect ample opportunities for professional growth and development, empowering you to stay ahead of the curve and make a meaningful impact in the fintech space.
Key Responsibilities:
Collaborate with cross-functional teams to design, develop, and deliver high-quality software solutions that meet clients' needs.
Write efficient, clean, and maintainable code while adhering to best practices and coding standards.
Participate in all phases of the software development life cycle, from requirements gathering and analysis to testing, deployment, and maintenance.
Identify and troubleshoot issues, debug code, and propose innovative solutions to optimise software performance.
Stay up to date with the latest industry trends, technologies, and frameworks, and apply this knowledge to enhance the software offerings.
Qualifications:
Bachelor's, Master's, or PhD degree in a STEM field from a top 20 ranked university.
A-Levels ABB or above.
Proficiency in multiple programming languages such as Java, Python, C++, or JavaScript.
Excellent problem-solving skills and strong attention to detail.
Ability to work collaboratively in a team environment, as well as independently on projects.
Effective communication skills to articulate complex ideas and technical concepts to non-technical stakeholders.
Desirable: Understanding of software development principles, data structures, algorithms, and design patterns.
Why Join Us:
Be part of an innovative, forward-thinking company that is at the forefront of cutting-edge technology.
Work in a dynamic, collaborative, and inclusive environment where your ideas and contributions are valued.
Take on exciting, challenging projects that will stretch your skills and expand your expertise.
Access to professional development opportunities to enhance your knowledge and grow your career.
Competitive salary and benefits package, including health insurance, retirement plans, and more.
Flexible work arrangements to support a healthy work-life balance.
If you're ready to embark on an exhilarating journey and make a significant impact in fintech, we want to hear from you! Please apply as soon as possible!
Apr 26, 2024
Full time
Title: Senior/Mid-Level Data Scientist/Data Engineer
Location: Bristol
Contract Type: Permanent
Salary Range: £50,000 - £80,000 per year
Our client, a leading organisation in the reinsurance industry, is seeking talented and experienced Mid-Level Data Scientists/Data Engineers to join their team in Bristol. As a key player in the development and operation of their automated ingestion system, you will contribute to the enhancement of their data processing and analysis capabilities.
Responsibilities:
- Automated Ingestion System Development: Design, develop, and maintain efficient, scalable, and reliable data ingestion pipelines.
- ML Component Integration: Integrate advanced machine learning components into data pipelines to optimise data processing and analysis.
- Data Quality and Metrics: Implement data quality metrics and dashboards to ensure the integrity and accuracy of ingested data.
- Collaboration: Work closely with data scientists, engineers, and other team members in a multifunctional agile development team to meet project goals.
- Innovation: Utilise modern NLP techniques to improve data understanding and processing, staying updated with the latest developments in data science and engineering.
Qualifications:
- Experience: Minimum of 3 years of experience developing complex data ingestion pipelines with sophisticated ML components. Prior experience in the insurance sector is advantageous.
- Education: Degree in Computer Science, Data Science, Engineering, or a related field.
- Technical Skills: Strong proficiency in Python, SQL, PySpark, and Databricks. Demonstrated experience with modern NLP techniques and tools. Proven ability to create and manage data quality metrics and dashboards. Experience working in a multifunctional agile development team. Proficiency in using Git for version control.
- Industry Knowledge: Familiarity with the insurance sector and its data challenges is highly desirable.
- Soft Skills: Excellent problem-solving skills, strong communication abilities, and a collaborative spirit.
Perks: Health insurance, hybrid working, life assurance, private medical, 5% pension, 28 days annual leave plus bank holidays.
Why join our client?
- Impact: Make a significant contribution to enhancing their data ingestion capabilities and improving their ability to manage cyber risk more effectively.
- Innovation: Work with cutting-edge technologies and methodologies in the field of data science and engineering.
- Growth: Enjoy opportunities for professional development in a supportive and forward-thinking environment.
- Culture: Be part of a collaborative, innovative team that values each member's contribution towards collective goals.
If you are passionate about crafting sophisticated data ingestion pipelines and thrive in a dynamic, agile environment, our client welcomes your application. Don't miss this exciting opportunity to join their team in revolutionising the way cyber risk is understood and mitigated in the reinsurance domain. Apply now!
Adecco is a disability-confident employer. It is important to us that we run an inclusive and accessible recruitment process to support candidates of all backgrounds and all abilities to apply. Adecco is committed to building a supportive environment for you to explore the next steps in your career. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
Apr 26, 2024
Full time
Head of Data Engineering
The Head of Data will be a strategic leader responsible for overseeing all aspects of data management, analytics, and governance within the organisation. This individual will play a critical role in driving data-driven decision-making processes, optimising data infrastructure, and ensuring the integrity, security, and accessibility of data assets. The ideal candidate will possess strong leadership skills, deep technical expertise in data management and analytics, and a proven track record of implementing innovative data strategies to support business objectives.
Key Responsibilities:
Strategic Leadership: Lead the development and execution of the organisation's data strategy, aligning it with business goals and objectives. Provide strategic direction for the use of data to drive decision-making and improve operational efficiency.
Data Management: Oversee the design, implementation, and maintenance of robust data management systems and processes, including data acquisition, storage, integration, quality assurance, and lifecycle management.
Data Analytics: Drive the development and implementation of advanced analytics initiatives to extract insights from data, identify trends, and support predictive modelling and forecasting. Collaborate with business stakeholders to understand their analytical needs and develop solutions to address them.
Data Governance: Establish and enforce data governance policies, standards, and best practices to ensure the accuracy, consistency, security, and privacy of data across the organisation. Develop data quality metrics and monitor compliance with regulatory requirements.
Data Architecture: Define and maintain the organisation's data architecture, including data models, schemas, and taxonomies. Evaluate and select appropriate technologies and tools to support data management, analytics, and visualisation requirements.
Team Leadership: Build and lead a high-performing team of data professionals, including data engineers, analysts, scientists, and governance specialists. Provide mentorship, coaching, and professional development opportunities to foster a culture of continuous learning and growth.
Cross-Functional Collaboration: Collaborate closely with other departments, including IT, finance, marketing, operations, and product development, to understand their data needs and priorities. Partner with business leaders to develop data-driven solutions that drive value and competitive advantage.
Vendor Management: Evaluate and manage relationships with third-party data vendors, software providers, and consultants to ensure the successful implementation of data-related projects and initiatives. Negotiate contracts, oversee vendor performance, and assess emerging technologies and trends in the data management space.
Qualifications:
Bachelor's degree in computer science, engineering, mathematics, statistics, or a related field; advanced degree (e.g., MBA, MS, or PhD) preferred.
10+ years of experience in data management, analytics, and business intelligence, with at least 5 years in a leadership role.
Proven track record of developing and implementing data strategies that drive business growth and innovation.
Deep understanding of data governance principles, regulatory compliance requirements (e.g., GDPR, CCPA), and industry best practices.
Strong technical proficiency in data modelling, SQL, ETL tools, data visualisation tools (e.g., Tableau, Power BI), and advanced analytics techniques (e.g., machine learning, predictive modelling).
Excellent leadership, communication, and interpersonal skills, with the ability to influence and collaborate effectively across all levels of the organisation.
Demonstrated experience in managing cross-functional teams and driving cultural change towards a data-driven mindset.
Ability to thrive in a fast-paced, dynamic environment and effectively prioritise and manage multiple projects and initiatives.
Interested? Please submit your updated CV to Lucy Morgan at Crimson for immediate consideration. Not interested? Do you know someone who might be a perfect fit for this role? Refer a friend and earn £250 worth of vouchers!
Crimson is acting as an employment agency regarding this vacancy. Please see our website for Crimson's Privacy Statement, should you wish to view prior to applying for this vacancy.
Apr 26, 2024
Full time
Position: Senior Data Scientist - Cyber Risk Quantitative Risk Modeller
Location: Bristol
Our client, a pioneering reinsurance agency based in Bristol, UK, is seeking a highly skilled and experienced Senior Data Scientist - Cyber Risk Quantitative Risk Modeller to join their dynamic modelling team. They specialise in the cutting-edge domain of cyber risk and aim to redefine the landscape of cyber risk assessment and management. As a Senior Data Scientist, reporting to the Head of Data Science & Modelling, you will play a pivotal role in the development and operationalisation of our client's proprietary stochastic cyber risk model. This position offers an exciting opportunity to contribute significantly to the advancement of analytical capabilities and the broader field of cyber risk modelling.
Responsibilities:
- Model Development and Operation: Be a key member of the team responsible for designing, developing, refining, and executing the stochastic cyber risk model, ensuring its accuracy, performance, and scalability.
- Data Analysis: Perform complex data analysis to extract insights and identify trends in cyber risk using statistical and machine learning techniques.
- Operationalisation: Translate model insights into actionable strategies and tools for internal and external stakeholders.
- Collaboration: Work closely with other team members, including underwriters, engineers, and cyber risk analysts, to integrate the cyber risk model with other systems and processes.
- Innovation: Stay updated with the latest developments in data science, cyber security, and risk modelling, and incorporate innovative techniques and technologies into the models.
Qualifications:
- Experience: Minimum of 5 years of experience as a data scientist/quantitative risk modeller, with a proven track record of operationalising complex models and analytics.
- Education: A degree in Computer Science, Engineering, Statistics, Mathematics, or a related field. Advanced degrees (MSc or PhD) are preferred.
- Technical Skills: Expertise in applied machine learning, probability, statistics, and quantitative risk modelling. High proficiency in Python and SQL, with experience in big data technologies and tools (Databricks and PySpark preferred). Familiarity with agile software development processes.
- Industry Knowledge: Experience in insurance, cyber risk, or related domains is ideal. Understanding of the reinsurance industry and its challenges is a plus.
- Soft Skills: Excellent problem-solving abilities, strong communication skills, and the capacity to work effectively in a team-oriented environment.
Benefits:
- Impact: Make a tangible impact on the future of cyber risk management and reinsurance.
- Innovation: Work at the forefront of data science and cyber security, with opportunities to innovate and challenge the status quo.
- Growth: Benefit from opportunities for professional development and advancement in a rapidly growing company.
- Culture: Join a collaborative, supportive, and forward-thinking team that values innovation and excellence.
Salary: £70,000 to £80,000 per year
Contract Type: Permanent
Working Pattern: Full Time
Additional Perks: Health insurance, hybrid working, life assurance, private medical, 5% pension, 28 days annual leave plus bank holidays, collaborative working.
If you have the skills, experience, and passion to excel in this role, apply now and be a part of our client's groundbreaking work in the field of cyber risk assessment and management.
Adecco is a disability-confident employer. It is important to us that we run an inclusive and accessible recruitment process to support candidates of all backgrounds and all abilities to apply. Adecco is committed to building a supportive environment for you to explore the next steps in your career. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
Apr 26, 2024
Full time
The Impact You'll Make
With treatments for hundreds of diseases in our sights, we've built a data science team with domain expertise in computer science, bioinformatics, physics, biology, mathematics, applied statistics, and more. We work side-by-side with biologists, automation scientists, chemists, software engineers, and many others; together, we develop the tools and methods to turn our experimental data into treatments for pathologies that affect the lives of countless individuals. As a data scientist supporting the development of our industrialized workflows, you'll work with a highly dynamic team that is focused on improving how we move from ideation through to advanced candidate drugs in a way that accelerates decision-making and automates as much as possible to scale the impact that we can have. You'll have access to unbelievable scales of data: we currently run up to 2.2 million experiments each week; our ground-breaking Phenom-1 foundation model is trained on > 1 billion in-house images; and our maps of biology and chemistry contain > 5 trillion relationships across multiple biological and chemical contexts.
In this role, you will leverage this data as you:
Partner with chemists and biologists to understand their processes and the questions that they are asking at each stage of the drug discovery funnel
Contribute to the development of LOWE, a natural language interface that connects wet- and dry-lab components of the Recursion OS to streamline drug-discovery tasks
Develop methods, metrics, benchmarks, and models to help drive drug discovery in a standardized way
Convert exploratory analysis into production-quality functions that can be incorporated into in-house Python packages and that support at-scale generation of data packages to accelerate decisions on passing programs through internal stage gates
Create and analyze enormous sets of connected data for a variety of programs to learn how best to advance drug discovery in an industrialized way
Collaborate with engineering teams to mature your models and analyses and put them into productionized flows
Deliver quickly and iteratively, both supporting in-flight programs and building improvements for the long term in short-lived, agile workstreams
Learn to leverage new code packages and data science techniques as needed
Location: Making London your home base is ideal; however, we will consider on-site work in our Salt Lake City, Utah or Toronto, Ontario offices as well.
The Team You'll Join
We are an application-oriented group whose goal is to discover drugs at scale, using the toolkit of computational science in collaboration with our counterparts in other engineering (software and data engineering, laboratory automation), scientific (biology, chemistry, clinical science), and operational (laboratory operations, regulatory affairs) disciplines. We are value-driving - data science at Recursion is not just an accelerating function; it is a core part of our value proposition. As data scientists, we are responsible for showing up as leaders and visionaries, helping to shape how Recursion delivers on our mission. We work on what matters and deliver in timescales of weeks, not quarters. We focus on the impact that we are trying to make and the "why" of what we are trying to deliver, and are resilient if the "how" of what we are doing needs to change.
The Experience You'll Need
3-5+ years practical experience applying probability, statistics, and machine learning to real-world datasets in service of academic or business applications and recommendations. Strong preference for experience in the field of biosciences (particularly pharmaceuticals) or working on projects that require regular cross-disciplinary collaboration.
Experience working within a fast-paced interdisciplinary team to solve business-relevant problems and communicating complex concepts and methods to audiences with diverse technical backgrounds.
High fluency with the Python data stack (numpy, pandas, scikit-learn, etc.).
Experience in collaborative data product development and peer code review, including version control tools like git.
Experience developing, releasing, and maintaining data products in a continuous-use production environment.
Nice to have: experience in creating compelling visualizations of high-dimensional data that enable clear decision-making and interpretation, prompt engineering for LLMs, cheminformatics, or analysis of RNA sequencing data.
How You'll be Supported
You will be assigned a peer trail guide to support you as you onboard and get familiar with Recursion systems
Receive real-time feedback on code quality and best practices from a team of peers
Ability to participate and learn from your colleagues in our regular all-hands, journal club & tech talks for Data Science
Option to attend conferences to learn more from colleagues, networks, and more to better your skillset
Apr 25, 2024
Full time
Machine Learning Engineer
Analytics Centre of Excellence (ACOE)
London/Hybrid
The Analytics Centre of Excellence (ACOE) is positively impacting patient lives through the anticipation and delivery of Decision Intelligence solutions that increase clinical trial success, shorten drug development timelines, and reduce costs in bringing new drugs to market, getting much-needed drugs to patients faster through successful clinical trial delivery. Our vision at the ACOE is that every decision our users and clients make in R&D is made through Decision Intelligence, allowing speedy access to safe, novel, and effective treatments for all patients.
ACOE Product Portfolio: In trial strategy, we are using Machine Learning (ML) to recommend countries and clinical trial sites and accurately predict clinical study timelines. We are deploying ML at clinical trial sites to read Electronic Medical Records data and find undiagnosed patients that are otherwise challenging to identify. We are optimizing our patient outreach targeting and are predicting participant dropout. In addition, problems in patient recruitment are being solved with ML. Further upstream in R&D we are predicting clinical trial outcomes, predicting drug-protein interactions, repurposing drugs, and even leveraging ML to optimize molecules.
Job Overview
Develop fit-for-purpose AI/ML models, algorithms, and processes to address pharma/healthcare applications and innovative products; upon completion of prototypes, build production-grade algorithms and automation engines for client deliverables. Test for viability to deliver final products to clients. Bring newly researched ideas to reality quickly and on a large scale. Design, build, test, and deliver products from post-prototype to client delivery.
Essential Functions
Facilitates the transformation of machine learning research domain expertise in the areas of human data into viable prototypes
Facilitates the development of features of models on individual projects and/or products with guidance and support from others
Develops understanding of the creation of new algorithms through working alongside other Machine Learning Engineers and Machine Learning Research Scientists
Facilitates the building and training of new production-grade algorithms that can learn from complex, high-dimensionality data to uncover patterns from which machine learning models and applications can be developed
Uses a variety of techniques to improve the performance of individual natural language processing and/or machine learning algorithms
Facilitates the testing and validation of models to determine viability for deployment with guidance and support from others
Consults for internal and external clients, implements solution development and innovation to meet clients' needs, and facilitates client AI project technical delivery
What we're looking for
Master's Degree in Machine Learning, Statistics, Computer Science, Physics, Math, or a related field
Several years' experience creating machine learning algorithms
Programming experience using one or more of the following: Python, Java, C++, R, Go, Kubernetes, deep learning frameworks, or equivalent
Python (scikit-learn, TensorFlow, pandas, NumPy, SciPy), SQL, Linux/macOS command-line tools
Experience with building, testing, measuring, and deploying machine learning models in production
Familiarity with ML algorithms (classification, regression) and processes (how to build models, assess their goodness of fit, etc.)
Familiarity with the agile software development lifecycle (SCRUM, Kanban, etc.)
Previous experience of owning, maintaining, and enhancing software data products
Attention to clarity of code, ease of development, and correctness of implementations
Good knowledge of software development best practices including testing, continuous integration, and DevOps tools
Preferred Requirements:
Knowledge and experience of hierarchical modelling
Experience with the clinical domain and with regulated data
Experience with deep neural network libraries such as TensorFlow, especially with Bayesian neural networks
Knowledge of cloud systems such as AWS, Azure, GCP, and containerization such as Docker
Experience working with large, real-world datasets
Demonstrated in-depth understanding of the product development lifecycle
Demonstrated aptitude for and interest in peer mentorship
Experience deploying code into production through CI/CD tools
Knowledge of biostatistics/life sciences/healthcare technology
Knowledge of UX principles
Experience working in the Hadoop ecosystem
Why Join?
Those who join us become part of a recognized global leader still willing to challenge the status quo to improve patient care. You will have access to the most cutting-edge technology, the largest data sets, the best analytics tools and, in our opinion, some of the finest minds in the healthcare industry. You can drive your career at IQVIA and choose the path that best defines your development and success. With exposure across diverse geographies, capabilities, and vast therapeutic and information and technology areas, you can seek opportunities to change and grow without boundaries. Regardless of your role, we invite you to reimagine healthcare with us. You will have the opportunity to play an important part in helping our clients drive healthcare forward and ultimately improve human health outcomes. It's an exciting time to join and reimagine what's possible in healthcare. IQVIA is a strong advocate of diversity and inclusion in the workplace.
We believe that a work environment that embraces diversity will give us a competitive advantage in the global marketplace and enhance our success. We believe that an inclusive and respectful workplace culture fosters a sense of belonging among our employees, builds a stronger team, and allows individual employees the opportunity to maximize their personal potential. We thank all applicants for their interest; however only those selected for interview will be contacted. IQVIA is a leading global provider of advanced analytics, technology solutions and clinical research services to the life sciences industry. We believe in pushing the boundaries of human science and data science to make the biggest impact possible - to help our customers create a healthier world. Learn more at
Apr 25, 2024
Full time
Job description
Site Name: London, The Stanley Building
Posted Date: Apr
At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time
Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications. We are seeking a highly skilled and experienced Manager for our Computing Platforms Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs.
You will partner closely with Onyx's organizations, including AI/ML, a diversity of R&D teams utilizing data to accelerate drug discovery (genomic sciences, computational biology, imaging, and computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads, to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform, including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform Products. You will be responsible for understanding the business areas using Onyx and its platform capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensuring our R&D teams receive the solutions they need to succeed.
In this role you will:
Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with Onyx's overall goals and objectives.
Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements.
Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment.
Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes.
Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success.
Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform.
Product Ambassador: Serve as an ambassador of the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers.
Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage.
Why you?
Qualifications & Skills: We are looking for professionals with these required skills to achieve our goals:
Experience of cloud computing management for scientific computing, data science and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure, etc.)
Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or a related discipline
Excellent communication, collaboration, and stakeholder management skills
Strong leadership abilities and a self-driven, proactive approach
Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively
Preferred Qualifications & Skills: If you have the following characteristics, it would be a plus:
Technical Knowledge: Experience with and strong understanding of on-prem and cloud computing and software development practices; familiarity with MLOps and distributed computing is highly desirable
Experience with containers and virtual machines, including Kubernetes, Slurm, or other orchestration tools
Knowledge of modern infrastructure, including Infrastructure-as-Code tools (e.g. Terraform, Ansible)
Familiarity with software engineering ways of working and engagement models
Strong proficiency in utilizing various product management tools, including Jira and Confluence
Proven track record of managing developer platforms, tools, and services
Prior product management experience with an enterprise AI/ML platform is strongly preferred
Experience with bioinformatics/genomics databases, biological datasets, or Pharma R&D is a plus, but not required
Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction
Stakeholder Skills: Demonstrated ability to lead cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority
Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions
Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions
Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences
Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs
Closing Date for Applications: Monday 6th May 2024 (COB)
Please take a copy of the Job Description, as this will not be available post closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application.
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process.
Why GSK?
Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology). Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves - feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition, click apply for the full job details.
Apr 25, 2024
Full time
Job description
Site Name: London The Stanley Building
Posted Date: Apr
At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time
Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications. We are seeking a highly skilled and experienced Manager for our Computing Platforms Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs.
You will partner closely with Onyx's organizations, including AI/ML, a diversity of R&D teams utilizing data to accelerate drug discovery (genomics sciences, computational biology, imaging, and computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads, to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform, including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform Products. You will be responsible for understanding the business areas using Onyx and its Platform capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensuring our R&D teams receive the solutions they need to succeed.
In this role you will:
Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with Onyx's overall goals and objectives.
Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements.
Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment.
Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes.
Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success.
Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform.
Product Ambassador: Serve as an ambassador of the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers.
Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage.
Why you?
Qualifications & Skills: We are looking for professionals with these required skills to achieve our goals:
Experience of cloud computing management for scientific computing, data science and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure, etc.)
Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or a related discipline.
Excellent communication, collaboration, and stakeholder management skills.
Strong leadership abilities and a self-driven, proactive approach.
Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
Preferred Qualifications & Skills: If you have the following characteristics, it would be a plus:
Technical Knowledge: Experience with and strong understanding of on-prem and cloud computing, and software development practices; familiarity with MLOps and distributed computing is highly desirable.
Experience with containers and virtual machines, including Kubernetes, Slurm or other orchestration tools.
Knowledge of modern infrastructure, including Infrastructure-as-code tools (e.g.
Terraform, Ansible)
Familiarity with software engineering ways of working and engagement models.
Strong proficiency in utilizing various product management tools, including Jira and Confluence.
Proven track record of managing developer platforms, tools, and services.
Prior product management experience of an enterprise AI/ML platform is strongly preferred.
Experience with bioinformatics/genomics databases, biological datasets, or Pharma R&D is a plus, but not required.
Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction.
Stakeholder Skills: Demonstrated ability to lead cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority.
Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions.
Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions.
Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences.
Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs.
Closing Date for Applications: Monday 6th May 2024 (COB)
Please take a copy of the Job Description, as this will not be available post closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application.
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process.
Why GSK? Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology). Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves - feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition, click apply for full job details.
Senior Software Engineer (Python)
Location: London - Hybrid (Three Days a Week in Office)
Salary: Up to £70,000 + Benefits
Overview: An exciting opportunity awaits a Senior Software Engineer to join a rapidly growing digital marketing agency in London. With complete autonomy over their work, the chosen candidate will be part of a team doubling in size within the next 12 months. Collaborating closely with data scientists and machine learning experts, this role promises exposure to cutting-edge projects and skill development opportunities.
The Role: As a Senior Software Engineer, your responsibilities will include:
Designing, developing, and maintaining software using Python.
Advocating and implementing best practices throughout the software engineering life cycle.
Managing cloud technologies, with a focus on Google Cloud Platform (GCP).
Leading automation projects to enhance efficiency.
Maintaining and developing data pipelines.
Enhancing internal and external tools.
Collaborating with data engineers and the wider team to build data platforms.
Skills and Expertise: To be considered for this prestigious role, candidates must possess the following qualifications and attributes:
A demonstrated proficiency in Python programming, coupled with a keen eye for detail and a passion for crafting elegant and efficient solutions to complex problems.
Substantial commercial experience in working with cloud technologies, exemplified by a deep understanding of cloud platforms and their associated services (ideally GCP).
Proven expertise in deploying and managing containerized applications using Docker within commercial environments, showcasing a knack for optimizing software delivery and deployment processes.
A proactive approach to learning and contributing to front-end development, coupled with a strong commitment to staying abreast of emerging technologies, industry trends, and best practices.
Salary: In this role, you will have the opportunity to earn a competitive salary of up to £70,000, complemented by a comprehensive benefits package designed to support your personal and professional well-being.
Apr 24, 2024
Full time
Senior Applied Data Scientist - Causal AI for Demand Forecasting
Location: Offsite, London, United Kingdom
Area of Interest: Administrative and Business Support
Job Type: Professional
AI or Artificial Intelligence
Job Id
Who we are: The post-pandemic years have exposed inherent biases and limitations in expert-driven and statistical/Traditional ML-based forecasting approaches. Cisco wasn't immune and saw a 4X increase in backlog, revenue impact, and a subsequent 3X inventory increase. The Forecasting Data Science team within Global Planning is solving this by pioneering the application of Causal AI to re-invent Demand Forecasting of Cisco's product portfolio to provide breakthrough levels of regime-resilient forecast accuracy, efficiency, and prescriptive insights that enable game-changing opportunities across Cisco and its Supply Chain. The team was recognized by Gartner in their Power of Profession 2024 Supply Chain awards as one of the top 5 in the world in the Process and Technology Innovation category.
Who you will work with: A high caliber and engaged team plus an eco-system of world-leading AI partners chartered with developing and operationalizing an inspectable, multi-dimensional system of causal models that provides an integrated, comprehensive, and evidence-based point-of-view of Cisco's short and long-term demand at aggregated and product levels. This team is responsible for incorporating planning, product, sales, and customer intelligence from across the enterprise and from external global macro-economic and market data that relates to the demand for Cisco's products into the structure of this system of models. The team delivers and continuously improves AI-based forecasts, forecast ranges, and financial and prescriptive insights from this system through connections with Planning and other Supply Chain and Enterprise teams for the different facets of Cisco's business.
The difference you will make: You will bring your expertise, experience, and innovation to play a significant role in solving the challenges which will enable developing and implementing an industry-leading Causal AI-based forecasting system that effectively enhances decision rigor and maximizes operational efficiencies across Enterprise and Supply Chain functions at Cisco.
What you will do:
Develop, evolve, and sustain key elements of the Causal-AI based Forecasting system for Aggregated Demand. Excel in developing high quality, accurate, parsimonious models that are robust and have a long shelf-life.
Improve the efficiency and scalability of the Forecasting System.
Monitor the forecasts and key forecast performance metrics, understanding root causes of changes in the forecast and metrics as a core part of continuous improvement and customer communication.
Work closely with business leads and experts in Global Planning, other Supply Chain functions, Finance, Product Management, Sales, and other Cisco organizations to understand, discover, and characterize relationships and patterns between Cisco demand and its relation to product, technology, lifecycle, supply, customer, market, competitor, sales behavior, and macro factors.
Engineer model features using these factors, discover and enhance the natural segmentation for Demand based on these factors, determine causality of the factors, and incorporate these into structured causal models.
Develop and evolve Dashboards to expose key insights from the causal Forecasts and their drivers to accelerate and continuously improve the solution and increase stakeholder engagement and adoption.
Provide integrated, reconciled, and logically sound evidence-based views for different facets of Cisco's short and long-term demand.
Develop and evolve reliable approaches for uncertainty quantification to enable scenario/range forecasts.
Leverage and incorporate appropriate machine learning approaches, including customization of recently published research as needed, to build better Causal AI solutions.
Connect with stakeholders to communicate the short- and long-term AI forecasts and the changes in these forecasts. Discern and articulate the story in the forecasts and forecast changes, areas of discrepancies or differences with expert forecasts, understanding and accounting for the confidence level of these forecasts.
Continuously improve different elements of this system to improve forecast accuracy and incorporate learnings from formal and informal collaborations with stakeholders and other experts into the AI system.
Work with our AI vendors to enhance their platforms to improve Causal Inference based forecasting, stakeholder engagement, and decision support.
Provide technical direction and coaching to less experienced data scientists and data engineers in the team, and to interns and for collaborations with Universities.
Minimum Qualifications:
Extensive Advanced Analytics experience with a Masters degree, or some experience with a Ph.D. in a Quantitative field leveraging statistical and machine learning methods in the thesis.
Strong all-round foundation in AI and machine learning, with a theoretical and practical understanding of Causal machine learning approaches.
Proven modeling skills that have delivered an effective predictive solution to solve a business problem with minimal supervision.
Expertise in Python, with advanced data analysis and data engineering skills, including using SQL.
Strong Computer Science foundation.
Strong critical thinking, with a sharp eye for patterns and the skills to draw out the story and conclusions from data and modeling experiments in real-time.
Experience in developing and operationalizing scalable ML solutions in cloud environments based on large datasets.
Demonstrated structured data wrangling and mining skills that extract actionable insights from data, including in real-time hackathon-like settings.
Practical knowledge of the advantages and pitfalls of different machine learning approaches, as well as a strong grounding in the theoretical foundations.
Excellent communication and storytelling skills, with an ability to unpack complex problems and articulate AI/ML approaches, solutions, and results for non-technical audiences.
Strong growth mindset and sense of ownership. Innate passion and curiosity to understand and improve the system and connect the dots.
Preferred Qualifications:
Advanced Analytics experience with a Masters degree or experience with a PhD in Statistics, Mathematics or Applied Mathematics, Physics, Engineering, or a related quantitative field.
Experience with global financial markets, macro-economics, micro-economics, econometrics, and Corporate Finance.
Substantial experience using Causal AI and Structured Causal Models in Demand Forecasting and ideally also in other complex or dynamic domains like marketing/pricing.
Practical expertise and deep understanding of statistics and causal inference in time series settings.
Experience with NLP, Recommender Systems, and Deep Learning methods.
Understanding of Gen AI/LLMs, including RAGs and fine-tuning, and Reinforcement Learning.
Experience in visualization design and development with Python-based libraries.
Project management skills, with an ability to deliver results in a fast-paced environment.
A practical and effective approach to problem-solving using AI/ML and a knack for envisioning, translating business requirements into analytics requirements, and realizing feasible data science solutions.
A strong bias for action, delivering iterative results quickly rather than waiting for perfection.
Why Cisco?
We are all unique, but collectively we bring our talents to work as a team, to develop innovative technology and power a more inclusive, digital future for everyone. How do we do it? Well, for starters - with people like you! Nearly every internet connection around the world touches Cisco. We're the Internet's optimists. Our technology makes sure the data traveling at light speed across connections does so securely, yet it's not what we make but what we make happen which marks us out. We're helping those who work in the health service to connect with patients and each other; schools, colleges, and universities to teach in even the most challenging of times. We're helping businesses of all shapes and sizes to connect with their employees and customers in new ways, providing people with access to the digital skills they need and connecting the most remote parts of the world - whether through 5G, or otherwise. We tackle whatever challenges come our way. We have each other's backs, we recognize our accomplishments, and we grow together. We celebrate and support one another - from big and small things in life to big career moments. And giving back is in our DNA (we get 10 days off each year to do just that). We know that powering an inclusive future starts with us. Because without diversity and a dedication to equality, there is no moving forward. Our 30 Inclusive Communities, that bring people together around commonalities or passions, are leading the way. Together we're committed to learning, listening, caring for our communities, whilst supporting the most vulnerable with a collective effort to make this world a better place either with technology, or through our actions. So, you have colorful hair? Don't care. Tattoos . click apply for full job details
Apr 23, 2024
Full time
Senior Applied Data Scientist - Causal AI for Demand Forecasting Location: Offsite, London, United Kingdom Area of Interest Administrative and Business Support Job Type Professional AI or Artificial Intelligence Job Id Who we are:The post-pandemic years have exposed inherent biases and limitations in expert-driven and statistical/Traditional ML-based forecasting approaches.Cisco wasn't immune and saw a 4X increase in backlog, revenue impact, and a subsequent 3X inventory increase.The Forecasting Data Science team within Global Planning is solving this by pioneering the application of Causal AI to re-invent Demand Forecasting of Cisco's product portfolio to provide breakthrough levels of regime-resilient forecast accuracy, efficiency, and prescriptive insights that enable game-changing opportunities across Cisco and its Supply Chain.The team was recognized by Gartner in their Power of Profession 2024 Supply Chain awards as one of the top 5 in the world in the Process and Technology Innovation category. Who you will work with: A high caliber and engaged team plus an eco-system of world-leading AI partners chartered with developing and operationalizing an inspectable, multi-dimensional system of causal models that provides an integrated, comprehensive, and evidence-based point-of-view of Cisco's short and long-term demand at aggregated and product levels. This team is responsible for incorporating planning, product, sales, and customer intelligence from across the enterprise and from external global macro-economic and market data that relates to the demand for Cisco's products into the structure of this system of models. The team delivers and continuously improves AI-based forecasts, forecast ranges, and financial and prescriptive insights from this system through connections with Planning and other Supply Chain and Enterprise teams for the different facets of Cisco's business. 
The difference you will make: You will bring your expertise, experience, and innovation to play a significant role in solving the challenges which will enable developing and implementing an industry-leading Causal AI-based forecasting system that effectively enhances decision rigor and maximizes operational efficiencies across Enterprise and Supply Chain functions at Cisco. What you will do: Develop, evolve, and sustain key elements of the Causal-AI based Forecasting system for Aggregated Demand. Excel in developing high quality, accurate, parsimonious models that are robust and have a long shelf-life. Improve the efficiency and scalability of the Forecasting System. Monitor the forecasts and key forecast performance metrics, understanding root causes of changes in the forecast and metrics as a core part of continuously improvement and customer communication. Work closely with business leads and experts in Global Planning, other Supply Chain functions, Finance, Product Management, Sales, and other Cisco organizations to understand, discover, and characterize relationships and patterns between Cisco demand and its relation to product, technology, lifecycle, supply, customer, market, competitor, sales behavior, and macro factors. Engineer model features using these factors, discover and enhance the natural segmentation for Demand based on these factors, determine causality of the factors, and incorporate into structured causal models. Develop and evolve Dashboards to expose key insights from the causal Forecasts and their drivers to accelerate and continuously improve the solution and increase stakeholder engagement and adoption. Provide integrated, reconciled, and logically sound evidence-based views for different facets of Cisco's short and long-term demand. Develop and evolve reliable approaches for uncertainty quantification to enable scenario/range forecasts. 
Leverage and incorporate appropriate machine learning approaches including customization of recently published research as needed to build better Causal AI solutions. Connect with stakeholders to communicate the short-and-long term AI forecasts and the changes in these forecasts. Discern and articulate the story in the forecasts and forecast changes, areas of discrepancies or differences with expert forecasts, understanding and accounting for the confidence level of these forecasts. Continuously improve different elements of this system to improve forecast accuracy and incorporate learnings from formal and informal collaborations with stakeholders and other experts into the AI system. Work with our AI vendors to enhance their platforms to improve Causal Inference based forecasting, stakeholder engagement, and decision support. Provide technical direction and coaching to less experienced data scientists and data engineers in the team, and to interns and for collaborations with Universities. Minimum Qualifications: Extensive Advanced Analytics experience with a Masters degree or some experience with a Ph.D. in a Quantitative field leveraging statistical and machine learning methods in the thesis. Strong all-round foundation in AI and machine learning, with a theoretical and practical understanding of Causal machine learning approaches. Proven modeling skills that have delivered an effective predictive solution to solve a business problem with minimal supervision Expertise in Python, with advanced data analysis and data engineering skills, including using SQL Strong Computer Science foundation Strong critical thinking, with a sharp eye for patterns and the skills to draw out the story and conclusions from data and modeling experiments in real-time. Experience in developing and operationalizing scalable ML solutions in cloud environments based on large datasets. 
Demonstrated structured data wrangling and mining skills that extract actionable insights from data, including in real-time hackathon-like settings. Practical knowledge of the advantages and pitfalls of different machine learning approaches, as well as a strong grounding in the theoretical foundations Excellent communication and storytelling skills with an ability to unpack complex problems, and articulate AI/ML approaches, solutions, and results for non-technical audiences. Strong growth mindset and sense of ownership. Innate passion and curiosity to understand and improve the system and connect the dots. Preferred Qualifications: Advanced Analytics experience with a Masters degree or experience with a PhD in Statistics, Mathematics or Applied Mathematics, Physics, Engineering, or related quantitative field. Experience with global financial markets, macro-economics, micro-economics, econometrics, and Corporate Finance. Substantial experience using Causal AI and Structured Causal Models in Demand Forecasting and ideally also in other complex or dynamic domains like marketing/pricing. Practical expertise and deep understanding of statistics and causal inference in time series settings. Experience with NLP, Recommender Systems, and Deep Learning methods. Understanding of Gen AI/LLMs including RAGs and fine-tuning, and Reinforcement Learning. Experience in visualization design and development with Python based libraries. Project management skills, with an ability to deliver results in a fast-paced environment. A practical and effective approach to problem-solving using AI/ML and a knack for envisioning, translating business requirements into analytics requirements, and realizing feasible data science solutions. A strong bias for action, delivering iterative results quickly rather than waiting for perfection. Why Cisco? . 
We are all unique, but collectively we bring our talents to work as a team, to develop innovative technology and power a more inclusive, digital future for everyone. How do we do it? Well, for starters - with people like you! Nearly every internet connection around the world touches Cisco. We're the Internet's optimists. Our technology makes sure the data traveling at light speed across connections does so securely, yet it's not what we make but what we make happen which marks us out. We're helping those who work in the health service to connect with patients and each other; schools, colleges, and universities to teach in even the most challenging of times. We're helping businesses of all shapes and sizes to connect with their employees and customers in new ways, providing people with access to the digital skills they need and connecting the most remote parts of the world - whether through 5G, or otherwise. We tackle whatever challenges come our way. We have each other's backs, we recognize our accomplishments, and we grow together. We celebrate and support one another - from big and small things in life to big career moments. And giving back is in our DNA (we get 10 days off each year to do just that). We know that powering an inclusive future starts with us. Because without diversity and a dedication to equality, there is no moving forward. Our 30 Inclusive Communities, which bring people together around commonalities or passions, are leading the way. Together we're committed to learning, listening, and caring for our communities, whilst supporting the most vulnerable with a collective effort to make this world a better place, whether with technology or through our actions. So, you have colorful hair? Don't care.
Intuit's Small Business Group (SBG) serves the needs of small businesses and the self-employed through QuickBooks and our ecosystem of attached products. Our mission is to power prosperity around the world, which means dramatically rethinking how we enable small businesses and individuals to run their businesses with confidence. Intuit UK is searching for a talented, hands-on Data Scientist to help deliver data science initiatives across the UK business. We are an exciting, fast-paced, and innovative team, leveraging industry-leading tools and best practices. As a Data Scientist, you will collaborate with cross-functional teams to develop ML models and customer-facing AI solutions. You will take complex concepts and communicate them to both technical and non-technical leaders in order to effect change for our customers. You will play an instrumental part in shaping the strategy and future of the UK Data Science Program. Working across Intuit AI, Data Engineering, Product, and Marketing, you will be involved in every step of the modeling process, from creating and maintaining model data pipelines to A/B testing the models in production, with white-space opportunities to drive innovation and personal development.
Responsibilities: Perform hands-on data analysis and modeling with large data sets. Lead experimental designs and measurement plans. Apply data mining, NLP, and machine learning (both supervised and unsupervised) to improve relevance and personalization algorithms. Work side-by-side with product managers, software engineers, and designers in designing experiments and minimum viable products. Explore new design or technology shifts in order to determine how they might connect with the customer benefits we wish to deliver. Qualifications: Advanced skills in SQL and a statistical programming language such as Python or R. 2+ years' experience with data mining algorithms and statistical modeling techniques such as clustering, classification, regression, decision trees, neural nets, support vector machines, anomaly detection, recommender systems, sequential pattern discovery, and text mining. Expertise in experimental design and multivariate/A-B testing. Solid communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences. Undergraduate degree in a quantitative field, or equivalent experience.
Apr 23, 2024
Full time
The Alan Turing Institute. Named in honour of Alan Turing, the Institute is a place for inspiring, exciting work, and we need passionate, sharp, and innovative people who want to use their skills to contribute to our mission: to make great leaps in data science and AI research to change the world for the better. BACKGROUND The Defence & Security programme at the Turing is looking to expand a newly formed team of data scientists working on real-world problems in the radio frequency domain aligned with defending and securing the UK. As a team, we bring together cutting-edge research and motivating mission challenges, using our data science, software engineering, and stakeholder management skills to create next-generation capabilities for our partners. Your role will be to work both independently and collaboratively with the Principal Investigators (PIs) and other researchers in the Defence Artificial Intelligence Research (DARe) centre, in domains as diverse as future sensing, space systems, human-machine teaming, synthetic environments, and edge AI. CANDIDATE PROFILE The ideal candidate is inquisitive, enjoys solving complex, challenging problems, and thinks creatively to find non-obvious solutions. We are a cross-disciplinary team and encourage applications from both generalists and specialists, including those who self-identify as software engineers, computer scientists, machine learning practitioners, physicists, mathematicians, statisticians, or more widely as data scientists or data engineers.
DUTIES AND AREAS OF RESPONSIBILITY Engaging with scientists from the EME's Defence and Security partners. Applying ML, data science, and radio frequency techniques to problems faced by EME partners, both as part of unclassified EME projects and on partner systems. The application of modern AI techniques to the RF domain. Developing novel multi-modal AI approaches to the fusion of data from multiple sensors. Developing new techniques for the detection, recognition, identification, localisation, and exploitation (DRILE) of radio frequency signals. Person Specification PhD or equivalent professional experience in a field with significant use of both computer programming and advanced statistical or numerical methods. Practical experience, or strong theoretical knowledge and academic experience, with ML and adjacent topics, or demonstrated experience developing algorithms for transmitting, processing, and analysing radio frequency signals. Fluency in one or more modern programming languages such as Python. Experience leading a research project with a focus on AI, radar, or communications and networks. Please see our portal for a full breakdown of the job description. Terms and Conditions This full-time post is offered on a fixed-term basis for 3 years. The annual salary is £51,476 to £58,000 plus excellent benefits, including flexible working and family-friendly policies. The Alan Turing Institute is based at the British Library, in the heart of London's Knowledge Quarter. We expect staff to come to our office at least 4 days per month. Some roles may require more days in the office; the hiring manager will be able to confirm this during the interview. Application procedure Please see our jobs portal for full details on how to apply and the interview process. Equality, Diversity and Inclusion We are committed to making sure our recruitment process is accessible and inclusive. This includes making reasonable adjustments for candidates who have a disability or long-term condition.
Please contact us to find out how we can assist you.
Apr 23, 2024
Full time