Job Description

Do you want to be part of the core team building truly AI-native, helpful experiences across the consumer space? Do you want to be at the cutting edge of what is next in AI, but apply it to something of true value in the real world? At Verneek, we are on a mission to build the most helpful AI that augments the knowledge of anyone, anywhere, at any time! Unlike the mainstream, we believe the way to bring domain-general AI to the masses is to apply it one domain at a time, through AIs with deep domain expertise. We were on this journey before it became the hottest thing on the face of the planet! Come join some OGs in this so-called "generative AI" space and invent the future!

If you crave learning something new every day while working at the cutting edge of AI, Verneek could be a perfect opportunity for you: a deep-tech AI startup where you'd get to learn, innovate, and leave your mark every single hour of every day. We are looking for a stellar, highly ambitious ML engineer to join as a core employee and help build complex AI/NLP models supporting the Verneek AI platform! You'll get to work on fundamental AI research problems, all grounded in our proprietary AI platform; that is far more rewarding and influential than working to beat AI benchmarks. Every day, you'll solve unique, highly complex, and socially impactful problems. This is an early-stage startup, so we'll be moving fast, and there will be no legacy obstacles in your way to making a significant impact. Whatever you do, every hour of every day counts.

RESPONSIBILITIES
- Implement, scale, and maintain complex AI/NLP models supporting the Verneek AI platform

Requirements

MINIMUM QUALIFICATIONS
- BSc degree in Computer Science or a related field
- 3+ years of experience with Python
- 3+ years of hands-on experience developing architectures with machine learning frameworks such as PyTorch
- Demonstrated AI/NLP engineering skill set, having deployed large-scale AI/NLP systems to production
- Work authorization in the USA at the time of hire; continuing work authorization during employment can be sponsored by Verneek

PREFERRED QUALIFICATIONS
- MSc degree in Computer Science or a related field
- Experience in Natural Language Understanding, Dialogue Systems, Semantic Parsing, Transfer Learning, and learning with limited data
- Experience building models in the commerce/retail domain
- Working knowledge of Scala

Benefits
- Stellar medical, dental, vision, disability, and life insurance
- Daily private chef lunch, curated to personal diets
- Transportation benefits
- 401(k) matching contributions
- Flexible PTO
- Visa/Green Card sponsorship
- Career growth support through sponsored learning opportunities and mentorship

About Verneek
Verneek is an early-stage deep-tech AI startup based in the NYC area, founded by a team of leading AI research scientists and backed by a group of world-renowned business and scientific luminaries. Our mission is to build the most helpful AI for anyone, anywhere, at any time. We are obsessed with what we do, and we have fun doing it. Read more about Verneek here: and make sure to watch all our introductory videos and yearly recaps here:

Verneek Culture
It's often hard to put "culture" into words; perhaps you can get a visual sense of our culture here: We all obsessively love what we do, care about each other, share all sorts of meals together, celebrate all kinds of events together, and work tirelessly with the excitement of making a difference through AI innovation. We enjoy the journey and go through all the ups and downs together.

Although we have come a very long way in setting the foundations of our unique company, we still have ways to go, and you can help shape our culture! The core Verneek team plays a crucial role in further shaping the culture of the company moving forward. We are looking for highly ambitious and tremendously driven individuals who can take the lead in driving various aspects of the company and help shape its lasting impact.

Annual Salary Range: $40K-$200K
04/26/2026
Full time
Job Description

Company Description
The Ikigai platform unlocks the power of generative AI for tabular data. We enable business users to connect disparate data, leverage no-code AI/ML, and build enterprise-wide AI apps in just a few clicks. Ikigai is built on top of three proprietary foundation blocks developed from years of MIT research: aiMatch, for data reconciliation; aiCast, for prediction; and aiPlan, for scenario planning and optimization. Our platform enables eXpert-in-The-Loop (XiTL) model reinforcement learning and refinement at scale. With a combination of enterprise expertise and deep research in the field of AI, Ikigai Labs helps enterprises scale with AI by solving data engineering and modeling problems for business users and data scientists alike. Our unique ability to unlock value in tabular and time-series data through AI-powered data harmonization, forecasting, dynamic learning, and planning is our Ikigai, our purpose in the world of AI.

As an AI/ML Engineer at Ikigai Labs, you will be part of a high-performing team responsible for optimizing and deploying ML solutions to maximize performance and scalability. We seek a dynamic and passionate engineer with strong software fundamentals and a keen interest in collaborative problem-solving.

Key Responsibilities:
- ML Optimization and Deployment: Develop and deploy machine learning models for optimal performance and scalability.
- Productivity Tools Development: Build tools and services to enhance the ML platform, using technologies like Kubernetes, Helm, and EKS.
- Model Architecture: Apply a strong understanding of deep learning architectures (CNNs, RNNs, etc.) to solve complex problems.
- Research Adaptation: Stay abreast of recent ML and deep learning literature and adapt findings to real-world applications.
- Collaborative Development: Work with cross-functional teams to integrate AI and ML solutions that drive business value.
- Data Handling: Manage large datasets and build ML pipelines for data processing and training.
- ETL/ELT Processes: Design and develop scalable data integration processes.
- Predictive Modeling Platform: Develop an on-demand predictive modeling platform using gRPC.
- Cloud and Containerization: Use Kubernetes to manage Docker containers, and use various cloud services (AWS, Azure) to solve cloud-native challenges.
- Stakeholder Management: Provide occasional support to our customer success team.

Technologies We Use:
- Languages: Python 3, C++, Rust, SQL
- Frameworks: PyTorch, TensorFlow, Docker
- Databases: Postgres, Elasticsearch, DynamoDB, RDS
- Cloud: Kubernetes, Helm, EKS, Terraform, AWS
- Data Engineering: Apache Arrow, Dremio, Ray
- Miscellaneous: Git, JupyterHub, Apache Superset, Plotly Dash

Qualifications:
- Bachelor's degree in Computer Science, Math, Engineering, or a related field (Master's preferred), with 0-5+ years of experience depending on the level
- Strong understanding of data structures, data modeling, algorithms, and software architecture
- Proficiency in probability, statistics, and algorithm development
- Hands-on experience with ML and deep learning libraries (scikit-learn, Keras, TensorFlow, PyTorch, Theano, DyLib)
- (Bonus) Experience with big data and distributed computing (Hadoop, MapReduce, Spark, Storm)
- Proficiency in Python, AWS services, and ETL/ELT pipelines
- Understanding of key software design principles, design patterns, and testing best practices
- Experience with Kubernetes and/or EKS is a plus
- Ability to learn quickly in a fast-paced, agile environment
- Excellent organizational, time management, and communication skills
- Willingness to engage in pair programming, share knowledge, and provide and receive constructive feedback
- Strong problem-solving skills and the ability to take initiative

Location Requirement: Candidates must reside in or near Cambridge, MA or San Mateo, CA. This role is not open to other locations at this time.

Equal Opportunity Employment: Ikigai Labs is committed to equal employment opportunity and non-discrimination for all employees and qualified applicants. We value diversity and are dedicated to fostering an inclusive environment for all employees, regardless of race, color, sex, gender identity or expression, age, religion, national origin, ancestry, citizenship, disability, military or veteran status, genetic information, sexual orientation, marital status, or any other characteristic protected under applicable law.

If you are passionate about machine learning and eager to make an impact, we would love to hear from you. Apply today to join the Ikigai Labs team and help us build the future of AI.
04/26/2026
Full time
Job Description

Tiger Analytics is looking for an experienced Machine Learning Engineer with Gen AI experience to join our fast-growing advanced analytics consulting firm. Our employees bring deep expertise in Machine Learning, Data Science, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner.

Requirements
We are looking for an experienced AI/ML Lead with deep expertise in designing and deploying high-performance APIs and microservices on AWS Fargate (ECS). The ideal candidate will have hands-on experience in generative AI integration, LLM API development, and AWS Bedrock services, contributing to building scalable GenAI and Agentic AI applications.

Key Responsibilities:
- Design, build, and optimize high-performance APIs and microservices using Python (FastAPI), deployed on AWS Fargate (ECS).
- Integrate LLM and Generative AI APIs from providers such as AWS Bedrock, OpenAI, and others.
- Collaborate with ML and DevOps teams to design CI/CD and MLOps pipelines within the AWS ecosystem.
- Contribute to architectural decisions around scalability, latency management, and backend efficiency for AI-powered systems.
- (Preferred) Leverage familiarity with Bedrock AgentCore services to integrate intelligent agent capabilities.
- Develop and maintain JSON RESTful APIs, adhering to OpenAI API conventions and best practices.

Required Skills & Experience:
- 5+ years of hands-on software development experience with Python.
- Proven expertise in FastAPI and microservice architecture.
- Strong understanding of cloud-native applications, container orchestration (ECS, Docker), and AWS tooling.
- Proficiency in LLM API integration and working with Generative AI frameworks.
- Experience implementing CI/CD, IaC, and ML pipelines across AWS environments.
- Familiarity with Bedrock AgentCore or other agentic systems (nice to have).

Why Join Us: You'll be part of an innovative team building the next generation of AI-driven applications, where scalability, performance, and intelligent automation converge. This is an opportunity to push boundaries in Agentic AI infrastructure development in a supportive, fast-moving environment.

Benefits
Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, fast-growing, challenging, and entrepreneurial environment, with a high degree of individual responsibility. Tiger Analytics provides equal employment opportunities to applicants and employees without regard to race, color, religion, age, sex, sexual orientation, gender identity/expression, pregnancy, national origin, ancestry, marital status, protected veteran status, disability status, or any other basis protected by federal, state, or local law.
04/24/2026
Full time
Candidates must have an active Secret clearance or higher.

We are seeking an experienced Staff or Senior Machine Learning Engineer with deep expertise in Large Language Models (LLMs), Mixture of Experts (MoE) architectures, and Natural Language Processing (NLP). The ideal candidate will have a proven track record of developing, fine-tuning, and deploying advanced AI models at production scale. You'll collaborate closely with research and engineering teams to design, build, and optimize models that work with both structured and unstructured language data. This role spans cutting-edge research, hands-on model development, and MLOps: an opportunity to put your name on meaningful, real-world impact.

Key Responsibilities
- Lead the design, development, and optimization of Large Language Models, Mixture of Experts models, and NLP systems for tasks including language understanding and generation.
- Extend existing LLM frameworks and libraries, incorporating the latest research in language models and transformer architectures.
- Preprocess and prepare text datasets for training, including tokenization, feature extraction, and data pipeline development.
- Implement MLOps best practices for deploying scalable, production-level models on cloud platforms.
- Collaborate with cross-functional teams to integrate ML models into the broader platform.
- Conduct cutting-edge research in machine learning, with a focus on improving model performance, efficiency, and scalability.
- Stay current with the latest advancements in AI and ML, and apply that knowledge to improve models and methodologies.
- Mentor junior engineers and contribute to knowledge sharing and team best practices.

Required Qualifications
- Bachelor's degree (Master's or Ph.D. preferred) in Computer Science, Machine Learning, or a related field.
- 5+ years of experience in machine learning, with specific expertise in LLMs, NLP, and/or Mixture of Experts architectures.
- Expertise in transformer architectures (e.g., GPT, BERT) and text preprocessing/feature engineering.
- Strong programming skills in Python and ML frameworks such as TensorFlow and/or PyTorch.
- Proficiency with NLP libraries and tooling (e.g., Hugging Face Transformers).
- Experience with MLOps tools and workflows (MLflow, Kubeflow, or similar).
- Demonstrated ability to lead complex projects and work collaboratively in a team environment.
- Excellent problem-solving skills and a passion for innovation.
- Strong communication skills and a desire for continuous learning.

Preferred Skills
- Experience with cloud computing services (AWS, Azure, GCP).
- Knowledge of Big Data technologies (Hadoop, Spark).
- Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
- Proven track record of innovation through publications, patents, or industry contributions.
- Publications or presentations at recognized machine learning journals or conferences.
04/23/2026
Full time