Alice (Formerly ActiveFence)

GenAI Analyst

Company: Alice (Formerly ActiveFence)
Location: England, United Kingdom
Employment Type: Full-time
Posted At: 4/20/2026

UK Visa Sponsorship Analytics

Occupation Type: Data analysts
Occupation Code Skill Level: Medium Skilled
Sponsorship Salary Threshold: £41,700 (£21.38 per hour)
Standard minimum applies
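The hourly figure shown alongside the annual threshold follows from simple arithmetic. As a minimal sketch, assuming the standard 37.5-hour working week over 52 weeks (the basis commonly used for per-hour thresholds; verify against official Home Office guidance for your case):

```python
# Derive the hourly rate implied by an annual salary threshold.
# ASSUMPTION: a 37.5-hour working week over 52 weeks; check the
# official guidance, as the applicable basis may differ.

def hourly_rate(annual_salary: float, weekly_hours: float = 37.5) -> float:
    """Annual salary divided by total working hours in a year."""
    return annual_salary / (weekly_hours * 52)

rate = hourly_rate(41_700)
print(f"£{rate:.2f} per hour")  # matches the £21.38 shown above
```

Under these assumptions, £41,700 over 1,950 working hours a year gives £21.38 per hour, consistent with the threshold shown above.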

The above analytics are generated algorithmically from job titles and may not always match the company's own job classification. You can also check detailed occupation eligibility and salary criteria on our UK Visa Eligible Occupations & Salary Thresholds page.

Disclaimer: Hunt UK Visa Sponsors aggregates job listings from publicly available sources, such as search engines, to assist with your job hunting. We do not claim affiliation with Alice (Formerly ActiveFence). For the most up-to-date job details, please visit the official website by clicking "Apply Now."

Description

Alice is seeking a driven, detail-focused professional to become a vital part of our team as a Generative AI Analyst. In this role, you'll dive into the cutting edge of technology, meticulously analyzing various content infringements to secure the new wave of Generative AI tools. Your duties will include collaborating with experts in diverse fields such as Hate Speech, Misinformation, Intellectual Property and Copyright, and Child Safety.

Your tasks will involve writing adversarial prompts to identify weaknesses in various AI models, including Large Language Models (LLMs), Text-to-Image, Text-to-Video, and beyond. You'll also oversee data management to guarantee the highest quality of outputs.


Responsibilities:

  • Developing adversarial and risky prompt strategies across several areas of abuse to expose potential vulnerabilities in models.
  • Managing projects end-to-end, from initial planning and oversight through quality assurance to final delivery.
  • Handling extensive datasets across multiple languages and areas of abuse, ensuring precision and meticulous attention to detail.
  • Ongoing investigation into new tactics for circumventing foundational models' safety measures.
  • Working alongside diverse teams (engineering, product, policy) to tackle new challenges and craft forward-thinking strategies and resolutions.
  • Promoting a culture of knowledge exchange and continual learning within the team.


Must have:

  • Background in AI Safety and/or Responsible AI and/or AI Ethics
  • Familiarity with recent Generative AI models and agents is essential, though direct technical experience is not a prerequisite.
  • Command of English at a near-native level.
  • Attention to detail, organizational capabilities, and the capacity to juggle numerous tasks concurrently.

Nice to Have:

  • Experience with various model types (Text-to-Text, Text-to-Image) is desirable.
  • Prior experience with OSINT (Open Source Intelligence) will be considered an asset.
  • A self-starter attitude, with the energy to excel in a fast-moving and variable environment.


Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines.

In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.