About Us
HENI is an international art services business working with leading artists and estates across printmaking, marketplaces for physical artworks, NFTs, publishing, digital and video production, and art research and analysis. HENI is at the cutting edge of art and tech, using the latest technologies to make art accessible to audiences worldwide.
The Role
We’re looking for a pragmatic, versatile full stack engineer to join our Data team. You’ll own problems end-to-end, from the applications and tools used across the business through to the data pipelines and infrastructure that sit behind them, working on a genuinely varied set of projects.
The work spans customer analytics, art and art-news aggregation, ML-driven products (for example, surfacing relevant legal cases from news coverage), and a range of internal applications used by commercial teams and the C-suite. The team is small, the problem domain is interesting, and you’ll see the things you build actually used.
We care about solid engineering fundamentals: clean code, sensible architecture, testing, and code review. We also build in an AI-native way, with agentic coding tools (Claude Code, Cursor, Codex, or similar) forming a core part of the team's daily workflow.
What You’ll Be Doing
Building Internal Tools & Applications
• Develop internal applications and tools for use across the business
• Build dashboards in Apache Superset for self-serve business intelligence
• Ship web applications and data apps (Streamlit, Dash, or similar) as needs arise
Data Engineering & Platform
• Build and maintain data pipelines
• Integrate third-party data sources (HubSpot, Facebook Business, and others) into the customer data platform
• Implement data quality checks and validation to keep pipelines reliable
• Contribute to data architecture decisions and broader platform improvements
Analytics & Ad-hoc Work
• Write analytical SQL to support the accounts, client liaison, and commercial teams
• Contribute to customer analytics work: segmentation, retention, behavioural insights
• Support Customer Data Reports for C-suite stakeholders
• Respond to ad-hoc data requests from across the business
• Contribute to HENI News and art-aggregation data initiatives, including occasional ML/AI-driven projects
What We’re Looking For
Core Engineering
• Strong general software engineering skills: clean, maintainable, well-structured code
• Comfort with Python, which is the primary language for data work on the team
• Solid SQL for analytical queries and database work
• Git and version control workflows, code review, automated testing
• Experience building and maintaining production applications or services
Data & Infrastructure
• Some experience with data pipelines, ETL/ELT work, or working with databases at scale
• Familiarity with pandas / numpy, or willingness to pick them up quickly
• Comfort with Docker and CI/CD
• Some cloud experience (AWS, GCP, or Azure)
Ways of Working
• Familiarity with agentic coding tools (Claude Code, Cursor, Codex, or similar) as part of your day-to-day workflow
• Able to work autonomously on ambiguous problems, ship iteratively, and coordinate with a small team
• Comfortable talking to non-technical stakeholders and translating business problems into working software
Nice to Have
• Experience with orchestration tools (Airflow, Dagster, Prefect) or ingestion tools like Airbyte
• Experience with BI/dashboarding tools (Superset, Looker, Metabase)
• Experience integrating LLM APIs (Anthropic, OpenAI, Gemini) into real products or pipelines
• Experience with CRM / marketing platform APIs (HubSpot, Salesforce or similar)
• Experience with distributed data processing (PySpark, Spark SQL) or columnar formats (Parquet, Delta Lake)
• Experience with infrastructure as code (CDK, Terraform) and container orchestration (ECS, Kubernetes)
• Some statistical modelling or machine learning background (scikit-learn, scipy, statsmodels)
• Node.js / TypeScript experience for internal tooling work
Our Stack
• Languages: Python (primary), SQL, some Node.js / TypeScript
• Data: PostgreSQL, Delta Lake, Airbyte, Apache Airflow
• Infrastructure: Docker, Kubernetes, AWS CDK
• Cloud: AWS (S3, RDS, Glue, ECS, EC2)
• Apps & BI: Streamlit, Apache Superset
• AI tooling: Claude Code and similar agentic coding tools used across the team
Experience
• 2–5 years of professional software engineering experience, ideally with some exposure to data work (pipelines, analytics, internal tools, or similar)
• A degree in a technical field, or equivalent practical experience. We care more about what you’ve built than where you studied.
• A track record of shipping, whether internal tools, side projects, open source, or production systems