Artificial Intelligence Specialist at GR8 Tech, Remote (Global)
In this role, you will build and operate LLM-based systems used in production. You will develop and integrate Generative AI solutions, including chatbots, AI assistants, MCP-based services, and other LLM-powered components, into an existing ML and data platform. The focus is on practical, observable, and reliable systems that support knowledge-driven workflows, AI-assisted interactions, and automation. This is a hands-on engineering role with clear ownership boundaries and guidance from senior engineers.
What you’ll drive:
LLM-based features and conversational systems
- Implement and maintain LLM-based features and services under guidance.
- Build and improve chatbots and conversational AI systems with predictable behavior.
- Contribute to AI-assisted and AI-automated workflows that reduce manual effort.
RAG pipelines, agent workflows, and production integration
- Work on RAG pipelines, including document ingestion, embeddings, retrieval, and generation.
- Implement parts of agent-style workflows, including tool usage and step orchestration.
- Integrate GenAI components with existing backend services and APIs.
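To illustrate the shape of the RAG work described above (ingestion, embeddings, retrieval, generation), here is a minimal, self-contained sketch. The embedding function and in-memory store are toy stand-ins for a real embedding model and a vector store such as Qdrant, FAISS, OpenSearch, or pgvector; all names are illustrative, not part of any GR8 Tech codebase.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: lowercase term-frequency vector.
    # A production pipeline would call a real embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    # In-memory stand-in for Qdrant / FAISS / OpenSearch / pgvector.
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def ingest(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list:
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(query: str, store: ToyVectorStore) -> str:
    # Generation step stub: assemble retrieved context into an LLM prompt.
    context = "\n".join(store.retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = ToyVectorStore()
store.ingest("Deposits are processed within 24 hours.")
store.ingest("Withdrawals require identity verification.")
store.ingest("The platform supports live betting markets.")
prompt = build_prompt("How long do deposits take?", store)
```

The real systems in this role would add chunking, metadata filtering, and an actual LLM call, but the data flow (ingest, embed, retrieve, prompt) is the same.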
System reliability and cross-functional collaboration
- Help monitor system behavior, performance, and basic quality metrics.
- Participate in debugging issues and supporting systems in production.
- Collaborate with Product, ML Engineering, and Backend teams.
- Follow team standards for code quality, testing, and documentation.
What makes you a GR8 fit:
Must-have
- Solid background in software engineering.
- Strong engineering fundamentals, including clean code, testing, and debugging.
- Practical experience with Large Language Models, RAG architectures, and embedding-based retrieval.
- Experience building chatbots or conversational systems, including tool or function calling and structured LLM outputs.
- Familiarity with context or tool-serving patterns such as MCP or similar concepts.
- Experience contributing to and supporting production systems.
- Basic understanding of performance, latency, and cost considerations.
- Experience working with AWS-based cloud infrastructure, including ECS, EKS, Lambda, S3, OpenSearch, and API Gateway.
- Experience with Python and basic SQL.
- Experience integrating third-party LLM APIs such as OpenAI or Anthropic, and working with Amazon Bedrock.
- Experience working with open-source models such as LLaMA, Mistral, Mixtral, Qwen, or similar, and the Hugging Face ecosystem.
- Experience with LangChain, LangGraph, Pydantic, and Langfuse.
- Experience implementing embeddings and working with vector stores such as Qdrant, FAISS, OpenSearch, or pgvector.
- Experience using Docker, Git, and CI/CD pipelines.
- Familiarity with evaluation or monitoring of AI systems.
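The list above mentions tool/function calling and structured LLM outputs. As a hedged illustration of what that involves, the sketch below validates a JSON tool call emitted by a model and dispatches it to a registered Python function; the tool name, registry, and return values are hypothetical, and in practice this validation is often done with Pydantic models rather than manual checks.

```python
import json

# Hypothetical tool registry: maps a tool name the model may emit
# to a plain Python callable.
TOOLS = {
    "get_balance": lambda user_id: {"user_id": user_id, "balance": 42.0},
}

def dispatch(raw: str):
    # Validate a model-emitted tool call of the shape
    # {"tool": <name>, "arguments": {...}} and execute the matching tool.
    call = json.loads(raw)  # raises ValueError on malformed JSON
    name = call.get("tool")
    args = call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name!r}")
    if not isinstance(args, dict):
        raise ValueError("arguments must be a JSON object")
    return TOOLS[name](**args)

result = dispatch('{"tool": "get_balance", "arguments": {"user_id": "u1"}}')
```

Keeping the validation step explicit and failing loudly on unknown tools or malformed arguments is what makes agent-style workflows predictable in production.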
Why you’ll love working here:
Benefits Cafeteria — annual budget you allocate to:
Sports • Medical • Mental health • Home office • Languages.
Work-life & support
- Paid maternity/paternity leave + monthly childcare allowance.
- 20+ vacation days, unlimited sick leave, emergency time off.
- Remote-first + tech support + coworking compensation.
- Team events (online/offline/offsite).
- Learning culture with internal courses + growth programs.
Our culture & core values:
GR8 Tech culture is how we win — through trust, ownership, and a growth mindset. We move fast, stay curious, and keep it real, with open feedback, room to experiment, and a team that’s got your back.
- FUELLED BY TRUST: we’re open, honest, and have each other’s backs.
- OWN YOUR GAME: we take initiative and own what we do.
- ACCELER8: we move fast, focus smart, and keep it simple.
- CHALLENGE ACCEPTED: we grow through challenges and stay curious.
- BULLETPROOF: we’re resilient, ready, and always have a plan.
To apply for this job, please visit job-boards.eu.greenhouse.io.
