We are seeking a highly skilled Technology Engineer specializing in AI/ML Platforms, MLOps, and GenAI infrastructure to design, build, and scale next-generation AI systems. The ideal candidate will have strong experience in containerized environments, model serving, and cloud-based AI architecture, with a focus on performance, scalability, and resilience.
Key Responsibilities
- Design, build, and maintain containerized applications using OpenShift, OpenShift AI, Kubernetes, and Helm Charts
- Deploy and optimize AI inference engines such as Triton Inference Server and vLLM for high-performance model serving
- Lead end-to-end model lifecycle management, including deployment, monitoring, scaling, and retraining workflows
- Implement monitoring, logging, and alerting systems using Prometheus and Grafana
- Collaborate on GenAI and LLM-based projects, including Agentic AI solutions
- Build and automate CI/CD pipelines using Jenkins, Groovy, Ansible, and Terraform
- Develop automation scripts and internal tools using Python
- Architect and manage AI/ML solutions on AWS, leveraging services like SageMaker and Bedrock (preferred)
- Build and enhance AI platforms across on-premise and cloud environments
- Ensure systems are highly scalable, fault-tolerant, and performance-optimized
- Contribute to architecture design, platform roadmap, and strategic technical decisions
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Strong hands-on experience with:
  - Kubernetes / OpenShift ecosystem
  - MLOps and AI/ML deployment pipelines
  - Inference optimization (TensorRT / ONNX / Triton / vLLM)
- Experience with CI/CD tools (Jenkins, Groovy, Ansible, Terraform)
- Proficiency in Python scripting and automation
- Experience with monitoring tools like Prometheus and Grafana
- Solid understanding of distributed systems, microservices, and cloud-native architecture
- Hands-on experience with AWS Cloud services (SageMaker, Bedrock preferred)
- Experience working on GenAI / LLM / Agentic AI use cases
- Knowledge of GPU acceleration and performance tuning
- Exposure to hybrid cloud (on-prem + cloud) AI platforms
- Familiarity with enterprise-scale AI platform engineering
What We Offer
- Opportunity to work on cutting-edge AI/GenAI platforms
- Exposure to large-scale enterprise AI deployments
- Collaborative and innovation-driven engineering environment
- Competitive compensation and growth opportunities
What We Do
Global Software Solutions Group (GSS) is a software solutions provider that solves mission-critical problems for banks and financial institutions across the Middle East & Africa. Its Veracious product line is a series of robust banking platforms covering core banking, payment systems, custom process automation, and document management. The line features the Veracious Payments Hub, Digital Banking, and the DMS, all built on the Torus Lowcode development platform. Together, the Low Code platform, the payments product line, and customized service offerings address institutions' needs in core banking, payments, process automation, and document management. The Payments Hub is GSS's flagship product.







