DevOps Engineer, Remote Worker

About Us

Kynetec is a global leader in market research for animal health and agriculture, helping companies around the world understand the dynamics of their marketplace, turning data into business opportunities, and enabling our clients to create winning strategies.

We’re looking for a like-minded, “battle-tested” DevOps Engineer to join our globally distributed but close-knit team in creating, delivering, and maintaining our software-based solutions and platform. This position is for a remote worker based in Poland, Bulgaria, Romania, Hungary, or Czechia.

About You

You’ll have come from a Software Engineering background and picked up SysOps experience along the way, because you’ve recognised the need to grow and become multi-disciplined.

You’re proactive and not afraid to jump into the technology stack at any level, and you apply best practices grounded in real experience of implementing and operating software in a Production environment.

Ideally, you’ll have spent at least 5 years as a DevOps Engineer, including 2–3 years supporting and collaborating with technical leads and working across functional teams to implement and instil a DevOps culture in an organisation.

You’re self-motivated and a stickler for quality, with an eye for detail; performance and security are just two of the things running through your mind as you shift focus across the different parts of the SDLC and delivery mechanics in a cloud-based environment.

You don’t believe in a “comfort zone”; you yearn for the challenges others would find difficult, and you succeed because your logical approach and lateral thinking serve you well!

You’ll want to work for us at Kynetec because you want to make a difference and make your effort count – you want to create something amazing and know your fingerprints are all over it. You’re looking to work on something meaningful which you’d never compromise on, even if it means sacrificing an evening or a weekend. You don’t do it because you must, you do it because you believe it will add up to something. Something which you know wouldn’t be possible elsewhere.

The Job

You can expect to be hands-on coding, scripting, and process (re-)engineering.

The role requires you to work as well on your own as with others in the business, so you’ll need to be a good communicator, able to convey your solutions and progress updates clearly.

As a global, data-centric company, we want to ensure we handle data “properly”. You’ll be expected to work with others to architect and implement solutions that are asynchronous and distributed in nature, focusing predominantly on security, segmentation, availability, and performance.

You’ll be expected to use your skills and knowledge to help choose and shape our use of tools, applications, and cloud-based services so that we’re continuously improving our pipelines in a relentless mission to speed up feedback loops, shorten release cycles, and strengthen resiliency, because as we grow, so do our challenges. This means you won’t be fazed by unfamiliar technologies; you’ll be expected to evaluate them and pick them up quickly where needed.

Certifications

  • Must-Have:
    • AWS Certified DevOps Engineer (Professional)
  • Nice-to-Have:
    • AWS Certified Big Data – Specialty
    • AWS Certified Advanced Networking – Specialty
    • Tableau Server Qualified Associate

Skills & Technologies

  • Tool agnostic – you recognise that one size does not fit all, and you’re not afraid to gather the information needed to make informed decisions about the most appropriate language or technology
  • Design, build, and operationally manage a technology stack
  • Configure and manage health checks, single points of failure, alerts, and notifications
  • Working with Linux- and Windows-based environments interchangeably
  • Languages: Python, Perl, C#, Java, Bash, PowerShell
  • Technologies: Docker, Jenkins, Git, Tableau, AWS
  • Databases: Aurora, Redshift, PostgreSQL, SQL Server, MySQL
  • Machine Learning: TensorFlow, PyTorch, Caffe2, scikit-learn, Gluon with Apache MXNet, SageMaker
  • Spoken language: English

Experience

  • A strong cloud systems operations and networking background
  • Up to date with creating and operating AWS infrastructure in a Production environment: multi-account setups, EC2, Fargate, S3, VPCs, IAM, ELBs, Route 53, CloudWatch, CloudTrail, Lambda, Step Functions, API Gateway, EMR
  • Use of AWS CloudFront as a content delivery network
  • Working with CloudFormation or, preferably, Terraform to plan and create infrastructure as code
  • Working with Vagrant to manage environments
  • Working with large (exabyte-scale) volumes of data – exporting, transforming, migrating, sanitising, backing up, archiving, and retrieving, using both batches and streaming
  • Identifying and resolving bottlenecks and weaknesses in execution/process flows and pipelines
  • Working with and supporting other Engineers at the code and configuration level
  • Previously mentored and coached Developers and junior DevOps and SysOps staff
  • Scripting the hooks and the glue to ensure systems communicate with each other seamlessly
  • Active participation in technical forums
  • Familiarity with common security protocols and identity and access management systems
  • Architecture: PaaS, IaaS, distributed, event-driven, message-driven

To excel at the role, you’ll need to:

  • Code, script, and test when needed
  • Have strong practical experience using a wide variety of DevOps technologies and tools
  • Have experience with IT systems administration and operations
  • Have a strong sense of quality assurance
  • Be experienced with configuration management tools
  • Have a deep understanding of and experience with managing data and data pipelines
  • Be experienced with configuring, maintaining, and administering relational and non-relational databases
  • Understand the concepts of machine learning and predictive analytics, with hands-on practical experience
  • Be able to conceptualise logical approaches to complex problems
  • Have a keen attention to detail
  • Be intellectually curious and have a proactive approach towards self-learning
  • Be self-motivated to work efficiently and effectively
  • Have a great passion for delivering on commitments and ensuring business and user needs are met
  • Be comfortable with collaboration, open communication, and reaching across functional borders

To Apply

If you meet our requirements click here to start the application process.

We guarantee discretion to all candidates.