Data DevOps Engineer (Remote)
Remote, Vancouver, BC
Tucows (NASDAQ:TCX, TSX:TC) is on a mission to make the Internet better by providing people everywhere with online access that empowers them to make individual contributions. As a company, we embrace a people-first philosophy rooted in respect, trust, and understanding to encourage freedom, inspire innovation, and promote inclusivity, creating an environment for everyone to thrive!
Tucows has been working on the Internet since the days when people unironically called it the Information Superhighway. We are a 25-year-old global start-up, embracing agility and creativity to continually seize opportunities for growth. We have evolved from a start-up domain service provider into the second-largest domain wholesaler in the world, while expanding our business with Ting, an Internet services company partnering with towns and cities to change what customers expect from their Internet Service Provider. We are building fiber networks across the US and have already launched Gigabit-speed service in Maryland, Virginia, North Carolina, Colorado, Idaho and California, laying the groundwork for rapid scale.
Our growth has been incredible, smart, and measured, built on a solid technical and financial foundation. We have doubled our workforce in the last four years and continue to expand rapidly, providing services to millions of customers around the world.
About the role:
At Tucows, data is essential to the organization. We are in the midst of building a world-class analytics, reporting, and data science platform, and the Data Team is looking for a DevOps engineer with a strong system administration background to join us. The team is currently building its platform around the Airflow, DBT, Kafka, and Snowflake stack.
As part of our team, you will contribute to several areas, including the deployment process, testing, platform integration, performance and capacity management, and disaster recovery.
In this role, you can expect to:
- Help facilitate agile development by using CI/CD practices to build and run the pipelines that manage and scale our growing data practice.
- Own and ensure the availability, performance, scalability and security of the data platforms.
- Work with the data teams to implement modern development standards and tools.
- Work with the data engineering team to architect solutions for data applications.
- Contribute back to upstream OSS when appropriate.
You may be a good fit for our team if you have:
- Comfort with object-oriented programming languages such as Python or Java.
- Experience operating and maintaining production systems in a Linux computing environment.
- Experience with configuration management tools (SaltStack, Ansible, Chef, Puppet, etc.).
- Experience with Infrastructure as Code principles and standard methodologies.
- Experience with cloud computing platforms such as Amazon AWS and OpenStack, and an understanding of scaling and reliability concerns.
- Experience with data orchestration tools such as Airflow.
- Experience with Kubernetes or other container orchestration.
- Excellent verbal and written communication skills.
- Ability to learn quickly and comprehend new technologies.
- Strong organizational and interpersonal skills, with an ability to build relationships.
- The ability to work effectively within a team environment.
Nice-to-have skills and experience with:
- Elastic Stack
- HashiCorp tools such as Terraform, Nomad, Vault, Consul
- Predictive and machine learning model deployment.
We believe diversity drives innovation. We are committed to inclusion across race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status or disability status. We celebrate multiple approaches and diverse points of view.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.