Staff Database Infrastructure Engineer
Calix
Staff Database Infrastructure Engineer: Cloud Platform
This is a remote position in the US.
We are the Cloud Platform Engineering team at Calix, responsible for the company's platforms, tools, and CI/CD pipelines. Our mission is to enable Calix engineers to accelerate the delivery of world-class products while ensuring the high availability, scalability, and reliability of our infrastructure.
We are seeking a skilled and experienced Staff Database Infrastructure Engineer – Cloud Platform with expertise in Google Cloud Platform (GCP) to join our team. The ideal candidate will have a strong background in database administration, cloud infrastructure automation, and SQL query optimization. This role involves collaborating with development, data engineering, and operations teams to design, implement, and maintain scalable and reliable data pipelines and infrastructure.
Responsibilities:
- Design/rearchitect our infrastructure platform components to be highly available, scalable, reliable, and secure, with a strong focus on database-backed services.
- Own and manage database infrastructure including BigQuery, Redis, AlloyDB, and Cassandra, ensuring high performance, availability, and cost-efficiency.
- BigQuery Management: Design, implement, and optimize datasets, tables, and queries for large-scale analytics.
- Monitor and troubleshoot database performance and query optimization, including cost management and efficient data storage techniques.
- Manage and monitor data ingestion pipelines using tools like Dataflow or Kafka.
- Ensure data security, access control, and compliance for all managed database platforms, including IAM policies and encryption.
- Manage schema changes and data migrations using automation tools such as Liquibase or similar.
- Ensure observability is an integral part of the infrastructure platforms, providing adequate visibility into health, utilization, and cost—especially across database workloads.
- Implement Infrastructure as Code (IaC) using tools such as Terraform/Terragrunt.
- Build tools that predict resource saturation and failures, and take preventive action through automation.
- Collaborate extensively with cross-functional teams to understand data access patterns and infrastructure requirements; educate them through documentation and training, and drive adoption of the platforms and tools.
Qualifications:
- Bachelor’s degree in Computer Science or equivalent.
- 8+ years of experience in building large-scale distributed systems in an always-available production environment.
- 5+ years of experience building Infrastructure Platforms and CI/CD pipelines in a major public cloud provider – GCP preferred; hands-on expertise in commonly used Cloud infrastructure/platforms services.
- Deep expertise in AlloyDB, BigQuery, or similar database technologies.
- Experience managing large-scale data pipelines and stream processing systems using tools like Dataflow, Kafka, or Pub/Sub.
- Strong programming skills in Python, Shell/Bash, or similar scripting languages.
- Fast learner with the ability to troubleshoot complex scenarios while processing large volumes of data (terabytes to petabytes).
- Hands-on experience with observability platforms/tools like Grafana/Prometheus.
- Experience coaching and mentoring junior engineers; strong verbal and written communication skills.
- GCP certification is a plus.
#LI-Remote
The base pay range for this position varies based on the geographic location. More information about the pay range specific to candidate location and other factors will be shared during the recruitment process. Individual pay is determined based on location of residence and multiple factors, including job-related knowledge, skills and experience.
San Francisco Bay Area:
156,400 - 265,700 USD Annual
All Other US Locations:
As part of the total compensation package, this role may be eligible for a bonus. For information on our benefits, click here.