CME Group: Where Futures Are Made
CME Group is the world’s leading and most diverse derivatives marketplace. But who we are goes deeper than that. Here, you can impact markets worldwide, transform industries, and build a career shaping tomorrow. We invest in your success and you own it, all while working alongside a team of leading experts who inspire you in ways big and small. Joining our company gives you the opportunity to make a difference in global financial markets every day, whether you work on our industry-leading technology and risk management services, our benchmark products, or in a corporate services area that helps us serve our customers better. We’re small enough for you and your contributions to be known, but big enough for your ideas to make an impact. The pace is dynamic, the work is unlike any other firm in the business, and the possibilities are endless. Problem solvers, difference makers, trailblazers. Those are our people. And we’re looking for more.
To learn more about what a career at CME Group can offer you, visit us at www.wherefuturesaremade.com.
Role Overview:
A crucial role in CME’s cloud data transformation, the Data SRE will be aligned to data product pods, ensuring the firm’s data infrastructure remains reliable, scalable, and efficient as the GCP data footprint expands rapidly. Responsibilities include optimizing data pipelines, ensuring data integrity and consistency, enhancing system resiliency where applicable, maintaining and improving data security, providing proactive alerting and monitoring for data pipelines, and automating repetitive data-oriented tasks.
Accountabilities:
- 6+ years of experience related to the role.
- Ability to work independently.
- Automate data tasks on GCP
- Work with data domain owners, data scientists and other stakeholders to ensure that data is consumed effectively on GCP
- Design, build, secure and maintain data infrastructure, including data pipelines, databases, data warehouses, and data processing platforms on GCP
- Measure and monitor the quality of data on GCP data platforms
- Implement robust monitoring and alerting systems to proactively identify and resolve issues in data systems. Respond to incidents promptly to minimize downtime and data loss.
- Develop automation scripts and tools to streamline data operations and make them scalable to accommodate growing data volumes and user traffic.
- Optimize data systems to ensure efficient data processing, reduce latency, and improve overall system performance.
- Collaborate with data and infrastructure teams to forecast data growth and plan for future capacity requirements.
- Ensure data security and compliance with data protection regulations. Implement best practices for data access controls and encryption.
- Collaborate with data engineers, data scientists, and software engineers to understand data requirements, troubleshoot issues, and support data-driven initiatives.
- Continuously assess and improve data infrastructure and data processes to enhance reliability, efficiency, and performance.
- Maintain clear and up-to-date documentation related to data systems, configurations, and standard operating procedures.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science or related field, or equivalent practical experience
- Proven experience as a Data Site Reliability Engineer or a similar role, with a strong focus on data infrastructure management
- Good understanding of SRE practices.
- Proficiency in data technologies, such as relational databases, data warehousing, big data platforms (e.g., Hadoop, Spark), data streaming (e.g., Kafka), and cloud services (e.g., AWS, GCP, Azure).
- Strong programming skills in languages such as Python (NumPy, pandas, PySpark), Java (core Java, Spark with Java, functional interfaces, lambdas, Java collections), or Scala, with experience in automation and scripting.
- Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
- Experience with data governance (Dataplex), data security, and compliance best practices on GCP
- Solid understanding of software development methodologies and best practices, including version control (e.g., Git) and CI/CD pipelines.
- Strong background in cloud computing and data-intensive applications and services, with a focus on Google Cloud Platform
- Experience with data quality assurance and testing on GCP
- Proficiency with GCP data services (BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage)
- Strong understanding of logging and monitoring using tools such as Cloud Logging, ELK Stack, AppDynamics, New Relic, Splunk, etc.
- Knowledge of AI and ML tools is a plus
- Google Associate Cloud Engineer or Data Engineer certification is a plus
- 2+ years of experience in data engineering or data science on GCP
More Information
- Address: 20 S Wacker Dr, Chicago, IL, USA
- Salary Offer: $100,000+
- Experience Level: Senior
- Total Years Experience: 5-10