Reporting directly to the CTO, I am in charge of all the infrastructure at LALALAB. and of the whole migration from OVH to AWS (EKS). My main missions are split between maintaining the old stack and developing the new infrastructure. As LALALAB. has a B2C business model, we are always on call and always available to fix the service.
Old stack:
=> Ensure 24/7 uptime of the API
=> Maintain an old legacy monolith server at OVH, running many different services: PHP, Apache2, nginx, Fresque, Logstash, Redis, Rundeck, Datadog agents...
=> Monitor the server with Datadog (Fresque workers, Redis, ...)
=> Develop small hotfixes in PHP for minor issues
=> Maintain the main SQL database (Aurora)
New stack:
=> Manage, monitor and improve several Kubernetes clusters to migrate the old PHP code to new Node.js services (EKS, Datadog, Prometheus, Grafana)
=> Deploy a robust solution for Magento (inside Kubernetes, multi-zone availability)
=> Take care of all the backups and the failover strategy
=> Take care of all the data in our company (GDPR) using S3, Glacier, etc.
=> Put in place all the CI/CD pipelines to the different Kubernetes clusters
=> Develop bash scripts to automate many repeatable actions (a Python variant is sketched after this list)
=> Maintain all the core components running in production inside Kubernetes (more than 150 containers): Elasticsearch, Logstash, RabbitMQ, Node.js and Python servers, cronjobs, etc.
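Many of those repeatable actions boil down to querying cluster state. As a minimal sketch, assuming the official kubernetes Python client rather than the bash used in practice, a check that flags pods stuck outside the Running phase could look like this:

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config; inside a pod, use
# config.load_incluster_config() instead.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    # Report anything not in the Running phase (Pending, Failed, ...).
    if pod.status.phase != "Running":
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```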
Data:
=> Maintain the main data warehouse (Redshift)
=> Develop Python scripts to consolidate the data between several databases (see the sketch after this section)

Management:
=> Scrum
=> Asana
=> Weekly meetings
=> Team of 3 people
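A minimal sketch of such a consolidation script, assuming psycopg2 and purely hypothetical hosts, tables and columns; Redshift speaks the PostgreSQL wire protocol, so the same driver covers both ends:

```python
import psycopg2  # pip install psycopg2-binary

# Hypothetical connection strings for the source database and the warehouse.
src = psycopg2.connect("host=aurora.internal dbname=shop user=etl")
dst = psycopg2.connect("host=redshift.internal dbname=analytics user=etl")

# Pull recent rows from the source...
with src.cursor() as cur:
    cur.execute("SELECT id, total, created_at FROM orders WHERE created_at >= %s",
                ("2019-01-01",))
    rows = cur.fetchall()

# ...and replay them into the warehouse fact table.
with dst.cursor() as cur:
    cur.executemany(
        "INSERT INTO fact_orders (id, total, created_at) VALUES (%s, %s, %s)",
        rows,
    )
dst.commit()
```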
Conferences:
=> Speaker @POSS2019
=> Speaker @CNCF meetups
=> Attended AWS re:Invent 2019 (Las Vegas)
Being an ops engineer is quite interesting, and being responsible for production is key. However, I did not have a long-term project that was challenging enough. I wanted to be part of the amazing Everoad adventure as more than just a DevOps engineer, so I joined the Growth and Performances team @Everoad to build the company's first data warehouse.
Also, I am still the only DevOps engineer in the company, which allows me to hold two important roles and contribute even more to the development of this beautiful startup.
Owner of the pricing algorithm:
=> Update the algorithm to bring it closer to reality
=> Develop tools to track the publication/acceptance ratio (see the sketch after this list)
=> Develop and maintain a decision algorithm that adjusts the pricing algorithm
=> Add new metrics to get closer to reality (load factor, huge loads, ...)
=> Centralise input from all departments to stay up to date on seasonality rates, lack of supply, etc.
=> Document all the Python scripts
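As an illustration of that tracking, the core of the ratio computation is tiny; this sketch assumes a hypothetical shipment schema:

```python
def acceptance_ratio(shipments):
    """Share of published shipments that a carrier accepted.

    `shipments` is a list of dicts with boolean "published" and
    "accepted" keys (hypothetical schema).
    """
    published = [s for s in shipments if s["published"]]
    if not published:
        return 0.0
    accepted = [s for s in published if s["accepted"]]
    return len(accepted) / len(published)

# Example: 3 published shipments, 2 accepted -> ratio of about 0.67.
print(acceptance_ratio([
    {"published": True, "accepted": True},
    {"published": True, "accepted": True},
    {"published": True, "accepted": False},
    {"published": False, "accepted": False},
]))
```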
Owner of the data warehouse automation:
=> Develop Python scripts to extract the data from many different sources (Salesforce, Mongo, SQL databases...)
=> Install and maintain the data infrastructure (Airflow, BigQuery); a minimal DAG sketch follows this list
=> Give the teams a way to query the data warehouse directly (restricting access to some datasets...)
=> Provide internal training sessions on querying our data warehouse
=> Produce usable tables for our data visualisation artist (Data Studio)
=> Team management (3 people)
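A minimal sketch of one such pipeline, assuming Airflow 2.x import paths; the DAG name is hypothetical and the real extraction and loading logic is elided:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator  # Airflow 2.x path


def extract_salesforce():
    ...  # pull records from the Salesforce API and stage them as files


def load_bigquery():
    ...  # load the staged files into a BigQuery dataset


with DAG(
    dag_id="salesforce_to_bigquery",  # hypothetical name
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_salesforce)
    load = PythonOperator(task_id="load", python_callable=load_bigquery)
    extract >> load  # load only runs after a successful extraction
```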
Participate in product improvement:
=> Input data requirements into project design documents
=> Think about the metrics that our operational managers and teams will need
=> Track new metrics
Product discovery:
=> POC temporary ad hoc tools for our operational team
=> POC geolocation systems with our carriers
As the first DevOps engineer of the company, my role is closely linked to every subject in the domain, from managing the whole infrastructure to its security aspects, including CI/CD, backup automation and so on.
Reporting directly to the CTO, I actively participate in the evolution of the platform and in its day-to-day operation. My autonomous position allows me to take important decisions and to own the management part of my work.
Owner of all the cloud infrastructure:
=> Migration from Heroku to Kubernetes (GCP)
=> Dockerise all the applications
=> Reduce the number of SaaS services by bringing them inside Kubernetes or GCP
=> Installation and management of all the databases (Elasticsearch, MongoDB, Prometheus, PostgreSQL...)
=> Installation and management of all the monitoring (Grafana)
=> Installation and management of all the logging (Bunyan/Kubernetes logs)
=> Implementation of a release manager for Kubernetes (Helm)
=> Management of all the backups and restore tests (a backup sketch follows this list)
=> Management of all the DNS (GoDaddy, OVH, ...)
=> Writing technical documentation to onboard new people and explain how to troubleshoot the Kubernetes clusters
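As a minimal sketch of one such backup job, assuming mongodump on the PATH, the google-cloud-storage client, and a hypothetical bucket name; a restore test would replay the same archive into a scratch cluster with mongorestore:

```python
import subprocess
from datetime import datetime, timezone

from google.cloud import storage  # pip install google-cloud-storage

stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
archive = f"/tmp/mongo-{stamp}.gz"

# Dump the whole MongoDB instance into one gzipped archive file.
subprocess.run(["mongodump", f"--archive={archive}", "--gzip"], check=True)

# Upload the archive to a GCS bucket (bucket name is hypothetical).
bucket = storage.Client().bucket("acme-db-backups")
bucket.blob(f"mongo/{stamp}.gz").upload_from_filename(archive)
```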
Improve the tools for the developers:
=> Migration from GitHub to GitLab (self-hosted)
=> Management of the continuous integration of all the microservices (GitLab runners inside Kubernetes)
=> Management of the continuous deployment of all the microservices (push to production with one button)
=> Development of ephemeral environments called "review apps", deploying the whole stack on demand for product demonstrations (a deployment sketch follows this section)

Owner of the cloud security:
=> SSL certificates
=> Installation and management of Vault
=> Internal security presentations
=> Management of the firewalls
=> Management of the RBAC rules inside Kubernetes
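A minimal sketch of the review-app deployment step, assuming Helm 3 and a hypothetical chart layout and domain; in the real setup a GitLab CI job would invoke this once per merge request:

```python
import re
import subprocess


def deploy_review_app(branch: str, image_tag: str) -> None:
    """Install or upgrade one ephemeral release per branch (hypothetical chart)."""
    # Turn the branch name into a DNS-safe slug for the release and namespace.
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")[:40]
    subprocess.run(
        [
            "helm", "upgrade", "--install", f"review-{slug}", "./chart",
            "--namespace", f"review-{slug}", "--create-namespace",
            "--set", f"image.tag={image_tag}",
            "--set", f"ingress.host={slug}.review.example.com",  # hypothetical domain
        ],
        check=True,
    )


deploy_review_app("feature/new-checkout", "abc1234")
```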
Taking part in the future:
=> GDPR compliance
=> ISO security certifications
=> Technical meetings to scale up the infrastructure
Mobile Devices Ingenierie, February 2017 - August 2017
My goal was to come up with a plan, within six months, to start migrating all the legacy infrastructure to a container-oriented infrastructure running on Kubernetes.
Main missions:
=> Benchmark all the possible installations for deploying a new cluster on Debian, CentOS, GCE, AWS, Ubuntu, CoreOS, etc.
=> Benchmark the easiest ways to deploy a k8s cluster (own scripts, Kargo, ...)
=> Administrate all the clusters (running Kafka, databases, private containers, Redis, ...)
=> Present the available solutions to the server team
=> Follow the main Kubernetes project (released every 4-6 months at the time)
=> Set up dynamic provisioning and stateful apps
=> Find an easy way to expose pods to the outside world (external LB); a sketch follows this list
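As a sketch of the simplest option, assuming the official kubernetes Python client and a hypothetical app label: a Service of type LoadBalancer asks the cloud provider to allocate an external load balancer in front of the selected pods:

```python
from kubernetes import client, config

config.load_kube_config()

# A LoadBalancer Service makes the cloud provider allocate an external IP
# that forwards traffic to the pods matching the selector.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-lb"),  # hypothetical name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "demo"},  # hypothetical pod label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```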
As the first Continuity Plan Officer of the company, I had to set up its first Business Continuity Plan and Disaster Recovery Plan.
Main tasks I handled during the 4 months:
=> Interview key users (around 50 people)
=> Analyze the services provided by the IT team
=> Determine a list of critical services and the RTO linked to each of them
=> Analyze all the different backup strategies (SQL, Veeam, ...)
=> Build a dashboard to follow the KPIs linked to the BCP/DRP (RTO, RPO, etc.); a small RPO sketch follows this list
=> Run new projects to improve those KPIs
=> Write specifications for an internal application to follow the RPO in real time (double Scrum role: Product Owner and Scrum Master)
=> Give a few presentations to summarize my work and present the new objectives
=> Prepare new projects, tests, etc. to improve the BCP/DRP
=> Set up the first Business Continuity and Disaster Recovery Plan for all Amaris offices
=> Organize a business continuity test and a crisis test
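For the dashboard, the live RPO reading reduces to simple arithmetic; a minimal sketch (the function name is mine):

```python
from datetime import datetime, timezone


def rpo_minutes(last_backup: datetime, now: datetime) -> float:
    """Worst-case data loss right now: minutes since the last good backup."""
    return (now - last_backup).total_seconds() / 60


# Backup at 02:00 UTC, measured at 14:30 UTC -> 750 minutes of exposure.
print(rpo_minutes(datetime(2017, 3, 1, 2, 0, tzinfo=timezone.utc),
                  datetime(2017, 3, 1, 14, 30, tzinfo=timezone.utc)))
```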
General Engineer, Information Technology - Montreal University, 2016 - 2017
Engineering, Computer Science - National Institute of Applied Sciences of Lyon, 2014 - 2017
University Diploma in Technology, Computer Science - Nice Sophia Antipolis University, 2012 - 2014