Garland K.

DEVOPS ENGINEER

$977
Freelancer
22 years of experience
San Francisco, United States

My experience


ManagedKube, August 2018 - Present

Building Kubernetes cloud infrastructures

DevOps Services LLC, January 2015 - Present

- International team of consultants with 10+ years of Docker/Kubernetes experience
- Work with multiple clients to develop the infrastructure needed to run large-scale, reliable applications
- Work both in San Francisco and remotely
- Developed the Managed Kubernetes Services product, a SaaS offering that runs the cluster and all supporting infrastructure for your application

HealthTap, February 2019 - Present

Kubernetes consultant. Moving the Chef-based infrastructure to a Terraform- and container-based infrastructure running on Kubernetes.

Leanplum, January 2018 - March 2019

- Worked with the client to architect and implement a plan that moved them from Google App Engine to a GKE environment.
- Worked with every team, from DevOps and engineering to Data Science, on how each would structure its applications and migrate live traffic to the new platform
- Educated the entire company on best practices for using the cloud and Kubernetes

Lucidworks, October 2017 - September 2018

We worked collaboratively with Lucidworks' management and development teams to map out the problem, establish what success would look like, and ultimately build the best solution to achieve that result. We answered key questions about how to build the infrastructure, such as:
- Do we go with a configuration management tool such as Chef, Puppet, or Ansible?
- Do we use Cloudformation or Terraform to build the infrastructure?
- What OS should we use?
- What does the development life cycle look like?

The most critical question we helped Lucidworks tackle was whether to build its infrastructure on a configuration management base or on Kubernetes. In 2017, many people still believed that using configuration management to create programmatic infrastructure as code was the best approach. However, we firmly believed that containers and Kubernetes were a better way of creating and managing infrastructure (a view the passage of time has borne out), and we were able to guide Lucidworks to building a highly scalable infrastructure on AWS.

Tillster, Inc., November 2016 - June 2018

- Worked with various product management teams on transforming their application from a Chef-based infrastructure to a Kubernetes infrastructure. This accomplished a few goals:

* Decreased cost: Prior infrastructure utilization was very low (~25%). Increased server utilization to 60% on a Kubernetes cluster by running multiple workloads on the same server, which allowed running fewer nodes in aggregate (see the resource-requests sketch after this list).

* Increased deployment reliability: Previously, the entire application team had to stand by during a deployment. With the new automated CI/CD pipeline, they can schedule when a deploy happens with only one non-technical person involved.

* Improved testability: Prior to this change, the application was only 50% testable. Local developers' machines looked nothing like production, and pre-production environments were only loosely similar to each other, a dangerous situation where some items could not be tested before deploying into production. Now the pre-production environments look exactly like the production environment, so every aspect of the app can be tested.
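
As a rough illustration of the utilization point above (not Tillster's actual manifests), here is a minimal sketch using the official Kubernetes Python client: per-container resource requests and limits are what let the scheduler pack multiple workloads onto the same node. The image, namespace, and numbers are placeholders.

```python
from kubernetes import client, config

# Assumes a kubeconfig is already present; everything below is illustrative.
config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="registry.example.com/web:1.0.0",          # placeholder image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the scheduler bin-packs against
        limits={"cpu": "500m", "memory": "512Mi"},    # hard per-pod ceiling
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="prod", body=deployment)
```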

Apteligent (acquired by VMware), January 2017 - December 2017

Apteligent was looking to move from a Chef-based infrastructure to a container-based infrastructure. Their existing system on AWS had a few problems: the Chef-based infrastructure was not well suited to continuous deployment, and the local development environment looked nothing like what was running in the cloud, largely because the cloud infrastructure was created by Chef and the local environment was not.

ManagedKube designed and implemented a Kubernetes-based infrastructure that addressed all of the issues of the old system. The new Kubernetes infrastructure is created entirely from code with a GitOps-style workflow, full CI/CD (continuous integration and continuous delivery) via Jenkins running in Kubernetes, and local development environments running the same containers as the cloud infrastructure with Docker Compose.
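
A condensed, hypothetical version of that pipeline's core loop is sketched below; in the real setup this ran as a Jenkins pipeline inside the cluster, and the registry, deployment, and namespace names here are placeholders.

```python
import subprocess

def sh(*cmd: str) -> None:
    """Run a command and fail loudly, the way a CI step should."""
    subprocess.run(cmd, check=True)

def build_push_deploy(git_sha: str) -> None:
    image = f"registry.example.com/app:{git_sha}"
    sh("docker", "build", "-t", image, ".")   # same Dockerfile that docker-compose uses locally
    sh("docker", "push", image)
    sh("kubectl", "--namespace", "prod",
       "set", "image", "deployment/app", f"app={image}")  # triggers a rolling update

build_push_deploy("abc1234")
```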

Guardant Health, January 2015 - June 2017

I designed the process and led the implementation team to copy local genome sequencing data to AWS via AWS Snowball and over the internet (700TB of total data moved at 2TB/day), taking great care to make these data transfers secure and reliable.
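
One piece of the "secure and reliable" part can be sketched as follows (a simplified illustration, not the production tooling; bucket and key names are placeholders): hash each file before it leaves the lab, carry the digest with the object, and verify after the transfer lands in S3.

```python
import hashlib
import boto3

s3 = boto3.client("s3")

def sha256_of(path: str) -> str:
    """Stream the file so large sequencing runs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8 * 1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload_with_checksum(path: str, bucket: str, key: str) -> None:
    s3.upload_file(
        path, bucket, key,
        ExtraArgs={
            "Metadata": {"sha256": sha256_of(path)},
            "ServerSideEncryption": "AES256",   # encrypted at rest on arrival
        },
    )

def verify(bucket: str, key: str, local_path: str) -> bool:
    remote = s3.head_object(Bucket=bucket, Key=key)["Metadata"]["sha256"]
    return remote == sha256_of(local_path)
```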

We designed and built infrastructure with:
- Multi-region data-at-rest design with rules to age data out to lower-tiered storage, making it cost effective to hold this amount of data (see the lifecycle sketch after this list)
- Automation constructs for how specific data can be retrieved and loaded into an AWS compute environment to re-run the data pipeline on it
- A Kubernetes platform for running web-type workloads for development and production environments on AWS
- Full monitoring, logging, and visibility packages
- A fully automated CI/CD pipeline to build and test the software, containerize it, perform integration tests, and run a deployment sequence into dev, qa, staging, and production
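
The aging-out rule in the first bullet can be expressed as an S3 lifecycle configuration. A minimal boto3 sketch, with the bucket name, prefix, and day counts as placeholders rather than the real policy:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="genome-archive",                   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-raw-runs",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after a month
                {"Days": 180, "StorageClass": "GLACIER"},     # archive after six months
            ],
        }],
    },
)
```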

Vungle.com, January 2015 - May 2017

* Converted existing applications to run in Docker containers and developed workflows for how to develop, test, and deploy these containers

* Key team member architecting and deploying Kubernetes into production on AWS: an all-CoreOS cluster deployed with CloudFormation and Ansible, with Kubernetes on top. Fully externally exposed Kubernetes Ingress (Nginx load balancers) for a high volume of inbound traffic. Used Kubernetes namespaces to separate compute resources for prod, dev, and test (see the sketch after this list)

* Deployed a large ZooKeeper and Kafka cluster on CoreOS using Docker containers, with Ansible for deployment and maintenance

* Utilized Datadog for system and metric monitoring of all aspects of the various clusters (Kubernetes, ZooKeeper, Kafka)
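
The namespace separation mentioned above boils down to creating the namespaces and attaching resource quotas. A minimal sketch with the official Kubernetes Python client; the quota numbers are placeholders, not Vungle's real limits:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# One namespace per environment, each with its own compute budget.
for ns, cpu, mem in [("prod", "64", "256Gi"), ("dev", "16", "64Gi"), ("test", "8", "32Gi")]:
    core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))
    core.create_namespaced_resource_quota(
        namespace=ns,
        body=client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name=f"{ns}-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={"requests.cpu": cpu, "requests.memory": mem},
            ),
        ),
    )
```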

Wercker, January 2016 - February 2017

* Maintained a custom-built Kubernetes cluster on AWS via CloudFormation

* Added features to the Kubernetes cluster, such as a separate Docker daemon for normal cluster functions versus external-facing build pipeline usage (sketched below), plus metrics monitoring, testing, and overall maintenance of the entire cluster
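
A hedged sketch of the two-daemon idea (the socket path and image tag are assumptions, not Wercker's actual configuration): the docker CLI honors DOCKER_HOST, so external build jobs can be pointed at a dedicated daemon while cluster components keep using the default one.

```python
import os
import subprocess

BUILD_DAEMON = "unix:///var/run/docker-builds.sock"  # assumed socket of the second daemon

def run_build(image_tag: str, context_dir: str) -> None:
    """Run an external-facing build against the dedicated daemon, not the cluster's."""
    env = dict(os.environ, DOCKER_HOST=BUILD_DAEMON)
    subprocess.run(["docker", "build", "-t", image_tag, context_dir],
                   check=True, env=env)

run_build("registry.example.com/pipeline/app:abc123", "./app")
```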

Palerra, August 2014 - February 2016

* Refactored the entire infrastructure buildout and deployment. It started as a collection of loosely tied-together scripts to perform a system build and deploy; transitioned it to an AWS CloudFormation template to create the complex environment and Ansible to perform system configuration and updates (see the sketch at the end of this entry)

* Fully automated the entire system build out

* Big data with DataStax (Cassandra, Analytics, Solr) clusters

* CoreOS cluster running custom application for large data ingestion. Everything is fully Dockerized and automated on this cluster

* Being an information security company, the environments and build process have a lot of security built in: separate AWS accounts for production and development environments, AWS HSM for key management, secure VPN access into environments, tight identity management controls, and rigorous auditing of environment logs and access
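
The CloudFormation-plus-Ansible flow from the first bullet, reduced to a sketch (stack, template, inventory, and playbook names are placeholders): CloudFormation builds the environment, a waiter blocks until it exists, and Ansible then does system configuration as a separate, repeatable step.

```python
import subprocess
import boto3

cfn = boto3.client("cloudformation")

def build_environment(stack: str, template_path: str) -> None:
    with open(template_path) as fh:
        cfn.create_stack(
            StackName=stack,
            TemplateBody=fh.read(),
            Capabilities=["CAPABILITY_IAM"],   # the template creates IAM roles
        )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack)
    # Configuration management runs against the freshly created hosts.
    subprocess.run(["ansible-playbook", "-i", "inventory", "site.yml",
                    "--limit", stack], check=True)

build_environment("palerra-dev", "templates/environment.json")
```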

Tout, November 2013 - July 2014

* Improved the Opscode Chef environment by introducing cookbook versioning, moving key/values into cookbook attributes and data bags, and refactoring most cookbooks (upwards of 30 different unit types) to follow best practices.

* Refactored and extended current deployment system using Python and Fabric.

* Refactored deployment framework to support arbitrary environments. Code base originally assumed only prod, stage, qa, and test environments.

* Introduced Docker containers and CoreOS as the application platform

* Rolled out Sumo Logic log monitoring for all units and environments. Created dashboards that give business insights and performance metrics into the Tout.com video publishing workflow

* Extensive automation on Amazon AWS utilizing the APIs for these services: EC2, ELB, EIP, ASG, LC, CloudFormation, S3, DynamoDB, CloudFront, RDS, Elastic MapReduce (sketched below)
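
For flavor, a modern boto3 equivalent of that kind of automation (the original work used the AWS APIs of the time; the tag values and group name here are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

def instances_for(environment: str) -> list:
    """Return running instance IDs tagged with the given environment."""
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:environment", "Values": [environment]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [i["InstanceId"]
            for r in resp["Reservations"] for i in r["Instances"]]

def scale(group_name: str, desired: int) -> None:
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=desired,
        HonorCooldown=True,
    )

print(instances_for("prod"))
scale("tout-encoder-asg", 6)
```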

Algorithms.io, March 2011 - November 2013

* Acquired by http://lumendata.com

* Development of frontend and backend systems

* Webpage and dashboard design and development: http://www.algorithms.io/

* Backend development: created RESTful APIs with the PHP Zend Framework, including authentication and paid-subscriber billing

* DevOps: automation of all servers in Rackspace and AWS with a full 3-tier web architecture using Puppet and Ansible

* Big Data: created an automation framework that lets users run Hadoop MapReduce jobs from a web interface, automating everything needed to ingest data, process it, and deliver the end results (see the sketch after this list)

* Set up QA test frameworks and processes with Frisby.js and Selenium
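
The Big Data bullet above amounts to launching a Hadoop job on demand from the web tier. A hedged, modern boto3 sketch of that idea (the original system predates boto3, so every name, instance type, and parameter below is illustrative):

```python
import boto3

emr = boto3.client("emr")

def run_user_job(input_uri: str, output_uri: str) -> str:
    """Spin up a throwaway cluster, run one MapReduce step, then terminate."""
    resp = emr.run_job_flow(
        Name="user-mapreduce-job",
        ReleaseLabel="emr-6.15.0",
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": False,   # cluster terminates when the step finishes
        },
        Steps=[{
            "Name": "process-user-data",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "s3://placeholder-bucket/jobs/process.jar",  # placeholder MapReduce jar
                "Args": [input_uri, output_uri],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    return resp["JobFlowId"]
```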

IneoQuest Technologies, Inc., June 2009 - March 2011

· Worked with customers (large ISPs, Telcos, rural IPTV providers, etc.) to design, deploy, and tune their IPTV monitoring solutions
· Took the lead role in troubleshooting video performance problems caused by network, encoder/transcoder, and encryptor issues
· Custom automation using Perl, PHP, Selenium, and TCL
· Built internal proofs of concept for various projects, from cobbling open source tools together to writing custom web applications
· Performed training sessions with customers on how to use our system effectively and on advanced metrics monitoring

Cisco Systems, October 2008 - June 2009

· Designed and implemented a VMware virtualization environment: HP Blade Centers and Cisco UCS servers, with a Fibre Channel SAN on Promise VTrak and an EMC Clariion CX4-240 array
· Designed and implemented the first production VMware View virtual desktop environment on Cisco UCS servers
· Implemented the Zyrion Traverse monitoring system with custom automation plugins for corner automation/monitoring cases
· Implemented a Splunk server taking in syslog, CDRs, IOS configs for config management, debug output from automation, etc.
· Monitored the network using Lancope StealthWatch for NetFlow data
· Custom CDR parsing and reporting for Call Managers
· Prototyped new software to solve business needs
· Managed over 20 business unit customers with all their network application/automation needs
· Managed a team of 3 people
· Utilized Cisco Nexus 7000, Cisco MDS 9000, and FCoE
· Provided custom solutions to the business units to solve their network and business needs
· Used various technologies, tools, and automation to fulfill business units' needs in network management, monitoring, virtualization, and automation
· Gained valuable experience working with over 20 Cisco business units, such as Webex and ASG

ArcSight, October 2006 - October 2008

· Verified ArcSight ESM solution content, including PCI, SOX, user monitoring, and insider threat content; wrote custom rules to populate Active Lists
· Implemented ArcSight ESM on Linux, Solaris, Windows, and AIX platforms
· Installed and verified various ArcSight Connectors
· Verified ArcSight Logger functionality
· Wrote test plans for the NSP product line, including both manual and automated test plans
· Created Perl modules to perform various NSP functions and used them to implement a flexible automated test environment
· Created test cases using SILK to test the NSP product running in a real web browser (Firefox and IE 6)
· Created a web front end to save test cases, run them, and collect results from SILK tests running on Windows machines; the front end is written in PHP and calls command-line functions to push and execute tests on remote SILK Windows machines

Spirent Communications, March 2004 - October 2006

· Provided onsite custom testing services
· Pre-sale: worked with customers to identify their testing needs and sold them on ideas for how we could test them, which helped close multiple deals in the $30K-$200K range
· Post-sale: worked further with the customer to complete the agreed-upon testing, gaining sign-off on the project and upselling other products and professional services
· On-site execution of the test plans; testing could last anywhere from 1 to 3 weeks
· Created Statements of Work prior to each engagement and wrote final reports afterward with details on the testing and analysis of the results
· Worked with leading magazines (Network World and InfoWorld), forums, and standards groups on writing test plans and performing tests with bleeding-edge technology
· Performed functional and performance testing on softswitches, media gateways, and triple-play networks running various protocols (SIP, H.323, H.248/MEGACO, MGCP, SCCP, T1, E1, SS7, IPTV, HTTP, SMTP, etc.)
· Performed interoperability tests between different vendors using the same protocols: using the specifications from the RFCs, determined which vendor was not following which section and what needed to change for the two devices to work together

Choice Transcriptions, April 2003 - March 2004

· Supported all network operations: routed network, firewalls, network resources, security, VPN, Secure FTP, and PCs
· Assisted remote medical offices connecting into our network and set up email encryption
· Implemented new technologies to become HIPAA compliant
· Designed a dictation system accessible by phone, web, or PDA

Sprint, December 2000 - November 2001

· Ran protocol conformance tests (IS-IS, OSPF, BGP) on Avici, Cisco, and Juniper OC-192 routers
· Worked closely with Principal Engineers in testing the Alcatel 1640 SONET DWDM system (OC-48 and OC-192 modules)
· Supported internal lab network connectivity spanning 6 buildings (T1, ATM, Fast Ethernet, POS) plus access servers, and designed IP schemes
· Monitored power usage on DC-powered network equipment linked to backup batteries (10,000W)

DeVry University, June 2000 - March 2001

· Instructed Linux in a hands-on lab environment
· Taught students with no previous experience with the OS to perform administrative functions
· Set up and administered multiple Windows NT 4.0 and Linux (Red Hat 6.2) servers with over 50 PCs in the domain; the Windows NT server ran DHCP, WINS, and web server services, while the Linux box ran BIND 9 for DNS (caching and recursive lookups) plus Samba and NFS for storage
· Resolved problems with DeVry's T1 connection and enhanced security of the border router

Speedera, October 2000 - December 2000

· Monitored web and streaming servers co-located with different ISPs around the world using HP OpenView and internally developed probing tools on Linux servers
· Troubleshot and resolved issues with ISPs and production servers
· Technical customer support

Auto Town, May 1999 - June 2000

· Isolated and troubleshot an enterprise LAN connecting car dealerships around the Bay Area via Frame Relay
· Helpdesk duties, including setting up NT 4.0 servers for internal applications and assisting end users with Outlook, the Office suite, network connections, etc.

My stack

Zend Framework, WINS, WINDOWS NT 4, Windows, WebEx, VMWare, Virtualization, Terraform, TCL, SS7, Spark, Solr, Solaris, SMTP, SIP, Selenium, Samba, SaaS, RESTful API, RedHat, Rackspace Cloud, Python, POS, PHP, Perl, OSPF, Nginx, NFS, MGCP, Linux, Kubernetes, Juniper, ISIS, IPTV, HTTP, Hadoop, Google Cloud Platform (GCP), Golang, GitOps, Ethernet, DynamoDB, DWDM, Docker Compose, Docker, DNS, DevOps, DataDog, Data Science, CoreOS, Cloud Computing, Clariion, Cisco Nexus, Cisco, Cassandra, Bind9, Big Data, BGP, Azure Cloud, AWS, ATM, Apache Kafka, Ansible, Amazon Web Services S3, Amazon Web Services EC2, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Elastic Load Balancer (ELB), Amazon CloudFront, Amazon CloudFormation, AIX