- I work as a Data Engineer on an IoT project, where measurement data from pools is read from multiple transactional databases into a Delta Lake in Azure.
- The Databricks Delta Lake workspace contains multiple notebooks with Spark code that ingest data from transactional MySQL databases and process it into outputs later used in Power BI reports.
- I build the ETL pipelines and job schedules using PySpark, create the data sources needed for the Power BI reports, and also help create and deploy the Power BI reports in the online service.
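The core pattern behind an incremental ingestion job like this can be sketched in a few lines. This is a hypothetical, simplified illustration only: the real pipeline uses PySpark and Delta tables, while here `sqlite3` stands in for the MySQL source and the table/column names (`measurements`, `measured_at`, etc.) are invented for the example.

```python
import sqlite3

# Illustrative sketch: read only rows newer than the last processed watermark
# from a transactional source. sqlite3 stands in for MySQL; the real pipeline
# does this with PySpark reading into Delta tables.
def read_new_measurements(conn, last_watermark):
    cur = conn.execute(
        "SELECT pool_id, measured_at, ph, temperature "
        "FROM measurements WHERE measured_at > ? ORDER BY measured_at",
        (last_watermark,),
    )
    rows = cur.fetchall()
    # The new watermark is the latest timestamp seen, or the old one if no rows.
    new_watermark = rows[-1][1] if rows else last_watermark
    return rows, new_watermark

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE measurements (pool_id INT, measured_at TEXT, ph REAL, temperature REAL)"
)
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?, ?, ?)",
    [
        (1, "2024-01-01T10:00", 7.2, 27.5),
        (1, "2024-01-01T11:00", 7.1, 27.8),
        (2, "2024-01-01T12:00", 7.4, 26.9),
    ],
)

rows, wm = read_new_measurements(conn, "2024-01-01T10:30")
print(len(rows), wm)  # 2 new rows; watermark advances to the latest timestamp
```

Tracking a watermark per source table keeps each scheduled run cheap, since only new rows are moved into the lake.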
- Worked as a Data Architect to create the data model and ETL processes, and to outline the documentation template for migrating data stored in an old dBASE database to SQL Server 2016.
- Created custom T-SQL scripts and migration pipelines automated with Azure DevOps, allowing versioned T-SQL scripts to be run to create specific versions of databases for the application developers.
- Outlined the general data model, set up documentation best practices using Enterprise Architect, and created T-SQL templates callable from SQLCMD that could be "injected" with table-specific migration logic.
- Handled client discussions and discovery sessions, and laid out the general plan and major milestones that needed to be hit for the project to continue in a straightforward way, without much further involvement from my side.
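The versioned-migration idea can be sketched minimally. This is not the actual Azure DevOps/SQLCMD pipeline; it is a pure-Python illustration using `sqlite3`, with invented migration scripts and a `schema_version` tracking table, showing how running scripts in order builds a database up to any requested version.

```python
import sqlite3

# Illustrative, version-ordered migration scripts (contents are invented).
MIGRATIONS = {
    1: "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customer ADD COLUMN email TEXT",
    3: "CREATE TABLE invoice (id INTEGER PRIMARY KEY, customer_id INTEGER)",
}

def migrate_to(conn, target_version):
    # Record applied versions so re-runs only apply what is missing.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version in sorted(MIGRATIONS):
        if current < version <= target_version:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    return conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(migrate_to(conn, 2))  # database built at version 2
print(migrate_to(conn, 3))  # incrementally upgraded to version 3
```

Because each script is recorded once applied, the same runner can both build a fresh database at a pinned version and upgrade an existing one.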
- I worked as a Data Scientist on a greenfield project where I had to predict errors in a factory production line.
- The purpose was to create a machine learning algorithm for predictive maintenance, so that the state of a given step in the production line could be investigated or reviewed and fewer faulty products would reach the end of the line.
- The algorithm predicts the likelihood of an error occurring at a certain step of the production line, and an on-site engineer investigates whether it is safe to continue production.
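The flagging logic can be illustrated in miniature. The real project used a trained ML model; this sketch merely estimates a per-step error likelihood from historical outcomes and flags steps exceeding a review threshold, with all step names and numbers invented.

```python
from collections import defaultdict

# Illustrative only: estimate per-step failure likelihood from history and
# flag steps above a threshold for an on-site engineer to investigate.
def failure_rates(history):
    counts = defaultdict(lambda: [0, 0])  # step -> [failures, total]
    for step, failed in history:
        counts[step][0] += int(failed)
        counts[step][1] += 1
    return {step: f / n for step, (f, n) in counts.items()}

def steps_to_review(history, threshold=0.1):
    return sorted(s for s, rate in failure_rates(history).items() if rate > threshold)

history = [
    ("welding", True), ("welding", False), ("welding", False), ("welding", False),
    ("painting", False), ("painting", False),
    ("assembly", True), ("assembly", True), ("assembly", False),
]
print(steps_to_review(history))  # steps whose failure rate exceeds 10%
```

A real predictive-maintenance model would use sensor features rather than raw historical rates, but the decision flow (score each step, review the ones above a threshold) is the same.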
- Data conversion and migration from older systems; refactoring and updating SQL code to use newer functionality and a better set-based approach.
- Recommendations and one-on-one discussions with developers, as well as organizing trainings on querying best practices, performance tuning, index tuning, set-based thinking, and other best practices.
- Identifying and improving low-performing queries through query rewriting and index tuning; administration tasks involving backups and automating restores via PowerShell scripts.
- Database schema analysis, plus recommendation and implementation of denormalization for some functionality in order to improve read performance.
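The set-based refactoring mentioned above can be shown side by side. This is a generic, hypothetical example (with `sqlite3` standing in for SQL Server and an invented `orders` table): the row-by-row loop and the single set-based `UPDATE` produce the same result, but the set-based form lets the engine process the whole set in one statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, discounted REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(100.0,), (250.0,), (40.0,)])

# Row-by-row (slow at scale): fetch each row, compute in the client, update one by one.
for oid, amount in conn.execute("SELECT id, amount FROM orders").fetchall():
    conn.execute("UPDATE orders SET discounted = ? WHERE id = ?", (amount * 0.9, oid))

# Set-based (idiomatic SQL): one statement over the whole set, no client round-trips.
conn.execute("UPDATE orders SET discounted = amount * 0.9")

total = conn.execute("SELECT SUM(discounted) FROM orders").fetchone()[0]
print(total)
```

At a few rows the difference is invisible; at millions of rows, eliminating the per-row round-trips and letting the optimizer handle the whole set is usually the single biggest win.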
- Reviewing and assisting application developers with writing Oracle database scripts.
- Working on improving query performance for complex queries.
- Actively improving the overall PL/SQL knowledge and relational, set-based thinking in the developer teams through one-on-one discussions and technical presentations that focus on explaining database concepts such as:
  - set-based thinking
  - database/table design
  - indexing strategies
  - querying best practices
- Monitoring and resolving issues that occur during GoldenGate replication.
- Monitoring and managing deployment of database scripts on development environments; resolving issues with invalidated objects that can occur during deployment.
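The indexing-strategy discussions boil down to reading query plans before and after adding an index. This sketch uses `sqlite3` and its `EXPLAIN QUERY PLAN` (the actual work was on Oracle, where the equivalent is `EXPLAIN PLAN`), with an invented `employees` table: the same query goes from a full table scan to an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees (dept, salary) VALUES (?, ?)",
                 [("IT", 5000), ("HR", 4000), ("IT", 6000)])

def plan(conn, sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT salary FROM employees WHERE dept = 'IT'"
before = plan(conn, query)  # full table scan
conn.execute("CREATE INDEX ix_employees_dept ON employees (dept)")
after = plan(conn, query)   # search using the new index
print(before)
print(after)
```

Comparing the two plan strings is the habit the trainings aimed to build: verify what the optimizer actually does, rather than assuming an index is used.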
Working with SQL Server and Oracle on data layer maintenance (level 3 support) and development for the Yardi Voyager ERP solution.
Daily tasks include development of:
- Custom financial reports using in-house reporting tools (YSR, Columnar, Scripting, AdHoc)
- Custom financial and non-financial data exports in XML format for interfacing with other software products
- Custom correspondence templates
- Custom stored procedures to handle complex logic situations
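An XML data export of the kind listed above can be sketched generically. This is a hypothetical illustration, not a Yardi interface format: the element names (`Invoices`, `Property`, `Amount`) and the input rows are invented, and a plain list of dicts stands in for a database result set.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: serialize query results into an XML document that
# another software product can consume. Element names are invented.
def export_invoices(rows):
    root = ET.Element("Invoices")
    for row in rows:
        inv = ET.SubElement(root, "Invoice", id=str(row["id"]))
        ET.SubElement(inv, "Property").text = row["property"]
        ET.SubElement(inv, "Amount").text = f'{row["amount"]:.2f}'
    return ET.tostring(root, encoding="unicode")

rows = [
    {"id": 1, "property": "Main Street 12", "amount": 1200.5},
    {"id": 2, "property": "Oak Avenue 3", "amount": 950.0},
]
xml_doc = export_invoices(rows)
print(xml_doc)
```

Keeping the serialization in one function makes it easy to validate the output against whatever schema the receiving system expects.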
I am also currently involved in training new employees in SQL, and I give periodic trainings to my colleagues on SQL features that can increase our team's efficiency and effectiveness.