Thiruchenduran K.

Big Data Engineer specializing in orchestrating Data Lake and Enterprise Shared Services solutions

Chennai, India

Experience: 11 Years

27401.1 USD / Year

Availability: Immediate


About Me

Passionate Data Engineer and results-driven Solutions Architect with 11 years of experience. Self-motivated data architect with strong strategic thinking, interpersonal, and leadership capabilities in a complex and matrix environment.

Portfolio Projects

Description

The Perform AI service and solution portfolio is designed to address mission-critical AI domains: the industrialization of the core enterprise processes an AI-first or AI-driven business needs to take root and be sustained, and the transformation of strategic enterprise processes and operations by applying AI-based solutions and technologies. AI, perhaps like no other technology-driven disruption yet, affords the opportunity to fundamentally re-create or introduce entirely new ways of working, products and services, and business models. AIOps Log Analytics is one of the key use cases of Perform AI.

Roles and Responsibilities

  1. Designed the end-to-end solution for AIOps-powered Log Analytics (see the sketch after this list)
  2. Architected and implemented a Web Log Analytics use case for Dell Computers using an LSTM model, forecasting critical parameters to support energy savings
  3. Designed an all-in-one platform for implementing AI/ML models, covering data ingestion, storage, model training, testing, evaluation, and cognitive remediation
  4. Working on research and development of next-generation use cases for AIOps and log analytics
  5. Key member in building the JARVIS model for Capgemini
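
The Perform AI platform itself is proprietary, but as a rough sketch of the ingestion-and-featurization step an AIOps log-analytics pipeline like this starts from, the following Spark/Scala job parses raw web logs and rolls them up into the per-minute series an LSTM forecaster could be trained on. The paths, log format, and regexes are illustrative assumptions, not the actual implementation.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object LogFeaturizer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("aiops-log-featurizer")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical raw web-server logs; real sources and formats will differ.
    val raw = spark.read.text("s3a://example-bucket/weblogs/")

    // Pull timestamp, HTTP status, and latency out of a common-log-style line.
    val parsed = raw
      .select(
        regexp_extract($"value", """\[(.+?)\]""", 1).as("ts_raw"),
        regexp_extract($"value", """" (\d{3}) """, 1).cast("int").as("status"),
        regexp_extract($"value", """ (\d+)$""", 1).cast("long").as("latency_ms"))
      .withColumn("ts", to_timestamp($"ts_raw", "dd/MMM/yyyy:HH:mm:ss Z"))
      .filter($"ts".isNotNull)

    // Per-minute features: the kind of time series an LSTM-based
    // forecaster (as in the Dell web-log use case) would consume.
    val features = parsed
      .groupBy(window($"ts", "1 minute"))
      .agg(
        count("*").as("requests"),
        avg($"latency_ms").as("avg_latency_ms"),
        sum(when($"status" >= 500, 1).otherwise(0)).as("server_errors"))

    features.write.mode("overwrite").parquet("s3a://example-bucket/log-features/")
    spark.stop()
  }
}
```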


Description

The Data Hub provides a set of services that can be consumed by applications and users, grouped as follows:

  • Platform Services: granular, low-level services that can be consumed directly by users or within applications; the key building blocks for data solutions.
  • Solution Services: each represents a composite, pre-built, ready-to-use solution made up from underlying platform services.
  • Fundamental Services: all Platform and Solution Services rely on this set of fundamental building blocks for low-level capabilities such as security, auditing, and monitoring.
  • Utility Services: capabilities for teams building solutions and accessing Data Hub services, such as data science workstations and data analytics edge nodes.

Roles and Responsibilities

  1. Designed the end-to-end Master Data Management / Data Governance flow for BNP internal clients
  2. Devised, solutioned, and architected the migration of the last-generation banking system to Big Data applications on a distributed platform
  3. Key member in the initial study for designing the Dev and UAT clusters for the Data Hub and in implementing security solutions
  4. Worked with the Principal Architect on devising Kerberos, SSL, and wire-level security for MapR clusters
  5. Ran multiple technology-integration POC studies, including Drill-to-SSRS, Drill-to-Power BI, and Drill-to-Tableau integration
  6. Architected the Archival Engine framework, a single-point archival solution for Data Hub clients built on Spark (a minimal sketch follows this list)
  7. Architected numerous technology-migration projects from Informatica, Teradata, and Oracle to the Big Data platform
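
The Archival Engine framework is internal to the Data Hub; as a minimal sketch, assuming hypothetical table and path names, the core move it describes — reading a client table and landing it as date-partitioned Parquet in an archive zone on the MapR filesystem — could look like this. A real single-point framework would presumably be metadata-driven, with per-client retention policies the sketch omits.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ArchivalEngine {
  // Archive one source table into the archive zone, partitioned by archive date.
  // Source/target names are illustrative; a metadata-driven framework would
  // resolve these per client so one job serves every Data Hub tenant.
  def archive(spark: SparkSession, sourceTable: String, archivePath: String): Unit = {
    spark.table(sourceTable)
      .withColumn("archive_date", current_date())
      .write
      .mode("append")
      .partitionBy("archive_date")
      .parquet(archivePath)
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("datahub-archival")
      .enableHiveSupport()
      .getOrCreate()
    archive(spark, sourceTable = "client_db.trades",
      archivePath = "maprfs:///archive/client_db/trades")
    spark.stop()
  }
}
```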


Description

Electronic Box Office Reporting (EBOR) Analytics deals with box office reports from different distributors. The data are enriched and processed according to various business rules and finally stored into Hive and Oracle as sources for multiple analytics. The project covers Warner Bros. domestic-market electronic box office data, with theatrical analytics performed on box office reports, trade collection, play date, ticket type, etc.

Roles and Responsibilities

  1. Designed and developed the EBOR data-processing component, which connects to different sources and applies ETL rules based on business requirements
  2. Designed the next-generation EBOR framework for WB, which brings traditional ETL into the big data space
  3. Individual contributor from the project kick-off call through the production delivery phase, handling the entire lifecycle
  4. Imported and exported data into HDFS and Hive using Spark/Scala, managing data from sources such as flat files, S3, and Oracle
  5. Responsible for building a scalable, distributed data solution for the electronic box office business unit
  6. Processed gigabytes of theatrical transaction data using Spark and contributed to data modeling
  7. Managed Hive tables and created child tables based on partitions
  8. Loaded and transformed large sets of structured and semi-structured data using Spark/Scala
  9. Worked on Spark–Oracle integration for large sets of structured data
  10. Developed a scalable solution to process multi-dimensional data using Spark/Scala
  11. Processed XML, JSON, and CSV files using Spark/Scala (see the sketch after this list)
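
Items 4, 7, and 11 describe a standard Spark pattern: read a distributor feed, apply business rules, and land the result in a partitioned Hive table. A minimal sketch of that pattern for a CSV feed follows; the column names, paths, and enrichment rule are hypothetical stand-ins, as the actual EBOR schema and rules are not public.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EborProcessor {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ebor-processing")
      .enableHiveSupport()
      .getOrCreate()

    // Illustrative distributor feed; real EBOR feeds arrive in multiple
    // formats (XML, JSON, CSV) and schemas.
    val reports = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/ebor/incoming/*.csv")

    // Example enrichment rule: normalize gross to cents and type the play date.
    val enriched = reports
      .withColumn("gross_cents", (col("gross_usd") * 100).cast("long"))
      .withColumn("play_date", to_date(col("play_date_raw"), "yyyy-MM-dd"))

    // Land into a Hive table partitioned by play date for downstream analytics.
    enriched.write
      .mode("overwrite")
      .partitionBy("play_date")
      .saveAsTable("ebor.box_office_reports")

    spark.stop()
  }
}
```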


Description

Transaction Management Warehouse (TMW) is a back-office trading repository for Citi Institutional Clients. Data are enriched and processed according to various business rules using Spark and stored into Cassandra, which acts as the data source for MicroStrategy. Analytics were performed for stock-quote analysis, put-call analysis, and sentiment-based quote analysis across large sets of structured, semi-structured, and unstructured data and supporting systems.

Roles and Responsibilities

  1. Contributed to building an ETL framework using Spark and Scala that serves as a reusable tool across different phases of data migration from traditional MPPs to the big data space
  2. Processed large datasets across different data and structural formats
  3. Processed gigabytes of back-office trading data by applying enrichment rules and lookups using Cassandra, Scala, and Spark (a minimal sketch follows this list)
  4. Identified data quality issues, helped the testing team create test cases, and reviewed mapping documents with relevant stakeholders
  5. Built data systems that speak to different database platforms, enabling product and business teams to make data-driven decisions
  6. Developed ETL transformations using Spark and Scala based on mapping documents
  7. Involved in detailed logical and physical data modeling
  8. Involved in the architecture and design approach for the project
  9. Developed the data ingestion phase using Spark, connecting the Salesforce DB to S3
  10. Used the DataStax distribution to accomplish end-to-end data processing
  11. Interacted with relevant stakeholders on daily stand-up calls and production issues
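
Item 3's enrichment-with-lookups pattern, using the DataStax Spark–Cassandra connector implied by item 10, might be sketched as follows. The keyspace, table, and column names are hypothetical stand-ins for the warehouse schema, and the target table is assumed to already exist in Cassandra.

```scala
import org.apache.spark.sql.SparkSession

object TradeEnricher {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tmw-trade-enrichment")
      .config("spark.cassandra.connection.host", "cassandra-host") // placeholder
      .getOrCreate()

    // Helper to read a Cassandra table through the DataStax connector.
    def cassandraTable(keyspace: String, table: String) =
      spark.read.format("org.apache.spark.sql.cassandra")
        .options(Map("keyspace" -> keyspace, "table" -> table))
        .load()

    // Hypothetical keyspace/table names standing in for the warehouse schema.
    val trades  = cassandraTable("tmw", "raw_trades")
    val symbols = cassandraTable("tmw", "symbol_lookup")

    // Enrichment via lookup join, the pattern item 3 above describes.
    val enriched = trades.join(symbols, Seq("symbol"), "left")

    // Write back to Cassandra; the table must already exist.
    enriched.write.format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "tmw", "table" -> "enriched_trades"))
      .mode("append")
      .save()

    spark.stop()
  }
}
```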
