Data & Analytics
Impact Analytics - TELECOM
- North bound: Fetch raw JSON data from the home analytics tool Impact (via RabbitMQ) and cache it in an internal Kafka cluster
- South bound: Kafka integration using Kafka Connect and connectors targeting the customer data lake (DL) and S3
- Enabled a multi-tenant telemetry data cache and data visualization through Spark SQL in Zeppelin
- Requirement gathering: Developed user stories in a Java framework following Agile methodology, delivering each as a Helm deployment in an OpenStack cloud environment
- Automation: Automated the above requirements as system tests using the Radish framework with Python, executed through nightly Jenkins builds
- Monitoring: Log monitoring with the EFK stack (Elasticsearch, Fluentd, Kibana)
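The north-bound step above can be sketched as a small normalization routine. This is a minimal sketch in Python, assuming hypothetical field names (`tenantId`, `metric`, `value`, `timestamp`) since the real Impact message schema isn't given; it turns a raw JSON message into a tenant-keyed record suitable for the internal Kafka cache:

```python
import json

def to_kafka_record(raw: str) -> tuple[str, bytes]:
    """Normalize one raw Impact telemetry message into a (key, value)
    pair for the internal Kafka cache. Keying by tenant lets the
    multi-tenant cache partition records per customer.
    Field names below are assumptions, not the real Impact schema."""
    msg = json.loads(raw)
    tenant = msg["tenantId"]  # assumed field name
    record = {
        "tenant": tenant,
        "metric": msg["metric"],
        "value": msg["value"],
        "ts": msg["timestamp"],
    }
    # Kafka values are bytes; key stays a string for the partitioner.
    return tenant, json.dumps(record).encode("utf-8")
```

In the real pipeline this function would sit between the RabbitMQ consumer callback and the Kafka producer's `send(topic, key, value)` call.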
Healogics - Health Care
- Project Description:
- Healogics needed to merge two workflows: assisting nurses in the nursing management of each patient through ongoing monitoring and evaluation of the effectiveness of the patient's treatment plan,
- and the paper reports that document actions taken and/or results, compared against the 9 essential steps of wound healing
- This new effort and design provides an opportunity to deliver patient care more effectively and to adhere to compliance requirements for patient-related healthcare data.
- The tool will use patient treatment order data captured in the Healogics source data architecture, as identified by BI, downloaded to a separate Healogics IT-defined data architecture and accessed within each clinic daily. Electronic devices such as an iPod or Android tablet, laptop, or desktop will be used to eliminate the paper reporting currently in place.
- The tool will provide a pictorial, visual representation of all instances of a patient's wound(s), showing weekly healing progress and treatments. Wounds are measured and compared against the Healogics 9 steps of healing, which guide and direct wound care treatments; captured provider notes enhance case management and safeguard patient data.
- BI integration layer: Batch job downloading (.csv, .pdf, .json) files from an SFTP location using Spark with Scala and persisting the data to Apache Cassandra
- Service layer: Built Spring RESTful web services; understood the business requirements and drove application development; shared and documented JSON request/response contracts; unit tested in the DEV environment and promoted builds to the QA AWS (Linux) machine
- Build: Dev and QA environments (AWS Linux servers)
- Participated in Agile ceremonies: sprint planning, daily scrum, sprint retrospective
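The BI integration layer above can be sketched as two small steps: filter an SFTP listing down to the supported file types, then map CSV content to rows ready to persist. This is a minimal illustration in Python (the project used Spark with Scala), and the `treatment_orders` table name and its columns are assumptions, not the real Healogics schema:

```python
import csv
import io

# File types the batch job ingests, per the BI integration layer.
SUPPORTED = (".csv", ".pdf", ".json")

def select_downloads(listing):
    """Keep only the supported file types from an SFTP directory listing."""
    return [name for name in listing if name.lower().endswith(SUPPORTED)]

def csv_to_rows(text, table="treatment_orders"):
    """Map CSV file content to (table, row-dict) records for persistence.
    The table and column names here are illustrative placeholders for
    whatever the real Cassandra schema defines."""
    reader = csv.DictReader(io.StringIO(text))
    return [(table, dict(row)) for row in reader]
```

In the actual job, Spark would parallelize this mapping across files and a Cassandra connector would handle the writes; the per-file logic is the same shape.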
ANA - Home analytics
- Read data from an SFTP server
- Parse the data and validate business logic
- Post the validated data to Kafka
- Spark reads the data from the Kafka topic and applies transformations
- Save the transformed data in Hive
- Spark batch job: read data from Hive, aggregate it, and save the results in HBase
- Read data from HBase via the Phoenix client and serve it to the UI through REST calls
- Schedule tasks in Jira, assign them to juniors, and guide them accordingly
- Write streaming and batch jobs using Spark
- Write aggregation jobs in Spark
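The validate-then-aggregate flow above can be sketched in plain Python (the production jobs ran on Spark over Hive and HBase). The field names `device_id` and `reading`, and the sum/count rule, are assumptions chosen to illustrate the pattern, not the real ANA business logic:

```python
from collections import defaultdict

def validate(record):
    """Hypothetical business rule: a record is valid if it carries a
    non-empty device id and a numeric reading."""
    return bool(record.get("device_id")) and isinstance(record.get("reading"), (int, float))

def aggregate(records):
    """Batch-style roll-up (sum and count per device), mirroring the
    shape of the Spark job that aggregates Hive data into HBase.
    Invalid records are dropped, as in the validation step above."""
    agg = defaultdict(lambda: {"sum": 0.0, "count": 0})
    for rec in records:
        if validate(rec):
            bucket = agg[rec["device_id"]]
            bucket["sum"] += rec["reading"]
            bucket["count"] += 1
    return dict(agg)
```

In Spark the same logic becomes a `filter` on the validation predicate followed by a `groupBy("device_id")` with sum/count aggregations.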