About Me
Versatile, high-energy individual targeting assignments in Application Development with an organization of repute. 4 years of experience in Technical Project Management on projects based on Big Data technologies: Hadoop, Hive, HBase, Spark & Kafka...
Portfolio Projects
Description
The purpose of the project is to develop an end-to-end solution, from ingestion to analytics: an OLAP cube based solution built on the open-source Big Data software Hadoop, together with Angular and REST.
Responsibilities:
- Leading an offshore team using Agile methodology.
- Practiced backlog grooming, release and sprint planning, daily stand-ups, and impediment removal.
- Collaborating closely with the software development, product and business teams on developing an OLAP-based Hadoop product.
- Managing the OLAP-based Hadoop product from concept to delivery.
- Creating and managing the estimates, project plan, project schedule, and resource allocation to ensure that targets are reached.
- Involved in all aspects of OLAP platform development, including collecting requirements, writing high-quality documents, doing sprint planning, and coordinating all efforts to scope, schedule, and deploy new features.
- Working on front-end development using Angular.
Environment: AngularJS, REST, Hadoop, HDFS, Hive, MapReduce, Spark, Kafka
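Purely as an illustration of the kind of pre-aggregation an OLAP cube platform like the one above serves, here is a minimal Spark (Scala) sketch. The input path, column names (region, product, price) and output location are assumptions made for the example, not details of the actual product.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{count, lit, sum}

object OlapCubeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("olap-cube-sketch")
      .getOrCreate()

    // Assumed input: one CSV row per sale event landed in HDFS (hypothetical path and schema).
    val sales = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/sales/*.csv")

    // cube() computes every grouping combination (region x product, region only,
    // product only, grand total) in one job -- the pre-aggregated slices an OLAP
    // front end (e.g. an Angular/REST layer) would query.
    val cube = sales
      .cube("region", "product")
      .agg(sum("price").as("total_revenue"), count(lit(1)).as("orders"))

    cube.write.mode("overwrite").parquet("hdfs:///cubes/sales_cube")
    spark.stop()
  }
}
```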
Description
The purpose of the project is to store terabytes of log information generated by the Telecom website and extract meaningful information out of it. The solution is based on the open-source Big Data software Hadoop. The data is stored in the Hadoop file system and processed using Spark, which includes getting the raw data from the servers, processing it to obtain product and pricing information, extracting various reports from the product pricing information, and exporting the information for further processing.
Responsibilities:
- Delivered project needs on time and within the agreed acceptance criteria in a hybrid-methodology environment as the team transitioned to an Agile methodology.
- Developed, managed and tracked the project plan to implement requested features.
- Facilitated grooming and planning sessions with the team.
- Tracked and reported on project progress.
Environment: Hadoop, HDFS, Hive, UNIX, Spark, Scala, Flume, Oozie, HBase
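A minimal Spark (Scala) sketch of the kind of log-processing step described in this project: read raw web logs from HDFS, parse out product and pricing fields, and export an aggregated report. The pipe-delimited log layout, field order and paths are assumptions made for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, count}
import scala.util.Try

object LogPricingReport {

  // Assumed log layout (illustrative only): timestamp|productId|price|url
  case class Hit(ts: String, productId: String, price: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("telecom-log-report")
      .getOrCreate()
    import spark.implicits._

    // Raw log files copied from the web servers into HDFS (hypothetical path).
    val raw = spark.read.textFile("hdfs:///logs/web/*.log")

    // Parse each line, silently dropping malformed records.
    val hits = raw.flatMap { line =>
      line.split('|') match {
        case Array(ts, product, price, _*) => Try(Hit(ts, product, price.toDouble)).toOption
        case _ => None
      }
    }

    // Derive per-product pricing information and export it for further processing.
    hits.groupBy($"productId")
      .agg(avg($"price").as("avg_price"), count($"price").as("hits"))
      .write.mode("overwrite")
      .csv("hdfs:///reports/product_pricing")

    spark.stop()
  }
}
```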
Description
4medica is the nation's leading provider of cloud-based clinical data exchange, which provides clinicians with a unified, real-time view of patient information across disparate care locations. The company's flagship clinical integration platform, Integrated Electronic Health Record (iEHR), builds upon organizations' existing technologies to supply the exact level of health connectivity needed to address meaningful use requirements, from basic health information exchange to integration with existing electronic health records (EHRs), practice management systems and other healthcare applications.
Responsibilities:
- Understood process requirements and provided use cases for business, functional & technical requirements
- Managed programming code independently for intermediate to complex modules following development standards; planned and conducted code reviews for changes and enhancements that ensured standards compliance and systems interoperability
- Interacted with users for requirement gathering; prepared functional specifications and low-level design documents
- Provided overall leadership to the entire project team including managing deliverables of other functional team leaders
- Communicated with internal/external clients to determine specific requirements and expectations; managed client expectations as an indicator of quality
- Created and managed the estimates, project plan, project schedule, resource allocation and expenses to ensure that targets were reached
- Worked with relevant Resource Managers for project staffing and resource releases
Environment: HDFS, Apache Pig, Hive, Sqoop, Java, UNIX, SQL
Description
The project analyzes human genome data using Hadoop. A single human genome contains about 3 billion base pairs. This is less than 1 gigabyte of data, but the intermediate data produced by DNA sequencers, required to produce a sequenced human genome, is many hundreds of times larger. Beyond the huge storage requirement, deep genomic analysis across large populations of humans requires enormous computational capacity as well. Efforts exist for adapting existing genomics data structures to Hadoop, but these don't support the full range of analytic requirements. Our approach is to implement an end-to-end analysis pipeline based on GATK and running on Hadoop.
Responsibilities:
- Writing Pig scripts to process the genome data.
- Writing script files to perform Hadoop operations.
- Processing data and loading it into HDFS.
- Handled importing of data from various data sources, performed transformations using Hive and loaded data into HDFS.
- Ingested the data from logs and relational databases using Flume and Sqoop.
- Importing and exporting data into HDFS, Pig, Hive and HBase using Sqoop.
Environment: HDFS, Apache Pig, Hive, HBase, Sqoop, Flume
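The processing in this project was done with Pig scripts rather than Spark; purely as an illustration of the same kind of group-and-count step over intermediate sequencing records stored in HDFS, here is a rough Spark (Scala) equivalent. The tab-separated layout, field names and paths are assumptions, not the project's actual Pig logic.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count
import scala.util.Try

object GenomeRecordCounts {

  // Assumed tab-separated layout (illustrative only): chromosome, position, refBase, readBase
  case class Read(chrom: String, pos: Long, ref: String, alt: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("genome-record-counts")
      .getOrCreate()
    import spark.implicits._

    // Intermediate sequencing records previously loaded into HDFS (hypothetical path).
    val reads = spark.read.textFile("hdfs:///genome/intermediate/*.tsv")
      .flatMap { line =>
        line.split('\t') match {
          case Array(chrom, pos, ref, alt, _*) => Try(Read(chrom, pos.toLong, ref, alt)).toOption
          case _ => None
        }
      }

    // Count intermediate records per chromosome -- the same shape of GROUP BY
    // the project's Pig scripts performed over the genome data.
    reads.groupBy($"chrom")
      .agg(count($"pos").as("records"))
      .write.mode("overwrite")
      .csv("hdfs:///genome/reports/per_chromosome")

    spark.stop()
  }
}
```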