Yogesh F.

Senior Consultant

Dombivili East, India

Experience: 12 Years

28800 USD / Year

  • Immediate: Available

About Me

Currently associated with Capgemini, Navi Mumbai, as a Senior Consultant. Resourceful in managing end-to-end software development operations, including requirement gathering, development of functional specifications, design and development, and coordination...

Portfolio Projects

Description

DevOps Engineer

Project: Monaco DevOps
Client: McLaren Automotive
Duration: 9 Months
Project Location: Mumbai, India
Technologies Used: Groovy, Python, Jenkins, Gradle, Artifactory, Confluence, JIRA, Da Vinci Developer, Da Vinci Configurator, MATLAB Simulink Software, Preevision, DOORS
Team Size: 4
Description:
DevOps Implementation in an Electronic Control Unit (ECU)
The Domain Controller is an Electronic Control Unit produced by an integrated build of Application Software (ASW) and Basic Software (BSW). The application software is developed in MATLAB/Simulink, while the basic software is generated by the Da Vinci Developer and Da Vinci Configurator tools. Using DevOps, the integration of basic software and application software is performed in an automated way. A configurable option allows specifying the dependencies and versions of the application software. Once the dependencies are resolved, the application software is extracted and integrated with the basic software, and the basic software configuration is performed automatically. After integration and configuration, validation and code generation are performed; during validation, errors are listed and resolved automatically. The code is then compiled to generate binaries, including elf and srec files, which are pushed to Artifactory. A2L files are generated, zipped, and pushed to Artifactory as well. The entire process comprises multiple tasks implemented in a Gradle application and is executed through a Jenkins job. There are two types of jobs: one integrates the whole process, and another only compiles and generates binaries. A build can also be performed without choosing any application software, using only the BSW.
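As a rough illustration of the stage ordering described above (this is a Python sketch, not the project's actual Gradle code; all function, stage, and artifact names are hypothetical):

```python
# Toy sketch of the automated ECU build flow: resolve dependencies, integrate
# ASW with BSW, validate, compile, and list the artifacts to publish.
# All names here are illustrative, not the real project code.

def resolve_dependencies(asw_version):
    """Pretend to resolve and download the requested ASW version."""
    return {"asw": asw_version} if asw_version else {}

def integrate(deps):
    """Integrate ASW (if any) with the BSW and run the automated configuration."""
    parts = ["bsw"] + (["asw"] if "asw" in deps else [])
    return {"integrated": parts, "validated": True}

def compile_binaries(build):
    """Compile the validated build into binaries (elf, srec) plus zipped A2L files."""
    assert build["validated"]
    return ["image.elf", "image.srec", "calibration.a2l.zip"]

def run_pipeline(asw_version=None):
    """Full job: resolve -> integrate -> validate -> compile -> publish list."""
    deps = resolve_dependencies(asw_version)
    build = integrate(deps)
    return compile_binaries(build)

# A BSW-only build is allowed, mirroring the "no application software" job type.
print(run_pipeline())
print(run_pipeline("asw-1.4.2"))
```

The two calls mirror the two Jenkins job types: a BSW-only build and a fully integrated build.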
Roles:
The scripts meet the Domain Controller build requirement such that:
1. The basic software (BSW) for the domain controllers is maintained separately and released into this integration using dependency management
2. The build environment for the integration is provisioned within the scripts.
3. The integration tools and projects are version controlled
4. The application software (ASW) is downloaded and extracted using dependency management
5. The ASW is automatically integrated with the BSW. For example:
a. Imports ASW MATLAB-generated arxml files into Da Vinci Developer
b. Adds runnables to tasks in Da Vinci Configurator, where these runnables are created by the Application team
c. Connects Sender/Receiver and Client/Server ports between the Application and BSW, where these ports are added by the Application team
d. Adds Min/Max values for Compu methods used in Calibration Parameters
e. Reports validation errors from Da Vinci Configurator to the log
f. Generates BSW & RTE dynamic files
g. Integrates dynamic files with BSW static files and ASW MATLAB-generated files (.c, .h)
h. Reports compilation errors to the log
6. Multiple application compositions can be allocated to the domain controller (manually).
7. Failures, warnings, and errors are reported in the build log available on Jenkins
8. Within an application composition, the following aspects can be added or removed automatically: components, runnables, parameters, signals, and ports (provided they don't break the external interface of the composition).
9. The scheduling of the software can be defined by the application engineers (using Simulink).
10. The final software build is versioned according to semantic versioning.
11. The release note is automatically generated describing the dependency tree and change logs.
12. The final integrated software build, release note, and logs are uploaded to Artifactory as an IVY artefact.
13. All four domain controller software releases are performed using this workflow on Jenkins.
14. The process is repeatable such that a previous build can be re-created.
15. The project can be maintained with minimal effort using plugins and libraries supporting reuse across the domain controllers with their many branches.
16. Documentation is available to support the ongoing maintenance and use of the above.
The scripts meet the PREEvision-exports-to-DOORS requirement such that:
17. Selected artefacts' version numbers within Artifactory are monitored
a. The monitoring action shall run on a schedule
i. The scheduled time should be modifiable
b. The list of monitored artefacts should be modifiable

18. Creation of a config file for artefact-to-DOORS mapping
a. The config file captures all content related to the following parameters:
i. Target artefact repository module name
ii. DOORS module & requirement ID
iii. Author email
b. The config file must be modifiable to add or remove artefacts / DOORS modules

19. Creation of a Jenkins job to execute subsequent scripts that perform the following actions:
a. Downloading of Artifactory target file
b. Opening Excel
i. Formatting of Excel table to target template
c. Opening DOORS
i. Opening target DOORS Module
ii. Performing update operation of target artefact
iii. Saving DOORS Module & closing program
d. Trigger of an Email notification
i. Email notification to capture DOORS module link to assigned recipient
ii. Email text to be capable of modification

20. Generation of log files & documentation
a. Reports to be generated in case of failed task execution
21. Documentation to capture the functional behavior of each script
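The artefact-monitoring idea in requirements 17–18 can be sketched in a few lines of Python; the config format, artefact names, and version store below are all hypothetical:

```python
# Toy version monitor: compare last-seen artefact versions against the current
# versions in "Artifactory" and report which DOORS modules need an update.
# All names and the config shape are invented for illustration.

# Hypothetical artefact -> DOORS mapping, as the config file in req. 18 would hold.
CONFIG = {
    "dc-front-left": {"doors_module": "SYS-REQ-101", "author": "owner@example.com"},
    "dc-rear-right": {"doors_module": "SYS-REQ-204", "author": "owner@example.com"},
}

def find_updates(last_seen, current):
    """Return (artefact, doors_module) pairs whose version changed since last run."""
    updates = []
    for artefact, cfg in CONFIG.items():
        if current.get(artefact) != last_seen.get(artefact):
            updates.append((artefact, cfg["doors_module"]))
    return updates

last_seen = {"dc-front-left": "1.0.0", "dc-rear-right": "2.1.0"}
current = {"dc-front-left": "1.1.0", "dc-rear-right": "2.1.0"}
print(find_updates(last_seen, current))  # only the changed artefact is reported
```

A scheduled Jenkins job would run such a check at the configurable time (req. 17a) and trigger the DOORS update and email notification for each reported pair.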

Description

Project: Digital Factory
Client: TE Connectivity
Duration: 10 Months
Project Location: Mumbai, India
Technologies Used: C, C++, Python, JavaScript, Java, Thing Worx, Ping Identity, Ping Federate, Shell Script, Perl, SUSE SLES12 Linux, CentOS, SAML.
Team Size: 12
Description:
Plant Dashboard
TE Connectivity factory machine operators lacked a means to capture machine KPIs. When operators needed to assess operating efficiency or downtime, they could not easily access the data necessary for reports or machine adjustments. This project developed a plant dashboard in Thing Worx to display machine KPIs such as OEE, Utilization, Quality, and SPC. The dashboard monitors plant machine parameters at plant, zone, and machine levels using different charts as widgets. Each widget has data filters to capture live data as well as last week's, last month's, and last year's data. The dashboard is accessed based on user role: Operator, Zone Leader, or Plant Manager. User logon is integrated with Active Directory, and Single Sign-On is implemented. Ping Federate is used to implement the federated security architecture: the plant dashboard application running on Thing Worx acts as the Service Provider and interacts with Ping Federate, which acts as the federation server. For security reasons, TE did not provide direct access to ADFS but supplied the IdP metadata file and public key, which Ping Federate uses to talk to the IdP for user logon requests and responses. When the user agent requests login through the browser for the first time, the request is redirected by Ping Federate to the IdP, which takes user inputs and, based on credential validation in ADFS, allows login to Thing Worx. Once credentials are validated, a SAML response containing the list of configured attributes (sent as claims) goes from the IdP to Ping Federate, which passes it on to the Thing Worx application. Based on the mappings done in the SSO Authenticator service in Thing Worx, the user extensions are populated. User-extension properties populate the user interface, and role-based access is applied based on the user's role received in the SAML claims.
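The role-mapping step at the end of that flow can be sketched in a few lines of Python; the claim keys and role-to-scope tiers here are hypothetical, not the actual Thing Worx SSO Authenticator configuration:

```python
# Toy mapping from SAML claims to a dashboard access scope.
# Claim keys and role names are illustrative only.

ROLE_SCOPE = {
    "Operator": "machine",      # sees only assigned machines
    "Zone Leader": "zone",      # sees all machines in a zone
    "Plant Manager": "plant",   # sees the whole plant
}

def access_scope(saml_claims):
    """Derive the dashboard scope from the role claim; default to the narrowest."""
    role = saml_claims.get("role", "Operator")
    return ROLE_SCOPE.get(role, "machine")

claims = {"email": "op@example.com", "role": "Zone Leader"}
print(access_scope(claims))  # zone
```

Defaulting to the narrowest scope when a claim is missing or unrecognized keeps the failure mode safe.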
KPI List:
• OEE
• Status
• Performance and Scrap
• Downtime
• Utilization
• Quality
• SPC

Roles:
• Worked as a Security Architect for client TEC; designed and developed the security architecture, implementing Single Sign-On, SSL/TLS/HTTPS, role-based access, session management, etc.
• Developed charts that show live as well as historical data for different machine KPIs.
• Designed and developed the role-based, configurable IoT Plant Dashboard user interface for showing different machines' KPIs; also handled its cloud migration and security implementation.
• Mentored the team and conducted trainings.

Description

Project: System and App Security for Unity (VNX) Storage Product (Vnxe Midrange Product Security)
Client: DELL-EMC
Duration: 13 Months
Project Location: Mumbai, India
Technologies Used: Linux, C, C++, Python, MYSQL, Perl, Shell Scripts, Accurev, Eclipse, ESX, SUSE SLES12, Nessus, STIGs, VNC, Google Protocol Buffers & ZeroMQ
Team Size: 4
Description:
VNX Storage System and Application Security
The UNITY architecture continues the evolution of the VNX/Vnxe product set by breaking down the barriers between the two separate products. UNITY was about building a converged data path stack and integrating it with a new management model based on a database built from the system environment (rather than polling the data path on demand). UNITY was not just about integrated systems; it supported multiple hardware deployments integrated into a single management image to scale to larger and larger deployments (as well as reduce the fault domain by distributing the load across more hardware components). The UNITY goal was to finally integrate both the data path and management stacks into a single deployable software base covering everything from integrated systems up to multiple hardware deployments under a single management image. Platform software evolved along the way to accommodate this convergence. In UNITY, each SP booted a separate copy of the SUSE Linux OS using a root file system on the SSD mounted on the SP. One key addition to the base Linux environment was the CSX software libraries and executables, which allowed kernel-level system software to execute in user-space processes, supporting device drivers, memory mapping, and other privileged kernel-level operations. Most of the UNITY software was designed to run in user space, but some kernel-level libraries were loaded to perform services not possible from user level.
Roles:
• Analyzed, reviewed & updated the SRS, HLD & ADD
• Engaged in the design and development of:
o A security-hardening application to find and fix vulnerabilities listed in the STIGs for different components of VNX
o An email-mining application
o A search-engine application
o A blockchain application using a supply chain management use case
o A plant dashboard application using Thing Worx
• Built the architecture using open-source technologies such as Protocol Buffers and ZeroMQ
• Conducted unit & integration testing and mentored team members
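The project wired components together with Protocol Buffers over ZeroMQ; as a dependency-free stand-in, the same pattern can be sketched with the Python standard library. A serialized message (JSON here instead of protobuf) is sent over a socket with a length prefix so the receiver knows where each message ends; the socket pair and message fields are invented for illustration:

```python
# Minimal length-prefixed messaging over an in-process socket pair, standing in
# for the protobuf-over-ZeroMQ transport used in the real system.
import json
import socket
import struct

def send_msg(sock, payload):
    """Serialize payload (JSON here, protobuf in the real system) with a 4-byte length prefix."""
    body = json.dumps(payload).encode()
    sock.sendall(struct.pack("!I", len(body)) + body)

def recv_msg(sock):
    """Read the length prefix, then exactly that many bytes, and deserialize."""
    size = struct.unpack("!I", sock.recv(4))[0]
    body = b""
    while len(body) < size:
        body += sock.recv(size - len(body))
    return json.loads(body)

a, b = socket.socketpair()  # in-process pair standing in for a ZeroMQ REQ/REP link
send_msg(a, {"op": "scan", "target": "stig-rule-42"})
print(recv_msg(b))  # the dict round-trips intact
a.close(); b.close()
```

The length prefix is the key idea: stream sockets have no message boundaries, so framing must be explicit (ZeroMQ provides this framing natively).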

Description

Project: Policy Charging and Control Rules Function (PCRF)
Client: MTN
Duration: 15 Months
Project Location: Bangalore, India
Technologies Used: Linux, C, C++, PHP, MySQL, Perl, Shell Scripts, CVS, Eclipse, ROBOT Framework, Flex, SVN, VMware Player, TCP/IP, Diameter & Radius Protocols
Team Size: 16
Description:
For mobile operators in emerging markets who need a cost-efficient policy management solution, Comviva's Policy Control and Charging Rules Function (PCRF) is a standards-based solution focused on providing cutting-edge technology, superior performance & easy management.
PCRF is the centralized decision-making point for mobile operators to effectively manage and monetize data traffic. It is compliant with the 3GPP PCC architecture, dynamically controls network resources with real-time policies, and enables dynamic allocation of bandwidth and access to network resources by interfacing with the PCEF.
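As an illustrative sketch of what a PCRF-style policy decision looks like (the plan names, quotas, and rates below are invented, not Comviva's rules):

```python
# Toy PCRF-style decision: pick the bandwidth policy the PCEF should enforce
# for a session, based on the subscriber's plan and accumulated usage.
# Plans and limits are invented for illustration.

PLANS = {
    "gold":  {"quota_mb": 10240, "rate_kbps": 20000, "throttled_kbps": 1000},
    "basic": {"quota_mb": 1024,  "rate_kbps": 5000,  "throttled_kbps": 256},
}

def decide_policy(plan, used_mb):
    """Full rate under quota; throttled rate once the quota is exhausted."""
    p = PLANS[plan]
    if used_mb < p["quota_mb"]:
        return {"rate_kbps": p["rate_kbps"], "action": "allow"}
    return {"rate_kbps": p["throttled_kbps"], "action": "throttle"}

print(decide_policy("basic", 500))   # under quota: full rate
print(decide_policy("basic", 2048))  # over quota: throttled
```

In the real architecture, this decision is made centrally by the PCRF and pushed to the PCEF over the Diameter Gx interface for enforcement.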
Roles:
• Devised plan for design and development for PCRF, USSD, Provisioning APIs and Selfcare
• Designed Conakry requirements
• Prepared migration scripts for migrating subscribers from another network to Comviva's MDP platform
• Performed PCRF migration from MySQL 5.1 to MySQL 5.6
• Conducted test automation using ROBOT Framework
• Acted as part of Comviva's interview panel and conducted interviews at top campuses, including PSG, RVCE, and VIT

Description

Project: Mediation, Balance Manager and Profile Manager (OSS BSS 4G Billing)
Client: AT&T Corp. (American Telephone and Telegraph)
Duration: 2 Years 10 Months
Project Location: Kuala Lumpur, Malaysia
Technologies Used: Solaris 9/10, HP-UX, Oracle, C++, Java, JDBC, Fusion works, DSD (Data Stream Decoder) Scripts, Shell Scripts, CVS, Eclipse, HP Quality Centre, Webtrax, Prism, Oracle, SQL Developer & TOAD
Team Size: 50
Description:
Billing mediation was the connecting link between the telecom network that enabled a user to make a call and the billing system that billed the user for that call. The network generated different types of records, known as Call Detail Records (CDRs), which were collected by the mediation system, validated, filtered, correlated, normalized into billable format, and sent downstream for billing. At AT&T, the entire mediation system was divided into two parts: Voice Mediation and Data Mediation. While the voice mediation system processed records generated during voice calls, the data mediation system processed records generated during all non-voice usage, for example internet, email, IM & MMS.
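The validate → filter → normalize flow described above can be sketched as follows; the field names, filter rule, and billable format are invented for illustration:

```python
# Toy CDR mediation pipeline: validate raw records, filter out non-billable
# ones, and normalize the rest into a billable shape for downstream billing.

def validate(cdr):
    """A record must carry a subscriber and a non-negative duration."""
    return "msisdn" in cdr and cdr.get("duration_s", -1) >= 0

def keep(cdr):
    """Filter out zero-duration records (e.g. unanswered calls)."""
    return cdr["duration_s"] > 0

def normalize(cdr):
    """Emit the billable format expected downstream: minutes, rounded up."""
    return {"msisdn": cdr["msisdn"], "minutes": -(-cdr["duration_s"] // 60)}

def mediate(raw_cdrs):
    valid = [c for c in raw_cdrs if validate(c)]
    return [normalize(c) for c in valid if keep(c)]

raw = [
    {"msisdn": "15550001", "duration_s": 125},
    {"msisdn": "15550002", "duration_s": 0},  # filtered out: zero duration
    {"duration_s": 60},                        # invalid: no subscriber
]
print(mediate(raw))  # [{'msisdn': '15550001', 'minutes': 3}]
```

The real system adds a correlation stage between filtering and normalization, stitching together partial records for the same call before rating.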

Balance Manager managed requests from different external systems to provide control over accounts, subscribers, plans, orders, and balances for 3G devices such as netbooks, iPads, and DataConnect cards. Users bought time-based and/or usage-based plans and used their 3G devices to surf the internet or download data, music, or videos. Balance Manager provided services such as creating customers, accounts, and subscribers, and offered near-real-time rating of the usage generated by subscribers. It was a collection of modules designed to manage account and subscriber balances at the network edge.

Profile Manager was a repository of various types of profiles used to manage subscribers' data, implemented as a Fusion works plug-in component. The purpose of the Profile Manager application, within the scope of the AT&T EOD project, was to act as an SPR. It held many different types of profiles, which were used by the application to do the following:
• Control user access to account hierarchy
• Control user access to functions/methods
• Store network, roaming, and PDP profiles
• Store subscriber specific attributes
• Store user specific attributes

Profile Manager returned the requested profiles to the requesting application and the requesting application acted upon the data returned.
Roles:
• Developed and Managed:
o Charge Back Feature Implementation in Balance Manager
o Active Active Project Implementation in Data Mediation
o Birdy Project Implementation in Balance Manager Application
o Sony Vita Coupon Feature Implementation in Balance Manager
o Design and implementation of the Profile Manager Application
o ATTOM Wholesale Billing and Invoicing
o Digital Life Instalment Billing feature addition
o IMSI 311180 addition in Voice and Data Mediation
o Automated Test Framework (ATF) Design and Development for Data Mediation Streams PGW, SGW, CSG and SGSN
• Wrote:
o An automated shell script to find ERRORs in the logs on all Balance Manager production servers, multiple instances at a time
o Unix shell scripts to collect performance measures such as CPU, memory, and I/O for individual correlators for each stream in mediation
o UNIX shell scripts to validate deployment activity in Mediation and Balance Manager
o A PL/SQL procedure for Border Enrichment for the PGW and CSG streams
o Triggers to insert old values into audit tables on update and delete, and new values on insert, for the entire Profile Manager database
o A PL/SQL function to check whether a process already exists and is running in the Mediation database
• Fixed the issues of:
o Overlapping calls in Voice Mediation
o ERROR_STORED in the BL_STATISTICS_COUNT table not being captured for COC and CTC records when validation errors were sent to the error pool

Description

Project: Kerberos (Worked in HP Networking Labs NWL Team)
Client: Hewlett Packard (HP)
Duration: 1.2 Years
Project Location: Mumbai
Technologies Used: C, C++, Multithreading, HP-UX, Clearcase, Clearcase Jazz, Subversion (SVN), Gdb & QUIX
Team Size: 40
Description:
Kerberos used secret-key cryptography to let entities communicating over a network prove their identity to each other while preventing eavesdropping and replay attacks. It provided data-stream integrity (detection of modification) and secrecy (prevention of unauthorized reading) using encryption standards such as DES, 3DES, and AES. Kerberos was based on the concept of a trusted third party that performed secure verification of users and services; in the Kerberos protocol, this trusted third party was called the Key Distribution Center (KDC).
Kerberos was used to verify that users and the network services they used really were who and what they claimed to be. To accomplish this, a trusted Kerberos server issued tickets to users. These tickets, which had a limited lifespan, were stored in a user's credential cache and could be used in place of the standard username-and-password authentication mechanism. A ticket could then be embedded in virtually any other network protocol, letting the processes implementing that protocol be sure about the identity of the principals involved.
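The ticket idea can be illustrated with a stdlib-only toy. This is emphatically not the Kerberos wire protocol: the key handling, ticket format, and principal names below are invented purely to show how a signed, time-limited credential replaces repeated password checks:

```python
# Toy "ticket": a KDC signs (principal, expiry) with a secret key; a service
# sharing that key can verify the ticket without ever seeing the password.
import hashlib
import hmac
import time

KDC_KEY = b"shared-secret"  # invented; real KDCs use per-service keys

def issue_ticket(principal, lifetime_s, now=None):
    """Issue a principal|expiry|MAC ticket with a limited lifespan."""
    now = time.time() if now is None else now
    expiry = int(now + lifetime_s)
    msg = f"{principal}|{expiry}".encode()
    mac = hmac.new(KDC_KEY, msg, hashlib.sha256).hexdigest()
    return f"{principal}|{expiry}|{mac}"

def verify_ticket(ticket, now=None):
    """Accept only tickets with a valid MAC that have not expired."""
    now = time.time() if now is None else now
    principal, expiry, mac = ticket.rsplit("|", 2)
    msg = f"{principal}|{expiry}".encode()
    good = hmac.compare_digest(mac, hmac.new(KDC_KEY, msg, hashlib.sha256).hexdigest())
    return good and now < int(expiry)

t = issue_ticket("alice@EXAMPLE.COM", lifetime_s=3600)
print(verify_ticket(t))                          # valid and unexpired
print(verify_ticket(t, now=time.time() + 7200))  # past the lifespan: rejected
```

The limited lifespan and tamper-evident MAC are the two properties the real credential cache relies on.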
Roles:
• Worked on:
o KDCD ticket expired issue for debugging of threads synchronization problem
o Kerberos Client Security Fix Release
o Kerberos enhancement for the Kdestroy utility using IOCTL APIs
o Migration of DCE and Kerberos Source code from Rational ClearCase and/or jazz ClearCase to iSVN (Subversion)
o Kerberos Server Release which included a number of bug fixes
o Implementation of deny local and minimum_UID feature in PAM Kerberos

Description

Project: SMSC (Short Message Service Centre) known as Sandesh
Client: Reliance Communications Ltd. & Reliance Telecom Ltd.
Duration: 3 Years
Project Location: Mumbai
Technologies Used: Solaris 9/10, Rational Rose, Sun Studio, Quantify, Purify, Clear Case, Clear Quest, Ethereal, C++, SMPP, SNMP, Socket Programming, Multithreading, STLs, SS7, SIGTRAN, Signalware, Ulticom, SOAP & OSA Parlay
Team Size: 25
Description:
A converged SMSC used for providing generic messaging service in the Reliance network, supporting the SMPP, IS41, and GSM interfaces. SANDESH received messages over the IS41, MAP, and SMPP interfaces and delivered them over the same interfaces to Reliance and non-Reliance subscribers and applications, storing messages in persistent memory when needed. SANDESH supported an SNMP interface for interfacing with Reliance's NMS, had an MML interface called SUI that could be used for provisioning, and supported an RTC protocol interface towards the prepaid billing system for real-time charging of prepaid subscribers.
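A minimal sketch of the interface-routing idea (the routing rules and message fields here are invented, not SANDESH internals):

```python
# Toy SMSC routing: pick the delivery interface for a message based on the
# destination type and network. Rules and fields are illustrative only.

def route(message):
    """SMPP for application destinations, MAP for GSM subscribers,
    IS41 for CDMA subscribers."""
    if message["dest_type"] == "application":
        return "SMPP"
    return "MAP" if message["dest_network"] == "GSM" else "IS41"

msg = {"dest_type": "subscriber", "dest_network": "GSM", "to": "9820000000"}
print(route(msg))  # MAP
```

A real SMSC layers retries, persistent storage, and charging hooks around this dispatch, but the per-destination interface selection is the convergence point the description refers to.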
Roles:
• Developed mobile number portability implementation, SMS routing & protocol switching enhancements
• Developed Single Point code Support for Sandesh
• Implemented a CDR Agent module for generating CDRs for the cell broadcast application
• Performed SIGTRAN stack provisioning and lab set-up for doing Unit and Integration Testing
• Developed Real time Charging of GSM Numbers, International SMS, CRBT, Fraud Lockouts Implementation, 8 level Series Charging Implementation & 91XX Series for Aircel Number Range
• Executed design of running multiple application processes to increase traffic handling capability in Pay Channel
• Resolved timeouts problem in pay channel and dual debiting production issue
