Nishant V.

Experienced Blockchain Developer with 8+ years of overall IT and software development experience.

Hyderabad, India

Experience: 8 Years

44841.2 USD / Year

  • Availability: Immediate

About Me

Profile Summary

  • Blockchain software engineer with 8 years of software development experience in BFSI and fin-tech.
  • Strong programming skills in Solidity, Python, and JavaScript.
  • Hands-on experience building smart contracts on Ethereum using tools like Truffle, Ganache, Web3, and MetaMask.
  • Understanding of the smart contract development lifecycle.
  • Excellent knowledge of consensus algorithms such as Proof of Work (PoW) and Proof of Stake (PoS), and of cryptography.
  • Deep knowledge of decentralized finance (DeFi), yield farming, and lending products.
  • Passionate about building world-class Dapps on the blockchain.
  • Extensive experience in Agile and DevOps, along with version control tools like Git.
  • Experience working collaboratively in a global team.
  • Proactive communicator with excellent communication skills.

Portfolio Projects

Description

DeFi (Decentralised Finance) Bank Dapp - Ethereum, Smart contracts

  • Decentralised finance (DeFi) bank Dapp built around traditional banking products.
  • Built robust smart contracts in Solidity.
  • Set up the end-to-end project on a local blockchain using Ganache.
  • Deployed the token and DeFi bank smart contracts to Ethereum.
  • Built a DeFi Dapp that lets users connect to and interact with the smart contracts (see the interaction sketch after the skills list below).
  • Created unit tests in JavaScript to test the smart contracts.
  • Code base and deployed Dapp available on GitHub.

https://github.com/itznishant?tab=repositories

Skills: Solidity, DeFi, Dapps, Smart Contracts, Ethereum, Git, Hardhat, React JS, Web3.JS, Ganache, Metamask
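
A minimal sketch of the kind of contract interaction behind this Dapp, written here in Python with web3.py against a local Ganache node; the project itself uses Web3.js, Truffle, and MetaMask, so the contract address, ABI fragment, and deposit() function below are illustrative assumptions rather than the actual project code.

```python
# Sketch: interacting with a DeFi bank contract on a local Ganache node.
# Assumes web3.py (v6+) is installed and Ganache is running on its default port.
# CONTRACT_ADDRESS, the ABI fragment and deposit() are hypothetical placeholders.
from web3 import Web3

GANACHE_URL = "http://127.0.0.1:8545"
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [
    {
        "name": "deposit",
        "type": "function",
        "stateMutability": "payable",
        "inputs": [],
        "outputs": [],
    }
]

w3 = Web3(Web3.HTTPProvider(GANACHE_URL))
account = w3.eth.accounts[0]  # first unlocked Ganache account

bank = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

# Deposit 1 ETH into the hypothetical DeFi bank contract.
tx_hash = bank.functions.deposit().transact(
    {"from": account, "value": w3.to_wei(1, "ether")}
)
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Deposit mined in block", receipt.blockNumber)
```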

Description

Instant Crypto Swap Exchange Dapp - Ethereum, Smart Contracts

  • Instant crypto swap exchange: trades execute immediately, with no waiting for orders to fill.
  • Built robust smart contracts in Solidity.
  • Test-driven development in JavaScript using the Mocha test framework and Chai assertion library.
  • Basic tests cover token and token sale smart contract deployment.
  • App front end built with React JS components and HTML.
  • Displays ETH and token balances in the app UI (see the balance-reading sketch after this list).
  • Displays account details in the UI along with a custom avatar.
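
A minimal sketch of reading the ETH and token balances shown in the UI, written here in Python with web3.py (the project's front end uses Web3.js in React); the token address and the minimal ERC-20 ABI are illustrative assumptions.

```python
# Sketch: reading the ETH and token balances displayed in the exchange UI.
# The real UI uses Web3.js in React; this is a Python/web3.py analogue.
# TOKEN_ADDRESS and the minimal ERC-20 ABI below are illustrative.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Ganache node
account = w3.eth.accounts[0]

# Minimal ERC-20 ABI: just balanceOf and decimals.
ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

token = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC20_ABI)

eth_balance = w3.from_wei(w3.eth.get_balance(account), "ether")
raw_balance = token.functions.balanceOf(account).call()
token_balance = raw_balance / 10 ** token.functions.decimals().call()

print(f"ETH balance:   {eth_balance}")
print(f"Token balance: {token_balance}")
```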

Description

Election Voting DApp using Ethereum Smart Contracts

  • Election voting DApp (decentralised application) built on the Ethereum blockchain.
  • Smart contract for the project written in Solidity.
  • The DApp allows accounts to vote for listed candidates in an election.
  • Front-end Web3 app interface to vote for listed candidates and view election results.
  • Tests validate against double voting and voting for invalid candidates (see the sketch after the skills list below).
  • Smoke tests / unit tests written in JavaScript using the Truffle test framework; smart contracts deployed with truffle migrate.
  • Accounts integrated using MetaMask.

Skills: Solidity, JavaScript, Smart Contracts, Ethereum, Git, Truffle framework, Ganache, Metamask, HTML, Web3
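
A minimal sketch of the double-voting check, expressed here as a Python/web3.py test rather than the project's Truffle/JavaScript tests; the vote(candidateId) function, ABI fragment, and contract address are illustrative assumptions.

```python
# Sketch: the double-voting check, written as a Python/web3.py test.
# The project's actual tests use Truffle and JavaScript; the function name,
# ABI fragment and contract address here are illustrative.
from web3 import Web3
from web3.exceptions import ContractLogicError

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Ganache node
voter = w3.eth.accounts[1]

ELECTION_ABI = [
    {"name": "vote", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "candidateId", "type": "uint256"}], "outputs": []},
]
ELECTION_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
election = w3.eth.contract(address=ELECTION_ADDRESS, abi=ELECTION_ABI)

def test_double_vote_is_rejected():
    # First vote for candidate 1 should succeed.
    tx = election.functions.vote(1).transact({"from": voter})
    w3.eth.wait_for_transaction_receipt(tx)

    # A second vote from the same account should revert.
    try:
        election.functions.vote(1).transact({"from": voter})
    except ContractLogicError:
        return  # expected: the contract rejects double voting
    raise AssertionError("second vote from the same account did not revert")
```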

Description

GSNA GTRF CMB Scoring:

Description:

GTRF is a trade finance product live in multiple countries. The global scoring model for GTRF tracks transactions and customer behavior based on typologies and scenarios to identify risky customers and counterparties, with the objective of reducing suspicious cases as well as false positives. This is extremely critical to the business, as it represents the customers at highest risk.

Goals:

Reduce suspicious cases / false positives of the scoring model by using multi-dimensional customer data.

EC (Entity Consolidation):

Finding over-linked / under-linked entities for customers in multiple countries.

Responsibilities:

  • Completion of assigned sprint development tasks.
  • Development / enhancement of existing scenarios and typologies.
  • Extensively used Python and Scala on Apache Spark over the enterprise data lake (see the PySpark sketch after this list).
  • Collaborating with domain SMEs to tune thresholds via BTL/ATL testing.
  • Presenting demos to business stakeholders.
  • Involved in the IMR review process to prepare model documentation.
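
A minimal PySpark sketch of the kind of scenario logic described above; the column names, thresholds, and data lake path are illustrative assumptions, and the real typologies and thresholds are tuned with domain SMEs through BTL/ATL testing.

```python
# Sketch: a simplified transaction-monitoring scenario in PySpark.
# Column names (customer_id, txn_amount, txn_country), the thresholds and
# the data lake path are illustrative, not the production model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gtrf-scoring-sketch").getOrCreate()

txns = spark.read.parquet("/datalake/gtrf/transactions")  # hypothetical path

# Scenario: flag customers with unusually high cross-border volume.
scored = (
    txns.groupBy("customer_id")
    .agg(
        F.sum("txn_amount").alias("total_amount"),
        F.countDistinct("txn_country").alias("n_countries"),
    )
    .withColumn(
        "risk_flag",
        (F.col("total_amount") > 1_000_000) & (F.col("n_countries") >= 5),
    )
)

scored.filter(F.col("risk_flag")).show()
```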

https://www.linkedin.com/pulse/hyderabad-neighborhood-analysis-capstone-project-vemulakonda

Geospatial analysis of Hyderabad neighborhoods using Python for a retail client.

Description

A retail firm wants to set up supermarkets in Hyderabad city but is not sure about the location. Chosen locations should ideally have a considerable population and be near work centers or residential districts, for easy access by a large number of citizens. The business objective is to find suitable neighborhoods in which to open stores.

  • Extracting house price data for neighborhoods using web scraping libraries like BeautifulSoup and converting it to a data frame using Pandas.
  • Getting geographical coordinates (latitude, longitude) for neighborhoods from GeoJSON data and creating maps for visualization using Folium.
  • Filtering relevant neighborhoods using Python Boolean filtering.
  • Capturing venue information for neighborhoods using the Foursquare API.
  • Converting categorical venue categories to numeric via one-hot encoding.
  • Finding the optimal k using the elbow method and clustering neighborhoods on their most common venue data using k-means clustering (see the sketch after this list).
  • Visualizing the clusters on the Folium map, identifying patterns, and suggesting suitable neighborhoods after examining the clusters.
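
A minimal sketch of the clustering step described above; the input file, column names, and the choice of k = 5 are illustrative assumptions, and the real pipeline builds the venue table from BeautifulSoup scraping and the Foursquare API.

```python
# Sketch: clustering neighborhoods by venue mix, as described above.
# The input DataFrame layout (neighborhood, venue_category, lat, lon) and
# the file name are illustrative assumptions.
import pandas as pd
import folium
from sklearn.cluster import KMeans

venues = pd.read_csv("hyderabad_venues.csv")  # hypothetical scraped/Foursquare data

# One-hot encode venue categories and average them per neighborhood.
onehot = pd.get_dummies(venues["venue_category"])
onehot["neighborhood"] = venues["neighborhood"]
profile = onehot.groupby("neighborhood").mean()

# Elbow method: inspect inertia for a range of k values.
for k in range(2, 10):
    inertia = KMeans(n_clusters=k, random_state=0, n_init=10).fit(profile).inertia_
    print(k, inertia)

# Fit the chosen k (say 5) and attach cluster labels.
kmeans = KMeans(n_clusters=5, random_state=0, n_init=10).fit(profile)
profile["cluster"] = kmeans.labels_

# Plot neighborhoods on a Folium map, colored by cluster.
coords = venues.groupby("neighborhood")[["lat", "lon"]].mean()
m = folium.Map(location=[17.385, 78.4867], zoom_start=11)  # Hyderabad
colors = ["red", "blue", "green", "purple", "orange"]
for name, row in profile.join(coords).iterrows():
    folium.CircleMarker(
        location=[row["lat"], row["lon"]],
        radius=6,
        color=colors[int(row["cluster"])],
        fill=True,
        popup=name,
    ).add_to(m)
m.save("hyderabad_clusters.html")
```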

Business problem: Predict the customers likely to churn in the next quarter.

Description

Customer churn analytics for a UK-based retail firm.

Business problem: Predict the customers likely to churn based on historical transactional data and demographic data.

Responsibilities:

  • Understanding the business problem to identify the outcome to be achieved.
  • Extracting sample data from AWS S3 into Python using pandas.
  • Data cleaning by renaming existing columns and dropping unnecessary columns.
  • Analysis of numeric variables using the pandas describe method.
  • Identifying key columns (data attributes) using correlation plots against the target variable.
  • Creating derived features from the data (e.g. frequency, recency, number of transactions) to increase the predictive power of the churn model.
  • Identifying outliers using box plots; treating missing values (NaN) with the dropna and fillna functions, with data transformations wherever required to improve data quality.
  • Model building using sklearn and evaluation using sklearn.metrics such as precision/recall and log loss (see the sketch after this list).
  • Presenting findings to the business through graphs and plots.
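
A minimal sketch of the feature derivation and model evaluation described above; the column names, file paths, and the choice of logistic regression are illustrative assumptions (reading directly from S3 with pandas also requires s3fs).

```python
# Sketch: deriving recency/frequency/monetary features and fitting a churn
# model. Column names (customer_id, invoice_date, amount, churned) and the
# S3 paths are illustrative; the classifier choice is an assumption.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, log_loss

txns = pd.read_csv("s3://bucket/retail_transactions.csv")   # hypothetical path
labels = pd.read_csv("s3://bucket/churn_labels.csv")        # customer_id, churned

# Derived features: recency, frequency and monetary value per customer.
txns["invoice_date"] = pd.to_datetime(txns["invoice_date"])
snapshot = txns["invoice_date"].max()
rfm = (
    txns.groupby("customer_id")
    .agg(
        recency=("invoice_date", lambda d: (snapshot - d.max()).days),
        frequency=("invoice_date", "count"),
        monetary=("amount", "sum"),
    )
    .reset_index()
)

data = rfm.merge(labels, on="customer_id").dropna()
X, y = data[["recency", "frequency", "monetary"]], data["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))        # precision/recall
print("log loss:", log_loss(y_test, model.predict_proba(X_test)))
```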

Auto-mapping leverages machine learning to classify documents.

Description

Responsibilities:

  • Extracting text data from documents into Python.
  • Building an ML classification model from scratch to classify documents, in an individual contributor (IC) role.
  • Training the model to a baseline acceptable evaluation metric score.
  • Collaborating with the functional team on training data and labels.

ML Algorithm for Classification of documents:

Features:

  • Multiclass classification models used.
  • Train/test split used to reduce overfitting of the model on training data.
  • Python library used for data extraction from PDF documents to text.
  • Data randomisation with the Python random function.
  • Wide array of Python data structures used, including lists, tuples, and sets.
  • Made use of string functions, regular expressions, and lambda (single-line) functions for data cleaning.
  • Lemmatization (reducing words to their root form) used to improve quality and reduce the number of text features.
  • CountVectorizer and a fitted vocabulary used for feature selection.
  • Model trained to predict the class with a 92%+ score.
  • Evaluation using a confusion matrix, classification report, and accuracy score (see the pipeline sketch after this list).
  • Matplotlib and seaborn libraries used to visualize data as graphs/charts and as a confusion matrix.
  • Predicted probabilities returned for each document label along with the label in the output.
  • Model deployed as a web service API and consumed from the application.
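
A minimal sketch of the classification pipeline described above; the stand-in texts and labels, the logistic regression classifier, and the omission of the PDF extraction and lemmatization steps are all illustrative simplifications.

```python
# Sketch: bag-of-words document classification with evaluation, using
# stand-in texts/labels. The real pipeline extracts text from PDFs and
# lemmatizes it before vectorizing; the classifier choice is an assumption.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score

texts = [
    "invoice for goods shipped last month",
    "tax invoice issued to the customer",
    "employment agreement between the parties",
    "master service agreement and terms",
    "monthly account statement of transactions",
    "quarterly statement of account balances",
]
labels = ["invoice", "invoice", "contract", "contract", "statement", "statement"]

# Bag-of-words features over the (lemmatized) document text.
vectorizer = CountVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(texts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)

print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions, zero_division=0))
print("accuracy:", accuracy_score(y_test, predictions))

# Predicted probability per label, as returned to the consuming application.
print(dict(zip(model.classes_, model.predict_proba(X_test[:1])[0].round(3))))
```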
