Mayank R.

Software Engineer

Ahmedabad, India

Experience: 5 Years

About Me

I am an AI/ML developer, currently working on a network security and monitoring product. I like to work on whatever strikes me as innovative and interesting. I specialize in web API design and development, web scraping, networking, Big data an...

I have a master's degree in Computer Systems, have worked with many technologies, and am used to working on high-performance, high-scalability projects. I am an expert in network security programming and data analysis.

- Responsible for working on a range of projects, designing appealing websites, and interacting daily with graphic designers and back-end developers.
- Developing and maintaining the front-end functionality of websites.
- Participating in discussions with clients to clarify their requirements.
- Simultaneously managing several databases and reporting tools.
- Contacting external webmasters to confirm link placements.
- Handling Java development, including design and troubleshooting of applications, and conducting gap analysis and validation of needs in conjunction with onsite and offsite teams.
- Improving data processing and storage throughput by using the Hadoop framework for distributed computing across a cluster of up to twenty-five nodes.
- Building customized in-memory indexes for high-performance information retrieval using Apache Lucene and Apache Solr, as well as an optimized graph database with up to 10 billion edges.
- Applying machine learning algorithms to identify the most significant features across different datasets.
- Creating proofs of concept from scratch, illustrating how these data integration techniques can meet specific business requirements while reducing cost and time to market.
- Primarily used Scala to write cloud computing applications.
- Worked with cutting-edge cloud technology using Heroku and Hadoop.
- Also utilized Java, Scala, and Python for cloud engineering.
- Configured web servers (IIS, nginx) to enable caching, CDN application servers, and load balancers.
- Deployed and supported Memcached on AWS ElastiCache.
- Involved in the maintenance and performance tuning of Amazon EC2 instances.
- Diagnosed issues with Java applications running in Tomcat or JBoss.
- Involved in designing and developing on Amazon EC2, Amazon S3, Amazon SimpleDB, Amazon RDS, Amazon Elastic Load Balancing, Amazon SQS, and other AWS infrastructure services.
- Applied AWS data backup techniques (snapshots, AMI creation), along with data-at-rest security within AWS.
- Developed a Python-based RESTful API for a CRM system using Flask, SQLAlchemy, and PostgreSQL (see the sketch after this list).
- Translated designer mock-ups and wireframes into an AngularJS front end.
- Knowledge of Node.js and its frameworks, such as Express and StrongLoop.
- Good understanding of server-side templating languages such as Jade and EJS.
- Implemented gRPC to connect Java and Python for transferring data.
- Implemented Vert.x to connect Java and R for transferring data.
- Knowledge of network protocols such as TCP/IP and UDP.
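A minimal sketch of the Flask/SQLAlchemy CRM API mentioned in the list above. The Customer model, routes, and SQLite URI are illustrative assumptions, not the actual schema (the real system used PostgreSQL):

from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# SQLite keeps the sketch self-contained; production would point at PostgreSQL.
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///crm.db"
db = SQLAlchemy(app)

class Customer(db.Model):
    # Hypothetical model; the actual CRM schema is not described in the source.
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    email = db.Column(db.String(120), unique=True)

@app.route("/customers", methods=["POST"])
def create_customer():
    data = request.get_json()
    customer = Customer(name=data["name"], email=data.get("email"))
    db.session.add(customer)
    db.session.commit()
    return jsonify({"id": customer.id}), 201

@app.route("/customers/<int:customer_id>", methods=["GET"])
def get_customer(customer_id):
    customer = db.session.get(Customer, customer_id)
    if customer is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": customer.id, "name": customer.name, "email": customer.email})

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run()

Running the file starts a development server exposing POST /customers and GET /customers/<id>.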


IT Skills

- Java Frameworks: Spring, Spring Boot, Hibernate, Play, Groovy and Grails, Apache Ant, EJB, JasperReports, JavaFX, Servlets, JSP.
- Python: Django, Flask, Falcon, Pyramid.
- Big Data Analysis: Hadoop, Apache Spark, Heroku, HBase, Cassandra, Hive, Highcharts, R programming, Sqoop, ZooKeeper.
- Cloud Computing: AWS
- Databases: Oracle, MySQL, PostgreSQL, MongoDB, SQLite, Memcached, MariaDB, H2.
- Scala Framework: Play
- Ruby on Rails
- Docker
- Machine Learning: Python, R programming, MATLAB
- Natural Language Processing: NLTK, OpenNLP
- Artificial Intelligence: TensorFlow, PyTorch, Deeplearning4j
 


Portfolio Projects

Description

  • The original goal of log classification was to develop an automated means of notifying users when problems occur with their applications, based on the information contained in their application logs. Unfortunately, logs are full of warnings and even errors that are safe to ignore, so simple find-the-keyword methods are insufficient. In addition, log volumes grow constantly, and no human will, or can, monitor them all. In short, log classification employed natural language processing tools for text encoding and machine learning methods for automated anomaly detection, to build a tool that helps developers perform root cause analysis more quickly on failing applications: it highlights the logs most likely to provide insight into the problem and can raise an alert if an application starts producing a high frequency of anomalous logs (a minimal sketch of the idea follows).
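A minimal sketch of that pipeline, assuming scikit-learn as the toolkit (the source does not name the libraries actually used): encode each log line as a TF-IDF vector, then let an unsupervised detector flag the unusual ones.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

logs = [
    "INFO connection established to db01",
    "INFO connection established to db02",
    "WARN retrying request, attempt 2",
    "ERROR segfault in worker 7: core dumped",  # the odd one out
]

# Text encoding: each log line becomes a TF-IDF vector.
vectors = TfidfVectorizer().fit_transform(logs).toarray()
# Anomaly detection: IsolationForest scores lines by how easily they are isolated.
detector = IsolationForest(contamination=0.25, random_state=0).fit(vectors)
for line, label in zip(logs, detector.predict(vectors)):
    if label == -1:  # -1 marks an anomaly, +1 an inlier
        print("anomalous:", line)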


Description

  • Log Reduce groups messages with similar structures and common repeated text strings into signatures, providing a quick investigative view, or snapshot, for the keywords or time range provided.
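A toy version of the signature idea: mask the variable parts of each message (IP addresses, hex literals, numbers) so that structurally identical lines collapse into one group. The masking rules are illustrative assumptions, not the product's actual algorithm.

import re
from collections import Counter

def signature(line):
    # Replace variable tokens with placeholders; order matters (IPs before numbers).
    line = re.sub(r"\b\d+(\.\d+){3}\b", "<IP>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

logs = [
    "user 1042 logged in from 10.0.0.5",
    "user 2077 logged in from 10.0.0.9",
    "disk /dev/sda1 is 97% full",
]
groups = Counter(signature(l) for l in logs)
for sig, count in groups.most_common():
    print(count, sig)  # "user <NUM> logged in from <IP>" covers two raw lines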


Description

NLP search technology is much more than keyword lookup in a dictionary. It is a real-time parser that examines the search query to understand meaning, intent, and context. In seconds, it then produces highly efficient queries, accurate results, and powerful visualizations (a toy illustration follows the feature list below).

  • Natural language processing engine enables plain-English search.
  • Automatically generates a highly optimized query.
  • Intuitive search interface and powerful search suggestions.
  • Creates multiple reports and visualizations from a single search.
  • Rich search results are returned in real time.
  • Enables correlations across multiple data sources.
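As a toy illustration of the plain-English-to-query step (the grammar and field names here are invented for the example; the real parser is far richer):

import re

def parse(question):
    """Turn a plain-English question into a structured query dict."""
    query = {"filters": {}, "range": None}
    severity = re.search(r"\b(error|warn|info)s?\b", question, re.I)
    if severity:
        query["filters"]["severity"] = severity.group(1).lower()
    window = re.search(r"last (\d+) (minute|hour|day)s?", question, re.I)
    if window:
        query["range"] = "now-{}{}".format(window.group(1), window.group(2)[0])
    return query

print(parse("show me errors from the last 2 hours"))
# {'filters': {'severity': 'error'}, 'range': 'now-2h'}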


Description

  • This incremental clustering is designed using cluster metadata captured from the K-means results. The incremental approach outperforms standard K-means as the number of clusters increases, the number of objects increases, and the cluster radius decreases, and in particular when new data objects are inserted into an existing database. In the incremental approach, K-means is applied to a dynamic database whose data may be frequently updated; the new cluster centers are computed directly from the means of the existing clusters and the newly arrived data, instead of rerunning the K-means algorithm from scratch. The project thus characterizes the percentage of delta change in the original database up to which incremental K-means behaves better than full K-means (a minimal sketch of the center update follows).
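A minimal numpy sketch of that center update, keeping only each cluster's mean and size as metadata (the nearest-center assignment rule and variable names are assumptions for illustration):

import numpy as np

def incremental_update(centers, counts, new_points):
    """Fold new points into the nearest cluster using stored (mean, size) metadata."""
    for x in new_points:
        j = int(np.argmin(np.linalg.norm(centers - x, axis=1)))  # nearest center
        counts[j] += 1
        centers[j] += (x - centers[j]) / counts[j]  # running-mean update
    return centers, counts

centers = np.array([[0.0, 0.0], [10.0, 10.0]])  # means from a previous K-means run
counts = np.array([100, 100])                   # cluster sizes (the metadata)
centers, counts = incremental_update(centers, counts,
                                     np.array([[0.5, 0.2], [9.8, 10.1]]))
print(centers)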


Description

  • Document clustering analyzes written language in unstructured text to place documents into topically related groups or clusters. Documents such as web pages are automatically grouped so that pages about the same concepts land in the same cluster and pages about different concepts land in different clusters. This is performed in an unsupervised manner: there is no manual labeling of the documents with concepts, topics, or other semantic information; all semantic information is derived from the documents themselves. The core concept that makes this possible is the definition of a similarity between two documents (illustrated in the sketch after this list). An algorithm uses this similarity measure and optimizes it so that the most similar documents are placed together.
  • The K-tree algorithm uses the k-means algorithm to perform splits in its tree structure.
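The similarity measure at the heart of this, sketched with TF-IDF vectors and cosine similarity (scikit-learn is an assumed choice for the illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the stock market fell sharply on inflation fears",
    "shares dropped as inflation worried investors",
    "the team won the championship game last night",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
print(cosine_similarity(tfidf).round(2))
# Docs 0 and 1 (both about inflation) score higher with each other than with doc 2.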


Description

  • Anomaly detection is an algorithmic feature that identifies when a metric is behaving differently than it has in the past, taking into account trends, seasonal day-of-week, and time-of-day patterns. It is well-suited for metrics with strong trends and recurring patterns that are hard to monitor with threshold-based alerting.
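A stripped-down version of the idea, comparing each new value against the history for the same hour of the week; the 3-MAD band is an illustrative threshold, not the product's actual model:

import numpy as np

def is_anomalous(history, hour_of_week, value, k=3.0):
    """history maps an hour-of-week slot (0..167) to past values for that slot."""
    past = np.asarray(history[hour_of_week], dtype=float)
    median = np.median(past)
    mad = np.median(np.abs(past - median)) or 1e-9  # robust spread; avoid /0
    return abs(value - median) > k * mad

history = {9: [120, 118, 125, 122, 119]}  # requests/sec seen at Monday 09:00
print(is_anomalous(history, 9, 121))  # False: in line with the seasonal pattern
print(is_anomalous(history, 9, 300))  # True: far above the usual level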


Description

  • Outlier detection is an algorithmic feature that lets you detect when a specific group is behaving differently from its peers. For example, you could detect that one web server in a pool is processing an unusual number of requests, or that significantly more 500 errors are occurring in one AWS availability zone than in the others.
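A minimal sketch of the peer-group comparison; the 3-MAD cutoff is an assumption for illustration:

import numpy as np

def peer_outliers(metrics, k=3.0):
    """Return members whose metric sits more than k MADs from the peer median."""
    values = np.array(list(metrics.values()), dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1e-9
    return [host for host, v in metrics.items() if abs(v - median) > k * mad]

error_rates = {"web01": 0.8, "web02": 1.1, "web03": 0.9, "web04": 9.7}
print(peer_outliers(error_rates))  # ['web04'] is behaving unlike its peers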


Description

  • Forecasting is an algorithmic feature that allows you to predict where a metric is heading in the future. It is well-suited for metrics with strong trends or recurring patterns. For example, if your application starts logging at a faster rate, forecasts can alert you a week before a disk fills up, giving you adequate time to update your log rotation policy. Or you can forecast business metrics, such as user sign-ups, to track progress against your quarterly targets.
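A stripped-down version of the disk-fill example, fitting a straight line to recent usage and extrapolating the crossing point (real forecasting would also model seasonality; the data here is synthetic):

import numpy as np

days = np.arange(14)  # two weeks of daily disk-usage samples (synthetic data)
used_pct = 60 + 1.5 * days + np.random.default_rng(0).normal(0, 0.3, 14)

slope, intercept = np.polyfit(days, used_pct, 1)  # fit the linear trend
days_until_full = (100 - used_pct[-1]) / slope    # extrapolate to 100% usage
print("disk projected to fill in about %.1f days" % days_until_full)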

