Abdul Waheed
- +923115744031
- awaheed.ds@gmail.com
- abdulwaheed01.github.io
- Islamabad, Pakistan
Objective
Seeking a responsible position in a prestigious organization enabling me to utilize my talents and experience, with a willingness to develop new skills & grow with the company.
Personal Profile
Experienced software engineer with a passion for developing innovative programs that improve the efficiency and effectiveness of organizations. Well-versed in technology and in writing code to create systems that are reliable and user-friendly. Skilled leader with a proven ability to motivate, educate, and manage a team of professionals to build software and effectively track changes. Confident communicator, strategic thinker, and innovative creator who develops software customized to meet a company's organizational needs, highlight its core competencies, and further its success.
Work Experiences
Full Stack Developer
Responsibilities:
- Developed the entire frontend, mobile apps, and backend modules using AngularJS, Ionic, Node.js, and MongoDB.
- Developed remote integrations with third-party platforms using RESTful web services.
- Implemented code to perform CRUD operations on MongoDB using the PyMongo module (see the sketch after this list).
- Involved in analysis and design of the application features.
- Actively involved in developing the Create, Read, Update, and Delete (CRUD) methods in Active Record.
- Developed the entire frontend and backend modules using AngularJS on Node.js.
- Designed interactive web pages for the front end of the application using HTML, JavaScript, AngularJS, jQuery, and AJAX, and implemented CSS for an improved look and feel.
- Implemented the MVC architecture in developing the web application with the Django framework.
- Involved in code reviews using GitHub pull requests, reducing bugs, improving code quality, and increasing knowledge sharing.
- Experienced in debugging and troubleshooting programming-related issues.
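The CRUD work with PyMongo mentioned above comes down to a handful of collection methods; the following is a minimal sketch, with database, collection, and field names that are purely illustrative rather than taken from the actual projects.

```python
# Minimal sketch of CRUD operations with PyMongo; the database, collection,
# and field names here are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client["app_db"]["users"]

# Create
user_id = users.insert_one({"name": "Jane", "email": "jane@example.com"}).inserted_id

# Read
user = users.find_one({"_id": user_id})

# Update
users.update_one({"_id": user_id}, {"$set": {"email": "jane.doe@example.com"}})

# Delete
users.delete_one({"_id": user_id})
```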
Full Stack Developer
Responsibilities:
- Developed the entire frontend and backend modules using Python, Django, jQuery, and JavaScript.
- Developed remote integrations with third-party platforms using RESTful web services.
- Engaged in video and image processing tasks to enhance application functionality.
- Created custom thumbnails for videos.
- Actively involved in developing the Create, Read, Update, and Delete (CRUD) methods in Active Record.
- Designed interactive web pages for the front end of the application using HTML, JavaScript, AngularJS, jQuery, and AJAX, and implemented CSS for an improved look and feel.
- Implemented the MVC architecture in developing the web application with the Django framework.
- Involved in code reviews using GitHub pull requests, reducing bugs, improving code quality, and increasing knowledge sharing.
- Experienced in debugging and troubleshooting programming-related issues.
Python Developer
Responsibilities:
- Developed remote integrations with third-party platforms using RESTful web services.
- Implemented code to perform CRUD operations on MongoDB using the PyMongo module.
- Worked with the MySQL database, writing simple queries and stored procedures for normalization.
- Involved in analysis and design of the application features.
- Actively involved in developing the Create, Read, Update, and Delete (CRUD) methods in Active Record.
- Developed the entire frontend and backend modules using Python on the Django web framework.
- Designed interactive web pages for the front end of the application using HTML, JavaScript, AngularJS, jQuery, and AJAX, and implemented CSS for an improved look and feel.
- Implemented the MVC architecture in developing the web application with the Django framework.
- Involved in code reviews using GitHub pull requests, reducing bugs, improving code quality, and increasing knowledge sharing.
- Experienced in debugging and troubleshooting programming-related issues.
Python Developer
Responsibilities:
- Involved in the full software development life cycle (SDLC): requirements gathering, analysis, detailed design, development, system testing, and user acceptance testing.
- Actively involved in developing the Create, Read, Update, and Delete (CRUD) methods in Active Record.
- Developed the entire frontend and backend modules using Python on the Django web framework.
- Designed interactive web pages for the front end of the application using HTML, JavaScript, AngularJS, jQuery, and AJAX, and implemented CSS for an improved look and feel.
- Knowledge of cross-browser and cross-platform development of HTML- and CSS-based websites.
- Implemented the MVC architecture in developing the web application with the Django framework.
- Involved in code reviews using GitHub pull requests, reducing bugs, improving code quality, and increasing knowledge sharing.
- Developed remote integrations with third-party platforms using RESTful web services.
- Implemented code to perform CRUD operations on MongoDB using the PyMongo module.
- Worked with the MySQL database, writing simple queries and stored procedures for normalization.
- Involved in analysis and design of the application features.
Projects
The goal of the project is a dashboard that helps TRA users explore and analyze different activities.
- Summary of the different reports.
- Graphs for the different reports.
- Reports shown in tabular form.
- Filter options.
- Login/logout.
- Heat maps for ZIP code analysis.
- Option for users to share their applied filters.
- Users can export reports in PDF, CSV, and Excel formats.
AngularJS
Node.js
TypeScript
HTML5
CSS
Express
MongoDB
Mongoose
OpenShift
Google Geolocation API
Git
Skigit
A video website that brings people and their favorite things together in an intimate way through drama, improvisation, and just plain video fun. Advances in video capture and formatting let people everywhere create compelling video at the click of a smartphone button. These advances make it possible to capture our lives and our favorite things in a matter of seconds, when the moment strikes us, when our creativity is piqued, or when we have an exciting experience we simply can't wait to share with others. Skigit enables you to connect with others and offers you the opportunity to collaborate with businesses and give through your generosity and compassion without spending a single penny. Skigit is not only fun, it makes a difference, and we're excited to have you!
- Registration pages for both general and business users.
- Upload a skigit (video) either from a file or via a direct link such as YouTube (background thumbnail generation is sketched after the stack list below).
- Users can like, favorite, share, plug, and embed a skigit.
- Users can follow other users and share skigits with them.
- Different social login options.
- Users can comment on a skigit and share it on Facebook.
Python
Django
Redis
HTML5
CSS
Celery
PostgreSQL
jQuery
JavaScript
Social auth
Android and iOS apps
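The stack above includes Celery for background work, and the work experience mentions creating custom video thumbnails. The sketch below shows one common way such a task could look, assuming thumbnails are cut with the ffmpeg CLI; the task name, paths, and parameters are illustrative assumptions, not the project's actual code.

```python
# Hypothetical Celery task for generating a video thumbnail with ffmpeg.
# Task name, file paths, and the ffmpeg invocation are illustrative assumptions.
import subprocess
from celery import shared_task

@shared_task
def generate_thumbnail(video_path, thumb_path, at_second=1):
    """Grab a single frame at the given offset and write it as an image file."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-ss", str(at_second),   # seek to the offset
            "-i", video_path,        # input video
            "-vframes", "1",         # capture one frame
            thumb_path,
        ],
        check=True,
    )
    return thumb_path
```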
RedRover App
The goal of the project is an app that helps users explore resorts, nearby attractions, their availability, and other related information. It also lets users book a resort.
- Registration page with user details plus RV information, amenities, and activities.
- Resorts shown on a map.
- Search options with filters.
- Resort detail page covering all relevant information about a resort.
- Nearby attractions for each resort, with a detail view.
- Forum for the local community in which users can create threads and posts and hold discussions.
- Users can book a resort.
- Activity calendar that shows ongoing events.
- Option for users to save their favorite resorts, attractions, and event posts.
- Web view for data entry.
Ionic
AngularJS
Node.js
TypeScript
Figma
HTML5
CSS
Express
MongoDB
Mongoose
OpenShift
Google Geolocation API
Git
Urcloud
The goal of the project is a user portal for consuming services from CloudStack infrastructure.
- Registration page with user details, plan selection, and add-on options (more RAM, disk, CPU, bandwidth); password reset and a confirmation email on registration.
- Recurring payment through the Stripe gateway, with the price depending on the chosen plan (see the sketch after this list).
- On CloudStack, create the user/project via the API, generate the account's API key/secret and store it in the Django DB, and create the user's resource quota based on the plan.
- Dashboard => summary (account limits, list of instances, WAN IP address, etc.) via the API.
- Profile => details, edit address, edit password.
- Billing => plan details, card information from Stripe, add coupon, stop plan.
- Invoices => details, number, date, PDF.
- Tickets => add, show, conversation mode.
- Instance => add (based on zone, flavor, storage, template/ISO, network) via the API.
- Instance => details, edit, stop, start, console, add/delete volumes, add network interface, attach/detach ISO via the API.
- Private network => create, delete via the API.
- Firewall => show public IP, add NAT rules, add firewall rules via the API.
- The site can be easily translated into other languages.
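A minimal sketch of how the recurring Stripe payment described above could be wired up from the Django backend, assuming one recurring Stripe Price per plan; the price IDs, field names, and helper function are hypothetical, and payment-method collection is omitted.

```python
# Hypothetical sketch: create a recurring Stripe subscription whose price
# depends on the chosen plan. Price IDs and names are placeholders, and
# attaching a default payment method is left out for brevity.
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key

# Map each portal plan to a recurring Stripe Price (IDs are made up).
PLAN_TO_PRICE = {
    "basic": "price_basic_monthly",
    "pro": "price_pro_monthly",
}

def subscribe_user(email, plan):
    """Create a Stripe customer and attach a recurring subscription."""
    customer = stripe.Customer.create(email=email)
    subscription = stripe.Subscription.create(
        customer=customer.id,
        items=[{"price": PLAN_TO_PRICE[plan]}],
    )
    return subscription.id
```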
Python
Django
React
HTML5
CSS
CloudStack API
Stripe API
Nginx
Gunicorn
Automatic Summarization of Resumes using Named Entity Recognition (NER)
One of the key challenges faced by HR departments across companies is evaluating a gigantic pile of resumes to shortlist candidates. To add to their burden, applicants' resumes are often excessively detailed, and most of the information is irrelevant to what the evaluator is seeking. With the aim of simplifying this process, our NER model facilitates evaluating a resume at a quick glance, thereby reducing the effort required to shortlist candidates from a pile of resumes.
- 200 resumes were downloaded from an online jobs platform.
- Used the Dataturks online annotation tool to generate the dataset.
- Used 180 resumes for the training set and 20 for testing.
- For model training, used the spaCy NER model with a stochastic gradient descent (SGD) optimizer, 20 epochs, and a 0.2 dropout rate (see the sketch after this list).
- Used accuracy, recall, precision, and F1-score to evaluate performance.
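The training settings above map onto the spaCy 2.x training loop; the sketch below is an assumed reconstruction with a single placeholder training example rather than the actual annotated resumes.

```python
# Minimal sketch of a spaCy 2.x-style NER training loop matching the settings
# above (20 epochs, 0.2 dropout). TRAIN_DATA is a placeholder example, and the
# entity labels are illustrative assumptions.
import random
import spacy

TRAIN_DATA = [
    ("Abdul Waheed, Python developer at Acme Corp",
     {"entities": [(0, 12, "NAME"), (14, 30, "DESIGNATION")]}),
]

nlp = spacy.blank("en")
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)

# Register every entity label seen in the annotations.
for _, annotations in TRAIN_DATA:
    for _, _, label in annotations["entities"]:
        ner.add_label(label)

optimizer = nlp.begin_training()
for epoch in range(20):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        nlp.update([text], [annotations], sgd=optimizer, drop=0.2, losses=losses)
    print(f"epoch {epoch}: {losses}")
```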
Python
Pandas
Matplotlib
NumPy
spaCy
Dataturks
NER
Jupyter Notebook
Automated Feature Selection and Churn Prediction using Deep Learning Models
The mobile telecommunications market tends to reach saturation and faces fierce competition. This situation forces telecom companies to focus on keeping existing customers rather than only building a larger customer base. In the telecom market, the process of subscribers (either prepaid or postpaid) switching away from a service provider is called customer churn. Several predictive models have been proposed in the literature for churn prediction. The efficiency of any churn prediction model depends heavily on the selection of customer attributes (feature selection) from the dataset used to construct it. Traditional methods have two major problems: 1) with hundreds of customer attributes, the manual feature engineering process is tedious, time-consuming, and often requires a domain expert; 2) it is usually tailored to a specific dataset, so the feature engineering process must be repeated for each new dataset. Since deep learning algorithms automatically learn good features and representations for the input data, we investigated their application to the customer churn prediction problem. We developed three deep neural network architectures and built the corresponding churn prediction models using a telecom dataset.
- Cell2Cell is one of the largest wireless companies in the USA, with more than 10 million customers and an average monthly churn rate of 4%.
- We conducted the experiments on a large churn dataset from the Teradata Center for Customer Relationship Management at Duke University.
- For churn prediction, Cell2Cell collected several kinds of data about its customers, including 1) customer care service details, 2) customer demographics and personal details, 3) customer credit score, 4) bill and payment details, 5) customer usage patterns, and 6) customer value-added services, totaling 71 variables from 71,047 customers.
- Used standard Python libraries such as scikit-learn, pandas, and NumPy for various data mining tasks.
- Implemented the entire workflow in an IPython notebook, which runs in a browser for easy interaction.
- Used the Keras library to develop the proposed deep neural network architectures.
- Compared four models (baseline, small FNN, large FNN, CNN) on accuracy (see the sketch after this list).
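A minimal sketch of what the "small FNN" variant could look like with the Keras Sequential API; only the 71-variable input width comes from the dataset description above, while the layer sizes, dropout, and training parameters are assumptions.

```python
# Hypothetical sketch of a small feed-forward churn classifier in Keras.
# The 71-feature input matches the dataset description; layer sizes are assumed.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

def build_small_fnn(input_dim=71):
    model = Sequential([
        Dense(64, activation="relu", input_shape=(input_dim,)),
        Dropout(0.3),
        Dense(32, activation="relu"),
        Dense(1, activation="sigmoid"),  # churn probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage (X_train, y_train assumed to be preprocessed arrays):
# model = build_small_fnn()
# model.fit(X_train, y_train, epochs=20, batch_size=256, validation_split=0.1)
```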
Python
scikit-learn
Pandas
Matplotlib
NumPy
Keras
CNN
FNN
Jupyter Notebook
Detect Phishing URLs Using a Data Mining Approach
Phishing is an online criminal act in which a malicious webpage impersonates a legitimate webpage in order to acquire sensitive information from the user. Phishing attacks continue to pose a serious risk to web users and remain a persistent threat in the field of electronic commerce. This project focuses on discriminating between legitimate and phishing URLs. We use a dataset containing phishing and legitimate URLs and extract features from those URLs. These features are then fed to a Decision Tree classifier and the association rule mining algorithm Apriori. The results show that the proposed technique improves the accuracy of the phishing URL classification system.
- Mined data for phishing URLs from PhishTank and Phishstat, and for legitimate URLs from Alexa.
- Extracted just the URLs from the raw data.
- Extracted eleven features from the URLs.
- Applied the Decision Tree classifier and Apriori to the dataset with all features and recorded the results.
- Built the confusion matrix, then used accuracy, recall, precision, and F1-score to evaluate performance.
- Showed a comparison of both models across the evaluation metrics in a table.
- Applied a feature selection technique, the ANOVA F-test, to select the best five features (see the sketch after this list).
- Also used K-fold cross-validation to assess the performance of the predictive model; it also helps generalize the model to an independent dataset.
- Compared both models again across the evaluation metrics before and after feature selection.
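A minimal sketch of the ANOVA F-test feature selection and K-fold evaluation of the decision tree described above, assuming a feature matrix X and label vector y already extracted from the URLs; the Apriori step is not shown.

```python
# Sketch of ANOVA F-test feature selection plus K-fold evaluation of a
# decision tree, assuming X (URL features) and y (phishing labels) exist.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def evaluate(X, y, folds=10):
    # Keep the five features with the highest ANOVA F-scores, then fit the
    # decision tree inside each cross-validation fold.
    model = make_pipeline(
        SelectKBest(score_func=f_classif, k=5),
        DecisionTreeClassifier(random_state=0),
    )
    scores = cross_val_score(model, X, y, cv=folds, scoring="accuracy")
    return scores.mean()
```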
Python
scikit-learn
Pandas
Matplotlib
NumPy
K-Fold Cross-Validation
Decision Tree
Apriori
Predict Student Performance
Student retention is an important issue in education. While intervention programs can improve retention rates, such programs need prior knowledge of student performance. The aim of this project was to evaluate whether machine learning is a viable approach for predicting student performance and then to choose the best method for prediction. The basic methodology was to build multiple prediction models using different machine learning methods, such as decision trees, naïve Bayes, and linear regression, and then compare the prediction results of the different models in terms of their effectiveness.
- The dataset used in this project was originally used in research done at the University of Minho, Portugal (Cortez and Silva, 2008). It contains information about 395 students and has 31 different variables.
- Created features (variables) in the dataset to improve machine learning results; this can be done by building a model to test each variable's correlation with the dependent variable.
- In this project, feature selection was done by observing the output of the linear regression model to find how much correlation each variable has with the dependent variable.
- The second use of feature engineering in the project was the modification of variables: combining multiple variables into a new one, calculating a variable differently so that it can be used more effectively in classification, or categorizing a variable so that it has a limited range of possible values.
- Used Python library implementations (scikit-learn) of the methods selected for this project: linear regression, decision tree, naïve Bayes, SVM, logistic regression, and KNN (see the sketch after this list).
- Built the confusion matrix, then used accuracy, recall, precision, and F1-score to evaluate performance.
- Showed a comparison of all models across the evaluation metrics in a table.
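A minimal sketch of the classifier comparison described above, assuming a preprocessed feature matrix X and binary pass/fail labels y; linear regression (used above for feature inspection) is omitted, and hyperparameters are library defaults rather than the project's actual settings.

```python
# Sketch comparing the classifiers listed above on accuracy, precision,
# recall, and F1-score; assumes a preprocessed X and binary labels y.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

MODELS = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

def compare(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    rows = []
    for name, model in MODELS.items():
        pred = model.fit(X_train, y_train).predict(X_test)
        rows.append((name,
                     accuracy_score(y_test, pred),
                     precision_score(y_test, pred),
                     recall_score(y_test, pred),
                     f1_score(y_test, pred)))
    return rows
```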
Python
scikit-learn
Pandas
Matplotlib
NumPy
Data Preprocessing
Linear Regression
Decision Tree
Naive Bayes
KNN
SVM
Logistic Regression
Honeypot Analysis and Visualization
To productively visualize and analyze profiles of cyberattacks, we developed HAV (Honeypot Analysis and Visualization), a tool with an improved design that provides advanced analytics such as attacks detected per day, attacks detected per sensor, number of attacks from a given IP, blacklisted IPs, techniques used by attackers, infected files, top attacks, etc. HAV features a GUI and also provides an API for research and for studying attacker techniques in order to build more secure systems in the future.
- Wrote a Python script that extracts information about attacks from the server and loads it into MongoDB (see the sketch after this list).
- Developed a web application on the MEAN stack to display sensor details on the web.
- Admins/members can view attackers' locations shown in real time on the map.
- Admins/members can download graphs (images/PDF) of all analyzed information, such as total attacks detected so far, attacks detected by a specific honeypot, attacks detected per IP, per port, per protocol, and per day per honeypot.
- Built an API that is secured and cannot be accessed without a proper token.
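A minimal sketch of how parsed attack records could be loaded into MongoDB and summarized per day with an aggregation pipeline; the database, collection, and field names are hypothetical and not taken from the actual HAV code.

```python
# Hypothetical sketch: load parsed attack records into MongoDB and compute
# attacks per day with an aggregation pipeline. Field names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
attacks = client["hav"]["attacks"]

def load_attacks(records):
    """records: list of dicts like {"src_ip": ..., "sensor": ..., "timestamp": datetime}."""
    if records:
        attacks.insert_many(records)

def attacks_per_day():
    """Group attack documents by calendar day and count them."""
    pipeline = [
        {"$group": {
            "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$timestamp"}},
            "count": {"$sum": 1},
        }},
        {"$sort": {"_id": 1}},
    ]
    return list(attacks.aggregate(pipeline))
```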
Python
MongoDB
AngularJS
Node.js
HTML
CSS
Git
Slack
Certifications
IBM Data Science Professional Certificate
- What is Data Science
Issued Oct 2019 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/TN84ZCG9MLLG
- Open Source tools for Data Science
Issued Oct 2019 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/GMJ5DZB25RH7
- Data Science Methodology
Issued Nov 2019 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/5M5WJM44JTM4
- Python for Data Science and AI
Issued Dec 2019 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/GWGRVVV7XP25
- Databases and SQL for Data Science
Issued Feb 2020 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/GS3TVME8QYJ8
- Data Analysis with Python
Issued Feb 2020 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/Z94CLSZMZMN3
- Data Visualization with Python
Issued Feb 2020 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/HTZ9FXLVC8ND
- Machine Learning with Python
Issued Feb 2020 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/certificate/FJPCJGME22NR
- Applied Data Science Capstone
Issued March 2020 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/certificate/UU9FDK5YMNG3
Introduction to Big Data
Issued Oct 2019 | No Expiration Date
Issuing Organization: Coursera
Credential ID: https://www.coursera.org/account/accomplishments/verify/UTCL4Q5LPEHB