16 innovative teams, equally crazy ideas, 24 hours of battle, indiscriminate doses of caffeine, a live band and cool tech talks came together in one hell of a Hackathon! Join Aijaz Ansari as he revisits Saama’s first-ever Hackathon, which produced some novel ideas and discoveries. Relive the hacker within you!
This year, on March 2nd and 3rd, Saama’s offices were abuzz with feverish action as the participants readied themselves for a tech battle to claim the title of ‘Coolest Saama Techie’. What each team planned to build was special – a DIY project that the team chose for itself. After a preliminary evaluation round based on the novelty of each idea and its real-world application, we set sail on our very first Hackathon journey. “All aboard!”
Hackathons (codefests or hack days) have been around for a while, and Saamaites have been participating in a few on their own. The event’s growing popularity inspired us to host one internally too, and the response was astounding: 16 teams (of three members at most) signed up to participate. They packed for 24 hours with all the tech they could lay their hands on. We just ensured there was enough healthy food, people to cheer and an unlimited supply of lemonade, coffee and colas!
Here’s a roundup of awesome projects and cool techies – but first, some numbers:
48 participants
16 projects
4 panellists
400 in audience and…
Amazon’s Echo Dot, Raspberry Pi and Chromecast up for grabs!
Talk to your search engine
Team: The Parrot
Team Members: Pawan Singh, Sushant Soni and Nikita Dane
Project Brief: A pluggable voice-controlled search engine that takes a voice command, automatically finds the source system (Elastic Search / Wiki) to fetch data from, returns the output in voice format and also performs sentiment analysis.
Technology used:
- Python
- Web Speech toolkit for speech recognition
- Word dictionary to perform sentiment analysis
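As a rough illustration of the word-dictionary approach above, here is a minimal Python sketch. The word lists and scoring rule are invented for this example, not the team’s actual dictionary:

```python
# Hypothetical sentiment dictionaries; a real system would use much larger lists.
POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "terrible", "poor", "sad", "hate"}

def sentiment(text):
    """Label text by counting positive vs. negative dictionary hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the results were great and I love the voice output"))  # positive
```

In the team’s pipeline, the input text would come from the speech-recognition step rather than a typed string.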
Machine-read your prescription – no matter how badly your doctor writes!
Team: Codiator
Team Members: Rahul Askar, Prakash Tok and Deepak Kumar
Project Brief: Uses the Jaro-Winkler distance algorithm to extract text from an image and then map readable, partial or malformed text to medicines available in the market. This data is then used to analyze the medicines recommended by the physician to treat a patient.
Technology used:
- Python: Image-to-text conversion, pre-processing to remove stop words, the algorithm to match similar words, and parallel processing.
- MySQL: Storing the Medicine, Physician and Region details.
- Java: Visualizing the analysis.
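For readers unfamiliar with it, Jaro-Winkler similarity scores two strings between 0 and 1, with a bonus for a shared prefix – handy for matching OCR output against a known vocabulary. A self-contained Python sketch (the medicine list is hypothetical, not the team’s database):

```python
def jaro(s1, s2):
    """Jaro similarity: matches within a window, penalized by transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(len1, len2) // 2 - 1
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among the matched characters.
    t, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost the Jaro score by the length of the common prefix (max 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# Map possibly malformed OCR output to the closest known medicine (made-up list).
MEDICINES = ["Paracetamol", "Ibuprofen", "Amoxicillin"]

def closest_medicine(ocr_text):
    return max(MEDICINES, key=lambda m: jaro_winkler(ocr_text.lower(), m.lower()))

print(closest_medicine("Paracetmol"))  # Paracetamol
```

The classic worked example is `jaro_winkler("MARTHA", "MARHTA")`, which comes out to about 0.961.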
This eye is watching. Always…
Team: Private Eye
Team Members: Narendra Shukla, Ajay Jadhav and Pratiksha Deoghare
Project Brief: Image processing using R and related tools, along with Python. Features successfully showcased included image display, object recognition, face and emotion detection, image filters with nine different edit features, Twitter integration, Instagram integration, spatial images and space-time objects.
Technology used:
- R-Shiny UI: Used to fetch and render images
- Python and R: Displayed list of objects identified in the image along with their confidence level
- Roxford: Used to identify facial features like age, gender, beard and mustache. Emotions were tagged as Happy, Angry, Disgusted, Sad, etc.
- magick and EBImage: Provided nine different image-editing options: Border, Crop, Rotate, Blur, Flip-flop, Charcoal, Oilpaint, Edge and Negate.
- twitteR: Established the Twitter connection using the Consumer Key, Consumer Secret, Access Token and Access Secret obtained from Twitter after registration. We extracted tweets and fetched images from them.
- instaR: Established the Instagram connection using a Client ID and Client Secret to access its Sandbox environment. We were able to extract recent posts and profile pictures.
- rgdal, sp and tmap: We explored shapefiles, spatial polygons and coordinate reference systems – how to join attributes, and how to clip and aggregate spatial images. Data used included London borough population, sports participation, crime rate and metro stations.
- gstat, spacetime, plm, maps and maptools: We explored spatial, temporal and data values, and ran a panel linear model on US states production data.
Coded and Cubed, or not?
Team: CUBEfied
Team Members: Anshuman Nalavade and Neha Bapat
Project Brief: The purpose of the project was to demonstrate the usefulness of having OLAP cube (multi-dimensional database) capabilities on top of transactional data. This avoids drawbacks such as a hardcoded query string sitting behind a dashboard widget, and the need to create multiple dashboards to render different data sets.
Technology used:
- Spark, Core Java: For building cubes
- AngularJS: For showing client side widgets
- OLAP Cube: For supporting Big Data
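To make the cube idea concrete, here is a toy Python sketch that pre-aggregates a measure over every combination of dimensions – the essence of an OLAP cube, though far simpler than the team’s Spark build (the rows and dimension names are invented):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical transactional rows: (region, product, year, amount).
ROWS = [
    ("East", "Widget", 2017, 100),
    ("East", "Gadget", 2017, 150),
    ("West", "Widget", 2016, 200),
]
DIMS = ("region", "product", "year")

def build_cube(rows):
    """Pre-aggregate the amount over every subset of dimensions (a full cube)."""
    cube = defaultdict(float)
    for region, product, year, amount in rows:
        values = dict(zip(DIMS, (region, product, year)))
        for r in range(len(DIMS) + 1):
            for dims in combinations(DIMS, r):
                key = tuple((d, values[d]) for d in dims)
                cube[key] += amount
    return cube

cube = build_cube(ROWS)
print(cube[()])                                     # grand total: 450.0
print(cube[(("region", "East"),)])                  # 250.0
print(cube[(("region", "East"), ("year", 2017))])   # 250.0
```

Once the cube is built, any widget can answer a slice-and-dice query by a dictionary lookup instead of re-running a hardcoded query against the raw data.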
Check the temperature of your insurance claim
Team: Trident
Team Members: Sukumaar Mane, Anilkumar Miriyala and Vijay Pawan Nissankararao
Project Brief: The project used sensors to detect insurance fraud in logistics. A case in point: a meat supplier claims the meat turned during transportation, and hence a loss to the business. The project demonstrates how sensors transported with food items can show why an item expired and whether the claim is authentic and valid.
Technology used:
- Apache Spark: The processing engine
- MongoDB: NoSQL storage for sensor data
- Redis: In-memory storage for compliance rules
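A bare-bones sketch of what such a compliance check might look like. The rule, threshold and readings are made up for illustration; in the real system the rules lived in Redis and the sensor data in MongoDB:

```python
# Hypothetical compliance rule: frozen meat must stay at or below -15 °C.
RULE = {"max_temp_c": -15.0}

def evaluate_claim(readings, rule=RULE):
    """Return the first (timestamp, temperature) reading that breaches the rule,
    or None if the whole trip was compliant. A breach means the spoilage
    happened in transit, so the insurance claim is valid."""
    for ts, temp in readings:
        if temp > rule["max_temp_c"]:
            return (ts, temp)
    return None

trip = [("08:00", -18.0), ("10:30", -16.2), ("13:45", -9.5)]
print(evaluate_claim(trip))  # ('13:45', -9.5) -> claim valid
```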
Diagnose skin cancer using voice
Team: Hacking Squad (Winner)
Team Members: Amol Kokate, Lokanath Panda and Pallavi Koti
Project Brief: A skin-cancer diagnosis tool to help dermatologists classify melanoma using image processing. Voice analytics reduces manual interaction with the application through voice commands like “UPLOAD” or “DIAGNOSIS”. The tool can also be used by ophthalmologists to detect eye and retina defects.
Technology used:
- R Studio
- Shiny App: To fetch and render images
- R Programming
- JavaScript
- Voice and Image Processing APIs: To process images once rendered
Machines and Algorithms now read images too!
Team: Aimers
Team Members: Anudeep Purwar and Swapnil Jagdale
Project Brief: The project cleans up document images riddled with wrinkles and stains. It uses machine-learning algorithms to improve brightness and remove creases and stains. The first step is to restructure the image data into a flat-file format suitable for input into standard machine-learning algorithms. We used least-squares regression, image thresholding, gradient boosting machines, adaptive thresholding, Canny edge detection, morphology, a median-filter function and background removal to improve image quality.
Technology used:
- R Studio: For development environment
- Shiny by R: For UI development
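For illustration, the adaptive-thresholding step can be sketched in a few lines of pure Python (the team worked in R; the tiny image grid here is made up):

```python
def adaptive_threshold(img, block=3, c=0):
    """Binarize a grayscale image: a pixel becomes foreground (1) when it is
    darker than the mean of its block x block neighbourhood minus c.
    Because the threshold is local, ink survives even on a stained background."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - r), min(h, y + r + 1))
                     for i in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(neigh) / len(neigh)
            out[y][x] = 1 if img[y][x] < mean - c else 0
    return out

# A bright background with one dark ink pixel in the middle.
img = [[200, 200, 200],
       [200,  40, 200],
       [200, 200, 200]]
print(adaptive_threshold(img))  # only the centre pixel survives as foreground
```

A global threshold would need one cut-off for the whole page; the local mean lets the cut-off follow the varying brightness of a creased or stained document.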
Control all you can
Team: JARVIS
Team Members: Subhradip Bose, Mukesh Mogal and Mayur Gupta
Project Brief: A voice-controlled home automation system that a user can integrate at home, at the office or anywhere else, and use to control appliances through a mobile application with voice commands.
Technology used:
- Raspberry PI 2 (Raspbian Jessie): Control systems integration platform
- Python: Voice Processing
- HTML/CSS/JS: For the UI of the application itself
- Android: For the Mobile App used
Make your car talk to you
Team: Golden State
Team Members: Atul Pundhir, Rishi Ranjan and Jaydeep Deshmukh
Project Brief: Uses hundreds of parameters – battery voltage, toxic gas emission, trip history, car speed, RPM, throttle usage, engine load, fuel rate, barometric pressure, etc. – to test the state of a vehicle at any time and run initial diagnostic tests. This also helps car rental companies know how their vehicles are treated when out of view; rash driving, ignored repair warnings and the like can be flagged to analyse claims or detect fraudulent ones. A classic case of IoT implementation, then.
Technology used:
- Android: To build an app for Android Mobile which will get data from OBD and push it to AWS Cloud
- JavaScript, JQuery, HTML5 and CSS3 : Used for the UI
- Websocket : For connection and data transfer over the web
- AWS Cloud: To store data coming from Android Mobile App which was the source of data for dashboard
- OBD2 : Device that transmitted car data
Echo meets Eva
Team: Chatterbots
Team Members: Nashit Babber, Shantanu Vichukar and Sukarnn Taneja
Project Brief: Amazon’s Echo has a buddy – say hello to Eva! Eva is an automated bot that works on voice commands and responds in a natural voice. She can tell you the date, time and weather, crawl the web, give stock market updates, answer questions, play songs, look up your friends on Facebook (and tell their age, sex and relationship status) and suggest new friends based on your existing connections.
Technology used:
- Python: Custom-built modules and functions, such as the brain module developed exclusively for Eva to decode voice commands into text, plus urllib2
- Custom functions: To analyze the text converted from voice and produce the desired output
- BeautifulSoup, lxml and a speech synthesizer
Shop till you drop – a shopping experience like no other
Team: Ingenious Vikings
Team Members: Ankit Deo and Jaydeep K Sheth
Project Brief: This project improves the overall shopping experience by making the buyer aware of available offers and the best deals. Authentication is very simple: the user’s picture acts as the gateway into the app. The picture is authenticated using image processing, and the user then sees all the deals and offers matching his or her preferences, redeemable by simply visiting the corresponding vendor.
Technology used:
- Google Materialize Design: For the mobile application
- AWS S3: For the storage of user data and images
- AWS Lambda Functions: To execute the logic in application
- AWS Cognito: For user authentication (One time access & Web Identity Provider)
- AWS SDK: For JavaScript in Browser
No parking tickets anymore
Team: Tech Pirates
Team Members: Abhijit Sonkusare, Deepa Chandak and Sneha Patil
Project Brief: This venture solves traffic issues like parking, emissions and time to park with the help of low-cost sensors, real-time data collection, and mobile-phone-enabled automated payment systems that let people reserve parking in advance or very accurately predict where they will likely find a spot.
Technology used:
- Python: For communication between the ultrasonic sensor and the Raspberry Pi
- Web API: For showing data on a map
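The core of ultrasonic ranging is a one-line formula: half the echo’s round-trip time multiplied by the speed of sound. A small Python sketch of how a spot might be classified as occupied – the floor distance and margin are hypothetical, not the team’s calibration:

```python
SPEED_OF_SOUND_CM_S = 34300  # speed of sound in air, roughly, in cm/s

def distance_cm(echo_seconds):
    """HC-SR04-style ranging: the pulse travels out and back,
    so halve the round-trip time before converting to distance."""
    return echo_seconds * SPEED_OF_SOUND_CM_S / 2

def spot_occupied(echo_seconds, empty_floor_cm=250.0, margin_cm=30.0):
    """A spot reads 'occupied' when the measured distance is well short of
    the distance to the empty floor (thresholds are made up for this sketch)."""
    return distance_cm(echo_seconds) < empty_floor_cm - margin_cm

print(distance_cm(0.01))    # 171.5 cm
print(spot_occupied(0.01))  # True: something is parked under the sensor
```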
Happier trips with TripAdvisor
Team: GSN
Team Members: Gaurav Rupnar, Suvidya Khandagale and Nilesh Narkhede
Project Brief: This project classifies customer reviews and then runs sentiment analysis on data collected. It gives a holistic view of hotels on TripAdvisor for travellers and boarders to choose their place of stay. Visualization makes it easy to consume this data.
Technology used:
- Python: For Classification and sentiment analysis of reviews
- HTML5, JQuery and Bootstrap: UI for the application itself
A new home experience
Team: Pathbreakers
Team Members: Chintan Shah, Ojasvi Harsola and Chetan Kardekar
Project Brief: Using Alexa and Echo, the team built a whole new home experience by integrating e-commerce, home automation / PDA and social media. Through the voice service, a user can carry out transactions such as recharging a mobile, paying gas and electricity bills, etc.; we integrated a wallet service for these. Moreover, if the user misses any cell phone calls, Alexa notifies him as soon as he is near the device. One can also tweet using voice-to-text on Alexa.
Technology used:
- Alexa: For IoT
- Echo: For interface
- REST API: For integration with multiple services
- Java / j2ee / Android SDK: UI and App
Let’s play a game!
Team: Sphinx
Team Members: Shiv Pratap Singh, Sandeep Kumar and Hema Sundar
Project Brief: The game involves two players and a system that connects them. Player 1 sends an image to Player 2. The system reads the image for tags – sun, boat, water, etc. When Player 2 receives the image, s/he tries to find an image that s/he thinks the algorithm would tag with very low probability, and replies with it. If the algorithm identifies the tag with high probability, Player 2 scores!
Technology used:
- Front-end: HTML5, CSS3, JavaScript, AngularJS, JQuery
- Server-end: Apache Tomcat, Maven, Java 7, Spring, Hibernate, Clarifai image processing Java API and MySQL.
Make your computer see
Team: DIP
Team Members: Sugesh Sugathan, Brijendra Singh Raghuwanshi and Priyanka Jain
Project Brief: The project taught our machine to identify different objects – cat, dog, plane, car, bird, deer, frog, horse, ship, truck, etc. – in a given image, depending on which is closest to the main object. We trained the machine on the CIFAR-10 image set to classify 10 classes of low-resolution images. The classification was about 98% accurate most of the time.
Technology used:
- Python: Machine learning platform and language
- Docker: Software container platform
- TensorFlow: Open source machine learning library
Wesley Dias, who was part of the Hackathon jury, quipped:
“It is extremely heartening and encouraging to see some of the brightest minds at Saama apply scientific thinking to data and build something so cool that also has real-world application. The ideas that we have received only showcase the immense potential that these young minds have to identify and fix critical problems by using data. We at Saama strongly believe that such innovation and problem-solving abilities should be nurtured. Saama Hackathon aims at inspiring young minds to develop such creative technology solutions that will go a long way in making this planet a better one.”
Ecstatic? Why not! We’re already conspiring toward our next Hackathon, so stay tuned for more.