Blog · IoT · November 16, 2011 · 3 minute read

Use your senses – Future of BI?

It is way past midnight and I am reading an interesting article on Mashable about Umami (umami.tv) launching a mobile app that uses the ‘hearing’ sense of a mobile device such as an iPhone or iPad to deliver content on second screens. There are several second-screen apps in the market, but this one takes the top spot by integrating the ‘hearing’ sense to make relevant viewing decisions. The possibility of its integration with social check-in platforms like GetGlue (getglue.com) also looks quite tempting.

What does this have to do with BI? Well… not much right now. Mobile BI is still in its infancy, and most mobile BI apps are just a pretty presentation of the same old corporate data, with some pre-design work to fit the form factor and the screen real estate available. It is not clear that mobile BI apps really utilize any sensing capability of the device, such as ‘hearing’, ‘vision’, or ‘geo-location’, at all.

Let us look at how other non-BI mobile apps use the sensing features of a mobile device.

  1. Vision: Google Goggles can use a mobile device’s camera to scan any object and provide you with relevant information about it, such as its price and the stores that sell it.
  2. Hearing: The iPhone’s Siri can listen to your voice commands and respond to you. Similarly, built-in GPS navigation understands your voice commands and provides you with directions. Umami is another example of using the sense of hearing to deliver relevant media content.
  3. Geo-location: Yelp, Google Maps, and many other apps deliver content and information relevant to your geo-location.


What would a mobile BI app look like if it incorporated these senses? Pardon me for running wild with my imagination here….

There are several exploratory BI tools in the market that can surface relevant data and information through free-text searches. Why can’t the same be done using voice commands instead? A BI user would simply ask a question (as if talking to a Siri-like personal assistant app) and the output would be a combination of visual reports displayed on the mobile device and voice responses. Behind the scenes, the user’s voice is converted to a query that is executed against corporate databases or data in the cloud, and the results are returned. Going one step further, the user could rate the relevancy of the displayed information, which would help refine future results.
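To make the idea concrete, here is a minimal sketch of that voice-to-query flow. Everything here is hypothetical: `transcribe` stands in for a real speech-to-text service, and `to_query` is a deliberately naive question-to-SQL mapping; a real BI backend would do far more.

```python
# Hypothetical voice-driven BI query flow. All function names, the table,
# and the transcription are invented stand-ins for illustration only.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text service (e.g. a Siri-like assistant)."""
    return "total sales for product X last quarter"


def to_query(question: str) -> str:
    """Naively map a natural-language question to a SQL query."""
    if "total sales" in question:
        return "SELECT SUM(amount) FROM sales WHERE quarter = 'Q3'"
    return "SELECT 1"


def answer(audio: bytes) -> str:
    """Voice in, query out: the pipeline described in the paragraph above."""
    question = transcribe(audio)
    sql = to_query(question)
    # In a real app, this SQL would run against corporate or cloud
    # databases, and the results would be rendered as a visual report
    # plus a spoken response.
    return sql


print(answer(b"..."))
```

The interesting engineering problem, of course, is `to_query`: translating free-form speech into a query the corporate data model can actually answer.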

Similar to Google Goggles, we can imagine a BI user scanning a particular object or a piece of text in a document. The output would again be a report or analysis of that object. Let’s take an example to understand this better. A field salesperson, while travelling, scans one of his company’s products advertised in a magazine. The BI app would then provide all relevant information about the product, such as the product description, sales figures, and customer reviews, on the go. All of this is possible through a combination of data residing in corporate databases and social media.

Add geo-location capabilities to the above example and the salesperson would receive product information relevant to the area or region he is currently travelling through. Law enforcement agencies probably already use some sensing technologies to look up license plates or a suspect’s driver and criminal history. But with geo-location, an officer could get information about the area he is driving through (historical criminal incidents in the area, vehicular traffic information, etc.) on his mobile device and take appropriate action.
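The scan-plus-location scenario above can be sketched the same way. Again, every name here is invented: `recognize` stands in for a Goggles-style image recognition call, and the in-memory sales table stands in for a corporate database filtered by the device’s region.

```python
# Hypothetical sketch: combine an object-recognition result with the
# device's location to filter product data by region. All names and
# data are invented for illustration.

from dataclasses import dataclass


@dataclass
class SalesRecord:
    product: str
    region: str
    units: int


# Stand-in for sales data living in a corporate database.
SALES = [
    SalesRecord("WidgetPro", "Northeast", 120),
    SalesRecord("WidgetPro", "Southwest", 45),
    SalesRecord("Gadget", "Northeast", 80),
]


def recognize(image: bytes) -> str:
    """Stand-in for a Goggles-style image-recognition service."""
    return "WidgetPro"


def regional_report(image: bytes, region: str) -> list:
    """Scan an ad, then return sales records for the current region only."""
    product = recognize(image)
    return [r for r in SALES if r.product == product and r.region == region]


print(regional_report(b"...", "Northeast"))
```

The geo-location component reduces to one extra filter on the query, which is exactly why it is such a cheap win once the device exposes its position.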

I believe the possibilities are endless… and it is just a matter of time before we let our mobile devices use their senses to make us better informed.
