Intelligence In-Context:
The Rise of Context-Aware Apps
A delivery truck driver named John has a new application. He arrives at his destination, parks the truck, picks up his mobile device, and steps out. As he unlocks his device's screen, the app instantly refreshes with the package information for the customer at that location. He doesn't even need to touch the app... he smiles...
His device picks up the beacon at the door and orders the back doors to open. A green light goes on at the exact section where the package is… he smiles again.
His device knows the time of day and who the customer is. Based on recent history, the system predicts there’s only a 9% chance the customer is there. So it takes action and prints out a notice to leave at the customer’s door in case no one answers. He can’t resist giggling a bit as he heads to the customer’s porch with the package in one hand and the notice in the other. The horror that is paper is now history, and his life is so much simpler.
This is a story of a context-aware app. Context-aware apps are the latest in a line of mobile apps that developers are increasingly being asked to create.
"When we aim to create applications, devices, and systems that are easy to use, it is essential to understand the context of use." — Albrecht Schmidt

What is Context-Awareness?
Generally, context-awareness is the ability of devices to sense their physical environment and adapt their behavior accordingly. A great example of context-awareness is how smartphones react to ambient light, adjusting their screen brightness for optimal readability. As soon as the user wakes the device, it adjusts that setting to deliver the best possible experience for the environment.
The idea of context-awareness has been around since the 1990s, but it only took off recently. The widespread use of smartphones and their embedded location services, made possible by the packaging of a cheap GPS receiver, is the reason it is back in the public eye. Of course, GPS is just one of the sensors being crammed into these devices, opening up new possibilities for mobile app developers.
Context-awareness dramatically improves the user experience. UX is generally about figuring out what users need and designing the interface that best meets those needs. In practical terms, this usually means figuring out where a feature goes without compromising all the others.
What makes this approach interesting is that, with this extra piece of context, context-aware apps can learn exactly what the user needs in a specific situation and give it to them without compromise.

Context-Aware Apps Sense, Understand and Adapt
If you’re thinking about building context-aware apps, here are three major areas to take into account, plus an optional fourth.
The Capability to Receive a Wide Set of Inputs in Different Ways: Sense
Sense is about receiving a set of inputs from the environment or the device itself. Location is the best known, but there are many other useful sources already available in today’s devices.
The major types of context inputs that have proven useful to apps are:
- Location: Identify the device’s current location (with user permission) or be notified when entering or leaving a location.
- Time: Identify the current date and time, or be notified when the device enters a different time zone.
- Activity: React to an outside occurrence relevant to the user, based on sensor data.
- Environment: Get information from the device’s sensors, such as movement, speed, or ambient light.
The device can receive the inputs either explicitly or implicitly:
- Explicitly, while the application is in use: the app pulls the current location, time, or speed through APIs.
- Implicitly, as a relevant contextual trigger: the app wakes up at a specific time, on a location trigger, or on another signal (for example, a beacon), as the sketch below illustrates.
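To make the explicit path concrete, here is a minimal sketch in TypeScript for a web or hybrid app, assuming the standard browser Geolocation API and the Intl API for the time zone; the implicit, trigger-based path is platform-specific and only noted in comments.

```typescript
// Minimal sketch of explicit context sensing in a browser/PWA setting.
// Assumes the standard Geolocation and Intl APIs; error handling kept simple.

interface SensedContext {
  latitude: number;
  longitude: number;
  speedMs: number | null;   // metres per second, when the device reports it
  timestamp: Date;
  timeZone: string;
}

function senseContext(): Promise<SensedContext> {
  return new Promise((resolve, reject) => {
    // Explicit gathering: the app pulls the current position while in use.
    navigator.geolocation.getCurrentPosition(
      (position) => {
        resolve({
          latitude: position.coords.latitude,
          longitude: position.coords.longitude,
          speedMs: position.coords.speed,          // null if unavailable
          timestamp: new Date(position.timestamp),
          timeZone: Intl.DateTimeFormat().resolvedOptions().timeZone,
        });
      },
      (error) => reject(error),
      { enableHighAccuracy: true, timeout: 10_000 }
    );
  });
}

// Implicit gathering (waking the app on a geofence or beacon event) is
// platform-specific: native mobile OSes provide geofencing and beacon-ranging
// services that deliver these triggers even when the app is not running.
```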

The Ability to Process These Inputs Into Relevant Pieces of Information: Understand
Understand is about making sense of the inputs received, usually with some kind of algorithm.
Inputs can be simple, such as a precise location fed directly to the application. But they can easily grow in complexity: a sequence of stored locations, or several inputs cross-referenced to derive specific conditions.
You can use these inputs anonymously, but they are more powerful when analyzed in the context of a specific user or in a specific application state.
For example: when customers walk into their bank branch, the bank app on their device can receive a notification that they have entered that location. The app can then immediately tell them whether their account manager is in, or the status of any ongoing orders.
By cross-referencing environmental inputs with user identity and application state, these context-aware banking apps receive exactly the right information at exactly the right time and convey it to the user.
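As an illustration of that cross-referencing, here is a minimal sketch of the banking example; lookupBranch, isAccountManagerIn, and getOpenOrders are hypothetical stand-ins for the bank’s own services, not a real API.

```typescript
// Sketch only: cross-referencing a location event with user identity and
// application state. The declared functions are hypothetical stand-ins.

interface LocationEvent {
  userId: string;
  latitude: number;
  longitude: number;
}

interface Branch {
  id: string;
  name: string;
}

async function onEnteredLocation(event: LocationEvent): Promise<string | null> {
  // Understand, step 1: turn raw coordinates into a meaningful place.
  const branch: Branch | null = await lookupBranch(event.latitude, event.longitude);
  if (!branch) return null; // not a bank branch: nothing relevant to say

  // Understand, step 2: cross-reference with the specific user and app state.
  const managerIn = await isAccountManagerIn(event.userId, branch.id);
  const openOrders = await getOpenOrders(event.userId);

  if (openOrders.length > 0) {
    return `Welcome to ${branch.name}. You have ${openOrders.length} order(s) in progress.`;
  }
  return managerIn
    ? `Welcome to ${branch.name}. Your account manager is in today.`
    : `Welcome to ${branch.name}.`;
}

// Hypothetical service stubs, declared so the sketch type-checks.
declare function lookupBranch(lat: number, lon: number): Promise<Branch | null>;
declare function isAccountManagerIn(userId: string, branchId: string): Promise<boolean>;
declare function getOpenOrders(userId: string): Promise<string[]>;
```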

The Customization of the User Experience Based on This Rich Information: Adapt
Devices have information about the circumstances in which they operate and, based on rules or intelligent stimuli, can react accordingly. With digested context, an application can derive the user’s intent and adapt its interface to match the specific needs of the task.
The best user experiences are not about making users fit the interface. They are about adapting the interface to what the user needs.
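In practice, the adapt step can start as simple rules that map digested context onto a UI configuration. The sketch below uses the delivery scenario from the opening story; the context fields and screen names are illustrative assumptions, not a prescribed model.

```typescript
// Sketch of rule-based adaptation: digested context in, UI configuration out.
// Context fields and screen names are illustrative assumptions.

interface DigestedContext {
  atDeliveryStop: boolean;       // derived from location plus route data
  customerLikelyHome: number;    // probability, 0..1, derived from history
  drivingSpeedMs: number | null;
}

interface UiConfig {
  screen: "delivery-details" | "route-overview";
  showLeaveNoticeAction: boolean;
  largeTouchTargets: boolean;    // simpler UI while the vehicle is moving
}

function adaptUi(ctx: DigestedContext): UiConfig {
  if (ctx.atDeliveryStop) {
    return {
      screen: "delivery-details",
      // Offer the "leave a notice" action when the customer is unlikely to be home.
      showLeaveNoticeAction: ctx.customerLikelyHome < 0.1,
      largeTouchTargets: false,
    };
  }
  return {
    screen: "route-overview",
    showLeaveNoticeAction: false,
    largeTouchTargets: (ctx.drivingSpeedMs ?? 0) > 2,
  };
}
```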

Storing the Inputs and Deriving Patterns (Machine Learning, Anyone?) to Anticipate Future Scenarios: Predict
Apps gather and store data they don’t use immediately for numerous reasons, such as adapting future user experiences. But the inputs can also be stored for a more ambitious goal: predicting user behavior.
The device can silently capture the inputs and store them or send them to the server. With a consistent stream of data, machine learning can identify patterns and predict user behavior, thus anticipating user needs.
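As a deliberately simple sketch of that idea, the example below stores delivery outcomes and estimates the chance that a customer answers the door at a given hour from that history; a production system might hand the same data to a machine-learning model instead. The record shape and threshold are assumptions, roughly in the spirit of the “9% chance” in the opening story.

```typescript
// Sketch of the predict step: store inputs over time, then derive a simple
// pattern from them. A frequency estimate stands in for a real ML model.

interface DeliveryRecord {
  customerId: string;
  hourOfDay: number;   // 0..23, captured when the delivery was attempted
  answered: boolean;   // did anyone answer the door?
}

const history: DeliveryRecord[] = [];

function recordDelivery(record: DeliveryRecord): void {
  history.push(record); // in practice, persisted locally or sent to a server
}

// Estimate the probability that this customer answers around this hour,
// using attempts within +/- 1 hour. Returns 0.5 when there is no history.
function probabilityCustomerAnswers(customerId: string, hourOfDay: number): number {
  const relevant = history.filter(
    (r) => r.customerId === customerId && Math.abs(r.hourOfDay - hourOfDay) <= 1
  );
  if (relevant.length === 0) return 0.5;
  const answered = relevant.filter((r) => r.answered).length;
  return answered / relevant.length;
}

// Usage: if the estimate is low, prepare the "sorry we missed you" notice ahead of time.
// const p = probabilityCustomerAnswers("customer-42", new Date().getHours());
// if (p < 0.1) { /* queue the printed notice */ }
```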

Going Forward, Contextually
This is just the beginning for context-aware apps. Available context continues to evolve and expand with new sensory information and increased device interconnectivity.
For example, wearable technology already provides a stream of health information and further context on movement. In the near future, voice interfaces or face recognition might reveal new dimensions of the user, such as emotion.
Not only that, but outcomes are becoming sophisticated enough to predict user needs and even automate specific actions. And the outcome doesn’t need to be a screen interface at all: it can be a voice, a display on a wall, a wearable, or, with the opportunities the Internet of Things provides, just about anything.
Caution-Awareness
The increasing number of inputs and outcomes, and the forward march of context awareness, mean more app complexity. You should therefore seriously consider new and different approaches to the apps you develop in this context-aware world.
The key is to look to automation and to platforms that abstract some of the complexity, so that developers can focus more on the experience and less on the bits and bytes. This way, they can make everyone’s lives better, one context at a time.