Now that both the iPad and the Wolfram|Alpha iPad app are available, it’s time to really evaluate the capabilities of these platforms.
[disclaimer: last year I was part of the launch team for Wolfram|Alpha - on the business/outreach end.]
Obviously I know a great deal about the Wolfram|Alpha platform… what it does today and what it could do in the near future and in the hands of great developers all over the world. I’m not shy in saying that computational knowledge available on mobile devices IS a very important development in computing. Understanding computable knowledge is the key to understanding why I believe mobile computable knowledge matters. Unfortunately it’s not the easiest of concepts to describe.
Consider what most mobile utilities do… they retrieve information and display it. The information is mostly pre-computed (meaning it has been transformed before your request), and it’s generally in a “static” form. You cannot operate on the data in a meaningful way. You can’t query most mobile utilities with questions that have never been asked before and expect a functional response. Even the really cool augmented reality apps are basically just static data. You can’t do anything with the data being presented back to you… it’s simply an information overlay on a 3D view of the world.
The only popular applications that currently employ what I consider computable knowledge are navigation apps, which very much compute in real time based on your requests (locations, directions, searches). Before nav apps you had to learn routes by driving them, walking them, etc., really spending time associating a map, road signs and your own sense of direction. GPS navigation helps us all explore the world and get around much more efficiently. However, navigation is only one of the thousands of tasks we perform that benefit from computable knowledge.
Wolfram|Alpha has a much larger scope! It can compute so many things against your current real-world conditions and the objects in the world that you might be interacting with. For instance, you might be a location scout for a movie: you want to know not only how far away the locations you’re considering are, you also want to compute ambient sunlight, typical weather patterns, wind conditions, the likelihood your equipment might be in danger, and so forth. You even need to consider optics for your various shots. You can get at all of that right now with Wolfram|Alpha. This is just one tiny, very specific use case. I can work through thousands of these.
The trouble people cite with Wolfram|Alpha (in its incarnations to date) is that it can be tough to wrangle the right query. The challenge is that people still think about it as a search engine. The plain and simple fact is that it isn’t a web search engine. You should not use it as a search engine. Wolfram|Alpha is best used to get things done. It isn’t the tool you use to get an overview of what’s out there – it’s the system you use to compute, to design, to combine concepts.
The iPad is going to dramatically demonstrate the value of Wolfram|Alpha’s capabilities (and vice versa!). The form factor has enough fidelity and mobility to show why having computable knowledge literally at your fingertips is so damn useful. The iPhone is simply too small, and you don’t perform enough intensive computing tasks on it to take full advantage. The other thing the iPad and similar platforms will demonstrate is that retrieving information isn’t going to be enough for people. They want to operate on the world. They want to manipulate. The iPad’s major design feature is that you physically manipulate things with your hands. The iPhone does that, but again, it’s too small for many operations. Touch screen PCs aren’t new, but they are usually not mobile. Thus, here we are on the cusp of direct manipulation of on-screen objects. This UI will matter a great deal to the user. They won’t want to just sort, filter, and search again. They will demand things respond in meaningful ways to their touches and gestures.
So how will Wolfram|Alpha take advantage of this? It’s already VISUAL! And the visuals aren’t static images. Damn near every visualization in Wolfram|Alpha is computed in real time specifically for your queries. The visuals can respond to your manipulations. In the web version of Wolfram|Alpha this didn’t make as much sense, because the keyboard and mouse aren’t at all the same as your own two hands on top of a map, graph, 3D protein, etc.
Early on there was a critical review of Wolfram|Alpha’s interface – how you actually interact with the system. It was dead-on in many respects:
WA is two things: a set of specialized, hand-built databases and data visualization apps, each of which would be cool, the set of which almost deserves the hype; and an intelligent UI, which translates an unstructured natural-language query into a call to one of these tools. The apps are useful and fine and good. The natural-language UI is a monstrous encumbrance…
In an iPad world, natural language will take a back seat to hands-on manipulation. Wolfram|Alpha will really shine when people manipulate the visuals, the data display and the various shortcuts. People’s interaction with browsers is almost all link or text based, so the language issues with Wolfram|Alpha and other systems are always major challenges. Now, what will be interesting is how many popular browser services will be able to successfully move over to a touch interface. I don’t think that many will make it. A new type of service will have to crop up, as iPad apps will not be simply add-ons to a web app, like they usually are for the iPhone. These services will have to be great at handling direct manipulation, get actual tasks accomplished, and be highly visual.
My iPad arrives tomorrow. Wolfram|Alpha is the first app getting loaded. And yes, I’m biased. You will be too.