A couple of days ago, “Gideon Klok”:http://serenity.no-ip.biz/ contacted me through IM and explained an idea he had.
The question he asked himself was this: if we could design a computer for teaching from scratch, leaving all current ideas about what UIs look like behind, what would it look like? After he put his ideas on paper, I extended the idea beyond education: it could be used for many purposes. A kind of very advanced scratch pad; the killer app for the Tablet PC?
First off, note that this is an extremely radical idea. It might not be implementable, but it’s a way of brainstorming. What would the opportunities be? How could things work? It might change your view on how things work now, how they’re good and how they can be improved or even replaced with better concepts. To keep things simple I’ll only focus on the UI, not hardware or further OS issues like logins, file systems or permissions.
Second, I let Gideon comment on and make additions to a draft version of this text; I’ll mark the additions he made.
*The basic idea*
Gideon’s idea is that this computer would replace the blackboard in class and allow pupils and teachers to collaborate. So it’s not a personal computer but a shared, common one. It would consist of one huge touch screen (think blackboard size). Many people would be able to use it simultaneously, using, for example, their styluses (styli?) or “fingers”:http://web.media.mit.edu/~nicholas/Wired/WIRED2-10.html. Personally I think smaller (Tablet PC sized) versions of this device could also be very useful tools, and much less expensive.
*The target audience*
Gideon proposed mainly educational use, but what he’s proposing could be used for much more. Take designing software: when drawing UML diagrams you’re always moving those little classes around, shuffling methods and what not. The same goes for many other fields. In mathematics you always have to fiddle some additional formula in between two others; what if we could easily move them? Desktop publishing needs no explanation. Mind you, I’m just thinking close to home; who knows what the needs of people with totally different professions are? If you look at existing applications for drawing all kinds of diagrams and graphical products, you see the same patterns arise. There’s always a need for rearranging elements, adding new ones, deleting them and obtaining new ones. Why not develop one product that could be used for all these purposes and make it easy to extend to fit particular uses better?
There might be many more people who would find something like this useful. Do note this is not a replacement for normal laptops or PCs. In fact, the “easy” implementation would be software that runs on a Tablet PC.
The stylus would be the main device for operating the computer. It would replace both mouse and keyboard. It is imaginable that there would be an internet connection available, or that you’d connect devices like cameras or scanners to it as well.
The screen would be blank by default. On top of it you can place elements. Elements are virtual and could be anything, for example:
* Lines, curves, etc.
* Circles, squares or other shapes
* Pictures and photographs
* Pieces of text, math, musical score, etc.
* Video streams (big leap, not going to talk any further about animated elements, complicates things too much)
* Or compositions of those
Elements are created either by drawing them or by obtaining them from other sources (websites, scanners, cameras). All elements should be vector based, or behave like vectors, meaning that they can be resized and reshaped. The user is aided in this by the system.
Drawn circles are recognized and turned into perfect circles, drawn lines become perfect lines, handwritten text becomes (legible) printed text.
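As an aside, the circle case of this “beautification” can be sketched very compactly: fit the ideal circle to the sampled stylus points by taking their centroid as the centre and their mean distance to it as the radius. This is just one possible approach, not necessarily what a real implementation would use:

```python
import math

def perfect_circle(points):
    """Turn a shakily drawn stroke into an ideal circle: centre at the
    centroid of the sampled stylus points, radius their mean distance
    to that centre."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    radius = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    return (cx, cy), radius

# A wobbly hand-drawn "circle": four samples roughly on a radius-10 circle
stroke = [(10.2, 0.1), (-0.3, 9.8), (-10.1, 0.2), (0.1, -10.3)]
centre, radius = perfect_circle(stroke)
```

Real recognisers would of course first have to decide that the stroke is a circle at all (and not a letter or a squiggle) before snapping it.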
Personally I think the only way to handle these elements is by having the concepts of grouping and ungrouping. For example, if you wrote down a word and ungrouped it, it would become a bunch of letter elements.
Gideon about this:
bq. I think that you could stack these interpretations so that it’s both a group and a bunch of letters and a set of graphs. How you treat them depends on the kind of action you perform on them. You could assume that the proximity of the current action to nearby objects has some relevance on the meaning of the action. Somehow the handlers need to be able to examine the neighbourhood.
Gideon also formulated some design principles for the machine (when I offered him the draft version of this text). He says the UI should be direct, fluid, consistent and graceful. I’ll just quote him:
_Direct:_ give some feedback on every user action which is logical and meaningful. Also note that proximity and contact are two different things. A finger near the screen can have a different meaning (grab, drag, drop, resize) than contact with the screen (ink, select, edit). Directness also means that changes do not need to be saved.
[Note: Finger actions are: Point, Touch, Select, Move/Resize/Reshape, Activate]
_Fluid:_ changes are done through animation: zoom, or something like this.
_Consistent:_ the system should be modeless: the same action gives the same result every time, independent of context.
_Graceful:_ errors and such may only be local in this system and they should not bring anything else down, or lock anyone out.
As I said, elements could be grouped and ungrouped. There should be a concept of selecting one or more elements (for example by circling around them). This is necessary to do grouping, but can also be used to move elements around (by dragging them) or delete them (by striking them). To make handling elements easier, “Exposé-like”:http://www.apple.com/macosx/features/expose/ features could be introduced. You’d say “I want to clear up this region” (explicitly or only implicitly) and all elements currently there would be rearranged, without overlapping, to clear up that space. As all elements are vectors they can be resized as desired. This is something you’d use a lot, I’m sure.
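The “clear up this region” operation is essentially a layout problem. A crude sketch (my own invention, not a proposal for the actual algorithm) would pack the elements’ bounding boxes into rows, so nothing overlaps:

```python
def clear_up(elements, region_width, gap=4):
    """Rearrange element bounding boxes into non-overlapping rows inside
    a region, Exposé-style. Each element is (id, width, height); returns
    a mapping of id -> (x, y) top-left position within the region."""
    x, y, row_height = 0, 0, 0
    placed = {}
    for eid, w, h in elements:
        if x + w > region_width and x > 0:  # element doesn't fit: new row
            x, y = 0, y + row_height + gap
            row_height = 0
        placed[eid] = (x, y)
        x += w + gap
        row_height = max(row_height, h)
    return placed

layout = clear_up([("a", 50, 20), ("b", 60, 30), ("c", 40, 10)],
                  region_width=100)
```

A real implementation would animate the elements to their new spots (the “fluid” principle above) and probably shrink them when the region is too small, but the core idea is the same.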
One thing to remember, however, is that because there are no windows as bounding boxes, you don’t really have traditional applications. They are more “handlers” than anything else: they watch input patterns and react when they see something they recognize. For example, when a URL is written down and underlined, the website could be loaded into an area. The web browser’s work ends there. From this area elements could be obtained, such as pictures, tables and pieces of text. Note that this is very different from the current experience, which is one of visiting a place much more than of downloading or copying anything.
Other handlers you could imagine are: underlining something like “3 + 20 =” would append the answer, and underlining a formula would draw its graph.
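The handler idea could boil down to something like the following sketch (all names hypothetical): each handler inspects the recognised text of an underlined element and reacts only if it sees a pattern it knows; the system just offers the input to each handler in turn.

```python
import re

def arithmetic_handler(text):
    """React to underlined sums like '3 + 20 =' by appending the answer."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*=\s*", text)
    if not m:
        return None
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    result = {"+": a + b, "-": a - b, "*": a * b}[op]
    return text.rstrip() + " " + str(result)

def url_handler(text):
    """React to an underlined URL by loading the site into a new area."""
    if re.fullmatch(r"https?://\S+", text.strip()):
        return "[load " + text.strip() + " into a new area]"
    return None

HANDLERS = [arithmetic_handler, url_handler]

def on_underline(text):
    """Offer the underlined text to every handler; first match wins."""
    for handler in HANDLERS:
        reaction = handler(text)
        if reaction is not None:
            return reaction
    return text  # nobody recognised it; leave it alone

answer = on_underline("3 + 20 =")
```

Note how neither handler owns a window: each just transforms or reacts to one element and then steps out of the way.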
*The board space*
Then there’s the space issue. You could imagine that the screen only shows a part of the board, while the actual board is of (virtually) unlimited size. You would be able to zoom out and in to move to a new part. You could also imagine bookmarks or magic words (a kind of local hyperlink to a particular spot on the board) to quickly jump to particular spots.
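Mechanically this is just a viewport onto an unbounded coordinate plane; a toy sketch (hypothetical names, assuming a fixed screen size) could look like this:

```python
class Board:
    """A (virtually) unlimited board: the screen is a movable, zoomable
    viewport onto it, and bookmarks are named spots you can jump to."""
    def __init__(self):
        self.centre = (0.0, 0.0)   # board coordinates under the screen centre
        self.zoom = 1.0            # screen pixels per board unit
        self.bookmarks = {}

    def bookmark(self, name):
        self.bookmarks[name] = (self.centre, self.zoom)

    def jump(self, name):
        self.centre, self.zoom = self.bookmarks[name]

    def to_screen(self, bx, by, screen_w=800, screen_h=600):
        """Project a board point into screen pixels for drawing."""
        cx, cy = self.centre
        return (screen_w / 2 + (bx - cx) * self.zoom,
                screen_h / 2 + (by - cy) * self.zoom)

board = Board()
board.centre, board.zoom = (1000.0, -500.0), 2.0
board.bookmark("homework")
board.centre, board.zoom = (0.0, 0.0), 1.0   # wander off elsewhere
board.jump("homework")                        # the magic word takes us back
pos = board.to_screen(1010.0, -500.0)
```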
What would make this device even more useful is if the space could be shared with other device owners, for example over Wi-Fi (Zeroconf). There wouldn’t be a need for an enormous physical screen; everyone could have their own device and work on one board together.
Although I don’t want to go into this too much, many of the concepts discussed are already available in current software. Handwriting recognition is in Tablet PCs, and recognizing shapes and turning them into their “perfect form” has been done in Adobe Illustrator. Macromedia Flash also has many features that could aid in this, and as a whole the device is a lot like a vector drawing programme.
As said before, it’s just a wild idea. The purpose of this post is to put it on paper and see if I could figure out how, or even if, this would work. The idea is only in very early stages.