Wednesday, October 15, 2014

How to make mistakes

There is room for mistakes.
So make them.
Make them as quickly as possible.
Learn to learn from them quickly.

Mistakes will accelerate learning when embraced.


Tuesday, October 14, 2014

Do what hope does

Use what hope knows.
To see what hope sees.
And do what hope does.

Become what hope hopes for.

Thursday, October 9, 2014

Checkle

Checkle: checkpoint circle

1) A rotating circle of tasks per role I have.
2) Drop. Glance. Cloud. Overview. Admin power.
3) A project of Ben's

Wednesday, October 8, 2014

What displays are assuming

"What would smart glasses look like without a display?"

A display assumes two things about me:

Assumption #1: I always want to see, visually, what the designer thought I would want to see at this point.

Assumption #2: Assumption #1 is important enough to spend battery energy on.

While these assumptions are not surprising, outlining them prepares us for a critical discussion of the role displays play in what I, the user, really want to accomplish with my computers in the fabric of my daily life.


I. Foreword:
The ocean of displays in which I live has prompted this perspective -- especially with the advent of Google Glass.

II. Introduction:
I have found myself persistently wanting to interact with my computing devices sans display:
from writing with a toothpick (invisibly, but digitally recorded) on the back of my phone, to tapping Morse code with my fingers on the go, to navigating familiar apps on Google Glass by touch and sound feedback alone.

I wish I could turn off the Google Glass display while I use it -- though useful at times, it is often an annoyance and an unwelcome distraction, as I am not in need of what it displays.
Especially when I get ample audio cues from what I am doing. Voice input, multi-gesture touch input (the three-inch touch surface that comprises the right side of Google Glass), accelerometer input, and eye-tracking input, coupled with audio feedback (which can also read aloud to me what is displayed), give me plenty of interactivity with Google Glass apps sans the display.

The display can be understood, for many apps, as a training wheel (with exceptions).
Essential at first, but then, often, superfluous.

If a device is truly wearable and can thread its way into a larger productive pattern of human behavior, we might consider rolling back a few notches on the default assumption that the display should always be active while the user navigates an app's functions.

I'd like to 'see' more capabilities manipulated and navigated without the need for constant multi-sensory (audio + visual) attention.

III. Argument:

It appears that displays carry assumptions about me, the user. And it was by coming into conflict with those assumptions that I discovered them.
Because displays are used to portray views that the designer designed, I will from here on refer to displays as views.

A view assumes two things about me:

Assumption #1: I always want to see, visually, what the designer thought I would want to see at this point.

Assumption #2: Assumption #1 is important enough to spend battery energy on.

While these assumptions are not surprising, outlining them prepares us for a critical discussion of the role views play in what I, the user, really want to accomplish with my computers in the fabric of my daily life.

This prompts the question:

What would Google Glass be like without the display?