Sketching with Arduino Uno

Taking my first small steps with Arduino I’ve started going through tutorials to get to know both the electrical circuitry and the programming.

Below is a simple program made for lighting two red LEDs and a green LED, and changing the lighting when a button is pressed. In the tutorial it’s called a spaceship interface.

It helped me get a better understanding of pinMode() and the digitalWrite() function.
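The sketch itself is not shown here, but a minimal version along those lines might look like this. The pin numbers are my own assumption based on how I wired my circuit, so adjust them to match yours:

```cpp
// Spaceship interface: while the button is held, the two red LEDs
// flash back and forth; otherwise the green LED stays lit.
// Pin numbers are assumptions - wire to match your own circuit.

const int greenLed = 3;
const int redLed1 = 4;
const int redLed2 = 5;
const int buttonPin = 2;

void setup() {
  pinMode(greenLed, OUTPUT);
  pinMode(redLed1, OUTPUT);
  pinMode(redLed2, OUTPUT);
  pinMode(buttonPin, INPUT);
}

void loop() {
  if (digitalRead(buttonPin) == HIGH) {
    // Button pressed: alternate the red LEDs
    digitalWrite(greenLed, LOW);
    digitalWrite(redLed1, HIGH);
    digitalWrite(redLed2, LOW);
    delay(250);
    digitalWrite(redLed1, LOW);
    digitalWrite(redLed2, HIGH);
    delay(250);
  } else {
    // Idle: green LED on, red LEDs off
    digitalWrite(greenLed, HIGH);
    digitalWrite(redLed1, LOW);
    digitalWrite(redLed2, LOW);
  }
}
```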

I’m looking forward to working more with it.


Articulating interaction: Aesthetics of interactive systems

With a background as a graphic designer, I’m mainly used to treating aesthetics as visual communication, but also as something that is pleasing to the eye. It is a mix of functionality (does it convey the message I want it to, in a clear way?) and the “language” it speaks (what is the style like: is it aggressive, pleasant, clean, cool, etc.?).

In that regard I have not previously thought of aesthetics as something related to interaction.


Vinyl cover for Doom-metal band Gaia. In graphic design terms I would describe this as a minimalist, dark, psychedelic aesthetic.

In the text Aesthetic Interaction — A Pragmatist’s Aesthetics of Interactive Systems by Petersen et al., aesthetics is described as follows:

“Any aesthetic experience is dependent on context: the life and abilities of the user, the affordances of the artefact and in what ever physical and social space the interaction takes place. We need to think of the aesthetic experience as more than a chance for contemplation, but rather as an event that resides in context informing the people who experience it and the people they experience it with.”

In this definition, aesthetics encompasses a huge number of factors; basically, human culture itself is part of the aesthetics. So how do we incorporate all of this into our thinking when we design?

They suggest using aesthetic experience as a fifth element when designing interactive artifacts, as shown below in the table originally based on Bødker & Kammersgaard, 1984.


In this model, aesthetic experiences in human–computer interaction are shown as something that intrigues and promotes playing-with, with the human as improviser. It seeks to involve the user more personally in the use of the artifact, making the experience seem more ‘real’ and not feel as mediated or distanced as it otherwise might.

Petersen et al. write as a closing remark:
“We set up frame for interaction, but it is up to individual user to interpret and explore the system. The perspective of aesthetic experience creates a frame for allowing the user to express herself through the interaction.”

My main takeaway from this is that when I design interactive artifacts, I should keep the human’s self-expression in mind and allow for improvisation and play.


The time-tracking device ZEIo allows for both practical use and improvisation, by turning the die-shaped device onto a customizable surface.

Analyzing interactions 3: Giving form to computational things: developing a practice of interaction design

By Anna Vallgårda (2014)

My main takeaway from the text concerns something that has bothered me about some of the previous theoretical lenses: they don’t seem to regard the different worlds that an app or website exists in, i.e. the physical form of the tablet, computer, phone or display, versus what happens on the screen of the device. Finally, here is a text that takes into account that displays are able to change what they show us.

She describes it like this:

“To overcome the conceptual gap between temporal form and the physical form in interaction design, we need to acknowledge that the computer in practice never appears by itself – it is always part of a composition with other constituents capable of providing form, color, and texture.” 

Elaborating on this, she describes the computer as a material to work with, much like unrefined aluminum. By combining it with other materials, we can create something entirely new.

Using the computer, processor or other device as a material, we can create things not previously possible, namely interactive artifacts. These must juggle the physical form, the temporal form (what the computer changes over time) and the interaction gestalt (understood as the physical interaction properties of an object).


While none of this information is really new to me, it presents a different way in which to view interaction design. Though it remains unclear to me exactly what its practical use is.


Analyzing interactions 3: Interaction design and the design of aesthetic interactions

In this blogpost I will be analyzing my three chosen artifacts through the text by Youn-kyung Lim, Erik Stolterman, Heekyoung Jung and Justin Donaldson.

From the class exercise, it seemed that this model is quite imprecise for analytical purposes. As with “Exploring Relationships Between Interaction Attributes and Experience”, the different terms are very loosely described and sometimes quite ambiguous.

Google Maps


Google Maps seems to me to lean toward being a networked app rather than an independent one: there are many opportunities for external linking. Feeling-wise, a map seems like a very networked thing.


Using Google Maps feels like a continuous action; that goes for many of the microinteractions, such as the panning and zooming after entering an address. The same goes for ‘dragging’ the map with your mouse cursor.


I’d say a map is very direct, but it seems that most things concerned with showing data in some way would be. We would gain nothing from being shown a bunch of coordinates and algorithms, so indirectness seems pointless, at least with websites, unless they are experimental and very ‘artsy’.

Movement, Pace & State

It’s hard to judge Google Maps on these criteria, as it completely depends on how you use it. Initially it is static, but it has the potential for movement, and a lot of it; you could use it both ways. Pace and state are closely connected to this and are entirely dependent on the user.

Orderliness & Proximity

Due to the nature of maps, it is very orderly; otherwise it would be a useless map. The same goes for preciseness: a map that was not precise would be pointless.


This criterion seems unclear to me.


I find it hard to apply this criterion to Google Maps as well; the description is quite unclear to me.


This theoretical lens is completely impractical for a physical, non-technological artifact such as the stapler.

Microwave Oven

Most of the criteria do not seem suited to the microwave oven either. You could use some of them, but I don’t think they help me say much about the artifact.


Several of the criteria are so similar that they could have been a single category, and many of them are unclear to me. For articulating interaction the model might be useful, but I wouldn’t use this theoretical lens as a whole.

Analyzing Interactions 2: Microinteractions, designing with detail

Below I will be analyzing Google Maps, the stapler and the microwave oven using the terms Triggers, Rules, Feedback, and Loops and Modes.

Google Maps

For the analysis, I will be using Google Maps to show a given location on a map.

Presuming that we are already on the site, the first trigger for our location-finding microinteraction is the text field. As soon as we enter the site, a system-initiated trigger activates the search field. We know that white text fields are for writing in, and this one even has a label saying “Search Google Maps”; a small magnifying glass also helps people who might not be able to read to figure out the function of the text field. Using the most interesting parts of the major rules for triggers:


I think Google does this pretty well: they draw on our existing knowledge of text fields, but also elaborate with both a textual label and a symbol. The text field is placed on the map, indicating its relation to it. As long as you write the same thing in the search field, you will get the same result (though based on your Google account history, it might differ from other people’s). As you start typing, Google will start suggesting locations that are clickable, which is a way of bringing the data forward. The search field is definitely the most visible thing apart from the map itself.


The suggestion of addresses while you type is also a rule. After typing, you can either pick the location you deem correct from a list, or you can just press enter, and Google will pick the one it thinks you meant. A rule for the suggestions seems to be that they are based on your current geographic location. If you type an address that Google has no suggestions for, it asks you to add the location to the map; if you press enter, you will still be taken to Google’s best guess.
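Stripped of all the real complexity, the suggestion rule could be caricatured as prefix-matching plus a best-guess fallback. This is a toy sketch of my own, nothing like Google’s actual implementation, and all the names are made up:

```cpp
#include <string>
#include <vector>

// Toy caricature of the suggestion rule: propose known locations that
// match what has been typed so far; on enter with no explicit pick,
// fall back to the first (best-guess) suggestion.
std::vector<std::string> suggest(const std::string& typed,
                                 const std::vector<std::string>& known) {
    std::vector<std::string> out;
    for (const auto& place : known)
        if (place.rfind(typed, 0) == 0)  // true only for a prefix match
            out.push_back(place);
    return out;
}

std::string bestGuess(const std::string& typed,
                      const std::vector<std::string>& known) {
    auto matches = suggest(typed, known);
    // Empty result models the "add this location to the map?" case.
    return matches.empty() ? "" : matches.front();
}
```

The fallback mirrors what happens when you press enter without choosing from the list: you still get taken somewhere, namely the top suggestion.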


Once you select a location, the map pans and zooms to it. A small red pin shows you the exact spot on the map. You also get a new bar to the left, providing relevant information about the location, such as pictures, opening hours, etc. As mentioned before, if you type a location that Google is not sure about, it will ask you in text to add the location to the map; if you press enter, you will still be taken to Google’s best guess.

Loops and modes

For the described action, I don’t think I encounter any loops or modes. Google does have modes available, in that you can switch to the directions interface, choose satellite view, etc.


To start off with, it feels weird to analyze this sort of low-technology, practical artefact with a model that goes into such detail about the several steps of an interaction. On the other hand, you know the exact workings of an object such as this; nothing is really hidden from you. You could disassemble the stapler and look inside it, which you wouldn’t be able to do with Google Maps.


The quote “form follows function”, associated with Louis Sullivan, comes to mind in a case like this. There is not really a trigger on the stapler; rather, the whole stapler is a trigger. What it looks like also determines most of its use, and although you might not know that there are small metal staples inside, you will quickly find out if you press/punch it.


There are no real rules with a stapler, except for the laws of nature. I can staple pieces of paper (or a finger) together if I apply enough force. I can access the staple cartridge inside it if I flip the upper part in the other direction. If I don’t properly reassemble it, it won’t staple anything.


Again, these are all physical. You feel the resistance of the paper and stapler when you press down; when you reach a certain point, you feel the staple break through, and a clacking sound informs you that the small metal staple has been bent against the lower portion of the stapler. If you try to staple something without having staples in it, the only feedback is that no staple comes out.

Loops and modes

You could look at the stapler as having two modes: a regular mode and an open mode. The regular mode is used for stapling paper in locations where the inner space of the stapler allows the paper to fit. The open mode is used both for stapling in the middle of a larger sheet and for accessing the cartridge inside to reload the stapler with new staples.

Microwave Oven

This feels more like the sort of object the model was meant for. Again, since we don’t have direct access to the inner parts and the code inside, it is hard to describe the rules, loops and modes completely. The most we can do is try things out and see what happens.


The oven has many buttons and many triggers. The biggest trigger is the knob for setting time and weight and for choosing among the auto programs. All of these controls are labeled with text, but otherwise carry no real information.


The oven cannot be started while open, and it will stop if the door is opened while running.


Something that is both a rule and feedback: pressing a button results in tactile feedback as well as a beep. Turning the knob changes what is shown on the small LED display, for example the selected cooking time. When the oven is started, a light turns on inside it and the plate turns around; a humming sound also tells us that it is running.

Loops and modes

You could look at the oven’s modes in two ways: physically or in the display. Physically, the oven has an open and a closed mode, as described above, and specific rules and limitations apply when the door is open. Using the knob, you can enter auto-programme selection mode or time-setting mode.
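To make the mode idea concrete, the door rule and the running state could be sketched as a tiny state machine. This is purely illustrative; the names and structure are my own, not anything resembling the oven’s actual firmware:

```cpp
// Illustrative model of the oven's physical modes and the door rule.
enum class Door { Open, Closed };
enum class KnobMode { TimeSetting, AutoProgramme };

struct Oven {
    Door door = Door::Closed;
    KnobMode knob = KnobMode::TimeSetting;
    bool running = false;

    // Rule: the oven cannot be started while the door is open.
    bool start() {
        if (door == Door::Open) return false;
        running = true;
        return true;
    }

    // Rule: opening the door stops the oven immediately.
    void openDoor() {
        door = Door::Open;
        running = false;
    }

    void closeDoor() { door = Door::Closed; }
};
```

Writing it out like this makes it obvious how the open/closed mode gates the other interactions, which is exactly what the model means by rules being tied to modes.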

Comments on the model

A weakness of this model is that you might not always be able to tell, from an outside perspective, which triggers and rules are present. Even with a lot of testing, you might not find them all. With hypercomplex interactions, such as large algorithms, no living human might be able to say exactly what the rules are.

With some artefacts, like the stapler, there might not even be many (or any) microinteractions that are not also features of the artefact.

Some rules seem to be feedback, and vice versa.

Analyzing Interaction 2: Exploring Relationships Between Interaction Attributes and Experience

In the following blogpost I will analyze my three chosen objects using the article ‘Exploring Relationships Between Interaction Attributes and Experience’.

The article revolves around creating a vocabulary of different polarities and using them to label and describe different interactive objects.

Analyzing the 3 artifacts with the model

Below I’ve tried to plot my three artifacts using the model.


Good and bad

Below are a few criticisms I have of the model:

‘Powerful vs gentle’ is vaguely described: who or what is powerful? Is it a gentle way of handling the artifact, or is the artifact gentle in itself? In the case of Google Maps, I feel the interaction is powerful because it almost spins the earth for me, or transports me from one end of the world to the other with staggering speed. The stapler feels powerful because I use my own raw strength, and the leverage the stapler provides, to hammer a small piece of metal through paper. The microwave oven feels powerful through its use of heat, changing whatever you put in it.

The attributes often make you wonder: “What do they mean by that?” So a lot of the results are approximations or mere guesses.

It is also unclear whether the attributes are meant to represent truth, or a way to discuss a design. I think the model is better suited for the latter.

What would make sense for this model is to use it to test whether an artifact invokes the feelings that I want it to, and whether its purpose is clear to many users. It has a quantitative strength that many other models do not. If many users have differing opinions on the design, maybe it is not clear enough, and we won’t know how it would be received in an end market. It could also help give a direction for the marketing of an artifact. But since many of the terms are a bit unclear, I’m not certain how useful the results would be.

It reminds me a lot of the BERT analysis (used to compare before/after situations of different versions of a design). I’ve used this before with success, comparing a desired result to different users’ opinions of a final design, as part of user testing and prototyping.

Analyzing Interaction 1: Design of Everyday Things

Using the same three objects as in my previous post (Google Maps, the stapler and the microwave oven), I will try to analyze them through the theoretical work presented in Design of Everyday Things, primarily with regard to Affordances, Signifiers, Feedback, Mapping and Constraints.

Google Maps

The first thing you notice on Google Maps is a blinking text field. We know these afford writing; it is writeable. In relation to mapping, the text field is placed on top of a map. By doing this, the design relates the two, and we presume that typing the name of a place or an address will take us to the location in question. When we press enter after typing, we immediately get feedback in the form of a loading animation, and the map then pans and zooms to the location. Other signifiers tell us that we can get more options, or even directions.



A stapler affords pushing/punching. The small opening between the two parts of the stapler allows for a single piece of paper or a small stack to be put in there, but not more. An interesting finding here is that staplers usually afford containing much more paper than they can actually staple; a better design would give them a smaller opening (a constraint) and thus a realistic idea of their capability. Once we get to know a stapler better, we might also figure out that it affords opening (for putting in more staples) or stapling things to the middle of larger surfaces. Since a stapler is a mechanical object utilizing the force you apply to it, you get feedback in the form of sound, as well as tactile feedback when the stapler cannot move further down.


Microwave Oven

A microwave oven has plenty of buttons to press and a knob to turn. These all afford pushing or turning. The door affords opening/closing, and the space within affords placing things inside it. The designers of the oven have not done a lot of mapping. Initially we have no clue how to access the auto programs labeled under the LED display, and the knob for setting the time is placed at the opposite end of the controls from the display that shows the set time. Instead, they’ve given us various signifiers in the form of button labels.


My main takeaway from this theory was discovering what I think is a design fault in the tried-and-tested stapler. A seemingly small change could be made to reduce the frustration of only stapling half your stack of paper together. Illustrated below by a random internet user: