Is Your Audience Looking? Audience Measurement Using Intel AIM Suite
May 10, 2013
On Friday May 10 at 4:10pm there will be a talk on measuring audiences for large digital displays.
The speaker is Dr. Abhishek Ranjan from Intel, and the talk will be held at the University of Toronto (MC 331).
Information dissemination using large digital displays and digital signage networks is getting increasingly popular. With such displays being ubiquitous, a question naturally arises: are people actually paying attention to the information being shown on those displays? Traditionally, finding an answer to this question has been a lengthy and onerous process including observational studies, interviews, and relevant data correlation. Recent advancements in Computer Vision and computing power of processors have made it possible not only to simplify and automate this process, but also to provide rich audience information that would have been unfeasible to gather using traditional means. In this presentation, I will talk about Intel AIM Suite, a Computer Vision based audience measurement system, and discuss some interesting implications of this novel audience measurement technology.
Abhishek Ranjan is a Senior Software Engineer at Intel Corp. Prior to joining Intel, he was one of the lead engineers at CognoVision Solutions Inc, a Toronto start-up (acquired by Intel). He holds a Ph.D. and an M.S. in Computer Science from the University of Toronto. His primary research interests lie in the fields of applied Computer Vision and intelligent interfaces. He has authored numerous papers in HCI and Computer Vision conferences and journals.
A campus map is shown below. The Mechanical Engineering building is marked as MC, near the corner of King's College Road and College Street.
Search Engine Optimization I: Introduction
April 13, 2013
“The Internet is not a thing, a place, a single technology, or a mode of governance. It is an agreement.” – John Gage
You know, it wasn’t meant to be this way. When Vannevar Bush envisioned his version of the World Wide Web in the 1940s, it was called the Memex and it was a vast network with all kinds of paths running through it like trails in a park.
He even envisioned a new profession of trailmakers who would make paths through the hypertext that everyone could use. Imagine a path, for instance, that linked all articles relating to Human Evolution in the Wikipedia into a nice linear story about issues relating to human evolution in a nice coherent sequence. Well, I think you know that trail making really hasn’t happened on a large scale. Perhaps it was never practical. There just weren’t enough reference librarians in the world to blaze all the trails that were needed, and neither browsers nor websites were designed with trail making in mind.
But the Web is awfully big… “The Web universe is constantly expanding, so its size is unknowable. In 2008 Google noted that it had identified (but not actually indexed) over a trillion (10^12) distinct URLs (Web addresses), and that several billion (10^9) new webpages appear daily (Alpert & Hajaj, 2008). Estimates suggest that Google indexes about 40 billion webpages” (Fletcher, W. H. (2012). Corpus analysis of the world wide web. The Encyclopedia of Applied Linguistics).
40 billion unique pages, 50 billion unique pages? It depends on how you count them. I don’t know if you’ve ever looked for a mismatched sock in a sock drawer, but finding just the right information amongst billions of Web pages is a lot harder.
So how on earth can we use the Web effectively? You know the answer. It’s search. It used to be that search was for specially trained librarians who would use special knowledge and complicated Boolean search queries to find stuff. But something remarkable happened in the 1990s: ordinary people learned to use search engines. And search engines, particularly Google, learned how to give people pretty good answers, even if people weren’t generating particularly good queries. So it was a matter of figuring out what people were actually searching for based on the words that they put into the query. And by giving lists of search results, even if the one at the top wasn’t a good guess, people could just scan down to find something like what they were looking for. Amazingly, the system worked. Almost everyone could type words into a search engine text box and get back stuff that they find interesting. Unbelievable. If you’d gone to a conference on information retrieval in the 1980s and told them this was going to happen they would have thought you were high on paint thinner.
So there we were in the late 1990s, and people were happy using search engines to find stuff on the Internet. And people were also figuring out lots of neat new ways to make money off the World Wide Web. In retrospect, what happened next wasn’t so surprising, but I’d be lying if I said I had seen it coming.
The Age of Google! There, I said it. Forget about the Anthropocene, whoever owned search on the Internet had just got an amazing franchise. And Google took over search in what seemed like a blink of an eye. And suddenly it was like a vast turnstile that went “click” every time someone wanted to find something on the Web. And if you wanted people to find your stuff when they were looking for stuff, then you had to persuade Google to put your stuff as close to the top of their search engine results as you could. And Google would help you, for a price, and after a while other people too. And all at once the noble science of search engine optimization was born.
Web 2.0 is Getting Whipped by the Big Five when it comes to Online Display Advertising
April 11, 2013
Who’s making money on the Internet? There is a lot of Web 2.0 publishing out there. Wouldn’t it be nice if the little guys were beating up on the big guys?
Internet Retailer (http://www.internetretailer.com/2013/04/02/us-online-display-ad-spending-soars) recently cited a market research report finding that U.S. marketers spent around $15 billion on Internet display ads last year, an increase of over 20% from the previous year. With another 20% jump expected in online display ads in 2013, this form of advertising is rapidly becoming a big business. While there is as yet no dominant player in this market (remember that we are not including the search engine advertising monster here), Google was essentially tied with Facebook for the leadership position, each with an approximately 15 percent share of the online display advertising pie.
The image below comes from http://apac.mediamind.com/tag/display-advertising/
Yahoo! had a close to 10 percent market share (in third place) while fourth and fifth place went to Microsoft and AOL respectively (both with under 5 percent market share). These top five companies accounted for around half of the online display ad spending. Google’s market share was largely driven by YouTube, which had 163.2 million unique visitors in February, making it the fourth most-popular site on the web, according to Compete.com. Who knew that storing and showing videos for free could be so lucrative?
As far as marketing goes it certainly helps to have a dominant platform. According to Compete.com, YouTube’s closest competitor among online video platforms, Hulu, has less than one tenth the number of unique visitors.
Bottom line? For all the excitement about Web 2.0, don’t expect to see a lot of little guys getting rich off online display advertising. The big five companies are sucking up half of the revenues and that seems unlikely to change in the next little while. Given the year on year 20% growth in online display ad spending that’s good news for Google et al.
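Just to sanity-check the numbers above, here is a quick back-of-the-envelope calculation. All the shares are approximations taken from the figures cited in this post, not precise data:

```python
# Rough arithmetic behind the "top five take half" claim, using the
# approximate market shares cited above (all figures are estimates).
shares = {
    "Google": 0.15,     # roughly tied with Facebook
    "Facebook": 0.15,
    "Yahoo!": 0.10,     # "close to 10 percent"
    "Microsoft": 0.05,  # "under 5 percent"
    "AOL": 0.05,        # "under 5 percent"
}
top_five_share = sum(shares.values())
print(f"Top five combined share: about {top_five_share:.0%}")  # about 50%

# Projecting the ~20% year-on-year growth from the ~$15B spent last year:
spend_2012 = 15e9
spend_2013 = spend_2012 * 1.20
print(f"Projected 2013 spend: ${spend_2013 / 1e9:.0f}B")  # $18B
```

So on these rough numbers, another 20% jump would push online display spending toward $18 billion in 2013, with the big five still collecting about half of it.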
Brain to Brain Interfaces: Rats Get them First, but Death Row Inmates Next?
February 28, 2013
We’re pretty squeamish about doing inhumane experiments on people. But most of us don’t think twice about being inhumane with animals. There are so many examples out there, and it is not just testing for the development of cosmetics. How about breeding mice so that they are likely to get cancer so that we can then test anti-cancer drugs on them? There are lots of things that we do to animals that we wouldn’t want happening to us. And don’t get me started on the ethics of raising intelligent animals like pigs so that we can kill and eat them.
Maybe comparisons with other species are not fair because humans are “special” and we should not apply the same standards to humans that we apply to animals. But there are so many interesting experiments that might be very informative and helpful, if they didn’t have ethical problems. And not all of those experiments are conducive to using animal models.
Sometimes we are able to do what would otherwise be unethical experiments in special circumstances. Consider the case of electrodes implanted directly into the brain. Normally the ethical way to record brain activity is to use electrical recordings from the scalp (EEG) or a form of medical brain imaging (e.g., PET or MRI). Each of these approaches has problems. EEG data is noisy, and it is not well localized to specific areas in the brain. MRIs and PET scans are good at showing structural details in the brain, but they are not good at saying when exactly something happened. The more ethical indirect recording methods don’t provide the fine spatial AND temporal resolution available from electrodes implanted in the brain. But we have found a way around these problems in some cases. For instance, people with severe physical disabilities sometimes get electrode arrays implanted inside their brains so that they can communicate with computers by thinking. Ideally it’s a win/win situation where they provide research data while also getting a means of interacting with the world. And some people with severe epilepsy have had their brains probed with electrodes to collect scientific data about how the brain works (while they are having brain operations to try and relieve their seizures).
But even though we have been creative in using special populations to carry out brain research that would be considered inhumane for the general population, there are still many fascinating experiments that we can currently perform on animals but can’t perform on people.
Consider the case of direct brain-to-brain communication. Imagine that you could know exactly what another person was thinking. It may not be for everyone, but intelligence services would love to have it when interrogating spies. Recently, fascinating research on this topic has been carried out with rats (http://www.bbc.co.uk/news/science-environment-21604005) by Professor Miguel Nicolelis and his colleagues (http://www.nature.com/srep/2013/130228/srep01319/full/srep01319.html) at Duke University.
According to the BBC summary of the research: “The researchers first trained pairs of rats to solve a simple problem – pressing the correct lever when an indicator light above the lever switched on, to obtain a water sip [this was obviously a thirsty rat].
The researchers then placed the rodents in separate chambers and connected their brains using arrays of microelectrodes – each roughly one hundredth the diameter of a human hair – inserted into the area of the cortex that processes motor information.”
Here’s a picture of a rat showing the cap it is wearing and the wires that are transmitting its brain data.
One rat was designated as the “encoder”. Once this rat pressed the correct lever, its brain activity was delivered as electrical stimulation into the brain of the second rat – designated the “decoder”. The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward.
In order to receive the reward, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface. The following figures show how the experiment worked.
A light is switched on in the trained, encoder rat’s cage above one of two levers which will release a reward if pressed.
The encoder rat sees the light and presses a lever to receive a reward. As it does so, the brain signal is sent to the decoder rat’s brain.
The decoder rat interprets the brain signal, chooses the correct lever and gets a reward. If the decoder rat gets it right, the encoder rat gets an extra reward.
In other words, both rats were being rewarded for communicating with each other through their brain activity. And since it is an electronic link, if the communication works, it can work whether they are in neighboring cages in the same room, or if they are thousands of miles away from each other.
“The brain to brain interface (BTBI) allowed the decoder animals to repeatedly perform significantly above chance. This performance immediately dropped to chance levels when the cable was disconnected but the system remained fully functional.” So just in case you were wondering, the rats were not using ESP to perform the task.
Another point to note is that while some communication between the rat brains was occurring, it was by no means perfect. As can be seen in the following figure (from the journal paper), the decoder rat was pressing the correct lever about two thirds of the time. So the decoder was pressing the wrong lever about a third of the time, but that’s still a lot better than getting half of the selections wrong (as would occur by chance alone).
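As a rough illustration of why two-thirds correct beats chance, here is a quick one-sided binomial calculation. The trial count used below is purely hypothetical (the actual numbers are in the journal paper); the point is just that a two-thirds hit rate over a reasonable number of trials is very unlikely under pure 50/50 guessing:

```python
from math import comb

def binom_p_value(successes: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided probability of getting at least `successes` correct
    lever presses by chance alone (exact binomial tail sum)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical numbers: the paper reports roughly two-thirds correct;
# the trial count of 50 here is illustrative, not taken from the study.
trials = 50
successes = round(trials * 2 / 3)  # 33 correct out of 50
p = binom_p_value(successes, trials)
print(f"{successes}/{trials} correct; probability under chance ≈ {p:.4f}")
```

With these illustrative numbers the tail probability comes out well under 0.05, which is the sense in which the decoder rats were performing "significantly above chance".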
I am intrigued by this study. It would be surprising if something similar couldn’t be made to work for human beings. But what about the problem of getting electrodes implanted in human brains for this study? Well, here’s a suggestion. Why not ask people on death row to volunteer? Maybe they could be given an extra week of life for each inhumane experiment they participated in, or maybe they could be entered into a lottery where they had a chance to get their sentence commuted to life in prison if they did the experiment. For me, it’s hard to see what could be more inhumane than putting a person to death. And maybe it would be inhumane not to give people the chance to pay back their debt to society by providing valuable scientific data?
Now I know that some people regard a death sentence as necessary punishment and deterrence, but maybe we could take advantage of the situation for the good of science. Sure, it might be a biased sample of rogues and political prisoners that we would be collecting, but we are also talking about general brain processes that should probably transfer fairly readily to the general population.
February 10, 2013
In agile design, stories are the basis for identifying new features for software and applications. A story typically covers a basic unit of required functionality. In user experience design, stories incorporate real people in real places doing real things (http://uxmag.com/articles/storyboarding-in-the-software-design-process).
Whitney Quesenbery and Kevin Brooks, in their book, Storytelling for User Experience, list the following benefits of stories:
• They help us gather and share information about users, tasks, and goals.
• They put a human face on analytic data.
• They can spark new design concepts and encourage collaboration and innovation.
• They are a way to share ideas and create a sense of shared history and purpose.
• They help us understand the world by giving us insight into people who are not just like us.
• They can even persuade others of the value of our contribution.
If we have a story, we could simply write it down as in a screenplay, but stories often come to life when they are accompanied by sketches. In user interface design we can intermix user interface sketches and wireframes with storyboard panels that emphasize the person and the context of usage (e.g., the person is using a mobile device in a railway station). Storyboards allow us to portray user interactions within a real world context.
Sketching storyboards is great, but it is not so easy to edit and re-order sketched storyboards. This is where cartoon and storyboarding software comes in. You can digitize your sketches as electronic images/photos and then import them into storyboarding software where you can create interesting storyboard layouts, reorder images, add sketches, and add text. You can also intermix hand-drawn sketches with software wireframes.
You can see an example of using storyboard software in the following video: http://www.youtube.com/watch?v=ByAuQ3AC84A&feature=player_embedded
Two tools for creating software storyboards are Indigo Studio and Comic Life.
Comic Life (http://en.wikipedia.org/wiki/Comic_Life) “is a comic desktop publishing computer program developed by plasq, originally for Mac OS X. plasq has since released versions of Comic Life for Windows and iOS. Comic Life was previously bundled with Intel Macs. The app is designed to create comic pages using digital photos.”
The page below came from an early mini-review of Comic Life. (http://www.tuaw.com/2005/04/15/comic-live/)
I’m no artist, so I find storyboarding software to be extremely useful in creating a bridge between static wireframes and interactive experiences.
It’s true that software prototyping tools now allow you to move from wireframe to interactive prototype very easily, but there is still a place for storyboards because they allow you to show the context of use of the application interface that you are designing.
Bottom line, storyboarding is a great technique for UI designers, and there is good storyboarding software out there which can add a lot of value to wireframes and sketches as an adjunct to the process of building interactive prototypes.
Ten User Experience Books for Practitioners
February 10, 2013
I recently came across this list of user experience books (http://shaikhjees.com/top-10-best-user-experience-ux-books/16577/). It looks like a pretty good list, and it is definitely focused on the practical task of building user interfaces. Perhaps the most theoretical of the books is Don Norman’s The Design of Everyday Things, but even that one is pretty practical in describing the major principles of user-centred design. There are other lists and other books out there, so I’m not claiming that this is definitive. It was originally presented as a “top ten” list, and while I’d be reluctant to say that these are the top 10 books in the area, they are certainly interesting ones.
1. Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition
2. A Project Guide to UX Design: For user experience designers in the field or in the making
3. Sketching User Experiences: Getting the Design Right and the Right Design
4. The Elements of User Experience: User-Centered Design for the Web
5. The Design of Everyday Things
6. Communicating Design: Developing Web Site Documentation for Design and Planning
7. Designing Interactions
8. Designing Interfaces: Patterns for Effective Interaction Design
9. Storytelling for User Experience: Crafting Stories for Better Design
10. Prototyping: A Practitioner’s Guide
Lean Startup Methodology Case Study: Try the New Event Recommendation Service
February 8, 2013
The Lean Startup method is really hot at the moment, and maybe the best way to get to know the method is to try it out. The idea is to roll out a “minimum viable product” to a bunch of initial customers, see if they are willing to use it, and learn about what they do and don’t like about it.
We’re interested in building an event recommendation system. We could have tried to use some fancy algorithms and artificial intelligence tools to make automated suggestions from a massive database constructed from event feeds and curated recommendations, but that would be a lot of work and it might be wasted if no one really wanted to use such a service. So, using the lean approach, we have created a kind of discussion board where people can post requests for event suggestions, and where others can make event recommendations in response to those requests. It’s a barebones tool that took us a couple of days to put together, but already it has features like commenting, voting, and karma points for participation.
Our event recommender MVP is at:
Please try it out!
In the first day of operation for our event recommendation MVP we managed to get five active users (including yours truly). We received six event requests (with a total of four recommendations on two of the events) and a total of 34 views on different event pages. We were also fortunate to get five feature requests/comments from one of the users which we are now working on for the next iteration of the MVP. Nothing to write home about, but it’s a start.
Our event recommender is very bare bones and we are still testing the idea that people would want to use such a thing. We are hoping that you will give it a try. It is one of those things where the more people use it, the more useful it will become. We currently have a policy where every event request gets answered with a recommendation within 48 hours. One of the hypotheses we are testing is whether or not people will be interested in making event recommendations as well as sending out requests for event recommendations.
This is an interesting Lean Startup case study and we hope that you will try the system out and follow it over the next few weeks as we seek to develop it further using the lean startup approach. Ideally it will not only be a valuable learning experience for you (and for us) but it will also help you plan events in your life as well as help you meet other people who are interested in some of the things that you are interested in.
You may be wondering why we are interested in event recommendation, when it seems to be a crowded space. Our answer is: yes, a lot of people are trying to do event recommendation, but so far no one is doing it that well, or owns the space. Most recommenders focus on a particular class of events (e.g., concerts) or a particular location (e.g., San Francisco). We want to make our event recommender completely general, and we want it to be pulled by people’s needs rather than pushed by the need to sell ticketed events or whatever. We believe in the power of social recommendation (a kind of Craigslist for events), and we want to be the place where people go when they want a high quality recommendation for an event need/request that nobody may have heard of before.
3D Printing is Slowly Taking Off
February 8, 2013
It seems that 3D printing is not one of those technologies that takes the world by storm like the smartphone; it is more like the digital camera, slowly getting better and better until it eventually takes over. Maybe 3D printing will never be the answer for every manufacturing need, but there are an expanding number of applications where it can do really well. We’ll highlight two 3D printing advances that have long term implications.
The first is 3D printing of whole buildings. Sounds like science fiction, right? But this is what designer Enrico Dini claimed to be able to do with his D-shape printer back in 2010 (http://inhabitat.com/3-d-printer-creates-entire-buildings-from-solid-rock/). It is hard to know how practical this will be in future and it seems that Dini is thinking about the problem of making buildings on the moon out of moon rock.
Here is a more recent picture of Dini and one of his “buildings”, but one gets the sense that this is still a long way from prime time. In principle, though, 3D printing of buildings sounds like a great idea.
If you have ever seen a whole tract of housing being constructed at the same time you will appreciate what an assembly line process it is. First the diggers, then pouring the foundations, then the carpenters, then the plumbers, then the electricians, then the bricklayers. Ok, well I may not have it in exactly the right order, but you get the idea. Given that there is already a high degree of “programming” in construction it wouldn’t surprise me if 3D printing had a role to play, and particularly if we are prepared to make some compromises in what the building looks like (Dini’s technology makes curved buildings).
While we might still be some way from 3-D printing of buildings on a large scale, 3-D printing in medicine is actually here and being used. A good example was a jaw transplant in the Netherlands last June, in which an 83-year-old patient received a new jaw. The twist in this case was that the jaw had been 3-D printed.
“The patient involved had developed a chronic bone infection. Doctors believed reconstructive surgery would have been risky because of her age and so opted for the new technology.”
3-D printing is also being explored as a way to build new bone (http://www.bbc.co.uk/news/technology-15963467). The idea is to use 3-D printing to create custom scaffolds on which new bone can be built by the body.
One final example of 3-D printing in medicine that we will cite here is the construction of artificial blood vessels on a 3D printer to assist in transplants of lab-created organs (http://www.bbc.co.uk/news/technology-14946808).
And it is not just construction and medicine. 3-D printing has also been touted as a threat to mass production in manufacturing (http://www.bbc.co.uk/news/business-14282091).
“It could make off-shore manufacturing half way round the world far less cost effective than doing it at home, if users can get the part they need printed off just round the corner at a 3D print shop on the high street.
“Rather than stockpile spare parts and components in locations all over the world,” he argues, “the designs could be costlessly stored in virtual computer warehouses waiting to be printed locally when required.”
I don’t know about you, but I like the idea of having local craftspeople and designers churning out products on demand and down the street. Right now we are shipping raw materials halfway around the world and then shipping finished goods back again. It seems like a lot of waste.
It’s clear that 3-D printing is currently a technology that is being driven by many diverse applications. Maybe eventually it will lead to a revolution in housing construction, or to maker communities of designers and craftsmen who build local products on demand from designs stored in computers. Maybe soon every woman will be able to wear her own uniquely designed dress. But each of the applications tends to use a different kind of 3-D printing form factor and technology. So expect to see incremental gains rather than revolutionary advances as 3-D printing scales up.
Revising the Taxonomy of User Interface Terminology after 22 Years
February 5, 2013
In 1990 I created a taxonomy of user interface terminology. A taxonomy of user interface functions had been developed by my friend Jim Carter a few years earlier, but it focused on user interface functions and I wanted something a little broader that included things like the cognitive engineering background to human computer interaction and user interface design. It’s been 22 years since the taxonomy was published and the taxonomy is getting a bit long in the tooth considering how much the world of computing, devices, and user interfaces has changed in the interim. I looked around and didn’t find a lot of similar taxonomies, although there is one taxonomy from 1992 that is somewhat related (I will review that taxonomy in more detail in a separate post).
Here’s a quick overview of the 1990 taxonomy that I developed. The development of the taxonomy began with the assumption that the relevant topics could be organized into a hierarchical representation. While a tangled network might ultimately be more appropriate, the strategy was to start by forcing everything into a tree structure and then create associative links across the branches of the tree as found necessary.
The first step in developing the hierarchy was the selection of a relevant set of terms that spanned the field of human-computer interaction. In the original (1990) version of the taxonomy, the terms were chosen from three sources: section headings in the handbook of human-computer interaction (Helander, 1988); index terms from a well-known book on user interface design (Shneiderman, 1987); and lecture notes from courses on cognitive engineering and intelligent interfaces that I had taught at the University of Southern California. The terms that were initially selected were further refined based on the comments of various colleagues and reviewers.
A sorting task was then used to organize the selected terms into a hierarchy. The terms were each written on separate 3 x 5 cards. The cards were then shuffled, spread over a large table top, and grouped according to which cards appeared to belong together. This was done without any (conscious) preconceived notion of what the organization should be. Once each group was formed it was collected together (tied with a rubber band) and then assigned a label. After all the cards had been placed in groups, the groups themselves were formed into groups. This agglomerative process was continued until all the subgroups (and the terms that they contained) had been collected into a single supergroup representing the root of the tree.
The results of the sorting process were then transcribed onto a large sheet of paper and edited. This editing included revising the names of terms and categories to fit in with standard usage, and the expansion of terms within a group. This type of expansion was found to be much easier after a basic structure (taxonomy) had been defined. The revised taxonomy was then passed to members of the user interface and hypermedia groups at the Institute of Systems Science, National University of Singapore, for their review. Further revisions then led to the version of the taxonomy that was published in the SIGCHI Bulletin in 1990.
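The agglomerative card-sorting procedure described above can be sketched in code: start with each term as its own group, then repeatedly merge the two most similar groups until a single supergroup (the root) remains. Everything here, from the sample terms to the invented feature tags and the Jaccard similarity, is a toy illustration, not the actual 1990 process:

```python
from itertools import combinations

# Hypothetical terms with invented feature tags (stand-ins for the
# intuitive "belongs together" judgments made during the card sort).
terms = {
    "menu": {"interface", "input"},
    "dialog box": {"interface", "input"},
    "mental model": {"cognition"},
    "working memory": {"cognition"},
    "usability test": {"evaluation"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity between two tag sets: shared tags / all tags."""
    return len(a & b) / len(a | b)

# Each group is (member_terms, union_of_tags); merging two groups is
# like banding two piles of cards together and labeling the pile.
groups = [([t], tags) for t, tags in terms.items()]
merges = []
while len(groups) > 1:
    # Find the most similar pair of groups and merge them.
    i, j = max(combinations(range(len(groups)), 2),
               key=lambda ij: jaccard(groups[ij[0]][1], groups[ij[1]][1]))
    gi, gj = groups[i], groups[j]
    merged = (gi[0] + gj[0], gi[1] | gj[1])
    merges.append((gi[0], gj[0]))
    groups = [g for k, g in enumerate(groups) if k not in (i, j)] + [merged]

for a, b in merges:
    print(f"merged {a} + {b}")
```

The first merges pair up the most obviously related terms ("menu" with "dialog box"), and the final merge produces the root supergroup, mirroring the bottom-up process the post describes.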
The taxonomy was designed to organize terms in human-computer interaction for use in research and instruction. Thus its aim and character were different from a taxonomy of user-oriented functions that had been developed earlier (Carter, 1986). Carter’s taxonomy was concerned with the standardization of user interface functions or activities. In this taxonomy, we are concerned with the standardization of terminology across the discipline of human-computer interaction. Our interpretation of human-computer interaction is broadly defined, ranging from cognitive science issues underlying user behaviour to design guidelines and tools.
The Four Main Categories of Terms
The taxonomy that resulted from the process described above had four main branches, which are described in this section. A detailed analysis of each of these branches will be given in the following section.
The four main branches (categories of terms) of the taxonomy are as follows:
1. The Basic Interface Model
2. Cognitive Engineering
3. User Interface Engineering
4. Application Types
To give you a flavor of the taxonomy, here is an example of the main sections for the user interface engineering branch of the taxonomy.
And here are the sub-branches for interaction evaluation (section 3.4 under user interface engineering).
In this taxonomy, the basic interface model is a simple characterization of the major components of all interfaces. In this model, a transaction begins with a (task-related) goal in the mind of the user. This then leads to a user behaviour (such as pointing with a pointing device and similar behaviours) which occurs in the context defined by the current status of the task and the computer system. The behaviour and the context in which it occurs then jointly define an action that is carried out by the system (such as retrieving a file, or presenting requested information on the screen). Displays consist of information that is presented to the user for its own sake, i.e., to be read or listened to. Effects are outputs from the system that are designed to assist users in interpreting the system’s actions and updating their mental model of system and task status. Forms are consistent models in which actions, effects, and displays are embedded. They generally conform to the notion of interface metaphors.
The basic interface model contains declarative knowledge about what user interfaces consist of. The second main category is cognitive engineering, which covers the areas of applied cognitive science that are relevant to understanding human-computer interaction.
The user interface engineering category includes subtopics that are relevant to the interface engineering process, without making any commitments as to what the (procedural) details of that process might be. While there have been numerous books and papers on the software engineering process, and on processes of design in general, the user interface design and implementation process was less well defined when the original version of the taxonomy was created in 1990. For instance, the overall process of user interface design was not directly addressed by Hartson and Hix (1989) in their lengthy review of human-computer interface development.
The fourth main category deals with general classes of application to which user interface design is directed. It is difficult to discuss broad issues in user interface design without regard to the type of application being considered. In addition, the type of application often has a strong influence on the type of user interface that is developed and the general issues that are considered. For instance, user navigation issues tend to predominate in information technology interfaces, while cursor movement and text selection tend to be particularly important in applications such as word processing.
Each of the four main branches of the taxonomy is potentially controversial. For instance, there may be other ways of defining a basic interface model, or other topics in applied cognitive science that should take precedence over the topics selected in the taxonomy. However, while the details of each branch may be debatable, it seems useful to distinguish between the broad issues represented by each branch. The major impact of this taxonomy is that it distinguishes between the basic interface model (the what) and the interface engineering process (the how), and it also recognizes the importance of cognitive engineering as a foundation for human-computer interaction. In addition, the heterogeneity of user interfaces across different types of application is also recognized.
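Since the taxonomy is, at bottom, a hierarchical data structure, it can be handy to sketch it as one. The Python skeleton below is a minimal illustration, not the taxonomy itself: only the four main branch names and section 3.4 (interaction evaluation) come from this post, the name of the fourth branch is my own paraphrase of "general classes of application", and the deeper levels would have to be filled in from the full taxonomy.

```python
# A skeleton of the taxonomy's top-level structure as a nested dictionary.
# Each key is a branch name; each value is a dict of its sub-branches.
taxonomy = {
    "1. Basic Interface Model": {},
    "2. Cognitive Engineering": {},
    "3. User Interface Engineering": {
        # The one section named in the post; its own sub-branches
        # (the interaction evaluation topics) are omitted here.
        "3.4 Interaction Evaluation": {},
    },
    "4. Classes of Application": {},  # my paraphrase, not an official label
}

def count_terms(branch: dict) -> int:
    """Recursively count all entries in a (sub)tree of the taxonomy."""
    return sum(1 + count_terms(sub) for sub in branch.values())

print(count_terms(taxonomy))  # prints 5 for this skeleton
```

Representing the taxonomy this way makes operations like counting terms, listing a branch, or diffing a revised version against the 1990 original a matter of simple tree traversal.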
Now, twenty-two years later, I feel that the taxonomy has withstood the test of time reasonably well. But perhaps it is time to produce a revised version. I don’t know how long this will take me, but your comments would be very welcome.
Carter, J.A. Jr. (1986). A taxonomy of user-oriented functions. International Journal of Man-Machine Studies, 24, 195-292.
Chignell, M.H. (1990). A taxonomy of user interface terminology. ACM SIGCHI Bulletin, 21(4), 27.
Hartson, H.R. and Hix, D. (1989). Human-computer interface development: Concepts and systems for its management. ACM Computing Surveys, 21, 5-91.
Helander, M. (Ed.) (1988). Handbook of Human-Computer Interaction. Amsterdam: North-Holland.
Shneiderman, B. (1987). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading, Mass.: Addison-Wesley.
Communication Needs of the Elderly
February 4, 2013
The Public Health Agency of Canada (PHAC) has put out some guidelines on communication with the elderly and we’ll summarize some of the most interesting points here to save you reading the whole document.
If you are interested, the PDF version of the document can be found at: http://www.phac-aspc.gc.ca/seniors-aines/alt-formats/pdf/publications/public/various-varies/afcomm-commavecaines/AFComm-Commavecaines-eng.pdf
The following excerpts give a flavor of the report which makes interesting reading for those interested in providing services, or creating applications, for seniors.
“Seniors currently (2005) make up 13% of Canada’s population—projections show that by 2036, they will account for close to 25% of the population.”
“Women account for 52% of seniors aged 65 to 69 and for 75% of those 90 years or older. Differences in life expectancy between men and women have begun to narrow, a trend that is projected to continue.”
The following numbers surprised me a bit:
“As of 2001, almost all seniors (93%) were living in private households—45% with a spouse or partner, 27% alone, and 18% with a child or grandchild. Only 7% were living in an institutional setting (for example, a long-term care facility).” I would have expected a higher proportion in long term care. But maybe it’s a matter of definitions. Perhaps there are a variety of assisted living arrangements that aren’t classified as long term care.
“Among seniors, more than three quarters (77%) of men and just over half of women (52%) were married or in common-law living arrangements, as of 2001.” Another possibly surprising, but good statistic. Most seniors have partners, and men in particular are doing well with over three-quarters having partners. While isolation may be a problem for many of the elderly it seems that the majority have some sort of companionship.
The following finding is even more surprising and is not such good news:
“Over 80% of seniors have low literacy skills that do not enable them to cope well in today’s complex knowledge society, or to make effective use of such documents as transportation schedules, maps and charts. This is also the case with numeracy skills—with 88% of seniors lacking skills needed to manage effectively the mathematical requirements of a range of situations.”
Note that this figure doesn’t mean that people cannot read and write at all, but rather that they do not have enough of these skills to function well in our society. There are many complex tasks such as doing taxes or reading legal documents and many seniors have trouble doing them.
Seniors in Canada are a good market. “Senior households spent a total of $69 billion in 1996” and will be spending a lot more now. “Most of their expenditures are on personal consumption. In 2003, for example, among couples aged 65 to 74 years, 74 cents of every dollar was spent on personal consumption, with the remainder going to taxes (16 cents), savings (4 cents), security (3 cents) and gifts/contributions (3 cents). Two thirds of personal consumption expenditures were on accommodation, transportation and food.”
From my own observation I would say that healthy seniors like to travel. For instance, a 70-year-old that I know spends the worst part of the winter each year in Key West. When I’m in Toronto this time of year I feel a bit of a mug.
Seniors have more leisure time and disposable income than people in other age groups, but they are also afflicted by more disabilities.
PHAC reports that “more than four in ten Canadians aged 65 and older (43%) reported having a disability (condition or health problem) that limits their everyday activities, compared to about 17% of the population aged 15 and older. The disability rate rises with age—more than half (56%) of seniors aged 75 and older reported having an activity limitation.”
“In 2003, 23% of families headed by a person aged 65 or older had access to the Internet from home—up from less than 5% in 1997. More recent findings (2004) show that almost one third (31%) of Canadian seniors are online.” And that number has increased since then and is likely to continue to increase. Those seniors who are connected often spend a lot of time online from home, providing good opportunities for businesses and marketers.

The human body wears out over time and there are some natural changes that tend to occur to almost everyone, although with a lot of variation in age of onset. Here is a table from the PHAC report that looks at sensory changes with aging.
There are also cognitive changes with aging, including changes in memory, reasoning and abstract thinking. These changes are often mild and affect only a small percentage of younger seniors, although their prevalence and severity tend to increase with age.
Here’s an interesting checklist of how one should communicate with seniors.
The PHAC report provides a lot of helpful advice, some of which might seem to border on common sense.
The following checklist for website design seems particularly useful, however, and was derived from material originally published by two US agencies (the National Institute on Aging and the National Library of Medicine).
A lot of problems arise for seniors in public places because they have a hard time hearing public announcements. Here is an example of how Vancouver Airport handles the problem:
“Vancouver International Airport has a specially tailored PA system and flight information displays equipped with telephone access for those who can’t read the displays. PA speakers are installed at 15-foot intervals, so that announcements can be broadcast at lower volume—more speakers at lower volume makes the message more intelligible. In some areas, announcements are presented visually on a board or video display. Check-in counters are also equipped with telephone handsets to amplify conversations between passengers and counter staff.”
Some of the guidelines for the elderly seem like they would make communication more effective for all age groups, not just the elderly. Here is an example of a print design checklist that looks like it is not just for the elderly.
Here are a couple of final quotes from the report summary.
“Don’t try to persuade seniors you’re doing them a favour. Embrace senior-friendly communication because it is logical and makes sense for your program objectives or your bottom line: seniors bring their business to senior-friendly stores and businesses, and they’re loyal customers when they’re well served.”
“Seniors have time, energy and insight born of life experience—they’re a valuable asset to a society that respects them and takes the time to think about effective ways of reaching out to them.”
Bottom line: seniors are an important and ever-increasing portion of the population. They are a good market, but they have special needs, and those needs should be considered carefully in designing communications, products, and services.