Philip Guo

What is HCI research? And what is its relationship to computer science?

Summary
Here is my attempt to explain what HCI (human-computer interaction) research is, what its main challenges are, and why it belongs in a computer science department. This document is biased by my own background as a computer systems researcher and the kinds of technical systems-building HCI work that I do.

What is HCI research?

To me, research in HCI involves both understanding how humans interact with computers and creating better ways for humans to interact with computers. A more expansive view makes HCI also about understanding how humans use computers to interact with other humans, and then creating better ways for humans to interact with other humans via computers.

By “computer” I mean any sort of computational device (e.g., smartphones, smartwatches, tablets, Internet-of-things) – not just a traditional desktop or laptop computer.


Why is HCI important to computer science?

Because over 3 billion people around the world now directly interact with some sort of computer, whether a traditional desktop/laptop or a mobile device. To have computer science as a field study computers only for their own sake, without taking humans into account, is to ignore a core reason why computers were invented: to serve humans.

As cheesy as it sounds, I believe that humans and computers should be viewed together as a human-computer system, to make the most of both sides' strengths. Do we want to keep working toward a future where we're replaced by machines running fully-automated algorithms, or one where we work symbiotically with machines? I'd much prefer the latter. Of course, full automation is preferable for many problems where it's too slow or tedious for humans to intervene, but I still often prefer for humans to remain in control, albeit with machine assistance.

Why should HCI faculty be part of a computer science (CS) department?

HCI is a broad interdisciplinary field whose faculty have found good homes in many academic departments, including cognitive science, psychology, communications, the social sciences, information schools, and design schools. It also spans the spectrum from study-based research (understanding how humans interact with computers) to engineering-based research (creating better ways for humans to interact with computers). Thus, many HCI faculty thrive best in non-CS departments. That said, I strongly feel that the engineering-based systems-building end of HCI research firmly belongs in a CS department, since it's an extension of computer systems research. Here's why:

In computer systems research, we design, scale, and evaluate better computing systems for particular tasks (e.g., parallel programming, network routing). HCI naturally extends this research tradition by incorporating a human (or a group of humans) into the loop as part of the computing system. Thus, these human-machine hybrid systems must be designed taking into account both the physical constraints of the machine (e.g., processor speed, networking capabilities, caches) and the “meaty” constraints of the human brain (e.g., attention, memory).
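To make that concrete, consider interface latency. The machine side contributes processing and network delays, while the human side imposes well-known perceptual thresholds (roughly 0.1 seconds to feel instantaneous, 1 second to preserve a user's flow of thought, and 10 seconds to hold their attention, per classic response-time guidelines from the HCI literature). Here's a hypothetical sketch, in Python, of a design rule that budgets for both sides at once:

```python
# A hypothetical sketch: choosing UI feedback based on an operation's
# expected latency, using the classic ~0.1s / ~1s / ~10s human
# perceptual thresholds alongside the machine's measured performance.

def feedback_strategy(expected_latency_sec: float) -> str:
    """Map an expected machine-side delay to an appropriate UI response."""
    if expected_latency_sec <= 0.1:
        return "update in place"      # feels instantaneous; no indicator needed
    elif expected_latency_sec <= 1.0:
        return "show busy cursor"     # preserves the user's flow of thought
    elif expected_latency_sec <= 10.0:
        return "show progress bar"    # keeps the user's attention on the task
    else:
        return "run in background, notify on completion"  # user will task-switch

print(feedback_strategy(0.3))  # -> "show busy cursor"
```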

(NB: My systems-centric view of HCI is shaped by my own biases as a former systems/programming-languages/software-engineering researcher before I moved into HCI. I have CS colleagues who came into HCI from AI backgrounds, and they view HCI as an extension of AI with humans in the loop.)

Throughout the history of CS as a field, there has been a progression of research emphasis from the theoretical to the applied – i.e., from pure mathematics to CS theory to algorithms to systems/AI. As CS now becomes a mature field, it's vital to incorporate HCI into the research portfolio on the applied end of any vibrant CS department. Top departments are all leading by example here with a strong HCI presence: e.g., Stanford, CMU, UC Berkeley, MIT, UW, UIUC, Georgia Tech, and many others.

A wonderful side benefit of having HCI in a CS department is that it naturally lends itself to collaborations with many other sub-fields of CS, since it's easy to find connections to human-related issues no matter what domain you work in.

What are the main challenges of HCI research?

Beyond the usual challenges of devising novel algorithms and system architectures, one of the central challenges of the kind of technical, systems-building HCI research that I do is making interfaces that really work for real people in the real world.

It's surprisingly hard to make something that people use effectively in the ways you intended, since people aren't perfectly rational agents. Rather, people have all sorts of irrational quirks and rarely do what you expect when interacting with machines. There are no formal proofs of correctness and no elegant closed-form solutions to this very human problem. Thus, to make headway you need to first develop a deep understanding of how people work in your target domain (via both theory and firsthand observation) and then be willing to iterate extensively on your designs.

Another big challenge in HCI research is figuring out the proper degree of automation: i.e., when should you provide affordances for humans to make decisions, and when should you rely on the machine to act automatically? Again, there are no clean closed-form solutions that give the right answer in all cases.
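To make this tradeoff concrete, here's a minimal sketch (in Python, with hypothetical names and an assumed confidence threshold) of one common mixed-initiative pattern: the machine acts automatically only when its confidence is high, and otherwise surfaces the decision to the human. The threshold itself is a design parameter that must be tuned empirically, not derived formally:

```python
# A minimal sketch of a mixed-initiative decision loop (hypothetical names).
# The system auto-applies a suggestion only when its confidence is high;
# otherwise it provides an affordance for the human to decide.

from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # e.g., "auto-correct 'teh' -> 'the'"
    confidence: float  # machine's estimated probability of being right, 0.0-1.0

AUTO_APPLY_THRESHOLD = 0.95  # assumed value; in practice, tuned via user testing

def handle(suggestion: Suggestion, ask_user) -> bool:
    """Apply the suggestion automatically or hand the decision to the human."""
    if suggestion.confidence >= AUTO_APPLY_THRESHOLD:
        return True  # machine decides: apply silently
    # Below the threshold, let the human make the call.
    return ask_user(suggestion)

# Example usage with a stub for the human-in-the-loop prompt:
if __name__ == "__main__":
    s = Suggestion(action="auto-correct 'teh' -> 'the'", confidence=0.80)
    applied = handle(s, ask_user=lambda sug: input(f"{sug.action}? [y/n] ") == "y")
    print("applied" if applied else "rejected")
```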

Is HCI research about building flashy apps?

We often see cool-looking interactive apps resulting from HCI projects. Aren't you just doing what tech companies do? Where's the research?

Because parts of HCI are highly applied and pragmatic, this kind of research often results in interactive apps of the sort that tech companies create. Even more confusingly, successful HCI projects sometimes directly lead to the founding of tech startups that build such apps!

However, merely building a flashy app that extends already-existing ideas in simple ways isn't sufficient to qualify as research. In HCI research, an app is merely an embodiment of novel interaction ideas and serves as a way to evaluate those ideas on real users. Just like in AI or computer systems research, the prototype itself isn't the end product – it embodies a research idea that can be empirically tested. Thus, for HCI research to be meaningful, there must be some deeper and more generalizable knowledge about human-computer interaction that is being validated by these prototype apps.

What new knowledge does HCI research generate?

HCI research generates new knowledge both about how people interact with computers and also about what kinds of user interfaces are most effective for people to perform tasks that are otherwise infeasible to do without computers.

For systems-building HCI research, oftentimes the knowledge generated is a novel invention that makes something possible that was otherwise infeasible to do by humans alone without machine assistance or, vice versa, by machines alone in a purely-automated fashion without human assistance. These systems are often interactive – i.e., involving both humans and machines working together cooperatively.

For more details, read Wobbrock's Research Contribution Types in Human-Computer Interaction.

Where's the theory behind all of these HCI systems?

Many sub-fields within CS draw upon theory from mathematics. Thus for many CS researchers, theory == mathematically-inspired formalism.

In contrast, HCI draws upon theory from fields such as cognitive science, psychology, design, social sciences, and communications. One way to frame HCI research is as a way to operationalize theoretical constructs from these fields, bringing them into the world by attaching them to computational entities.
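For a concrete (and classic) example: Fitts's law, a model of human pointing performance from experimental psychology, is routinely operationalized in HCI to predict how long it will take a user to acquire an on-screen target. Here's a minimal sketch in Python (the constants below are purely illustrative; in practice they are fitted empirically for a given device and user population):

```python
import math

def fitts_movement_time(a: float, b: float, distance: float, width: float) -> float:
    """Predicted time (seconds) to point at a target, per Fitts's law
    (Shannon formulation): MT = a + b * log2(D/W + 1).

    a and b are empirically-fitted constants for a given device and user
    population; distance and width are in the same units (e.g., pixels).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Example with illustrative constants a=0.1s, b=0.15 s/bit: a target
# 400px away and 20px wide takes roughly 0.1 + 0.15 * log2(21) ~ 0.76s.
print(fitts_movement_time(a=0.1, b=0.15, distance=400, width=20))
```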

In sum, good HCI research is heavily informed by theory (and also develops new theory), but just not the sorts of mathematical theory seen in traditional CS and applied engineering fields.

How is HCI research evaluated?

One traditional way to evaluate HCI systems research is via a controlled user study in a lab where you get people to use the system you've developed, possibly with a control group using a baseline. However, depending on what research questions you want to ask, other sorts of evaluations are more suitable, such as a longitudinal deployment to, say, a dozen users for one month, or a live online deployment to hundreds or thousands of users. Different forms of evaluation trade off scale for fidelity. Often you can make a more compelling argument by combining multiple forms of evaluation to triangulate on the likely truth.
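As an illustration of the first kind of evaluation, here's a hedged sketch (in Python, with made-up numbers) of how one might compare task completion times between participants using a new interface and a control group using a baseline tool:

```python
# A minimal sketch of analyzing a controlled lab study; the data here
# are made up for illustration. We compare task completion times (in
# seconds) between the new interface and a baseline control condition.

from scipy import stats

new_interface = [41.2, 38.5, 44.0, 36.8, 39.9, 42.3, 37.1, 40.6]
baseline      = [52.7, 49.3, 55.1, 47.8, 51.0, 53.9, 48.4, 50.2]

# Welch's t-test: doesn't assume equal variances across conditions.
t_stat, p_value = stats.ttest_ind(new_interface, baseline, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference in mean completion times is
# unlikely to be chance alone -- though, as argued below, completion
# time is only one narrow measure of whether an interface helps people.
```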

What do you measure in an evaluation? Aren't metrics of success in HCI research inherently subjective?

Yes, whenever humans are involved, metrics of success are subjective, unless you're measuring very low-level features such as task completion times ... which may not be important for many kinds of tasks. Oftentimes I don't care if someone completes a task 15% faster (although a 10x speed-up probably matters!), but I do care if someone can do something qualitatively different and better with an interface than they could've done without it.

In CS research, perhaps the most objective sorts of truth are formal mathematical proofs (assuming you've framed the problems and assumptions realistically), followed by empirical measures of computing performance such as speed, memory usage, latency, and error rates (again, given proper assumptions; there can still be subjectivity in experiment design due to experimenters' human biases, even when supposedly-objective performance properties are being measured!). However, none of these forms of evaluation involve measuring humans. For HCI research, we develop interactive systems with humans in the loop, so many kinds of metrics will be as subjective as humans are; it's unreasonable to place the same burdens of measurement rigor on humans as we do on machines or mathematics.

Specifically, many kinds of new interactive systems cannot be tested in randomized controlled experiments of the sort that scientists are accustomed to, since their holistic user experience cannot be easily broken down into constituent parts. Even if specific parts of the system were ripped out and tested in isolation in a more rigorous way, it's still hard to make a convincing case for why composing those parts together yields something novel and useful. Thus, evaluations for these systems often come in the form of data from real-world deployments, usage surveys and interviews, and arguments for why the systems enable novel and significant forms of interaction that were previously infeasible.

Recognizing the challenges of HCI evaluation, in their paper Usability Evaluation Considered Harmful (Some of the Time) senior HCI veterans Greenberg and Buxton encourage researchers to take a broader view of research evaluations:

“As both an academic and practitioner community, we need to recognize that there are many other appropriate ways to validate one's work. Examples include a design rationale, a vision of what could be, expected scenarios of use, reflections, case studies, participatory critique, and so on. At a minimum, authors should critique the design: why things were done, what else was considered, what they learned, expected problems, how it fits in the broader context of both prior art and situated context, what is to be done next, and so on. These are all criteria that would be expected in any respected design school or firm. There is a rigour. There is a discipline. It is just not the same rigour and discipline that we currently encourage, teach, practice or accept. Academic paper submissions or product descriptions should be judged by the question being asked, the type of system or situation that is being described and whether the method the inventors used to argue their points are reasonable.”

- Greenberg and Buxton, Usability Evaluation Considered Harmful (Some of the Time), CHI 2008

(Evaluating User Interface Systems Research gives another largely-complementary perspective on HCI evaluations.)

Is there some subjectivity in HCI research evaluations? Of course. But what's the alternative? To throw up our hands and give up entirely? In the end, if we can't accept a certain degree of subjectivity in human-centered research, then we simply can't make progress in developing better tools for humans to interact with computers. We would instead be stuck improving computers only in precisely measurable ways without taking humans into account, missing out on many kinds of potential discoveries and inventions that could improve the lives of billions of people around the world.

Created: 2016-02-29
Last modified: 2016-04-16