HCI Qualifier Spring 2023


The HCI Qualifying Exam in the Department of Computer Science tests students' ability to read and analyze HCI literature around a specific theme selected by the committee, find and analyze more literature related to the theme, synthesize their knowledge to demonstrate a deep understanding and interpretation of that literature, and develop their own novel ideas related to the theme. The format of the exam is a written response to one or more related questions in the style of a technical conference paper. There is no oral component to the exam.

Faculty Committee

Registered Students (to be updated)

  • Jacqueline Bruen
  • Sungwon In
  • Matthew Corbett
  • Logan Lane
  • Fei Shan
  • Andrew Jelson
  • Tianyu Ge
  • Hanwen Liu
  • Elham Mohammadrezaei

Registration and Withdrawal

Students must register by emailing the chair by the commitment deadline (see below). Please include your advisor's name in your email.

Students may withdraw from taking the exam at any point prior to the public release of the exam questions (see dates below). Once the exam questions are released, the exam is considered "in progress" and withdrawal is prohibited. To withdraw or to ask questions about this policy, please email the exam chair.

Academic Integrity

Discussions among students of the papers identified for the HCI Qualifier are reasonable (and strongly encouraged!) until the date the exam is released publicly. Once the exam questions are released, we expect all such discussions to cease, as students are required to answer the qualifier questions entirely on their own. This examination is conducted under the University's Graduate Honor System Code. Students are encouraged to draw from papers other than those listed in the exam to the extent that this strengthens their arguments. However, the answers submitted must represent the sole and complete work of the student submitting them. Material substantially derived from other works, whether published in print or found on the web, must be explicitly and fully cited. Note that your grade will be influenced more strongly by the arguments you make than by those you quote or cite.

Exam Schedule

  • 12/1/2022: release of reading list
  • 12/7/2022: last day for students to commit to taking the exam
  • 1/6/2023: release of written exam
  • 1/20/2023 (11:59PM): student solutions to written exam due

Reading List

HCI qualifier exams ask that you reflect on important areas within HCI that are relevant to the research interests of the faculty on the committee and important to HCI broadly and to VT's CHCI. The committee identifies a reading list of relevant and important scholarly articles within these focus areas. Students are expected to read these articles closely and familiarize themselves with the ideas, concepts, and technologies described. It is expected that many of these articles will be referenced in the written qualifier exam. It is strongly recommended that students develop an understanding of these texts through discussions with fellow students who will be taking the exam. These discussions should take place PRIOR to the exam period, as the exam must be taken individually.

Here is the reading list for the 2023 HCI Qualifying Exam. All students must read all papers in Theories in HCI and Research Methods. They then choose one topic, either Human-Centered AI or Mixed Reality. In total, students are required to read 12 papers.

Theories in HCI

1.     Kammersgaard, J. (1988). Four different perspectives on human–computer interaction. International Journal of Man-Machine Studies, 28(4), 343-362.

2.     Halverson, C. A. (2002). Activity Theory and Distributed Cognition: Or What Does CSCW Need to DO with Theories? Computer Supported Cooperative Work, 11(1–2), 243–267.

3.     Fekete, J. D., Wijk, J. J. V., Stasko, J. T., & North, C. (2008). The value of information visualization. In Information Visualization (pp. 1-18). Springer, Berlin, Heidelberg.

4.     Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., Lease, M., & Horton, J. (2013). The Future of Crowd Work. Proceedings of the 2013 Conference on Computer Supported Cooperative Work, 1301–1318.

Research Methods

1.     Pace, S. (2004). A grounded theory of the flow experiences of Web users. International Journal of Human-Computer Studies, 60(3), 327-363.

2.     Epstein, D. A., Liu, F., Monroy-Hernández, A., & Wang, D. (2022). Revisiting Piggyback Prototyping: Examining Benefits and Tradeoffs in Extending Existing Social Computing Systems. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 456:1-456:28.

3.     Dow, S., MacIntyre, B., Lee, J., Oezbek, C., Bolter, J. D., & Gandy, M. (2005). Wizard of Oz support throughout an iterative design process. IEEE Pervasive Computing, 4(4), 18–26.

4.     Ledo, D., Houben, S., Vermeulen, J., Marquardt, N., Oehlberg, L., & Greenberg, S. (2018, April). Evaluation strategies for HCI toolkit research. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-17).

Human-Centered AI 

1.     Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for Human-AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13.

2.     Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495-504.

3.     Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.

4.     Jhaver, S., Birman, I., Gilbert, E., & Bruckman, A. (2019). Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator. ACM Transactions on Computer-Human Interaction (TOCHI), 26(5), 31:1-31:35.

Mixed Reality (AR, VR, XR)

1.     Speicher, M., Hall, B. D., & Nebeling, M. (2019, May). What is mixed reality? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-15).

2.     Piumsomboon, T., Lee, G. A., Hart, J. D., Ens, B., Lindeman, R. W., Thomas, B. H., & Billinghurst, M. (2018). Mini-me: An adaptive avatar for mixed reality remote collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).

3.     Cordeil, M., Cunningham, A., Dwyer, T., Thomas, B. H., & Marriott, K. (2017, October). ImAxes: Immersive axes as embodied affordances for interactive multivariate data visualization. Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (pp. 71-83).

4.     Satriadi, K. A., Smiley, J., Ens, B., Cordeil, M., Czauderna, T., Lee, B., ... & Jenny, B. (2022). Tangible globes for data visualization in augmented reality. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-16).


Write a research proposal on a topic of your choosing related to either Human-Centered AI or Mixed Reality. Your proposal can also include both topics.  

The proposal should include at least the following components:

1.    Introduction: The proposal should be centered around one or more research question(s). Motivate and explicitly state your research question(s) in the introduction of the paper. Be sure that your proposed research actually addresses these questions.

2.    Related work (literature review). This should synthesize a summary of the state of the art on the chosen topic, identify relevant findings and guidelines, and identify gaps in the literature. Feel free to use the papers in the reading list.

3.    A proposed design. This may be the design of a novel technique, approach, system, or application, or it may be the design of an experimental testbed (tasks, conditions). In either case, provide detailed rationale for your design.

4.    One or more proposed studies. The study or studies can be of any type (e.g., design validation, hypothesis testing, phenomenological, exploratory) and can use any relevant methods. Provide detailed rationale for your study design(s). Feel free to use methods from the readings (Research Methods), but this is not required.

5.    Expected outcomes and Unique contributions: The proposal should give some indication of expected outcomes and the overall benefits of conducting the research. Include theoretical and/or practical implications and contributions. Feel free to use the papers in the reading list (Theories in HCI).

The document should make a compelling case for the need for the proposed research and clearly describe how the proposed research will be conducted. The proposal should identify and reference seminal related work and indicate how your proposed research will build and extend upon prior findings. You are expected to cite and make use of some of the publications on the qualifier list, as well as drawing from your own extensive set of readings and references.

The paper should be eight (8) to ten (10) pages in the ACM CHI format (single column). Figures and diagrams are encouraged as appropriate; they count toward the page limit. References do NOT count toward the page limit.


Submit your paper in PDF format by email to the committee chair. Submissions are due by 11:59 PM AoE (Anywhere on Earth) on Friday, January 20, 2023.


After the written examination, the examining faculty will determine the student's score for the examination process. The score is between 0 and 3 points, depending on the student's performance on the written exam. (Note that there is no oral exam for the HCI qualifier.) These points may be applied toward the total score necessary to qualify for the Ph.D. The assessment criteria, as defined by the GPC, are as follows.

Prime factors for assessment include the ability to distinguish good work from poor work and explain why; to synthesize the body of work into an assessment of the state of the art on a problem (as indicated by the collection of papers); and to identify open problems and suggest future work.

  • 3: Excellent performance, beyond that normally expected or required for a PhD student.
  • 2: Performance appropriate for PhD-level work.
  • 1: While the student adequately understands the content of the work, the student is deficient in one or more of the factors listed for assessment under score value of 2. A score of 1 is the minimum necessary for an MS-level pass.
  • 0: Student's performance is such that the committee considers the student unable to do PhD-level work in Computer Science.