Wadge et al., 2019

Abstract

Autism Spectrum Disorder (ASD) is diagnosed on the basis of communicative impairments observed in everyday social interactions. Although individuals with ASD show surprising proficiency on several lab tests of social cognition, face-to-face interaction proves problematic and has been associated with biases in processing biological and multimodal linguistic cues. Here, we provide empirical evidence characterizing a special interactional challenge raised by interpersonal communication in people with ASD, which persists even during interactions stripped of those biases. During online, experimentally-controlled interactions, both adults with ASD (N=22) and neurotypical adults (N=30) generated intelligible communicative behaviors toward their partners. Both groups showed a similar propensity for modifying their behavior after misunderstandings, indicating matched social motivation and cognitive flexibility. Yet, communicative success was lower when autistic individuals interacted with other individuals, both with and without ASD, to solve communicative problems that afforded multiple solutions. Neurotypical pairs navigated through those epochs of communicative ambiguity by taking recent signals into account, and aligned their conceptualizations of novel communicative behaviors. In contrast, pairs with one or more ASD members were less likely to produce communicative behaviors informed by their pair-specific history of interaction and mutual understanding. The precise characterization of the cognitive source of these communicative misalignments provides novel boundaries to the general notion that ASD is linked to altered mentalizing abilities. Furthermore, the findings illustrate the cognitive and clinical importance of considering human communication as a solution to a conceptual alignment challenge, and how ineffective the human communicative system is without this special interactional ingredient.

Read the manuscript at: https://psyarxiv.com/7nbms/

Wadge, H., Brewer, R., Bird, G., Toni, I., & Stolk, A. (2019). Communicative Misalignment in Autism Spectrum Disorder. Cortex.

Presented at the Society for Neuroscience Conference, San Diego, CA, 3–7 November 2018.

UX Intern @ Synaptics

My first professional experience in the world of UX was at a hardware company in San Jose, CA called Synaptics. They design hardware components – focusing on displays, touch sensors, and biometric products – that are then integrated into PCs and mobile devices. The UX team at Synaptics sits within the Biometric Division, and I was the usability intern there in the summer of 2016, working under two researchers.

My managers came from two different backgrounds, and together introduced me to different facets of usability, especially in the realm of hardware and tangible products. While one manager had a background in engineering, the other had previously done research in biomechanics. Working with the two of them highlighted the unique challenges of building hardware products, where the physical constraints of both the electronics and the human body limit the possible interactions.

For the three months I was there, my managers had not set aside any specific projects for me, but instead let me contribute to whatever work came to them. This way, aside from working on a variety of products, I became familiar with the pace and perspective of research in industry. Until this internship, I had only known academic research projects in cognitive science, which extend over multiple years and look to answer questions by finding statistically significant trends in the data. Unlike my lab experience, the studies I conducted at Synaptics took only a few weeks each, and focused more on identifying flaws in prototypes or personal preferences about a certain technology. By the end of the summer, I was pleasantly surprised to have completed four major studies in my three months there.

Facial Recognition as Secondary Authentication

My first study at Synaptics was an ongoing project that one of my managers let me collect data for as a way to get my feet wet. It was a competitive analysis of several facial recognition apps, focusing on the efficiency of the technology and its use as a form of security.

Methods

To investigate the effectiveness and efficiency of the apps, we recorded the time required for enrollment and verification under indoor and outdoor lighting conditions from four angles. Enrollment refers to the process of creating the original image stored as the ‘key’ that each subsequent verification attempt is compared against. Each application had a protocol for enrolling the user’s face, some of which involved rotating the head and making expressions to capture its nuances. After enrollment, subjects were asked to try to open the device or app by verifying their faces while we recorded the app’s accuracy and the time it took to process the scan. The subjects verified through each app multiple times, rotating positions to change the angle of the lighting, and then repeating the process under sunnier conditions.
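Every attempt was thus logged with its condition and outcome. As a rough illustration of the kind of per-trial record this produces (the field names and values here are hypothetical, not the study's actual logging format), in Python:

    # Hypothetical per-trial record; field names and values are
    # illustrative, not the study's actual logging format.
    import csv

    FIELDS = ["app", "subject", "lighting", "position", "phase", "seconds", "accepted"]

    with open("trials.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        # One verification attempt: subject 1, indoor light, position 2.
        writer.writerow({"app": "App A", "subject": 1, "lighting": "indoor",
                         "position": 2, "phase": "verification",
                         "seconds": 2.3, "accepted": True})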

[Figure: The lab setup. Subjects rotated through the numbered positions to change angles, and the light was turned on to recreate sunny/outdoor conditions.]

After trying each one, subjects were asked to rank the four apps by preference. Finally, I compiled a short interview protocol to identify subjects’ feelings about the effectiveness and security of facial recognition.

Results

While the usability test and ranking described above highlighted users’ preferred apps, the qualitative data we collected showed surprising trends. Many subjects came into the lab feeling confident about the effectiveness of facial recognition technology, but left with doubts after seeing it in action. All four apps struggled in the outdoor lighting conditions, and even rejected some subjects during verification. When asked about this, subjects reasoned that if the apps could not handle bright light, they would struggle even more with changes to hair, makeup, glasses, or darker settings.

Furthermore, subjects felt that the enrollment process was not always thorough enough, as they would sometimes be rejected during verification. This led to concerns about the security of facial recognition, and subjects felt that it was best used as a secondary form of authentication. They did not feel safe using it as the only security measure, especially for sensitive information such as in banking or email applications.

Trackpad Competitive Analysis

The next study I conducted at Synaptics centered on their trackpad products and how they compared to a competitor’s. The study was conducted using three laptops – two with Synaptics trackpads, and one with the competitor’s. I have not included the results of this study, as they pertain directly to Synaptics products.

Methods

Subjects were asked to sit at each laptop (the order of laptops was randomized across subjects) and complete a set of tasks that tested three main interaction types (a sketch of the precision metric for the first task follows the list):

  • Single point contact: subjects saw a point on the screen that they had to click on. This tested the precision users had with each trackpad.
  • Zooming or pinching motions: subjects saw two arrows pointing outward or inward, indicating the use of two fingers to expand or pinch at a certain angle. This tested how effectively each trackpad processed simultaneous contacts.
  • Typing: subjects saw some text to type. This tested how well each trackpad rejected incidental contact near the space bar from the bases of the users’ thumbs.
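For the single-point-contact task, precision boils down to the distance between the target and where the click actually landed. A minimal Python sketch of that metric (the coordinates are made up; the actual logging and analysis differed):

    # Sketch of a precision metric for the single-point-contact task;
    # coordinates are illustrative, in pixels.
    import math

    def click_error(target, click):
        # Euclidean distance between the target point and the actual click.
        return math.dist(target, click)

    trials = [((400, 300), (396, 305)), ((120, 540), (118, 538))]
    errors = [click_error(target, click) for target, click in trials]
    print(f"mean error: {sum(errors) / len(errors):.1f} px")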

Two Fingerprint Sensor Integration Studies (Patent Pending)

My last two studies focused on new Synaptics products that were being used in their clients’ devices. Both involved addressing the usability challenges of integrating fingerprint sensors beneath the glass (the topmost surface) of trackpads and cellphones. This was an interesting problem because there are several design benefits to having invisible sensors that are only noticeable when necessary. Such sensors give back large portions of real estate on devices’ surfaces and allow for larger UIs. But they also introduce a new usability challenge, as users now have to interact with sensors that cannot be seen or felt. These studies revolved around finding new ways to enable interaction between users and fingerprint sensors. I designed experimental UIs for these studies with JavaScript, HTML, CSS, and Android Studio.

For the first study, we tried to address this question for Synaptics’ SecurePad, a trackpad with an integrated fingerprint sensor. HP was using SecurePads in its laptops and wanted to embed additional LEDs that would light up and flicker to indicate the location of the sensor, as well as the authentication status. We worked with a team there and conducted a study to find the most user-friendly arrangement and behavior for these LEDs.

The second study focused on Samsung’s upcoming (at the time) Galaxy S8 cellphone, and is part of work currently wrapped up in the IP process at Synaptics. For this study, I worked with my managers to come up with the experimental protocol, as well as scripts for the subsequent data analysis. We investigated how UI visuals presented during the fingerprint enrollment process affected the types of contact users made. Effective fingerprint enrollment involves maximizing the area of the finger exposed to the sensor, but when the sensor is invisible to the user, this becomes much more challenging. This study was meant to identify visual cues that could elicit contact from various regions of the finger. To model the contact made by subjects during the study, I used Python’s SciPy library.
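As a rough sketch of that modeling step (the data and grid here are made up, and the study's actual scripts differed), one way to summarize contact coverage with SciPy is a kernel density estimate over the logged touch centroids:

    # Minimal sketch: summarizing fingerprint contact coverage with a
    # kernel density estimate. Coordinates are illustrative (mm).
    import numpy as np
    from scipy import stats

    # Contact centroids logged during enrollment, shape (2, n_samples).
    contacts = np.array([[1.2, 1.5, 2.0, 0.9, 1.7],
                         [3.4, 3.1, 2.8, 3.9, 2.5]])

    kde = stats.gaussian_kde(contacts)

    # Evaluate the estimate on a grid spanning the sensor area.
    xs, ys = np.mgrid[0:4:50j, 0:5:50j]
    density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

    # Low-density regions mark parts of the sensor that the visual cues
    # failed to elicit contact on.
    print("least-covered cell:", np.unravel_index(density.argmin(), density.shape))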

BART Usability Assessment

During my last semester at Cal, I took a graduate course called Needs and Usability Assessment in the School of Information. The class was a whirlwind tour of UX research methods, how to conduct each type of study, and where each best fits in the product development process. The course was taught by a Director of UX at Salesforce.com, and was one of the most industry-oriented courses I took at Cal – a huge change from the very academic neuroscience and psychology classes I had become accustomed to. The instructor taught through stories and personal experience with each research method, highlighting the pitfalls, strengths, and applications he had observed. I loved this class, and a lot of my personal philosophy about UX comes from his anecdotal advice.

The methods and topics covered in this class included:

  • Ethics and recruiting practices
  • Reporting and designing user research
  • Usability studies
  • Field studies and observations
  • Ethnographic research
  • Interviews
  • Diary studies
  • Focus groups
  • Contextual inquiries
  • Expert review
  • Competitive analyses
  • Surveys
  • Heuristic evaluations
  • Card sorting tasks
  • Usability and accessibility

For many of the methods we covered, we had corresponding assignments where we had the chance to conduct those studies ourselves. As part of our final project for the semester, we had to combine at least three of these methods into a complete usability assessment.

For this final project, I was in a group with three other students whom I had worked with before on field study and heuristic evaluation exercises. When choosing a project topic, we wanted to work with something that many people had experience with and access to. We picked the Bay Area Rapid Transit (BART) system, a train system that runs through the East Bay and Peninsula regions of the San Francisco Bay Area. All UC Berkeley students are given a Clipper card with their student IDs, which can be loaded with funds and used to pay for rides on most public transit in the Bay Area. This made finding participants (and experiencing the subject matter ourselves) much easier than in some of the past projects or assignments we had in the class (e.g., contextual inquiries for Adobe Illustrator).

Personally, I enjoyed working with something as widely used as public transportation, as it meant my subjects would come from a variety of ages and educational, cultural, and economic backgrounds. Variance along these dimensions makes for complex data (as we saw in our results for this project), but also makes for an interesting design challenge that pushes my problem-solving skills. While there was no design component to this assignment, it foreshadowed the challenge of finding the best solution for such a diverse group of users. Because of this assignment, I would like to do usability and UX work in the public sector one day.

Studies

When my team first sat down to discuss this project, we came up with a fairly large and general list of questions we wanted to address. These questions ranged from ticketing, to the actual trains, to station quality, and looked at the entire BART riding experience. We decided to start the assessment by getting a feel for the users and how they physically interacted with stations and trains. The first study we ran was an observational field study, in which my group mates and I stood in the Downtown Berkeley BART station near the entrances to the trains for one hour at 5 p.m. From there, we each wrote observations following the AEIOU framework (Activities, Environments, Interactions, Objects, Users), noting riders’ time at the ticketing machines, whether they used Clipper cards, their luggage, foot traffic, and so on.

After looking at our notes collectively, we realized that the scope of our project was far too big, and we would not be able to tell one cohesive story with the time and resources we had; we were asking too many questions about too many different parts of the BART riding experience. We went back to our observations and decided to focus on only the ticketing aspect. It was a step in the BART experience that riders spent a lot of time on, and had more autonomy over. They could pick how to pay for their ride, as well as what type of ticket they got, making this step more active and participatory on the riders’ part.

To further focus our study, we chose to look at riders who did not use Clipper cards. While this contradicted one of our initial reasons for choosing BART, we realized that Clipper card users had a simpler experience that did not require them to interact with the ticketing machines as much as paper ticket users did. Since part of the assignment was to run three different types of studies, we wanted to make sure we could still find substantial results and insight into the ticketing process, which was unique to BART.

Using our results from the field study and changes to our initial research questions, we came up with a formal description for our project, with BART as our client.

  • Client Goal: Provide reliable and fast public transport for the general public
  • Problem: Outdated ticketing machines
  • Research Goals: 
    • Evaluate current machine usage and experience
    • Find potential improvements to make ticket buying faster and more intuitive
  • Target Users: Riders who buy paper tickets to ride BART

Keeping this project specification in mind, we moved on to our second study – a cognitive walkthrough. We asked four subjects to walk us through their ticket-buying process, looking for which actions seemed memorable, intuitive, or problematic. At each step, we asked follow-up questions, and realized that subjects seemed to notice only the options they were interested in, paying no attention to other features offered by the machines’ UI. We also started noticing patterns in payment methods and ride frequency/habits, which we explored further in our next study.

For our final study, we chose to interview riders about their riding experiences and habits, why they chose to buy paper tickets, their payment methods, and what issues they encountered. We designed a 12-question interview protocol and interviewed six subjects, two of whom were from outside the Bay Area and had not used BART before. These interviews were a chance for us to delve deeper into behaviors we saw in the field study and cognitive walkthrough, and to answer any questions we still had.

Key Findings

The first trend we saw across our data was a relationship between frequency of use and payment method. Paper tickets can be paid for with either cash or card, and can be used multiple times by recharging (much like a Clipper card). This gave us a 2×2 grid into which we could categorize users.

[Figure: 2×2 grid categorizing riders by frequency of use and payment method]

These behaviors can partly be attributed to the fact that once a rider has bought a ticket, there is no way to get that money back; riders must put any remaining balance towards their next ride. If they do not have enough for their next ride, they need to add more. While subjects liked having both payment options, many complained that paying the exact amount for rides at the machines was a hassle, because fares were uneven amounts and the machines give change only in coins.

In analyzing our observations, we tried to group our data into the two metrics we wanted to improve based on our second research goal: speed and ease of use. With respect to speed and time spent at the machines, many of our subjects were unhappy that when paying by card, the machines default to adding $20. If a rider wants to change that, they can only adjust the amount in $1 or $0.50 steps. So a rider who wants to pay for a single $4 ride has to press a button 16 times ($20 - 16 × $1 = $4).

When asked about ease of use, many subjects complained about how disconnected the ticketing process was from train routes and stops. The machines are not always located near a map of the BART system, and while there is a chart that lists costs to the various stops, it lists them alphabetically rather than geographically. This is especially problematic for riders from outside the Bay Area, who are likely to plan trips around cities or sights rather than specific BART stops.

Suggestions and Possible Improvements

To address the concerns raised by our subjects, we next compiled a list of improvements that BART authorities could implement:

  • Allow riders to purchase based on destination, instead of asking them to find and add specific amounts
  • Integrate route maps into the UI at the machines
  • Make exact fares easier to pay by rounding them to whole-, half-, or quarter-dollar amounts
  • Change the $20 default charge when paying by card, and allow riders either to adjust the amount in steps larger than $1 or to enter values manually
  • Allow machines to give change in bills and not just coins


Sous Chef

For my last computer science class, I took a course on UI Design. Each semester, the course staff decides on a platform that students then have to use in their semester projects. For my semester, we were required to use Amazon’s Alexa platform to create an application that included some voice interaction.

Within my group, we divided roles so that we had one designer, two engineers, and one user researcher (me). These roles became less and less defined as we developed our app through the semester, and in the end, we were all designing and coding.

The project started with ideation. The requirements for the project were fairly sparse, so we were free to choose almost anything we wanted to work on. To test voice interaction, we had Fire tablets, but the application could be either a web or mobile app. We had a brainstorming session (à la IDEO) and generated about 50 ideas, which we grouped into larger categories such as ‘academic’, ‘fitness’, and ‘food’. In the end, we chose the ‘food’ category, and wanted to make a recipe app that guides users through recipes.

Unfortunately for us, this exact idea turned out to be the next homework assignment, meant to familiarize us with AWS and voice technology. We had to come up with a new idea for our project, and returned to the ideas we had generated earlier. While we wanted to stay in the food category, we realized that recipe and cooking-aid apps were fairly common, and that if we were to stay in this space, we would have to address some other aspect of food production. Inspired by my teammates, who all worked at a tea shop near campus, we shifted our idea from recipe instruction to restaurant management. Sous Chef was born!

Through this app, we were trying to address two major concerns in restaurant kitchens: sanitary conditions and efficiency. Many kitchens still use paper receipts, and chefs end up touching paper, printers, and bells to mark the beginning and end of an order. Through voice capabilities, we wanted to minimize chefs’ contact with anything that wasn’t food – taking advantage of the hands-free quality of voice technology. This tied into our second goal of increasing efficiency, as it would cut down the time chefs spent washing their hands before returning to preparing food. Additionally, we wanted to cut down the time spent on individual orders by including a way to look across orders to find redundancies. One complaint that my teammates had about their experience at the tea shop was that after every order, the blenders and utensils needed to be washed. Because they didn’t have a good way of seeing all the orders coming in, they would end up making the same drink multiple times (and washing dishes in between) if several customers wanted it. We translated these issues into three main capabilities:

  • Display orders on the GUI as they come in
  • Start and finish orders through either touch or voice interaction
  • Display a summary of orders in the queue through voice command (see the sketch after this list)
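A minimal Python model of these three capabilities (the actual app was a JavaScript web app, so the class and method names here are hypothetical):

    # Illustrative model of Sous Chef's order handling; the real app
    # was a JavaScript web app, so these names are hypothetical.
    from dataclasses import dataclass, field
    from collections import Counter

    @dataclass
    class Order:
        number: int
        items: list
        status: str = "queued"  # queued -> in progress -> complete

    @dataclass
    class Kitchen:
        orders: list = field(default_factory=list)

        def add(self, order):
            # A new order appears on the GUI as it comes in.
            self.orders.append(order)

        def start(self, number):
            # Voice or touch: "Start order number ..."
            self._find(number).status = "in progress"

        def complete(self, number):
            # Voice or touch: "Complete order number ..."
            self._find(number).status = "complete"

        def queue_summary(self):
            # Voice: "queue summary" - tally identical items across
            # waiting orders so cooks can batch them.
            waiting = [o for o in self.orders if o.status == "queued"]
            return Counter(item for o in waiting for item in o.items)

        def _find(self, number):
            return next(o for o in self.orders if o.number == number)

With a model like this, two pending orders for the same drink would show up in the queue summary as a single line with a count of two, which is exactly the redundancy my teammates wanted to catch.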

Once we had decided on our goals, the rest of the semester was spent researching, developing, testing, and redesigning our idea. We began with a competitive analysis and interviews with target users to identify specific pain points in how restaurants currently manage orders. Our competitive analysis showed that most apps related to food or restaurants centered on recipes, or on finance and business management. There were no apps meant to address order management, so we were happy that our project was still a novel idea in a fairly app-saturated industry.

For our interviews, we asked three restaurant cooks about their experiences in the kitchen. We began with more general questions, such as what they liked and disliked about their job, and eventually homed in on the ordering process, with questions such as “How do you feel about your current system of receiving orders from the cashier and signaling that orders have been completed?” Because we were in the early stages of this project, we wanted to encompass the whole ordering process and its issues, rather than going into the specifics of what we planned to include in Sous Chef.

We took the findings from our interviews and condensed them into a list.

Top 5 User Needs:

  1. Being able to group the same orders together quickly to make batches of food rather than individual orders
  2. Being able to quickly notice which item is next in the queue or which items are already made
  3. Being reminded of orders if they are forgotten
  4. Being able to quickly filter the list of orders and see everything in the queue
  5. Being immediately notified of the newest order without interrupting their current work

Generally, our interviews with target users reiterated the inefficiency of using paper receipts, but also brought to light another issue we had not initially considered: language barriers. Many restaurant cooks who come from other cultures have a hard time communicating with their co-workers, so an app that needs only simple English commands to show orders and their status could help smooth the cooking and delivery process.

We combined these user needs with our own project goals to come up with wireframes and prototypes. Our subsequent class assignments were to iterate on these designs over multiple weeks until final presentations. When designing our first, very low-fi drawing, we decided that we wanted to simulate the current organization of restaurant slips lined up in a kitchen, to make the transition from paper to virtual orders easier for our users. We chose a horizontal design, with small squares showing order numbers and items. We used Figma to produce a prototype and make changes.

[Figure: Sous Chef, iteration 1]

[Figure: The queue summary pop-up]

This first version of Sous Chef’s design allowed users to start an order, mark it as complete, undo those changes, request a queue summary, and see what orders they had already completed. A new order would appear in the “Current Orders” list. To start any of the orders, users would click on the “Start” button or say, “Start order number …”. The corresponding slip would then move to the front of the list and be highlighted with a blue outline. To see what orders had not been started yet, users could say, “show me the queue summary” or simply, “queue summary”. To complete an order, they could either click the “complete” button under the order’s slip or say, “Complete order number …”. The slip would then move down to the “Last Completed” list.
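Continuing the hypothetical Kitchen model sketched earlier, the voice commands above could be routed to actions roughly like this (the real app used Alexa's intent model rather than regex matching):

    # Hypothetical routing of recognized speech to the Kitchen actions
    # sketched earlier; the real app relied on Alexa intents.
    import re

    def handle_utterance(kitchen, text):
        text = text.lower().strip()
        if match := re.match(r"start order (?:number )?(\d+)", text):
            kitchen.start(int(match.group(1)))
        elif match := re.match(r"complete order (?:number )?(\d+)", text):
            kitchen.complete(int(match.group(1)))
        elif "queue summary" in text:
            return kitchen.queue_summary()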

The engineers on our team started to code the VUI, while I conducted usability tests with two friends who had worked in restaurants before. During the usability tests, I asked them to complete three tasks: start an order, request a queue summary, and complete the order. To simulate the app’s functionality, I created multiple frames that I showed users as they tapped and spoke the various commands. After giving them the chance to become familiar with the prototype and complete the three tasks, I asked them a few final questions to get their feedback on the design and presentation.

Of the three tasks, requesting the queue summary was the most difficult for both users. Even though I told them that they could use either voice or touch to complete their tasks, they generally opted for touch, as they were more familiar with that type of interaction. Because the queue summary can only be requested through voice, they found having to say something in the middle of tapping buttons awkward, and it felt strange talking to such a low-fidelity prototype. However, in the interviews, they mentioned that such voice capabilities would be very useful in a restaurant kitchen, and that cooks could really get used to them. In terms of design, the two users gave somewhat contradictory feedback: while one liked the colors and how orders were organized in the queue, the other felt that the colors and the way we denoted in-progress orders could be improved.

We integrated their feedback into our next iteration, which was a functional web app. This version had adjusted colors and design, and a working VUI. We also added confirmations for deleting orders, and “close” buttons to exit from pop-ups such as these confirmations and the queue summaries.

[Figure: Sous Chef, iteration 2. Orders in progress are now marked in green and colors are modified.]

[Figure: Iteration 2 includes confirmation pop-ups when finishing an order.]


To test this version, we held expert reviews in class, where we presented our project to two other groups. During these reviews, we first explained our app and its various features, and then the reviewers explored the app themselves and gave feedback. We received positive feedback on the simplicity of the app, as all our functionality was folded into one screen. They also complimented the idea itself, and felt that it was actually solving a problem outside of the classroom. Most of the suggestions we received focused on specific design aspects: reviewers suggested that we remove many of the buttons to declutter the presentation, and move the notifications to the side so they would not block the orders. They also recommended making the “Completed Order” bar collapsible, to clear the space further. Other reviewers pointed out that we had not developed any way to place orders, and that in the end, we would also need a UI that faced diners or cashiers.

We took these recommendations and produced the last iteration of Sous Chef, which we demonstrated at our class showcase at the end of the semester. To address the issue of placing orders, we created a basic input screen where diners could enter their orders. These were then uploaded to a database that our main page pulled from to display orders. Whenever the cook changed an order’s status to “in progress” and then “complete”, the database updated as well. This addition made demoing easier too: audience members could add their own orders to the queue and watch them move from start to finish. When developing this last version, we went back to our three goals and five user needs to check that Sous Chef addressed those requirements. Our final app displayed all orders at once with status updates, included touch and voice interaction, showed a summary of all incoming orders to address redundancies, and gave notifications that did not disrupt current work. We had accomplished exactly what we had intended.
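To illustrate that final flow (diners write orders to a shared store, and the cooks' main page reads from it), here is a minimal, hypothetical Python sketch in which a dict stands in for the hosted database:

    # A dict stands in for the database shared by the diner-facing
    # input screen and the cooks' main page; names are hypothetical.
    db = {}  # order number -> {"items": [...], "status": ...}

    def place_order(number, items):
        # Diner-facing input screen uploads a new order.
        db[number] = {"items": items, "status": "queued"}

    def update_status(number, status):
        # Cook changes an order's status from the main page.
        db[number]["status"] = status

    def render_main_page():
        # Main page pulls from the database to display orders.
        for number, order in sorted(db.items()):
            print(number, order["status"], ", ".join(order["items"]))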

[Figure: Sous Chef, iteration 3. Notifications are now moved to the bottom corner, and the number of buttons is reduced.]


[Figure: The list of completed orders can now be collapsed, so chefs only see incomplete ones.]