publications
publications by category in reverse chronological order. generated by jekyll-scholar.
2025
- [JMIR] Caregiving Artificial Intelligence Chatbot for Older Adults and Their Preferences, Well-Being, and Social Connectivity: Mixed-Method Study. Brooke H Wolfe, Yoo Jung Oh, Hyesun Choung, Xiaoran Cui, Joshua Weinzapfel, R Amanda Cooper, Hae-Na Lee, and Rebecca Lehto. J Med Internet Res, Mar 2025.
Background: The increasing number of older adults who are living alone poses challenges for maintaining their well-being, as they often need support with daily tasks, health care services, and social connections. However, advancements in artificial intelligence (AI) technologies have revolutionized health care and caregiving through their capacity to monitor health, provide medication and appointment reminders, and provide companionship to older adults. Nevertheless, the adaptability of these technologies for older adults is stymied by usability issues. This study explores how older adults use and adapt to AI technologies, highlighting both the persistent barriers and opportunities for potential enhancements. Objective: This study aimed to provide deeper insights into older adults’ engagement with technology and AI. The technologies currently used, potential technologies desired for daily life integration, personal technology concerns faced, and overall attitudes toward technology and AI are explored. Methods: Using mixed methods, participants (N=28) completed both a semistructured interview and surveys consisting of health and well-being measures. Participants then participated in a research team–facilitated interaction with an AI chatbot, Amazon Alexa. Interview transcripts were analyzed using thematic analysis, and surveys were evaluated using descriptive statistics. Results: Participants’ average age was 71 years (ranged from 65 years to 84 years). Most participants were familiar with technology use, especially using smartphones (26/28, 93%) and desktops and laptops (21/28, 75%). Participants rated appointment reminders (25/28, 89.3%), emergency assistance (22/28, 78.6%), and health monitoring (21/28, 75%) as the most desirable features of AI chatbots for adoption. Digital devices were commonly used for entertainment, health management, professional productivity, and social connectivity. Participants were most interested in integrating technology into their personal lives for scheduling reminders, chore assistance, and providing care to others. Challenges in using new technology included a commitment to learning new technologies, concerns about lack of privacy, and worries about future technology dependence. Overall, older adults’ attitudes coalesced into 3 orientations, which we label as technology adapters, technologically wary, and technology resisters. These results illustrate that not all older adults were resistant to technology and AI. Instead, older adults are aligned with categories on a spectrum between willing, hesitant but willing, and unwilling to use technology and AI. Researchers can use these findings by asking older adults about their orientation toward technology to facilitate the integration of new technologies with each person’s comfortability and preferences. Conclusions: To ensure that AI technologies effectively support older adults, it is essential to foster an ongoing dialogue among developers, older adults, families, and their caregivers, focusing on inclusive designs to meet older adults’ needs.
@article{wolfe2025caregiving, author = {Wolfe, Brooke H and Oh, Yoo Jung and Choung, Hyesun and Cui, Xiaoran and Weinzapfel, Joshua and Cooper, R Amanda and Lee, Hae-Na and Lehto, Rebecca}, title = {Caregiving Artificial Intelligence Chatbot for Older Adults and Their Preferences, Well-Being, and Social Connectivity: Mixed-Method Study}, journal = {J Med Internet Res}, year = {2025}, month = mar, day = {13}, volume = {27}, pages = {e65776}, keywords = {older adults; technology use; AI chatbots; artificial intelligence; well-being; social connectedness; mobile phone}, issn = {1438-8871}, doi = {10.2196/65776}, url = {https://doi.org/10.2196/65776}, }
- [TVCG] Towards Enhancing Low Vision Usability of Data Charts on Smartphones. Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, and Vikas Ashok. IEEE Transactions on Visualization and Computer Graphics, 2025.
The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically “see” the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.
@article{prakash2025towards, author = {Prakash, Yash and Khan, Pathan Aseef and Nayak, Akshay Kolgar and Jayarathna, Sampath and Lee, Hae-Na and Ashok, Vikas}, journal = {IEEE Transactions on Visualization and Computer Graphics}, title = {Towards Enhancing Low Vision Usability of Data Charts on Smartphones}, year = {2025}, volume = {31}, number = {1}, pages = {853-863}, keywords = {Bars;Visualization;Usability;Data visualization;Smart phones;Lenses;Data mining;Low vision;Graph usability;Screen magnifier;Graph perception;Accessibility}, doi = {10.1109/TVCG.2024.3456348}, }
2024
- [ICMI] Improving Usability of Data Charts in Multimodal Documents for Low Vision Users. Yash Prakash, Akshay Kolgar Nayak, Shoaib Mohammed Alyaan, Pathan Aseef Khan, Hae-Na Lee, and Vikas Ashok. In Proceedings of the 26th International Conference on Multimodal Interaction, San Jose, Costa Rica, 2024.
Data chart visualizations and text are often paired in news articles, online blogs, and academic publications to present complex data. While chart visualizations offer graphical summaries of the data, the accompanying text provides essential context and explanation. Associating information from text and charts is straightforward for sighted users but presents significant challenges for individuals with low vision, especially on small-screen devices such as smartphones. The visual nature of charts coupled with the layout of the text inherently makes it difficult for low vision users to mentally associate chart data with text and comprehend the content due to their dependence on screen magnifier assistive technology, which only displays a small portion of the screen at any instant due to content enlargement. To address this problem, in this paper, we present a smartphone-based multimodal mixed-initiative interface that transforms static data charts and the accompanying text into an interactive slide show featuring frames containing “magnified views” of relevant data point combinations. The interface also includes a narration component that delivers tailored information for each “magnified view”. The design of our interface was informed by a user study with 10 low-vision participants, aimed at uncovering low vision interaction challenges and user-interface requirements with multimodal documents that integrate text and chart visualizations. Our interface was also evaluated in a subsequent study with 12 low-vision participants, where we observed significant improvements in chart usability compared to both status-quo screen magnifiers and a state-of-the-art solution.
@inproceedings{prakash2024improving, author = {Prakash, Yash and Kolgar Nayak, Akshay and Alyaan, Shoaib Mohammed and Khan, Pathan Aseef and Lee, Hae-Na and Ashok, Vikas}, title = {Improving Usability of Data Charts in Multimodal Documents for Low Vision Users}, year = {2024}, isbn = {9798400704628}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3678957.3685714}, doi = {10.1145/3678957.3685714}, booktitle = {Proceedings of the 26th International Conference on Multimodal Interaction}, pages = {498–507}, numpages = {10}, keywords = {Graph perception, Graph usability, Low vision, Screen magnifier}, location = {San Jose, Costa Rica}, series = {ICMI '24}, }
- [ASSETS] Understanding Low Vision Graphical Perception of Bar Charts. Yash Prakash, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, and Vikas Ashok. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, St. John’s, NL, Canada, 2024.
Bar charts are widely used for their simplicity in data representation, prompting numerous studies to explore and model how users interact with and perceive bar chart information. However, these studies have predominantly focused on sighted users, with a few also targeting blind screen-reader users, whereas the graphical perception of low-vision screen magnifier users is still an uncharted research territory. We fill this knowledge gap in this paper by designing four experiments for a laboratory study with 25 low-vision participants to examine their graphical perception while interacting with bar charts. For our investigation, we built a custom screen magnifier-based logger that captured micro-interaction details such as zooming and panning. Our findings indicate that low-vision users invest significant time counteracting blurring and contrast effects when analyzing charts. We also observed that low-vision users struggle more in interpreting bars within a single-column stack compared to other stacked bar configurations, and moreover, for a few participants, the perception accuracy is lower when comparing separated bars than when comparing adjacent bars.
@inproceedings{prakash2024understanding, author = {Prakash, Yash and Kolgar Nayak, Akshay and Jayarathna, Sampath and Lee, Hae-Na and Ashok, Vikas}, title = {Understanding Low Vision Graphical Perception of Bar Charts}, year = {2024}, isbn = {9798400706776}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3663548.3675616}, doi = {10.1145/3663548.3675616}, booktitle = {Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility}, articleno = {59}, numpages = {10}, keywords = {Graph perception, Graph usability, Low vision, Screen magnifier}, location = {St. John's, NL, Canada}, series = {ASSETS '24}, }
- [TPDL] Assessing the Accessibility and Usability of Web Archives for Blind Users. Mohan Sunkara, Akshay Kolgar Nayak, Sandeep Kalari, Satwik Ram Kodandaram, Sampath Jayarathna, Hae-Na Lee, and Vikas Ashok. In Linking Theory and Practice of Digital Libraries, 2024.
Web archives play a crucial role in preserving the digital history of the internet, given the inherent volatility of websites that constantly undergo modifications, content updates, and migrations, or even cease to exist altogether. Web archives ensure that present and historical web information will be available in the future for researchers, historians, students, corporations, and general public. Given their importance, it is essential for web archives to be equally accessible to everyone, including those with visual disabilities. In the absence of a prior in-depth investigation in this regard, this paper examines the status-quo accessibility and usability of five popular web archives for people who are blind. Specifically, we analyzed reports generated by an automated accessibility checker tool and also collected feedback from a user study with 10 blind screen reader users. The analysis of accessibility reports revealed issues that were common across the different archives, including a lack of text alternatives for images and the absence of proper aria labels. The user study showed that blind users struggled to do even basic search tasks to locate desired mementos or snapshots of websites saved in the archives. The participants also explicitly indicated that they found it strenuous to interact with web archives. Informed by these findings, we provide accessibility design suggestions for archives’ web developers and assistive technology developers.
- [PACMHCI] All in One Place: Ensuring Usable Access to Online Shopping Items for Blind Users. Yash Prakash, Akshay Kolgar Nayak, Mohan Sunkara, Sampath Jayarathna, Hae-Na Lee, and Vikas Ashok. Proceedings of the ACM on Human-Computer Interaction, Jun 2024.
Perusing web data items such as shopping products is a core online user activity. To prevent information overload, the content associated with data items is typically dispersed across multiple webpage sections over multiple web pages. However, such content distribution manifests an unintended side effect of significantly increasing the interaction burden for blind users, since navigating to-and-fro between different sections in different pages is tedious and cumbersome with their screen readers. While existing works have proposed methods for the context of a single webpage, solutions enabling usable access to content distributed across multiple webpages are few and far between. In this paper, we present InstaFetch, a browser extension that dynamically generates an alternative screen reader-friendly user interface in real-time, which blind users can leverage to almost instantly access different item-related information such as description, full specification, and user reviews, all in one place, without having to tediously navigate to different sections in different webpages. Moreover, InstaFetch also supports natural language queries about any item, a feature blind users can exploit to quickly obtain desired information, thereby avoiding manually trudging through reams of text. In a study with 14 blind users, we observed that the participants needed significantly lesser time to peruse data items with InstaFetch, than with a state-of-the-art solution.
@article{prakash2024all, author = {Prakash, Yash and Nayak, Akshay Kolgar and Sunkara, Mohan and Jayarathna, Sampath and Lee, Hae-Na and Ashok, Vikas}, title = {All in One Place: Ensuring Usable Access to Online Shopping Items for Blind Users}, year = {2024}, issue_date = {June 2024}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {8}, number = {EICS}, url = {https://doi.org/10.1145/3664639}, doi = {10.1145/3664639}, journal = {Proceedings of the ACM on Human-Computer Interaction}, month = jun, articleno = {257}, numpages = {25}, keywords = {Blind, Online shopping, Screen reader, Visual impairment, Web usability}, }
- [CHI] Discovering Accessible Data Visualizations for People with ADHD. Tien Tran, Hae-Na Lee, and Ji Hwan Park. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 2024. CHI 2024 Honorable Mention Award.
There have been many studies on understanding data visualizations regarding general users. However, we have a limited understanding of how people with ADHD comprehend data visualizations and how it might be different from the general users. To understand accessible data visualization for people with ADHD, we conducted a crowd-sourced survey involving 70 participants with ADHD and 77 participants without ADHD. Specifically, we tested the chart components of color, text amount, and use of visual embellishments/pictographs, finding that some of these components and ADHD affected participants’ response times and accuracy. We outlined the neurological traits of ADHD and discussed specific findings on accessible data visualizations for people with ADHD. We found that various chart embellishment types affected accuracy and response times for those with ADHD differently depending on the types of questions. Based on these results, we suggest visual design recommendations to make accessible data visualizations for people with ADHD.
@inproceedings{tran2024discovering, author = {Tran, Tien and Lee, Hae-Na and Park, Ji Hwan}, title = {Discovering Accessible Data Visualizations for People with ADHD}, year = {2024}, isbn = {9798400703300}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3613904.3642112}, doi = {10.1145/3613904.3642112}, booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems}, articleno = {64}, numpages = {19}, keywords = {ADHD, accessibility, color, data visualizations, pictographs, text amount}, location = {Honolulu, HI, USA}, series = {CHI '24}, }
2023
- [PACMHCI] Enabling Customization of Discussion Forums for Blind Users. Mohan Sunkara, Yash Prakash, Hae-Na Lee, Sampath Jayarathna, and Vikas Ashok. Proceedings of the ACM on Human-Computer Interaction, Jun 2023.
Online discussion forums have become an integral component of news, entertainment, information, and video-streaming websites, where people all over the world actively engage in discussions on a wide range of topics including politics, sports, music, business, health, and world affairs. Yet, little is known about their usability for blind users, who aurally interact with the forum conversations using screen reader assistive technology. In an interview study, blind users stated that they often had an arduous and frustrating interaction experience while consuming conversation threads, mainly due to the highly redundant content and the absence of customization options to selectively view portions of the conversations. As an initial step towards addressing these usability concerns, we designed PView - a browser extension that enables blind users to customize the content of forum threads in real time as they interact with these threads. Specifically, PView allows the blind users to explicitly hide any post that is irrelevant to them, and then PView automatically detects and filters out all subsequent posts that are substantially similar to the hidden post in real time, before the users navigate to those portions of the thread. In a user study with blind participants, we observed that compared to the status quo, PView significantly improved the usability, workload, and satisfaction of the participants while interacting with the forums.
@article{sunkara2023enabling, author = {Sunkara, Mohan and Prakash, Yash and Lee, Hae-Na and Jayarathna, Sampath and Ashok, Vikas}, title = {Enabling Customization of Discussion Forums for Blind Users}, year = {2023}, issue_date = {June 2023}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {7}, number = {EICS}, url = {https://doi.org/10.1145/3593228}, doi = {10.1145/3593228}, journal = {Proceedings of the ACM on Human-Computer Interaction}, month = jun, articleno = {176}, numpages = {20}, keywords = {screen reader, online discussion forum, blind, assistive technology}, }
- [TIIS] Enabling Efficient Web Data-Record Interaction for People with Visual Impairments via Proxy Interfaces. Javedul Ferdous, Hae-Na Lee, Sampath Jayarathna, and Vikas Ashok. ACM Transactions on Interactive Intelligent Systems, Sep 2023.
Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search form, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit the auxiliary segments like their sighted peers, since these segments are scattered all across the screen, and as such assistive technologies used by BVI users, i.e., screen reader and screen magnifier, are not geared for efficient interaction with such scattered content. Specifically, for blind screen reader users, content navigation is predominantly one-dimensional despite the support for skipping content, and therefore navigating to-and-fro between different parts of the webpage is tedious and frustrating. Similarly, low vision screen magnifier users have to continuously pan back-and-forth between different portions of a webpage, given that only a portion of the screen is viewable at any instant due to content enlargement. The extant techniques to overcome inefficient web interaction for BVI users have mostly focused on general web-browsing activities, and as such they provide little to no support for data record-specific interaction activities such as filtering and sorting – activities that are equally important for facilitating quick and easy access to desired data records. To fill this void, we present InSupport, a browser extension that: (i) employs custom machine learning-based algorithms to automatically extract auxiliary segments on any webpage containing data records; and (ii) provides an instantly accessible proxy one-stop interface for easily navigating the extracted auxiliary segments using either basic keyboard shortcuts or mouse actions. Evaluation studies with 14 blind participants and 16 low vision participants showed significant improvement in web usability with InSupport, driven by increased reduction in interaction time and user effort, compared to the state-of-the-art solutions.
@article{ferdous2023enabling, author = {Ferdous, Javedul and Lee, Hae-Na and Jayarathna, Sampath and Ashok, Vikas}, title = {Enabling Efficient Web Data-Record Interaction for People with Visual Impairments via Proxy Interfaces}, year = {2023}, issue_date = {September 2023}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {13}, number = {3}, issn = {2160-6455}, url = {https://doi.org/10.1145/3579364}, doi = {10.1145/3579364}, journal = {ACM Transactions on Interactive Intelligent Systems}, month = sep, articleno = {13}, numpages = {27}, keywords = {Web accessibility, blind, low vision, visual impairment, screen reader, screen magnifier, data records}, }
- [IUI] AutoDesc: Facilitating Convenient Perusal of Web Data Items for Blind Users. Yash Prakash, Mohan Sunkara, Hae-Na Lee, Sampath Jayarathna, and Vikas Ashok. In Proceedings of the 28th International Conference on Intelligent User Interfaces, Sydney, NSW, Australia, 2023.
Web data items such as shopping products, classifieds, and job listings are indispensable components of most e-commerce websites. The information on the data items are typically distributed over two or more webpages, e.g., a ‘Query-Results’ page showing the summaries of the items, and ‘Details’ pages containing full information about the items. While this organization of data mitigates information overload and visual cluttering for sighted users, it however increases the interaction overhead and effort for blind users, as back-and-forth navigation between webpages using screen reader assistive technology is tedious and cumbersome. Existing usability-enhancing solutions are unable to provide adequate support in this regard as they predominantly focus on enabling efficient content access within a single webpage, and as such are not tailored for content distributed across multiple webpages. As an initial step towards addressing this issue, we developed AutoDesc, a browser extension that leverages a custom extraction model to automatically detect and pull out additional item descriptions from the ‘details’ pages, and then proactively inject the extracted information into the ‘Query-Results’ page, thereby reducing the amount of back-and-forth screen reader navigation between the two webpages. In a study with 16 blind users, we observed that within the same time duration, the participants were able to peruse significantly more data items on average with AutoDesc, compared to that with their preferred screen readers as well as with a state-of-the-art solution.
@inproceedings{prakash2023autodesc, author = {Prakash, Yash and Sunkara, Mohan and Lee, Hae-Na and Jayarathna, Sampath and Ashok, Vikas}, title = {AutoDesc: Facilitating Convenient Perusal of Web Data Items for Blind Users}, year = {2023}, isbn = {9798400701061}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3581641.3584049}, doi = {10.1145/3581641.3584049}, booktitle = {Proceedings of the 28th International Conference on Intelligent User Interfaces}, pages = {32–45}, numpages = {14}, keywords = {Blind, Screen reader, Visual impairment, Web accessibility}, location = {Sydney, NSW, Australia}, series = {IUI '23}, }
- [Thesis] Enhancing the Usability of Computer Applications for People With Visual Impairments via UI Augmentation. Hae-Na Lee. PhD thesis, State University of New York at Stony Brook, 2023.
People who are blind and visually impaired (BVI) struggle to interact with even accessible computing applications since the user interfaces (UIs) of these applications are not tailored for usable and efficient content access with assistive technologies such as screen readers and screen magnifiers. This is unsurprising because usability – the ease with which BVI users can do tasks in applications – has received far less attention compared to accessibility in both research and developer communities. Even few existing research efforts targeting the usability of interfaces for BVI users have mostly limited their focus to web browsing, and as such little has been done for other important applications such as productivity tools. Moreover, these efforts have predominantly targeted blind screen reader users, whereas the usability issues of low vision screen magnifier users have been largely underexplored. Motivated by this dearth of usability-enhancing efforts, I developed custom augmentation techniques for improving applications’ usability by dynamically extending their UIs with auxiliary interfaces that are especially tailored for either blind or low vision users. In an abstract sense, an auxiliary interface captures key segments in the corresponding application’s UI, and then presents these segments in an alternative format that is conveniently and efficiently navigable with screen readers or screen magnifiers. The auxiliary interface also mitigates the need for BVI users to manually navigate to-and-fro between different segments in the application’s UI (e.g., between document edit area and ribbon commands in a word processing application), thereby significantly lowering the BVI users’ interaction effort and time to do the application tasks. In this thesis, I present three such custom augmentation techniques for each of the following everyday application scenarios: (i) accessing commands in productivity applications; (ii) perusing web data records; and (iii) comprehending informal social media content. For each scenario, I first uncovered application-specific user requirements, and then leveraged this acquired knowledge to design and develop a usable custom auxiliary interface.
@phdthesis{lee2023enhancing, title = {Enhancing the Usability of Computer Applications for People With Visual Impairments via UI Augmentation}, author = {Lee, Hae-Na}, year = {2023}, school = {State University of New York at Stony Brook}, }
2022
- [HT] Enabling Convenient Online Collaborative Writing for Low Vision Screen Magnifier Users. Hae-Na Lee, Yash Prakash, Mohan Sunkara, I.V. Ramakrishnan, and Vikas Ashok. In Proceedings of the 33rd ACM Conference on Hypertext and Social Media, Barcelona, Spain, 2022.
Online collaborative editors have become increasingly prevalent in both professional and academic settings. However, little is known about how usable these editors are for low vision screen magnifier users, as existing research works have predominantly focused on blind screen reader users. An interview study revealed that it is arduous and frustrating for screen magnifier users to perform even the basic collaborative writing activities, such as addressing collaborators’ comments and reviewing document changes. Specific interaction challenges underlying these issues included excessive panning, content occlusion, large empty space patches, and frequent loss of context. To address these challenges, we developed MagDocs, a browser extension that assists screen magnifier users in conveniently performing collaborative writing activities on the Google Docs web application. MagDocs is rooted in two ideas: (i) a custom support interface that users can instantly access on demand and interact with collaborative interface elements, such as comments or collaborator edits, within the current magnifier viewport; and (ii) visual relationship preservation, where collaborative elements and the corresponding text in the document are shown close to each other within the magnifier viewport to minimize context loss and panning effort. A study with 15 low vision users showed that MagDocs significantly improved the overall user satisfaction and interaction experience, while also substantially reduced the time and effort to perform typical collaborative writing tasks.
@inproceedings{lee2022enabling, author = {Lee, Hae-Na and Prakash, Yash and Sunkara, Mohan and Ramakrishnan, I.V. and Ashok, Vikas}, title = {Enabling Convenient Online Collaborative Writing for Low Vision Screen Magnifier Users}, year = {2022}, isbn = {9781450392334}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3511095.3531274}, doi = {10.1145/3511095.3531274}, booktitle = {Proceedings of the 33rd ACM Conference on Hypertext and Social Media}, pages = {143–153}, numpages = {11}, keywords = {Accessibility, Assistive Technology, Low Vision, Online Collaborative Writing, Screen Magnifier, Visual Impairment}, location = {Barcelona, Spain}, series = {HT '22}, }
- [TACCESS] Customizable Tabular Access to Web Data Records for Convenient Low-vision Screen Magnifier Interaction. Hae-Na Lee and Vikas Ashok. ACM Transactions on Accessible Computing, May 2022.
To interact with webpages, people with low vision typically rely on screen magnifier assistive technology that enlarges screen content and also enables them to pan the content to view the different portions of a webpage. This back-and-forth panning between different webpage portions makes it especially inconvenient and arduous for screen magnifier users to interact with web data records (e.g., list of flights, products, job advertisements), as this interaction typically involves making frequent comparisons between the data records based on their attributes, e.g., comparing available flights in a travel website based on their prices, durations. To address this issue, we present TableView+, an enhanced version of our previous TableView prototype—a browser extension that leverages a state-of-the-art data extraction method to automatically identify and extract information in web data records, and subsequently presents the information to a screen magnifier user in a compactly arranged data table to facilitate easier comparisons between records. TableView+ introduces new features aimed mostly at addressing the critical shortcomings of TableView, most notably the absence of interface customization options. In this regard, TableView+ enables low-vision users to customize the appearance of the data table based on their individual needs and eye conditions. TableView+ also saves these customizations to automatically apply them to the best extent possible the next time the users interact with the data records on either the same or other similar websites. A user study with 25 low-vision participants showed that with TableView+, the panning time further decreased by 8.5% on unfamiliar websites and by 8.02% on a familiar website than with TableView when compared to a screen magnifier.
@article{lee2022customizable, author = {Lee, Hae-Na and Ashok, Vikas}, title = {Customizable Tabular Access to Web Data Records for Convenient Low-vision Screen Magnifier Interaction}, year = {2022}, issue_date = {June 2022}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {15}, number = {2}, issn = {1936-7228}, url = {https://doi.org/10.1145/3517044}, doi = {10.1145/3517044}, journal = {ACM Transactions on Accessible Computing}, month = may, articleno = {14}, numpages = {22}, keywords = {visually impaired, low vision, screen magnifier, usability, Web accessibility}, }
- [CHI] Impact of Out-of-Vocabulary Words on the Twitter Experience of Blind Users. Hae-Na Lee and Vikas Ashok. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, May 2022.
Most people who are blind interact with social media content with the assistance of a screen reader, a software that converts text to speech. However, the language used in social media is well-known to contain several informal out-of-vocabulary words (e.g., abbreviations, wordplays, slang), many of which do not have corresponding standard pronunciations. The narration behavior of screen readers for such out-of-vocabulary words and the corresponding impact on the social media experience of blind screen reader users are still uncharted research territories. Therefore we seek to plug this knowledge gap by examining how current popular screen readers narrate different types of out-of-vocabulary words found on Twitter, and also, how the presence of such words in tweets influences the interaction behavior and comprehension of blind screen reader users. Our investigation showed that screen readers rarely autocorrect out-of-vocabulary words, and moreover they do not always exhibit ideal behavior for certain prolific types of out-of-vocabulary words such as acronyms and initialisms. We also observed that blind users often rely on tedious and taxing workarounds to comprehend actual meanings of out-of-vocabulary words. Informed by the observations, we finally discuss methods that can potentially reduce this interaction burden for blind users on social media.
@inproceedings{lee2022impact, author = {Lee, Hae-Na and Ashok, Vikas}, title = {Impact of Out-of-Vocabulary Words on the Twitter Experience of Blind Users}, year = {2022}, isbn = {9781450391573}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3491102.3501958}, doi = {10.1145/3491102.3501958}, booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems}, articleno = {608}, numpages = {20}, keywords = {Blind, OOV Word, Out-of-Vocabulary Word, Screen Reader, Social Media, Twitter, User Experience, Visual Impairment}, location = {New Orleans, LA, USA}, series = {CHI '22}, }
- [IUI] InSupport: Proxy Interface for Enabling Efficient Non-Visual Interaction with Web Data Records. Javedul Ferdous, Hae-Na Lee, Sampath Jayarathna, and Vikas Ashok. In Proceedings of the 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, 2022.
Interaction with web data records typically involves accessing auxiliary webpage segments such as filters, sort options, search form, and multi-page links. As these segments are usually scattered all across the screen, it is arduous and tedious for blind users who rely on screen readers to access the segments, given that content navigation with screen readers is predominantly one-dimensional, despite the available support for skipping content via either special keyboard shortcuts or selective navigation. The extant techniques to overcome inefficient web screen reader interaction have mostly focused on general web content navigation, and as such they provide little to no support for data record-specific interaction activities such as filtering and sorting – activities that are equally important for enabling quick and easy access to the desired data records. To fill this void, we present InSupport, a browser extension that: (i) employs custom-built machine learning models to automatically extract auxiliary segments on any webpage containing data records, and (ii) provides an instantly accessible proxy one-stop interface for easily navigating the extracted segments using basic screen reader shortcuts. An evaluation study with 14 blind participants showed significant improvement in usability with InSupport, driven by increased reduction in interaction time and the number of key presses, compared to state-of-the-art solutions.
@inproceedings{ferdous2022insupport, author = {Ferdous, Javedul and Lee, Hae-Na and Jayarathna, Sampath and Ashok, Vikas}, title = {InSupport: Proxy Interface for Enabling Efficient Non-Visual Interaction with Web Data Records}, year = {2022}, isbn = {9781450391443}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3490099.3511126}, doi = {10.1145/3490099.3511126}, booktitle = {Proceedings of the 27th International Conference on Intelligent User Interfaces}, pages = {49–62}, numpages = {14}, keywords = {Blind, Data records, Screen reader, Visual impairment, Web accessibility}, location = {Helsinki, Finland}, series = {IUI '22}, }
2021
- [HT] Towards Enhancing Blind Users’ Interaction Experience with Online Videos via Motion Gestures. Hae-Na Lee and Vikas Ashok. In Proceedings of the 32nd ACM Conference on Hypertext and Social Media, Virtual Event, USA, 2021.
Blind users interact with smartphone applications using a screen reader, an assistive technology that enables them to navigate and listen to application content using touch gestures. Since blind users rely on screen reader audio, interacting with online videos can be challenging due to the screen reader audio interfering with the video sounds. Existing solutions to address this interference problem are predominantly designed for desktop scenarios, where special keyboard or mouse actions are supported to facilitate ’silent’ and direct access to various video controls such as play, pause, and progress bar. As these solutions are not transferable to smartphones, suitable alternatives are desired. In this regard, we explore the potential of motion gestures in smartphones as an effective and convenient method for blind screen reader users to interact with online videos. Specifically, we designed and developed YouTilt, an Android application that enables screen reader users to exploit an assortment of motion gestures to access and manipulate various video controls. We then conducted a user study with 10 blind participants to investigate whether blind users can leverage YouTilt to properly execute motion gestures for video-interaction tasks while simultaneously listening to video sounds. Analysis of the study data showed a significant improvement in usability by as much as 43.3% (avg.) with YouTilt compared to that with default screen reader, and overall a positive attitude and acceptance towards motion gesture-based video interaction.
@inproceedings{lee2021towards, author = {Lee, Hae-Na and Ashok, Vikas}, title = {Towards Enhancing Blind Users' Interaction Experience with Online Videos via Motion Gestures}, year = {2021}, isbn = {9781450385510}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3465336.3475116}, doi = {10.1145/3465336.3475116}, booktitle = {Proceedings of the 32nd ACM Conference on Hypertext and Social Media}, pages = {231–236}, numpages = {6}, keywords = {visually impaired, video, smartphone, screen reader, motion gesture, mobile interaction, blind, accessibility}, location = {Virtual Event, USA}, series = {HT '21}, }
- [PACMHCI] Bringing Things Closer: Enhancing Low-Vision Interaction Experience with Office Productivity Applications. Hae-Na Lee, Vikas Ashok, and IV Ramakrishnan. Proceedings of the ACM on Human-Computer Interaction, May 2021.
Many people with low vision rely on screen-magnifier assistive technology to interact with productivity applications such as word processors, spreadsheets, and presentation software. Despite the importance of these applications, little is known about their usability with respect to low-vision screen-magnifier users. To fill this knowledge gap, we conducted a usability study with 10 low-vision participants having different eye conditions. In this study, we observed that most usability issues were predominantly due to high spatial separation between main edit area and command ribbons on the screen, as well as the wide span grid-layout of command ribbons; these two GUI aspects did not gel with the screen-magnifier interface due to lack of instantaneous WYSIWYG (What You See Is What You Get) feedback after applying commands, given that the participants could only view a portion of the screen at any time. Informed by the study findings, we developed MagPro, an augmentation to productivity applications, which significantly improves usability by not only bringing application commands as close as possible to the user’s current viewport focus, but also enabling easy and straightforward exploration of these commands using simple mouse actions. A user study with nine participants revealed that MagPro significantly reduced the time and workload to do routine command-access tasks, compared to using the state-of-the-art screen magnifier.
@article{lee2021bringing, author = {Lee, Hae-Na and Ashok, Vikas and Ramakrishnan, IV}, title = {Bringing Things Closer: Enhancing Low-Vision Interaction Experience with Office Productivity Applications}, year = {2021}, issue_date = {June 2021}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {5}, number = {EICS}, url = {https://doi.org/10.1145/3457144}, doi = {10.1145/3457144}, journal = {Proceedings of the ACM on Human-Computer Interaction}, month = may, articleno = {197}, numpages = {18}, keywords = {accessibility, low vision, office productivity software, screen magnifier, usability, word processor}, }
2020
- [ASSETS] TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users. Hae-Na Lee, Sami Uddin, and Vikas Ashok. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, Greece, 2020.
People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available flights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the-art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.
@inproceedings{lee2020tableview, author = {Lee, Hae-Na and Uddin, Sami and Ashok, Vikas}, title = {TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users}, year = {2020}, isbn = {9781450371032}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3373625.3417030}, doi = {10.1145/3373625.3417030}, booktitle = {Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility}, articleno = {23}, numpages = {12}, keywords = {low vision, screen magnifier, usability, web accessibility}, location = {Virtual Event, Greece}, series = {ASSETS '20}, }
- [ASSETS] Screen Magnification for Office Applications. Hae-Na Lee, Vikas Ashok, and IV Ramakrishnan. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, Greece, 2020.
People with low vision use screen magnifiers to interact with computers. They usually need to zoom and pan with the screen magnifier using predefined keyboard and mouse actions. When using office productivity applications (e.g., word processors and spreadsheet applications), the spatially distributed arrangement of UI elements makes interaction a challenging proposition for low vision users, as they can only view a fragment of the screen at any moment. They expend significant chunks of time panning back-and-forth between application ribbons containing various commands (e.g., formatting, design, review, references, etc.) and the main edit area containing user content. In this demo, we will demonstrate MagPro, an interface augmentation to office productivity tools, that not only reduces the interaction effort of low-vision screen-magnifier users by bringing the application commands as close as possible to the users’ current focus in the edit area, but also lets them easily explore these commands using simple mouse actions. Moreover, MagPro automatically synchronizes the magnifier viewport with the keyboard cursor, so that users can always see what they are typing, without having to manually adjust the magnifier focus every time the keyboard cursor goes off screen during text entry.
@inproceedings{lee2020screen, author = {Lee, Hae-Na and Ashok, Vikas and Ramakrishnan, IV}, title = {Screen Magnification for Office Applications}, year = {2020}, isbn = {9781450371032}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3373625.3418049}, doi = {10.1145/3373625.3418049}, booktitle = {Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility}, articleno = {95}, numpages = {3}, keywords = {accessibility, low vision, office productivity software, screen magnifier, usability}, location = {Virtual Event, Greece}, series = {ASSETS '20}, }
- [SMC] Repurposing Visual Input Modalities for Blind Users: A Case Study of Word Processors. Hae-Na Lee, Vikas Ashok, and I. V. Ramakrishnan. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, 2020.
Visual ‘point-and-click’ interaction artifacts such as mouse and touchpad are tangible input modalities, which are essential for sighted users to conveniently interact with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are thus limited while interacting with computers using a sequentially narrating screen-reader assistive technology that is coupled to keyboards. As a consequence, blind users generally require significantly more time and effort to do even simple application tasks (e.g., applying a style to text in a word processor) using only keyboard, compared to their sighted peers who can effortlessly accomplish the same tasks using a point-and-click mouse. This paper explores the idea of repurposing visual input modalities for non-visual interaction so that blind users too can draw the benefits of simple and efficient access from these modalities. Specifically, with word processing applications as the representative case study, we designed and developed NVMouse as a concrete manifestation of this repurposing idea, in which the spatially distributed word-processor controls are mapped to a virtual hierarchical ‘Feature Menu’ that is easily traversable non-visually using simple scroll and click input actions. Furthermore, NVMouse enhances the efficiency of accessing frequently-used application commands by leveraging a data-driven prediction model that can determine what commands the user will most likely access next, given the current ‘local’ screen-reader context in the document. A user study with 14 blind participants comparing keyboard-based screen readers with NVMouse, showed that the latter significantly reduced both the task-completion times and user effort (i.e., number of user actions) for different word-processing activities.
@inproceedings{lee2020repurposing, author = {Lee, Hae-Na and Ashok, Vikas and Ramakrishnan, I. V.}, title = {Repurposing Visual Input Modalities for Blind Users: A Case Study of Word Processors}, year = {2020}, publisher = {IEEE Press}, url = {https://doi.org/10.1109/SMC42975.2020.9283015}, doi = {10.1109/SMC42975.2020.9283015}, booktitle = {2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)}, pages = {2714–2721}, numpages = {8}, location = {Toronto, ON}, }
- [SMC] iTOC: Enabling Efficient Non-Visual Interaction with Long Web Documents. Hae-Na Lee, Sami Uddin, and Vikas Ashok. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, 2020.
Interacting with long web documents such as wiktionaries, manuals, tutorials, blogs, novels, etc., is easy for sighted users, as they can leverage convenient pointing devices such as a mouse/touchpad to quickly access the desired content either via scrolling with visual scanning or clicking hyperlinks in the available Table of Contents (TOC). Blind users on the other hand are unable to use these pointing devices, and therefore can only rely on keyboard-based screen reader assistive technology that lets them serially navigate and listen to the page content using keyboard shortcuts. As a consequence, interacting with long web documents with just screen readers, is often an arduous and tedious experience for the blind users. To bridge the usability divide between how sighted and blind users interact with web documents, in this paper, we present iTOC, a browser extension that automatically identifies and extracts TOC hyperlinks from the web documents, and then facilitates on-demand instant screen-reader access to the TOC from anywhere in the website. This way, blind users need not manually search for the desired content by moving the screen-reader focus sequentially all over the webpage; instead they can simply access the TOC from anywhere using iTOC, and then select the desired hyperlink which will automatically move the focus to the corresponding content in the document. A user study with 15 blind participants showed that with iTOC, both the access time and user effort (number of user input actions) were significantly lowered by as much as 42.73% and 57.9%, respectively, compared to that with another state-of-the-art solution for improving web usability.
@inproceedings{lee2020itoc, author = {Lee, Hae-Na and Uddin, Sami and Ashok, Vikas}, title = {iTOC: Enabling Efficient Non-Visual Interaction with Long Web Documents}, year = {2020}, publisher = {IEEE Press}, url = {https://doi.org/10.1109/SMC42975.2020.9282972}, doi = {10.1109/SMC42975.2020.9282972}, booktitle = {2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)}, pages = {3799–3806}, numpages = {8}, location = {Toronto, ON}, }
- [HCII] Rotate-and-Press: A Non-visual Alternative to Point-and-Click? Hae-Na Lee, Vikas Ashok, and I. V. Ramakrishnan. In HCI International 2020 – Late Breaking Papers: Universal Access and Inclusive Design: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings.
Most computer applications manifest visually rich and dense graphical user interfaces (GUIs) that are primarily tailored for an easy-and-efficient sighted interaction using a combination of two default input modalities, namely the keyboard and the mouse/touchpad. However, blind screen-reader users predominantly rely only on keyboard, and therefore struggle to interact with these applications, since it is both arduous and tedious to perform the visual ‘point-and-click’ tasks such as accessing the various application commands/features using just keyboard shortcuts supported by screen readers. In this paper, we investigate the suitability of a ‘rotate-and-press’ input modality as an effective non-visual substitute for the visual mouse to easily interact with computer applications, with specific focus on word processing applications serving as the representative case study. In this regard, we designed and developed bTunes, an add-on for Microsoft Word that customizes an off-the-shelf Dial input device such that it serves as a surrogate mouse for blind screen-reader users to quickly access various application commands and features using a set of simple rotate and press gestures supported by the Dial. Therefore, with bTunes, blind users too can now enjoy the benefits of two input modalities, as their sighted counterparts. A user study with 15 blind participants revealed that bTunes significantly reduced both the time and number of user actions for doing representative tasks in a word processing application. The participants also stated that they did not face any issues switching between keyboard and Dial, and furthermore gave a high usability rating (84.66 avg. SUS score) for bTunes.
@inproceedings{lee2020rotate, author = {Lee, Hae-Na and Ashok, Vikas and Ramakrishnan, I. V.}, title = {Rotate-and-Press: A Non-visual Alternative to Point-and-Click?}, year = {2020}, isbn = {978-3-030-60148-5}, publisher = {Springer-Verlag}, address = {Berlin, Heidelberg}, url = {https://doi.org/10.1007/978-3-030-60149-2_23}, doi = {10.1007/978-3-030-60149-2_23}, booktitle = {HCI International 2020 – Late Breaking Papers: Universal Access and Inclusive Design: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings}, pages = {291–305}, numpages = {15}, keywords = {Accessibility, Word processor, Screen reader}, location = {Copenhagen, Denmark}, }
- [HT] Towards Personalized Annotation of Webpages for Efficient Screen-Reader Interaction. Hae-Na Lee and Vikas Ashok. In Proceedings of the 31st ACM Conference on Hypertext and Social Media, Virtual Event, USA, 2020.
To interact with webpages, people who are blind use special-purpose assistive technology, namely screen readers that enable them to serially navigate and listen to the content using keyboard shortcuts. Although screen readers support a multitude of shortcuts for navigating over a variety of HTML tags, it has been observed that blind users typically rely on only a fraction of these shortcuts according to their personal preferences and knowledge. Thus, a mismatch between a user’s repertoire of shortcuts and a webpage markup can significantly increase browsing effort even for simple everyday web tasks. Also, inconsistent usage of ARIA coupled with the increased adoption of styling and semantic HTML tags for which there is limited screen-reader support, further make interaction arduous and frustrating for blind users. To address these issues, in this work, we explore personalized annotation of webpages that enables blind users to efficiently navigate webpages using their preferred shortcuts. Specifically, our approach automatically injects personalized ’annotation’ nodes into the existing HTML DOM such that blind users can quickly access certain semantically-meaningful segments (e.g., menu, search results, filter options, calendar widget, etc.) on the page, using their preferred screen-reader shortcuts. Using real shortcut profiles collected from 5 blind screen-reader users doing representative web tasks, we observed that with personalized annotation, the interaction effort can be potentially reduced by as much as 48 (average) shortcut presses.
@inproceedings{lee2020towards, author = {Lee, Hae-Na and Ashok, Vikas}, title = {Towards Personalized Annotation of Webpages for Efficient Screen-Reader Interaction}, year = {2020}, isbn = {9781450370981}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3372923.3404815}, doi = {10.1145/3372923.3404815}, booktitle = {Proceedings of the 31st ACM Conference on Hypertext and Social Media}, pages = {111–116}, numpages = {6}, keywords = {transcoding, web accessibility, web screen-reading}, location = {Virtual Event, USA}, series = {HT '20}, }
2014
- [ICIP] Detecting Defects in Repeatedly Patterned Image with Spatially Different Level of Noise. Deokyoung Kang, Hae-na Lee, and Suk I. Yoo. In 2014 IEEE International Conference on Image Processing (ICIP), 2014.
Defect detection is to find unexpected peak regions in an inspection image. Stable Principal Component Pursuit (SPCP) decomposes a given image into three matrices, low-rank, sparsity, and noise which are used for detecting defects. Each of them contains repeated pattern, spatially narrow abnormal elements which are regarded as defects, and small magnitude elements respectively. However, if the noise level of the image is spatially varied, it is hard to separate noise appropriately using naive SPCP. To overcome the difficulty, we propose a novel sliding-window based SPCP algorithm. First, a repeated pattern of each sliding-window is converted to a matrix for SPCP. The noise level based on rank-one approximation is then estimated, and the matrix decomposition is performed. Finally, the sparsity values of all sliding-windows are merged by averaging, and then the averaged term is used for defect detection. The experimental results show that our algorithm outperforms the traditional approaches.
@inproceedings{kang2014detecting, author = {Kang, Deokyoung and Lee, Hae-na and Yoo, Suk I.}, booktitle = {2014 IEEE International Conference on Image Processing (ICIP)}, title = {Detecting Defects in Repeatedly Patterned Image with Spatially Different Level of Noise}, year = {2014}, pages = {3258-3262}, keywords = {Matrix decomposition;Noise;Noise level;Inspection;Organic light emitting diodes;Robustness;Approximation methods;Defect Detection;Robust Principal Component Analysis;Stable Principal Component Pursuit;Low-rank Sparsity Decomposition;Sliding-Window}, doi = {10.1109/ICIP.2014.7025659}, }