
Accessibility Industry Update: December 2025


Welcome back to the accessibility industry update: your one-stop source for everything new, noteworthy, and happening in the digital accessibility and assistive technology space.

A few notes before we begin: 

  • We have opted to switch the release cadence of this newsletter from the beginning to the middle of each month. We often find ourselves hammered with emails on the 1st, and we know we aren’t alone. It is our hope that this change allows our readers more time to stay up to date. 
  • We have also added a job and opportunities section to the bottom of the newsletter. Now more than ever, there are exceptionally knowledgeable accessibility practitioners looking for a new role. Are you hiring? Reach out and we will send your position to thousands of readers for free.  

As always, let us know if you think we’ve missed something, or share the link with your colleagues or partners who may benefit from some or all of this information. You can also sign up to receive these accessibility updates via email.

Upcoming Conferences and Events

Takeaways from Sight Tech Global

The annual Sight Tech Global conference took place earlier this month, from December 9-10. STG is a virtual event that brings together technology pioneers and thought leaders to talk about how “rapid advances in AI and related technologies will fundamentally alter the landscape of assistive technology and accessibility.” With nearly twenty curated sessions, this year certainly did not disappoint! We got to sit in on all of them; here is what we took away:

  • This year marks a decade since Microsoft launched Seeing AI on the iOS App Store, which has grown into the most popular way that blind and low-vision users read text, documents, currency, products, and more. Recent advancements with Copilot have made it possible for the app to understand not just the digital world, but the physical world as well.

Saqib Shaikh, the co-founder of Seeing AI, talked about how he uses this new technology in his life. Examples included determining which products may be accessible to him through image descriptions, teaching his kid how to tie knots for scouting, helping with DIY home improvement through tactile descriptions, and more. 

Two concerns remain top of mind: where AI is obtaining information, and the extent to which the target audience (mainly people with cognitive and physical impairments) is represented in training data. Perhaps the most exciting announcement was that Microsoft has partnered with Meta to bring the Seeing AI app to Meta’s Ray-Ban smart glasses for hands-free use.

  • Salesforce talked about how they are building accessibility at enterprise scale and going beyond mere compliance. This is something that many organizations aspire to do but tend to struggle with.

Salesforce attributes this ability to constantly keeping accessibility top of mind through lunch and learns, webinars, blog posts, videos, internal training, accessible design systems/components, and more.

If accessibility is a training you take once, it will quickly be forgotten amid everything else. On top of that, Salesforce places tremendous value on accessibility-specific metrics and goals/targets that keep employees accountable.

  • Aira, an on-demand service that connects blind people to trained staff over video, partnered with Google DeepMind to bring Astra (an incredibly advanced multi-modal AI) to their app. This allows users to jump on a video call with an AI that augments the capabilities of interpreters, who are kept in the loop to ensure users are receiving accurate information.  

Though most large-scale AI providers now allow you to video chat, Astra is different in that it can notice changes and react to them in real time without the user needing to prompt it again.

  • A product manager, safety researcher, and software engineer from Waymo had an in-depth conversation about how their autonomous driving system works. Waymo vehicles are equipped with 29 cameras that provide a 360-degree view of the environment, radar that can detect objects even in poor weather conditions, and lidar that creates a detailed 3D map of the world.  

Their AI system has been trained on millions of miles of real-world driving plus billions of miles of simulated environments and situations. Many people are hesitant to hop in the passenger seat of a vehicle with no driver at the wheel.

However, all metrics demonstrate increased safety relative to human drivers, thanks to a system that can see in all directions at once, that never gets tired or distracted, that always follows traffic laws, and that can react faster than any human driver. All of this is done in close collaboration with people who have different disabilities.

  • The AIMAC (AI Model Accessibility Checker) is a scoreboard that ranks different state-of-the-art GenAI models on their ability to produce accessible code. It does so by instructing models to write a set of common web components (forms, tables, menus, dialogs/modals, etc.) without any accessibility-specific guidance or language.

It then runs benchmarks to determine the number of issues across different categories. The code is released open source on GitHub, so anyone can inspect how it works and make changes. The conversation was highly technical, covering not only where AI is headed, but also how to fix the problems they are seeing.
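
For readers who want to picture how such a benchmark might work, here is a minimal sketch under stated assumptions: the generateComponent helper is a hypothetical stand-in for whichever model is being scored, the component list is illustrative, and the automated checker used here is axe-core via @axe-core/puppeteer rather than AIMAC’s own tooling.

```typescript
// Sketch only: prompt a model for common components with no accessibility
// language, render the result, and count automated accessibility violations.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

// Hypothetical stand-in for a call to the GenAI model under test.
async function generateComponent(prompt: string): Promise<string> {
  // Stub output so the sketch runs end to end; replace with a real model call.
  return `<form><input type="text"><button>Go</button></form>`;
}

async function scoreModel(): Promise<void> {
  // Illustrative component list; AIMAC's real set and prompts will differ.
  const components = ["a login form", "a data table", "a navigation menu", "a modal dialog"];
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  for (const component of components) {
    // Note: the prompt deliberately contains no accessibility-specific wording.
    const html = await generateComponent(`Write the HTML for ${component}.`);
    await page.setContent(html);

    // Inject axe-core into the page and tally the violation rules it reports.
    const results = await new AxePuppeteer(page).analyze();
    console.log(`${component}: ${results.violations.length} violation rule(s)`);
  }

  await browser.close();
}

scoreModel().catch(console.error);
```

A real scoreboard would of course prompt many models, weight issue severity, and break results down by category rather than reporting a raw count.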

  • The accessibility of PDFs at scale continues to present a technical and financial challenge for organizations that want to make their resources more accessible, and that need to comply with EAA and Title II ADA requirements. With the Title II deadline looming ever closer, many institutions are running out of time.  

A product called DocAccess proposes a solution to this problem by combining a custom-trained GenAI model with expert human oversight. The result is accessible HTML documents at a fraction of the cost and time of most solutions on the market. The team talked about the issues they ran into when training their model, such as complex images, graphs, and tables, and how they overcame them.

  • Meta talked about their Ray-Ban glasses, which, at an incredibly competitive price point (around $300), offer onboard AI that can not only describe images but also provide enhanced information about the wearer’s environment.

They discussed the benefits (such as the low profile when used as a form of assistive technology, and integration with visual interpretation apps like Aira and Be My Eyes), along with limitations in accuracy and what they are doing about them.

They recently released two new features, LiveAI and more detailed descriptions for accessibility. They are also rolling out a third-party SDK that will make it possible for developers to bring their apps to the glasses.

  • Microsoft demonstrated other work they are doing in the realm of accessibility. Examples include a chatbot (Ask Microsoft Accessibility) that is trained on official resources and proven to have a low hallucination rate when answering questions about their products.

Work is being done across the board to ensure that when someone discloses their disability or asks for an image of someone with a disability, the model responds in a way that is helpful instead of misconstrued or patronizing. Feedback is integral to Microsoft products and taken seriously, so users are encouraged to reach out with their experiences. 

  • Google revealed some of the new features in Talkback (like built-in Gemini powered image and content descriptions), Lookout’s Image Q&A feature, Pixel Magnifier with voice search, Guided Frame for photo composition, and the StreetReaderAI prototype. 

StreetReaderAI attempts to bring Street View data to screen reader users through a unique approach that draws on techniques from games and navigation apps to bring about enhanced immersion and understanding.

  • The team at Scribely talked about the many problems with alternative text descriptions today (98% of the top e-commerce product pages have missing or completely useless alt text). They propose a “virtuous alt text cycle” where generative AI drafts image descriptions using rich technical, content, and intentional context supplied from systems like CMS, PIM, and DAM (content management, product information management, and digital asset management systems), with human experts reviewing and refining outputs for precision and brand alignment.

This creates a human-in-the-loop workflow: ingest and contextualize images, have AI create a first draft, perform expert review and refinement, publish, and then feed corrected descriptions back into models or prompts so AI can do better next time. 
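
As a rough illustration of that cycle, here is a minimal sketch under stated assumptions: the ImageContext fields, the draftAltText and humanReview placeholders, and the publish/feedback comments are hypothetical stand-ins, not Scribely’s actual workflow or APIs.

```typescript
// Sketch only: a human-in-the-loop alt text cycle with an AI first draft,
// expert review, and a retained record so corrections improve future drafts.

interface ImageContext {
  imageUrl: string;
  productName: string; // e.g. pulled from a PIM record
  pageIntent: string;  // e.g. "product detail page hero image"
}

interface AltTextRecord {
  imageUrl: string;
  draft: string;    // AI-generated first pass
  approved: string; // human-reviewed final text
}

// Hypothetical stand-in for a vision-model call that uses the supplied context.
async function draftAltText(ctx: ImageContext): Promise<string> {
  return `${ctx.productName}, shown as the ${ctx.pageIntent}`; // stub
}

// Hypothetical stand-in for routing the draft to a human expert for refinement.
async function humanReview(draft: string, ctx: ImageContext): Promise<string> {
  return draft; // stub: in practice an expert edits for precision and brand voice
}

async function runCycle(ctx: ImageContext): Promise<AltTextRecord> {
  const draft = await draftAltText(ctx);          // 1. AI first draft with context
  const approved = await humanReview(draft, ctx); // 2. expert review and refinement
  // 3. publish `approved` back to the CMS, and
  // 4. keep (draft, approved) pairs so corrections can improve future prompts.
  return { imageUrl: ctx.imageUrl, draft, approved };
}

runCycle({
  imageUrl: "https://example.com/shoe.jpg",
  productName: "Trail running shoe",
  pageIntent: "product detail page hero image",
}).then((record) => console.log(record.approved));
```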

  • Mike Buckley, the CEO of Be My Eyes, sat down for a candid conversation about the problems with AI as a means of alternative text description. There was a lot of critical feedback here. 

The problem was perhaps best articulated by the example of pointing different AI models at a person wearing a headset and asking what it was: “One told me it was an eye massager. One told me it was an Apple Vision Pro. One told me it was smart glasses, and one told me it might be an AR headset. It’s none of those.” The crux of the issue is that so little data on people with disabilities is included in modern datasets.

Similarly, the technology needs the ability to acknowledge uncertainty: instead of providing an answer when a question or image is unclear, it should notify the user and work with them until an accuracy threshold has been reached. Be My Eyes is actively working with OpenAI to solve this.
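
As a thought experiment, here is a minimal sketch of what that uncertainty-aware behavior could look like. The describeImage call, its self-reported confidence score, and the 0.8 threshold are all assumptions for illustration; neither Be My Eyes nor OpenAI has published such an interface.

```typescript
// Sketch only: below a confidence threshold, ask the user for help
// instead of guessing, then retry with the extra context.

interface Description {
  text: string;
  confidence: number; // 0..1, hypothetical self-reported confidence
}

// Hypothetical stand-in for a vision-model call that also reports confidence.
async function describeImage(image: Uint8Array, hint?: string): Promise<Description> {
  return { text: "a person wearing a headset", confidence: hint ? 0.9 : 0.4 }; // stub
}

const CONFIDENCE_THRESHOLD = 0.8;

async function describeWithClarification(
  image: Uint8Array,
  askUser: (question: string) => Promise<string>,
): Promise<string> {
  let hint: string | undefined;
  for (let attempt = 0; attempt < 3; attempt++) {
    const result = await describeImage(image, hint);
    if (result.confidence >= CONFIDENCE_THRESHOLD) {
      return result.text; // confident enough to answer
    }
    // Below the threshold: surface the uncertainty and ask for more context
    // rather than presenting a guess as fact.
    hint = await askUser(
      `I'm not sure what this is (best guess: "${result.text}"). ` +
        "Can you tell me more or retake the photo?",
    );
  }
  return "I could not identify this with enough confidence.";
}
```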

The findings paint a picture of an industry in transition, in more ways than one. While there has been an evident uptick in organizations treating accessibility as a business imperative instead of a compliance checkbox, the legal risk is greater than ever before, and many companies are still unprepared. 

If any of this was of particular interest to you, you can catch the full agenda online, which includes videos and transcripts for every session; no registration is required!

Legal Updates

It was an especially slow month in the legal department, perhaps the slowest this year.

In Converge Accessibility’s November 2025 Update, we got more information on supplemental (formerly pendent/ancillary) jurisdiction. In short, this is the ability of federal courts to hear related state claims, even though they wouldn’t ordinarily fall under federal jurisdiction. However, courts are not required to keep the state claims (especially Unruh Act claims) when special circumstances exist, such as California’s efforts to rein in high-volume disability filings over physical access barriers. In a recent Central District of California case, the court signaled it might drop the Unruh claim and keep only the ADA claim, which raises questions about how this approach should apply when the alleged barriers are on websites rather than in brick-and-mortar facilities.

Additionally, a legally blind individual filed a class action lawsuit against HP in New York federal court, alleging that its website is not accessible to blind users. The claim specifically highlights missing alternative text, broken ARIA references, unlabeled buttons, and inaccessible navigation menus.

Finally, we saw an update in the National Association of the Deaf (NAD)’s latest case against the White House regarding ASL interpretation. If this is the first time you’re hearing about it, the NAD first sued the Trump White House in 2020 for failing to provide in-frame American Sign Language (ASL) interpreters at televised COVID-19 briefings, arguing that this failure denied deaf viewers equal access to critical public health information. The case ended in a settlement limiting relief to a defined set of pandemic briefings.

In the current administration, NAD and two Deaf individuals have brought a new lawsuit in the U.S. District Court for the District of Columbia seeking to expand these obligations. The ask is to require live, qualified ASL interpretation at essentially all presidential, vice-presidential, First/Second Spouse, and press secretary briefings and related public events that are streamed or recorded by official White House channels. The plaintiffs ground their claims in the Rehabilitation Act, the First and Fifth Amendments, and a mandamus theory, framing comprehensive ASL access as necessary for meaningful participation in civic life and timely access to government communications.

In response, the federal government has filed a detailed brief opposing a preliminary injunction, arguing that the earlier litigation and settlement preclude this broader suit, that the Rehabilitation Act does not guarantee a right to ASL at every event, and that existing captions and transcripts already provide “meaningful access.” The government also contends that forcing ASL interpretation across all such events would unreasonably constrain how the White House chooses to communicate and could create operational challenges for urgent, unscheduled announcements.

You can read a Politico article outlining the situation, or the full brief opposing the measure here. 

What We’ve Been Reading 

  • In our training sessions and webinars, one of the things we cover quite frequently is the need to simplify everything until it cannot be simplified any further. This pervasive mindset is what ultimately results in products that people use without thinking about them, with the added benefit of enabling access for users with cognitive impairments, or those who simply don’t have the time to figure out something complex. Fable, the accessibility testing and user research company, published two phenomenal articles on designing for cognitive accessibility, a topic that is not covered nearly enough.
  • Shopify Accessibility Lessons from a Small Business Saturday Purchase – UsableNet: Another year, another Black Friday shopping spree, during which Americans managed to spend a record-breaking $11.8 billion. In an attempt to draw focus away from the vast presence of large companies, the following Saturday is designated Small Business Saturday. This piece outlines a blind shopper’s experience browsing and purchasing from a Shopify storefront.
  • Web Design / Dev Advent Calendars for 2025 – Adrian Roselli: Speaking of tradition, we’re closing in on the holiday season. This post shares a list of advent calendars for developers that can help you hone your knowledge while counting down to Christmas, possibly avoiding a few calories along the way. 
  • NV Access, the non-profit behind the free and open-source NVDA screen reader, published an exciting Roadmap for 2026 and beyond. Short-term priorities include 64-bit migration, a secure add-on runtime environment, on-device image description, improvements to rendering math content, and more. Medium-term priorities call out things like a magnifier that runs alongside the screen reader, OCR improvements, support for natural Microsoft text-to-speech voices, end-to-end network communication for remote access, updated compliance with the ARIA specification, etc.
  • Vispero Accounts: An Explanation, an Apology, and a Path Forward – Freedom Scientific: Vispero, the manufacturer of the popular JAWS screen reader, received a significant amount of flak for requiring users to create accounts and divulge personal information in order to use the latest version of the software. This prompted organizations and users alike to consider switching to other products like NVDA. They have since posted an apology with more details on what information is collected, and why.

Jobs and Opportunities 

While we do our best to list opportunities here that we believe our readers will appreciate, QualityLogic does not explicitly endorse these companies. Should you decide to seek a position with one of them, please perform your own due diligence. 

This list is by no means comprehensive. For more, check out 


That’s a wrap for this month. As always, let us know if you think we’ve missed something, or share the link with your colleagues or partners who may benefit from some or all of this information. You can also sign up to receive these accessibility updates via email.
